SOA Series Part 1: The What, the Why, and the Rules of Engagement

This is the first in a series of seven posts on service-oriented architecture derived from a workshop conducted by Cloves Carneiro and Tim Schmelmer at Abril Pro Ruby. The series is called SOA from Day One – A love story in 7 parts.

If you are like me, you have been noticing a large number of blog posts and conference talks in the tech community focussing on how to refactor large, monolithic web applications into smaller, more manageable chunks. It seems like so many start-ups get addicted to the “crack” that the Rails, Djangos, Symfonys, and < insert your language’s web framework >s of this world offer: shipping features fast without giving any consideration to the technical debt they accrue.

While this might arguably be justifiable just to get to the next round of funding, most shops lose sight of the tipping point at which their applications become unmanageably complex due to the tech debt accrued in the early phases of their business lifecycle.

The costs to address the resulting issues after the fact can become astronomical. Not only will development speed grind to a halt: your tech debt will keep accruing at an accelerating rate, and your code will have more customer-visible bugs. You will also start spending huge amounts of time on the tedious, complex and often frustrating work of pulling apart the ball of yarn that your web application has turned into. We are saying this based on personal experience, as part of a team of 6 people (and growing … we are hiring! ;–) that has spent multiple person-years turning an 80k LOC Rails application on top of a database with more than 400 tables into smaller, more manageable applications, gems and services.

This (rather opinionated) blog post is the first in a series of seven that tries to show that there are good, viable, cost-efficient ways to avoid getting into such calamities while still iterating fast. It is the result of a talk and workshop on this topic that my good friend and fellow LivingSocial-ite Cloves Carneiro and I recently had the pleasure of presenting at Abril Pro Ruby 2014.

This first part will explain some terms that will be used throughout the series, as well as some opinionated “rules of engagement” that Cloves and I came up with. The remaining parts will show how to put these rules into practice by developing a toy (albeit fully working) SOA-based deal-viewing application, including the three back-end service applications the front-end application relies on.

Part 2 will focus on setting up each of these applications locally, as well as deploying them into at least two “fabrics”: one for developing features, and one for running them in a production setting. Part 3 will focus on tools and techniques to generate documentation and code for your APIs. The fourth part will show how to improve client-perceived performance by caching service results, while the fifth part will discuss approaches to testing in a SOA environment. Part 6 will focus on service-side approaches to optimizing for client performance, while the seventh and final part will show an example of how to introduce versioning into your services.

Parts 2 and onwards will each include exercises. All code for the applications and exercises is available on GitHub, and all applications are deployed on Heroku.

A Note on ‘Silver Bullets’

This article does not claim to be an objective discussion of the pros and cons of using a SOA. There are certainly differing views, and valid arguments can be made that a SOA might not be the right approach for each and every problem. Even though the authors are convinced that it is superior to having just one single application for any non-trivial problem, there simply are no silver bullets.

The main focus and message of this series is that it is much easier, and much more efficient, to start out designing an application with a SOA in mind than it is to rewrite it after the fact. We will get into detailed tips and ‘rules’ about what to take into account to make this endeavor as close to painless as possible.

What is a SOA anyway?

There are a lot of conflicting ideas out there about what a service-oriented architecture is really all about.

For the purpose of this article, let’s define it as an architectural principle in which there is a bias towards building systems out of separately implemented, deployed, and maintained services. In a set-up like that, any end-customer-facing “front-end” application relies solely on a set of (back-end) services for the functionality it exposes.

With that out of the way, then what is a service?

What are services?

To us, the definition of a software service is very close to the general (non-technical) definition that it is “a system supplying a public need”. More concretely, it is a piece of software functionality and its associated set of data for which it is the system of record.

Some good examples of such services available in the cloud are Amazon AWS’ service offerings (S3 / EC2 / MechTurk / SQS / …) or Google’s services portfolio (Maps service / AppEngine / In-App Billing / Cloud Messaging / …).

Why use services?

When done right, services have much smaller codebases than monolithic applications. They are therefore easier to maintain, as they are easier to read, reason about, and to upgrade.

Using services can be compared to applying the “Single Responsibility Principle” of object-oriented design at the level of application design. Services implement a self-contained, well-defined and documented set of functionality, which they expose only via versioned APIs. They are the true system of record for all data they access from a data store (e.g., a database). No other service or application has direct read or write access to the underlying data store. This achieves true decoupling of information access from the implementation details of data storage.
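To make this more concrete, here is a minimal sketch of a service acting as the system of record for its data, reachable only through a versioned API. Sinatra, Sequel, and the deals table are illustrative assumptions for this example, not choices prescribed by the series.

```ruby
# Minimal sketch of a system-of-record service: the versioned API below is the
# only way any other application reads this data; nothing else connects to the
# underlying database. (Sinatra/Sequel and the schema are assumptions.)
require 'sinatra'
require 'sequel'
require 'json'

DB = Sequel.connect(ENV.fetch('DEALS_DATABASE_URL', 'sqlite://deals.db'))

get '/api/v1/deals/:id' do
  content_type :json
  deal = DB[:deals].where(id: params[:id].to_i).first
  halt 404, { error: 'not_found' }.to_json unless deal
  deal.to_json
end
```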

In most cases, the use of services also makes it easier to scale your application. For one, you can design your services to be scaled according to their traffic patterns: read-intensive API endpoints can be cached independently from write-intensive functionality. You can also more naturally vary cache expiry based on the tolerance for staleness of the information served by individual endpoints, or even by particular clients requesting it (e.g., the descriptive text of any given inventory item can in most cases tolerate more staleness than the number of items left to be purchased).
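As a hedged sketch of what per-endpoint expiry can look like on the client side, the snippet below caches descriptive text for an hour but inventory counts for only a few seconds. The ActiveSupport memory store, the `client` object with its `get_deal` / `get_inventory` methods, and the TTL values are all assumptions made purely for illustration.

```ruby
# Client-side caching with expiry chosen per endpoint, according to how much
# staleness each kind of data can tolerate. The service client used here is
# hypothetical; only the caching pattern matters.
require 'active_support'
require 'active_support/cache'

CACHE = ActiveSupport::Cache::MemoryStore.new

# Descriptive text changes rarely; an hour of staleness is acceptable.
def deal_description(client, deal_id)
  CACHE.fetch("deals/#{deal_id}/description", expires_in: 3600) do
    client.get_deal(deal_id)['description']
  end
end

# Remaining inventory is time-sensitive; keep it for a few seconds at most.
def deal_inventory(client, deal_id)
  CACHE.fetch("deals/#{deal_id}/inventory", expires_in: 5) do
    client.get_inventory(deal_id)['quantity_remaining']
  end
end
```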

In an upcoming part of this series, we will show how to implement such caching strategies client-side, as well as how to take advantage of the HTTP protocol’s Conditional GET facilities service-side. We will explain how this can be paired with the use of reverse proxies, pre-calculating results service-side, and employing dedicated read-only DB replicas for resource-intensive queries.
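As a small preview of the service-side piece, here is a sketch of a Conditional GET-aware endpoint using Sinatra’s `last_modified` and `etag` helpers; the `find_deal` lookup and its fields are hypothetical stand-ins.

```ruby
# Honoring HTTP Conditional GET on the service side: if the client's
# If-Modified-Since / If-None-Match headers still match, Sinatra halts with a
# cheap 304 response instead of re-rendering the resource.
require 'sinatra'
require 'json'
require 'digest'

get '/api/v1/deals/:id' do
  deal = find_deal(params[:id]) # hypothetical lookup in this service's own store
  halt 404 unless deal

  last_modified deal[:updated_at]                      # answers If-Modified-Since
  etag Digest::MD5.hexdigest(deal[:updated_at].to_s)   # answers If-None-Match

  content_type :json
  deal.to_json
end
```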

Why should I care?

You should care whenever you are trying to implement a system that you want to be a success. If your application is successful, there will be a point in its life where it will have to undergo change.

For web applications, the first thing that comes to mind is that you might need an interface for administrators of your application data. What if you decide to add a mobile version of your site (or native mobile apps) after its initial launch? What will you do when you need to bring more business intelligence into the mix and run regular business metrics reports? What about the point in time when your customers request public API access to your application?

In short, it’s not a question of if you will need to invest in changing your application, but when to make the change and how expensive it will be.

The main point of this series is to show that adopting a service-oriented approach from the beginning does not have to be much more expensive than going the ordinary (monolithic) “one app to rule them all” way. The investment is very comparable to the “write tests or not?” question: investing in repeatable, automated pre-launch testing pays off almost immediately, and pays off manifold, because bugs caught this way never have to be addressed post-release.

Similarly, starting out with small, more focussed services in the early phases of your application’s life cycle will prevent you from tying up very large parts of your engineering resources in ripping apart and rewriting your platform a year or so from now.

The Rules of Engagement

While there is a lot to know and learn about service orientation, the authors have tried to crystallize the ground rules for following such an approach into four rules.

1. Customer-facing applications cannot directly touch any data store

Consumer-facing applications will be a mere mashup of data retrieved from authoritative systems of record, and they will never have a database of their own. Such systems of record will always be service applications with well-defined, versionable interfaces. Apart from the fact that some nasty security issues will be easier to address this way (SQL injection should be a thing of the past), different consumer-facing apps that work on the same information exposed by the owning service will be able to evolve independently, regardless of changes in the underlying data schemas of your datastore.
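As a sketch of what this rule means in practice, the front-end below assembles a page purely from service calls and carries no database configuration at all. The service host names, paths, and JSON shapes are illustrative assumptions.

```ruby
# A front-end that is a pure mashup: it composes data fetched over the owning
# services' versioned HTTP APIs and never opens a database connection itself.
require 'net/http'
require 'json'
require 'uri'

def fetch_json(url)
  JSON.parse(Net::HTTP.get(URI(url)))
end

def deal_page_data(deal_id)
  {
    deal:      fetch_json("https://deals-service.example.com/api/v1/deals/#{deal_id}"),
    inventory: fetch_json("https://inventory-service.example.com/api/v1/items/#{deal_id}"),
    reviews:   fetch_json("https://reviews-service.example.com/api/v1/deals/#{deal_id}/reviews")
  }
end
```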

2. No service accesses another service’s data store

Similarly, all inter-service interactions should happen through well-defined, versioned APIs. While a service has ownership of the data for which it is itself the system of record (including direct read and write access), it can only access other information via the respective authoritative services.

3. Every service has at least a development and a production instance

When developing in a SOA, you cannot expect your development and QA teams to run all the various environments for all of the deployed services on their local development machines. Not only would this slow down their computers (and hence negatively impact their velocity), but they also cannot be expected to keep potentially several dozen service applications (and their test data) up to date locally on a regular basis.

No developer should ever need to run any system other than the one(s) under development on their personal development machine. Dependent services should have standard, well-known URIs for server instances to which all consuming applications point by default. We will show some techniques for achieving this in part 2 of this blog post series.
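One simple way to honor this rule, sketched below under assumed variable names and URLs, is to give every service client a well-known default URI pointing at a shared development instance and let the environment override it. A developer working on, say, the inventory service can then export `INVENTORY_SERVICE_URL=http://localhost:3000` while everything else keeps pointing at the shared development fabric.

```ruby
# Well-known defaults for dependent services: consuming applications point at
# the shared development instances unless a deployment (or a developer) overrides
# them via the environment. Names and hosts are illustrative assumptions.
DEALS_SERVICE_URL = ENV.fetch(
  'DEALS_SERVICE_URL',
  'https://deals-service-dev.example.com'      # shared dev fabric, not localhost
)

INVENTORY_SERVICE_URL = ENV.fetch(
  'INVENTORY_SERVICE_URL',
  'https://inventory-service-dev.example.com'
)
```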

4. Invest in making spinning up new services trivial

This point, combined with the previous rule, is very important for getting both your developers and your TechOps teams to embrace the ‘SOA way’ inside your company. Driven, smart people tend not to tolerate being (seemingly) held back, and from a business viewpoint, they are too expensive an asset to be slowed down.

Make everyone happy by agreeing on a small list of supported technologies. Invest in creating templates for service code. And, last but not least, introduce rules and guidelines for your interfaces: consider the use of an Interface Definition Language to define APIs, and think about the structure of your RESTful interfaces (what consistent error status codes will you expose, what goes in headers versus URI parameters, how will authentication / authorization work, etc.).
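To illustrate just one such guideline, here is a sketch of a consistent JSON error envelope shared by all Sinatra-based services in a platform. The envelope shape, the helper name, and the `find_deal` lookup are conventions invented for this example, not a standard.

```ruby
# A shared helper so every service returns errors in the same JSON envelope,
# with the same status codes for the same situations.
require 'sinatra'
require 'json'

helpers do
  def api_error(status_code, code, message)
    halt status_code, { 'Content-Type' => 'application/json' },
         { error: { code: code, message: message } }.to_json
  end
end

get '/api/v1/deals/:id' do
  deal = find_deal(params[:id]) # hypothetical lookup
  api_error(404, 'deal_not_found', "No deal with id #{params[:id]}") unless deal
  content_type :json
  deal.to_json
end
```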

These rules stem from more than a decade of combined personal experience, both with building services from the ground up and with refactoring existing monoliths.

We hope you enjoyed this article, and that you will tune in for future parts of this series.

This post is cross-posted from Tim Schmelmer’s blog.
