SOA Series Part 5: Testing Apps with Service Dependencies

This is the fifth in a series of seven posts on service-oriented architecture derived from a workshop conducted by Cloves Carneiro and Tim Schmelmer at Abril Pro Ruby. The series is called SOA from Day One – A love story in 7 parts.

Writing fast tests that you can trust is challenging when your application relies on one or more external services. The fifth part of our “SOA from Day One” series aims to help with practical tips on how to approach automated testing in a SOA.

Approaches

There are three general approaches to communicating with dependent services from within code under test:

1. Mocking / Stubbing of service calls

Each test stubs or mocks out every call to each of the services called in the exercised code.

This way, no service calls are ever made.

Results of the service calls are predefined, or ‘faked’, so that the rest of the code under test can be exercised based on these assumptions. There are many mocking frameworks that support this. Projects like mocha or RSpec Mocks prevent service calls by stubbing out the Ruby methods of the service client access objects. Other gems (e.g., WebMock or FakeWeb) hook in at the HTTP request level, letting you define mock responses for all requests that match predefined URL patterns.
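
As a brief illustration of the HTTP-level variant, here is a minimal WebMock-based test in the Minitest spec style used elsewhere in this post. The host, path, and response body are made up for this example and are not the actual tags_service URL:

  require 'minitest/autorun'
  require 'webmock/minitest'

  describe RemoteTag do
    it 'finds a tag without ever hitting the real service' do
      # Intercept the HTTP request the client would make and return a canned
      # response. The URL below is hypothetical.
      stub_request(:get, 'http://tags-service.example.com/tags/bacon')
        .to_return(status: 200,
                   body: '{"name":"bacon"}',
                   headers: { 'Content-Type' => 'application/json' })

      RemoteTag.find_by_name('bacon').name.must_equal 'bacon'
    end
  end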

  • Pros:
    • Very fast – no time will be spent in your tests for making expensive network calls
  • Cons:
    • The full ‘integrated’, round-trip code path between client and service will never be exercised by the tests
    • If the dependent APIs ever change, the code under test will never notice. Tests will still pass even if the API contract or behavior on the service side changes, and you will only find out in your production environment.
    • Often lots of (boring, boilerplate, distracting) mocking code needs to be written (and maintained) to bootstrap test cases.

2. Tests always call dependencies

In this approach, the code under test calls all dependent services every time the tests run.

  • Pros:
    • The test results are always trustworthy, as they exercise the actual dependent services’ APIs and all code paths end-to-end.
  • Cons:
    • Slow – making network calls will be part of all tests that exercise code that relies on external services.
    • The application’s test suite can never run in isolation, as called services (be it their production or development instances, or locally installed versions) always need to be available during test runs
    • Changes in the dependent services’ data can cause ‘false negative’ test failures; if mutating actions are triggered by the code under test, the dependent services’ data often needs to be reset between test runs.

3. Tests call dependencies once

This last approach is a mixture of the previous two: the code under test calls dependent services once, records the responses, and then replays them in future runs.

  • Pros:
    • Fast most of the time – network calls to retrieve service responses are only made when the tester chooses to record them
    • The test results truly reflect the actual service environment and APIs, as they are based on actual service responses.
    • Dependent services never need to be run (or even installed) locally if the service responses used in the test runs are recorded from their development or production instances
  • Cons:
    • If your application depends on many services, or requests large amounts of data, the recorded ‘canned responses’ can get very big.
    • The tester needs to find an appropriate frequency for re-recording the canned responses, so that they do not go stale and still reflect the actual services’ APIs and behaviors.

Testing with VCR

While we maintain test suites that follow each of the above approaches, we recommend approach number 3. The tool of our choice for recording and replaying service responses is called VCR.

We like VCR because it is very easy to configure, integrates well with Rails, and it can hook into a large variety of HTTP gems (including Typhoeus, which is used in our sample services).

To serve as an example, we used it for a single test inside a branch of the cities_service repository, performing the following steps:

  • We added the vcr gem to the :test gem group in the Gemfile of cities_service
The test_helper.rb was changed to configure VCR to hook into Typhoeus, and to record cassettes (i.e., VCR’s term for the recorded, to-be-replayed service responses) into a dedicated fixtures/vcr_cassettes directory, like so:
  VCR.configure do |c|
    # Intercept HTTP requests made through Typhoeus
    c.hook_into :typhoeus
    # Store recorded cassettes under fixtures/vcr_cassettes
    c.cassette_library_dir = 'fixtures/vcr_cassettes'
  end
  • Finally, we recorded, and subsequently replayed, the cassettes in the tests for the RemoteTag class wherever a service call is made in the code under test, i.e.:
  # The first run records the real service response into the 'tags_bacon'
  # cassette; subsequent runs replay it instead of making a network call.
  VCR.use_cassette('tags_bacon') do
    RemoteTag.find_by_name('bacon').name.must_equal 'bacon'
  end

A smarter way to mock

As mentioned above, while mocking has its disadvantages, it certainly helps with increasing the speed of test suites. To take advantage of this, we have recently found ourselves addressing some of the shortcomings of mocking by building mock objects right into a service’s client library.

While we are planning to write up a separate blog post with more details about this approach, here are some key points:

  • When building a client library (or, in our case, a Ruby gem), we provide a way to configure (at least) two alternative backends: one that makes actual HTTP calls to the respective service in order to assemble the library’s response objects, and a second ‘fake’ backend, which never makes actual network calls to retrieve response objects, but instead chooses them from a pool of well-known response objects (see the sketch after this list).
  • These well-known response objects expose the same API as the objects returned from actual network service calls, and they come pre-loaded inside the mock backend’s registry of responses.
  • As part of their test suite set-up, applications under test place the service client library into ‘mock mode’, thereby configuring it to serve responses entirely out of the mock backend’s registry of pre-loaded response objects.
  • To serve the needs of special-case, non-standard request situations, the client library allows creating additional mock response objects and adding them to (or removing them from) the mock backend’s response object registry.
  • When the client application is running in a production environment, the production-specific setup will configure the client library to use the actual HTTP service-based backend instead.
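
To make this more concrete, here is a minimal sketch of the idea. All names in it (CitiesClient, HttpBackend, FakeBackend, the City struct and its fields) are invented for illustration and are not the API of our actual client libraries:

  module CitiesClient
    # Response objects expose the same API no matter which backend built them.
    City = Struct.new(:id, :name, :country)

    class << self
      attr_writer :backend

      def backend
        @backend ||= HttpBackend.new
      end

      # Test suites call this in their set-up to avoid all network traffic.
      def enable_mock_mode!
        self.backend = FakeBackend.new
      end

      def find_city(id)
        backend.find_city(id)
      end
    end

    # Real backend: assembles response objects from HTTP calls to the service.
    class HttpBackend
      def find_city(id)
        # Issue a Typhoeus request to the cities service and build a City
        # from the JSON response (omitted in this sketch).
        raise NotImplementedError, 'HTTP call omitted in this sketch'
      end
    end

    # Fake backend: serves pre-loaded, well-known response objects and lets
    # tests register additional ones for special cases.
    class FakeBackend
      def initialize
        @registry = { 1 => City.new(1, 'Rio de Janeiro', 'Brazil') }
      end

      def register(city)
        @registry[city.id] = city
      end

      def find_city(id)
        @registry.fetch(id) { raise "no mock city registered for id #{id}" }
      end
    end
  end

An application’s test_helper.rb would then call CitiesClient.enable_mock_mode! once during set-up, while a production initializer would leave the default HTTP backend in place.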

This approach allows the mock objects to evolve in lock-step with the client library version, which increases the client application’s confidence that it is testing against the same API that objects returned by the latest service API version actually expose.

Additionally, none of the usual cumbersome and boilerplate code to create and register mock objects for the various tests needs to be written: the mock backend comes pre-configured with a variety of standard responses which the application code under test will simply use without any additional configuration.

Exercise “Testing with Dependent Services”

  • Add vcr and all necessary configuration to your fork of the inventory_service repository
  • Write some tests (e.g., unit tests for RemoteTag and / or RemoteCity) that:
    • exercise, and record results for, calling cities_service for the city of a given InventoryItem
    • exercise, and record results for, calling tags_service for all tags given a list of InventoryItems
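
One possible shape for the first of these tests, with VCR configured as shown earlier. The method name find_for_item, the cassette name, and the way the InventoryItem is obtained are hypothetical; adapt them to the actual classes in your fork:

  # Hypothetical sketch: method and cassette names are made up for illustration.
  VCR.use_cassette('city_for_inventory_item') do
    item = InventoryItem.first
    RemoteCity.find_for_item(item).name.wont_be_nil
  end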
