The popularity of JavaScript continues to grow at an astronomical rate. Today it is possible, and extremely practical, to write both the client and server sides of an application in JavaScript or even CoffeeScript. However, the dynamic nature of the JavaScript language makes the need for Automated Testing higher than ever. This article discusses a recent testing experience I had working with a large-scale JavaScript application.

The NodeJS component was a façade over a series of internal and, for the most part, pre-built RESTful APIs external to our team. This NodeJS application's responsibility was mainly to provide some orthogonal functionality like security and application logging, plus some data aggregation and scrubbing.

[Figure: Application Boundary 1]

AngularJS has great support for traditional unit tests. In addition, we wanted to adopt a BDD approach, which required Feature Level Testing of the application against a stubbed version of those APIs. This was so we could repeatedly and deterministically run our test suite under a Continuous Integration environment.

What do we get with AngularJS?

For both AngularJS and NodeJS we used Jasmine, which follows a Describe-It syntax, to write our Unit Tests. For the main browser-style Feature Tests we adopted Protractor, which is built specifically for Angular. There is a CucumberJS, but at the time we felt it wasn't quite ready. Protractor is advisable for more involved Angular applications, as there are some complicated timing issues you will not want to resolve yourself.

AngularJS provides a mock component you can inject called $httpBackend. Protractor loads this into your browser context, allowing you to mock out the HTTP calls we would otherwise be making to NodeJS.

[Figure: Application Boundary 2]

This meant the NodeJS component wouldn't be covered in these Feature Tests. But we believed it would be quite lightweight, and that we could mitigate the risk with a few E2E tests alongside good unit testing of the NodeJS component. The $httpBackend component intercepts HTTP calls and, if one matches a request you've set up, returns the response you specified. As an example, we would have a file containing requests and responses like this:

searchMock = ->
  angular.module("searchMock", [
    "yourApp"
  ]).run ($httpBackend) ->
    response = [{
      formattedAddress: 'BONDI BEACH, NSW, 2026'
      suburb: 'Bondi'
      # ...
    }]

    $httpBackend.whenGET(/.*search=2026/).respond 200, response

And then in your Feature Test you would have a line like this for each file you wanted to include:

SearchMock = require('../mocks/searchMock.js')

So if our test invoked a request with a search parameter of 'search=2026', we would return the JSON array in the 'response' variable. The request is pattern matched, so you can wildcard on the URL path or, as in this case, just on a parameter.
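To make the matching behaviour concrete, here is a minimal plain-JavaScript sketch of what $httpBackend does with those regex mappings. The handler list and the pass-through return value are illustrative assumptions, not AngularJS's real internals:

```javascript
// Minimal sketch of regex-based request matching: the first registered
// pattern that matches the request URL wins.
var handlers = [];

function whenGET(pattern, response) {
  handlers.push({ pattern: pattern, response: response });
}

function dispatch(url) {
  for (var i = 0; i < handlers.length; i++) {
    if (handlers[i].pattern.test(url)) return handlers[i].response;
  }
  // Unmatched requests are the dangerous case: the real $httpBackend
  // lets them pass through to the actual backend service.
  return null;
}

whenGET(/.*search=2026/, [{ suburb: 'Bondi' }]);

var matched = dispatch('/search?search=2026');   // the stubbed array
var unmatched = dispatch('/search?search=3000'); // null (would hit the real backend)
```

The pass-through case is what makes an unmatched request so painful to debug: nothing fails loudly at the point of the mismatch.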

What happened?

This approach didn’t scale and was not a pleasant experience to develop in for a number of reasons which I’ll go through.

Incomplete tests

The first obvious problem was that we weren't testing the whole system. As $httpBackend was mocking out NodeJS, we started seeing bugs that were not covered by our testing. On top of that, the NodeJS app started to grow, meaning our previous mitigation plan was becoming less and less effective.

Mocking

We also quickly grew to dislike $httpBackend. We invested a lot of time trying to improve matters, but there were a couple of inherent problems that we couldn't get around.

The first was that it was incredibly brittle and verbose. We had an advanced search feature with a lot of different combinations of filters to test. Our first approach started off with one really large file of JSON and a lot of regex pattern matching at the end for each test. Any change to our REST calls became a problem, because you weren't sure which tests were calling which mapping in our JSON file. AngularJS orders request parameters alphabetically, so if the parameters changed you would have to reorder the regex too, which wasn't immediately obvious. Also, if a request doesn't match a mocked request, it will actually just pass through to the real backend service, which may have non-deterministic results. Either way, there would be a long debug cycle involved.
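The parameter-ordering trap can be seen in a small plain-JavaScript sketch that mimics AngularJS's alphabetical serialization (the parameter names and values here are illustrative):

```javascript
// Sketch: AngularJS serializes query parameters alphabetically by key,
// so a regex written in insertion order silently stops matching.
function buildUrl(path, params) {
  var query = Object.keys(params).sort().map(function (key) {
    return key + '=' + encodeURIComponent(params[key]);
  }).join('&');
  return query ? path + '?' + query : path;
}

var url = buildUrl('/search', { suburb: 'Bondi', postCode: 2026 });
// url is '/search?postCode=2026&suburb=Bondi' — postCode sorts first

var insertionOrderMatch = /suburb=Bondi.*postCode/.test(url);    // false
var alphabeticalMatch = /postCode=2026&suburb=Bondi/.test(url);  // true
```

Because the mismatch produces a pass-through rather than a failure at the mock, you only find out about it much further down the debug cycle.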

We changed this to multiple files of data, each used by different tests, allowing us to better isolate the data used in a test. But then one of our backend APIs changed the structure of its JSON response. This cost us a lot of time applying the change to all the different JSON responses. And then we decided we didn't like this approach either.

Our root problem here was that the test data sat in a static and verbose form in a different file from the tests. We weren't able to use factories to dynamically build our data objects. This made the tests hard to read, as you had to context switch between the Jasmine test and a large, unreadable JSON file. What I really wanted was Company 'x' with some sensible defaults, overriding just the location from Sydney to Melbourne.

What did we change?

The first thing we did was remove $httpBackend. Despite our best efforts, it wasn't fit for purpose and, more importantly, it wasn't covering the NodeJS component. As more and more functionality slipped into the NodeJS component, the more obvious a decision this became. It meant we had to find a framework to mock out NodeJS's API calls to the REST backend. We found a NodeJS package called Shmock, but there are plenty of options out there. We were using Grunt as our build tool, and it was easy enough to set up a test configuration pointing at our Shmock endpoints. This allowed us to do the following (the terse nature of CoffeeScript shines here):

mock.post("/oauth/token").persist().reply 200, expired: false
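For illustration, the Grunt side of this might look something like the fragment below. The grunt-env plugin, the task names, the environment variable and the port are all assumptions for the sketch, not our actual configuration:

```javascript
// Gruntfile.js (sketch) — plugin, env var, port and task names are hypothetical
module.exports = function (grunt) {
  grunt.initConfig({
    env: {
      test: {
        // Point the NodeJS facade at the local Shmock stub instead of
        // the real REST backends
        API_BASE_URL: 'http://localhost:9000'
      }
    }
  });

  grunt.loadNpmTasks('grunt-env');
  // Set the test environment before running the specs
  grunt.registerTask('test', ['env:test', 'jasmine_node']);
};
```

The point of the design is that the application code reads its backend base URL from configuration, so swapping the real APIs for Shmock is a one-line environment change rather than a code change.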

We created a util file to include in our tests that decorated our mock object like:

mock.paths =
    search: () -> mock.get "/search"


This centralized our paths in one utility file and allowed us to do the following in our Jasmine beforeEach:

mock.paths.search()


Shmock has query and reply methods to match responses to requests so we could now do our setup like this:

mock.paths.search()
    .query({
        postCode: 4000  
    })  
    .reply(200, someResponse)


So for a GET on the search API with a parameter of postCode equal to 4000, we would return an HTTP 200 code and the contents of 'someResponse'. This was better, as we could now see in the test file what conditions we were going to match on. But the 'someResponse' variable was still pretty messy and soon became repetitive. However, as we were now running against NodeJS, we suddenly had the library of NPM packages available to us. There are quite a few NodeJS packages that let you quickly create test fixtures. We settled on Rosie because it is functionally quite complete and has an active community. Then our setup code became:

mock.paths.search()
    .query({
        postCode: 2000
    })
    .reply(200, Factory.build('company', { name: 'ACME' }))


Here we were able to override attributes as we saw fit (in this case we set the name). This meant we were much better protected from changes in our large JSON responses, while being very clear about what we were actually testing.
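Rosie's own API defines a named factory and chains attribute defaults onto it (along the lines of `Factory.define('company').attr('name', ...)`). As a rough illustration of the defaults-plus-overrides pattern, here is a hand-rolled plain-JavaScript sketch with hypothetical attributes, not Rosie's implementation:

```javascript
// Hand-rolled sketch of the factory pattern a fixture library provides:
// named factories with sensible defaults, overridden per test.
var definitions = {};

var Factory = {
  define: function (name, defaults) {
    definitions[name] = defaults;
    return Factory;
  },
  build: function (name, overrides) {
    var result = {};
    var defaults = definitions[name] || {};
    // Start from the defaults, then layer the per-test overrides on top
    Object.keys(defaults).forEach(function (k) { result[k] = defaults[k]; });
    Object.keys(overrides || {}).forEach(function (k) { result[k] = overrides[k]; });
    return result;
  }
};

Factory.define('company', { name: 'Default Co', location: 'Sydney' });

// Company with sensible defaults, just moving the location to Melbourne
var melbourneCo = Factory.build('company', { location: 'Melbourne' });
// { name: 'Default Co', location: 'Melbourne' }
```

This is exactly the "Company 'x' with sensible defaults" shape we were after: the test names only the attribute it cares about, and a change to the response structure is applied once, in the factory definition.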

What we learnt

An AngularJS/NodeJS stack presents some slightly different challenges for doing BDD. For anything significant, it really doesn't scale, as is, for automated Feature Tests. That said, the great strength of this stack is its diverse and vibrant community. With a little investigation and some tweaking, you can write a full-stack JavaScript web application with very expressive and powerful specs.