Dynamically generating Mocha tests

January 30, 2017

My first project at OW Labs was the development of World Energy Council’s trilemma index tool. This tool’s main purpose is to enable users to see how countries worldwide rank against three variables: energy security, energy equity and environmental sustainability. If you would like to know more about these variables, go ahead and check out the trilemma index tool above.

Our scenarios engine is a complex script that allows trilemma tool users to see how a certain country’s rank would change if, let’s say, its energy equity score were to increase. You can play around with our scenarios engine on the pathway-calculator page of the tool.

Before the actual app development started, I was tasked with coming up with a way for our data engineers to validate the scenarios engine against real data.

I decided to go ahead with the design of automated Mocha tests on Node.js, a JavaScript runtime. The Mocha Test Suite would have to pick data from a database (MongoDB), feed it to the scenarios engine, and compare the results to the data in an Excel file. The data in the Excel file was generated by an Excel model, so comparing the two sets of results would ensure that the calculations were identical.

Each country is ranked based on 35 metrics. With 130 countries and 3 years of data, that works out to roughly 13,000 tests (35 × 130 × 3 = 13,650). The overall process is described in Figure 1.

Figure 1: Mocha Test Suite Sequence

A script would have to be written to generate all these single-metric comparisons and then report on them.

The test suite I wrote ended up being composed of two small scripts, under 100 lines of code in total: a simple setup script and a dynamic test generation script.

When I started writing the tests, I thought this would be an easy task with very little to think about. However, as you will see, the asynchronous nature of JavaScript left me wondering why my dynamically generated tests did not even come close to being executed and producing results.

To establish a test template, I first wrote a manual test for a simple metric comparison involving database access and Excel data deserialisation.
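The original manual test is not reproduced here, but a minimal sketch of that kind of test might look like the following. The database name, collection, field names and CSV file are all hypothetical, and it assumes the MongoDB Node.js driver plus papaparse for deserialising the Excel-exported CSV:

    // A sketch of a single manual metric-comparison test, not the original code.
    const assert = require('assert');
    const fs = require('fs');
    const path = require('path');
    const Papa = require('papaparse');
    const { MongoClient } = require('mongodb');

    describe('single metric comparison', function () {
      it('matches the Excel value for one metric', function (done) {
        // Connect to a local MongoDB instance (hypothetical URL and db name).
        MongoClient.connect('mongodb://localhost:27017', function (err, client) {
          if (err) return done(err);

          client
            .db('trilemma')
            .collection('metrics')
            .findOne({ country: 'Portugal', year: 2016, metric: 'energyEquity' })
            .then(function (engineMetric) {
              // Deserialise the CSV exported from the Excel model into JSON rows.
              const csv = fs.readFileSync(path.join(__dirname, 'expected-metrics.csv'), 'utf8');
              const rows = Papa.parse(csv, { header: true, dynamicTyping: true }).data;
              const expected = rows.find(function (row) {
                return row.country === 'Portugal' && row.year === 2016 && row.metric === 'energyEquity';
              });

              assert.strictEqual(engineMetric.value, expected.value);
              client.close();
              done();
            })
            .catch(done);
        });
      });
    });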

Issue: On this initial attempt, Mocha timed out because it would not wait for the Mongo connection to resolve:

Mocha Timeout

Solution: Resolving this issue was a simple matter of adding a timeout flag and setting it to 15 seconds on the Mocha command line.

> node ./node_modules/mocha/bin/mocha --timeout=15000 test/data
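As an aside (not covered in the original post), Mocha also lets you raise the timeout from inside the suite itself via this.timeout(), which avoids having to remember the flag. A minimal sketch:

    describe('scenarios engine validation', function () {
      // Give slow setup work, such as the Mongo connection, up to 15 seconds.
      // Note: this needs a regular function, not an arrow function, so Mocha can bind `this`.
      this.timeout(15000);

      // ... tests go here
    });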

Dynamically creating the tests

Having made sure the basics were working, I proceeded to write the test generation code, iterating over both the engine-generated metrics and the Excel metrics and asserting that each pair of metrics was identical.

  • For array iteration, I used the lodash module, which makes it easy to manipulate arrays.
  • To handle the Excel data, I used fs and path, which enable file loading, and papaparse, which makes it easy to transform CSV data into JSON objects.

Here’s my initial dynamic tests script:

Mocha Dynamic Tests Creation
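That script is shown in the image above; as a rough, hedged reconstruction of its shape (the database, collection, file and field names below are hypothetical), the generation loop looked something like this. Note that the it() calls are created inside the Mongo callback, and none of the tests Mocha can see at load time takes a done callback:

    // A sketch of the initial (non-working) dynamic test generation, not the original script.
    const assert = require('assert');
    const fs = require('fs');
    const path = require('path');
    const _ = require('lodash');
    const Papa = require('papaparse');
    const { MongoClient } = require('mongodb');

    // Expected values exported from the Excel model (hypothetical file name).
    const csv = fs.readFileSync(path.join(__dirname, 'expected-metrics.csv'), 'utf8');
    const excelMetrics = Papa.parse(csv, { header: true, dynamicTyping: true }).data;

    describe('scenarios engine vs Excel model', function () {
      MongoClient.connect('mongodb://localhost:27017', function (err, client) {
        if (err) throw err;

        client.db('trilemma').collection('metrics').find({}).toArray(function (err, engineMetrics) {
          if (err) throw err;

          // One it() per metric: 35 metrics x 130 countries x 3 years ≈ 13,000 tests.
          _.forEach(engineMetrics, function (engineMetric) {
            const title = engineMetric.country + ' / ' + engineMetric.year + ' / ' + engineMetric.metric;

            it(title, function () {
              const excelMetric = _.find(excelMetrics, {
                country: engineMetric.country,
                year: engineMetric.year,
                metric: engineMetric.metric,
              });
              assert.strictEqual(engineMetric.value, excelMetric.value);
            });
          });

          client.close();
        });
      });
    });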

Issue: To my surprise, nothing would execute. The only console output was the following:

Absence of Results

It appears that Mocha was not receiving any indication that these tests would have to be performed asynchronously: at program start, none of the it() test cases were passed a done callback, which would have signalled that an asynchronous test was to be expected.

Solution: To solve this issue I simply added a manual async test in a separate test case, declared before all the remaining dynamically generated tests get executed:

Manual Async Test
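The actual manual test appears in the image above; sketched under the same assumptions as before, the idea is simply to declare one ordinary asynchronous test, with a done callback, ahead of the generation code:

    // Sketch of the workaround, not the original code: a hand-written async test
    // whose done callback tells Mocha up front that asynchronous work is coming.
    const { MongoClient } = require('mongodb');

    describe('scenarios engine vs Excel model', function () {
      it('connects to MongoDB', function (done) {
        MongoClient.connect('mongodb://localhost:27017', function (err, client) {
          if (err) return done(err);
          client.close();
          done();
        });
      });

      // ... the dynamically generated it() calls from the previous sketch follow here.
    });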

And voilà, all 13,000 tests were successfully executed. The results give valuable feedback to the data engineers for fine-tuning the scenarios engine.

13,000 Tests Generated

Conclusion

In this short article I have shown how we can dynamically generate Mocha tests.

One obstacle encountered was that Mocha did not wait for the async tests to be executed because, at the start of its run, there were no signs of asynchrony. We easily fixed this by declaring a manual async test ahead of the script that ultimately creates these tests on the fly.

I hope this will help you whenever you need to write a large number of tests that perform simple checks.

Resources

The two scripts I created for this test suite can be found here.
