My first project at OW Labs was the development of the World Energy Council’s trilemma index tool. The tool’s main purpose is to let users see how countries worldwide rank against three variables: energy security, energy equity and environmental sustainability. If you would like to know more about these variables, check out the trilemma index tool above.
Our scenarios engine is a complex script that lets trilemma tool users see how a country’s rank would change if, say, its energy equity score were to increase. You can play around with the scenarios engine on the pathway-calculator page of the tool.
Before the actual app development started, I was tasked with coming up with a way for data engineers to validate our scenarios engine against real data.
Each country is ranked on 35 metrics. With 130 countries and 3 years of data, that comes to roughly 13,000 tests (35 × 130 × 3 = 13,650). The overall process is described in Figure 1.
A script would have to be written to generate all these single-metric comparisons and then report on them.
The test suite I wrote ended up being composed of two small scripts – under 100 lines of code in total. The first was a simple setup script and the second a dynamic test generation script.
As I started writing the tests, I thought this would be an easy task with little to think about. However, as you will see, the asynchronous nature of JS left me wondering why my dynamically generated tests did not even come close to being executed and producing results.
To establish a test template, I first wrote a manual test for a simple metric comparison, including db access and Excel data deserialisation.
Issue: On this initial attempt, mocha timed out rather than waiting for the Mongo connection to resolve:
Solution: Resolving this issue was a simple matter of adding a timeout flag and setting it to 15 seconds on the mocha command line.
> node ./node_modules/mocha/bin/mocha --timeout=15000 test/data
Dynamically creating the tests
Having made sure the right premises were in place to have at least the basics working, I proceeded to write the generated tests, iterating over both the engine-generated metrics and the Excel metrics and asserting that each pair was identical.
- For array iteration, I used the lodash module that enables easy manipulation of arrays.
- To handle the Excel files, I used fs and path, which enable file loading, and papaparse, which makes it easy to transform CSV data into JSON objects.
Here’s my initial dynamic tests script:
Issue: To my surprise, nothing would execute. The only console output was mocha reporting zero passing tests.
It appears that mocha was receiving no indication that these tests would have to be performed asynchronously: at program start, none of the it() test cases had been passed a done callback signalling that an asynchronous test was to be expected.
Solution: To solve this issue, I simply added a manual async test in a separate test case, to run before all the remaining dynamically generated tests.
And voilà, all 13,000 tests executed successfully. The results give the data engineers valuable feedback for fine-tuning the scenarios engine.
In this short article I have shown how we can dynamically generate mocha tests.
One obstacle encountered was that mocha did not wait for the async tests to execute because, at the start of its run, there were no signs of asynchrony. We fixed this easily by declaring a manual async test ahead of the script that creates the remaining tests on the fly.
I hope this helps whenever you need to write a large number of tests performing simple checks.
The two scripts I created for this test suite can be found here.