Creating JavaScript tests

There are two large SpiderMonkey test suites: jstests (in js/src/tests) and jit-tests (in js/src/jit-test). See Running Automated JavaScript Tests for details.

In which test suite does your new test belong?

  1. jit-tests are intended to test the implementation of the JIT. Only add tests to this suite that exercise JIT correctness or functionality.
  2. jstests is intended for tests of language-visible functionality. Please put tests of functionality into jstests even if related tests are in jit-tests, since jstests are closer to (and more easily converted to) test262 tests. (In fact, the test262 test suite is run as part of jstests.)

Practical differences between the two test suites:

jstest

  1. New jstest files should be placed in the appropriate subdirectory of js/src/tests/non262/, or, in some cases, contributed directly to the test262 repository.
  2. jstests run in both the shell and the browser (although you can specify that the test should be run in only one of the two locations).
  3. jstests automatically load js/src/tests/shell.js before they run, which defines a large set of helper functions (see the example after this list).
  4. Read more advice on jstests here.
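
As a concrete illustration, here is a minimal sketch of a new jstest file. The path, file name, and tested expressions are made up for this example; the trailing reportCompare call is a convention you will see in many existing non262 tests, using a function provided by the automatically loaded shell.js.

    // js/src/tests/non262/Array/hypothetical-feature.js (hypothetical path)

    // Test a single, language-visible behaviour with plain assertions.
    var arr = [1, 2, 3];
    assertEq(arr.length, 3);
    assertEq(arr.indexOf(2), 1);

    // Many existing jstests end by reporting success to the harness.
    if (typeof reportCompare === "function")
      reportCompare(0, 0, "ok");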

jit-test

  1. New jit-test files should be put in js/src/jit-test/tests/basic or one of the other appropriate subdirectories of jit-test/tests (see the example after this list).
  2. jit-tests run only in the shell.
  3. jit-tests do not load extra test functionality automatically.
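
For comparison, here is a minimal sketch of a jit-test file. jit-tests may also carry an optional first-line directive comment of the form // |jit-test| ... to pass options to the harness; this sketch omits it, and its path and contents are illustrative assumptions only.

    // js/src/jit-test/tests/basic/hypothetical-loop.js (hypothetical path)

    // Run the function hot enough that the JITs are likely to compile it,
    // then check that the result is still correct.
    function add(a, b) {
      return a + b;
    }
    var sum = 0;
    for (var i = 0; i < 10000; i++)
      sum = add(sum, 1);
    assertEq(sum, 10000);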

Writing a new test file

Have a look at the existing files and follow what they do. All tests, in both suites, can use the assertEq function.

assertEq(v1, v2[, message])

Check that v1 and v2 are the same value. If they're not, throw an exception (which will cause the test to fail).
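
For example, in either suite (the expressions below are arbitrary illustrations):

    assertEq(1 + 1, 2);                                   // passes silently
    assertEq(Math.max(1, 2), 2, "max picks the larger value");
    // assertEq("1", 1) would throw, because the values are not the same.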

If you are writing a jstest, additional testing functionality is provided for you by the shell.js files. You can read about them here.
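
As one example, the shell.js files define helpers such as assertThrowsInstanceOf, which checks that a callback throws a particular kind of exception. The exact set of helpers depends on your tree, so treat this sketch as illustrative rather than authoritative:

    // Check that evaluating the callback throws the expected exception type.
    assertThrowsInstanceOf(() => {
      null.foo;  // reading a property of null throws a TypeError
    }, TypeError);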

Performance testing and general advice

Do not attempt to test the performance of engine features in the test suite.

Please keep in mind that the JavaScript test suite is run on a wide variety of wildly varying hardware platforms, from phones all the way up to servers. Even tests that check for polynomial time complexity will start to fail in a few years, once hardware has sped up enough that the measured work runs faster than the granularity of the OS scheduler, or when run on platforms with higher latencies than your development workstation. These tests will also show up as infrequent oranges (intermittent failures) on our heavily loaded test machines, lowering the value of our test suite for everyone. Just don't do it; it's never worth it.

Do not add performance tests to the test suite.

It is not generally even possible to tell if the speed of any particular feature is going to be important in the real world without running a real-world benchmark. It is very hard to write a good real-world benchmark. For this reason, the best place to find out if a change is performance sensitive is on arewefastyet.com.

Focus on writing fast, light tests that cover a single feature. There is basically no cost to adding a new test, so add as many feature tests as needed to cover each feature orthogonally. Remember that whenever a test fails, someone -- probably you -- is going to have to figure out what went wrong.
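
For instance, rather than one sprawling file that touches many features, prefer several tiny files that each pin down one behaviour (the file names below are made up):

    // non262/String/trimStart-basic.js (hypothetical)
    assertEq("  x".trimStart(), "x");

    // non262/String/trimEnd-basic.js (hypothetical)
    assertEq("x  ".trimEnd(), "x");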

Testing your test

Run your new test locally before checking it in (or posting it for review). Nobody likes patches that include failing tests!

See Running Automated JavaScript Tests for instructions on how to run jstests or jit-tests.

It's also a good sanity check to run each new test against an unpatched shell or browser: if the test is written correctly, it should fail there, because the bug or missing feature it covers is still present.

Checking in completed tests

Tests are usually reviewed and pushed just like any other code change. Just include the test in your patch.

Security-sensitive tests should not be committed until the corresponding bug has been made public. Instead, ask a SpiderMonkey peer how to proceed.

It is OK under certain circumstances to push new tests to certain repositories without a code review. Don't do this unless you know what you're doing. Ask a SpiderMonkey peer for details.