Creating JavaScript jstest reftests

The test files under the js/src/tests directory are run by the jstest harness in the shell and by the reftest harness in the browser. This directory contains tests of SpiderMonkey's conformance to ECMAScript as well as of SpiderMonkey's non-standard extensions to ECMAScript.

Directory Overview

In the js/src/tests directory, there are a few important subdirectories.

non262 tests

The directory js/src/tests/non262/ should contain all tests of the following type:

  • Regression tests for SpiderMonkey bugs
  • Non-standard SpiderMonkey extensions to the JavaScript language
  • Tests of "implementation-defined" details of the ECMAScript Standard

You should not add tests of the following type:

  • Tests of the JIT
    • Tests of JIT correctness belong in the jit-test suite; read more here.
  • Performance tests or stress tests
  • Tests of SpiderMonkey's conformance to the ECMAScript Standard

A brief history: In 2017, SpiderMonkey started consuming Test262, a comprehensive test suite for ECMAScript implementations. Before using Test262, SpiderMonkey had a fair number of internal tests of conformance to ECMAScript, and many of those tests remain in the js/src/tests/non262 directory as regression tests.

test262 tests

Test262 is the implementation conformance test suite for the latest drafts of the ECMAScript Language Specification, the ECMAScript Internationalization API Specification, and The JSON Data Interchange Format. It is maintained by TC39, the ECMAScript Standard's technical committee. Mozilla manually imports these tests into the js/src/tests/test262 directory.

You should contribute directly to Test262 in the following scenarios:

  • You are writing tests for the implementation of a new feature or feature proposal for ECMAScript.
    • If tests for the new feature proposal do not yet exist in Test262 (they might!), contributing to Test262 will allow all other JavaScript implementors to use your tests as well.
  • You would like to contribute tests of an ECMAScript, Intl or JSON feature insufficiently covered by Test262.

At the time of this writing, Mozilla imports Test262 tests into the directory js/src/tests/test262. When importing Test262, each test file's in-file metadata is translated from the Test262 format into a format readable by the jstest harness. If you are contributing directly to Test262, you must submit the tests in the Test262 format, which you can see in the Test262 git repository and read about here.
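
For orientation, a Test262-format test typically consists of a copyright header, a YAML metadata block, and the test body, which uses Test262's own assertion helpers such as assert.sameValue. The sketch below is illustrative only; the esid, description, and feature values are placeholders rather than copies of a real test:

// Copyright (C) <year> <author>. All rights reserved.
// This code is governed by the BSD license found in the LICENSE file.
/*---
esid: sec-array.prototype.includes
description: Array.prototype.includes finds an element present in the array
features: [Array.prototype.includes]
---*/

assert.sameValue([1, 2, 3].includes(2), true);
assert.sameValue([1, 2, 3].includes(4), false);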

An update script exists in the js/src/tests directory to fetch and reformat the Test262 suite; see test262-update.py.

Writing a new test file

Please read the high-level advice for test writing in the parent article here.

jstests have one special requirement:

  • The call to reportCompare in every jstest is required by the test harness. Except in old tests or unusually structured new tests, it should be the last line of the test, as in the sketch below.
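
As a rough sketch (the computation and description string are purely illustrative), a minimal jstest looks like this:

var expected = 4;
var actual = 2 + 2;

// The trailing reportCompare call is what the harness looks for.
reportCompare(expected, actual, '2 + 2 should evaluate to 4');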

Comparison functions and shared test functionality

The jstest runner loads the code in js/src/tests/shell.js for every test. Additionally, it loads every shell.js and browser.js file in the subdirectories on the path from js/src/tests to the location of your test. If several tests in a given folder share functionality, you can add that functionality to the shell.js or browser.js file in that directory.
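
For example (the helper name and directory below are hypothetical, not part of the actual tree), a shell.js placed alongside your tests could define a helper that every test in that directory can then call without defining or importing it:

// js/src/tests/non262/example/shell.js (hypothetical directory)
// Shared helper available to every test in this directory.
function assertThrowsTypeError(f, message) {
    var threw = false;
    try {
        f();
    } catch (e) {
        threw = e instanceof TypeError;
    }
    assertEq(threw, true, message);
}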

assertEq

assertEq(v1, v2[, message]) checks that v1 and v2 are the same value. If they are not, it throws an exception (which will cause the test to fail).
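
For example (the values and messages are illustrative):

var arr = ['a', 'b', 'c'];
assertEq(arr.length, 3, 'array should have three elements');
assertEq(arr[0], 'a', 'first element should be "a"');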

reportCompare

reportCompare(expected, actual, description) is somewhat like assertEq(actual, expected, description) except that the first two arguments are swapped, failures are reported via stdout rather than by throwing exceptions, and the matching is fuzzy in an unspecified way. For example, reportCompare sometimes considers numbers to be the same if they are "close enough" to each other, even if the == operator would return false.

var expected = 3;
var actual   = 1 + 2;
reportCompare(expected, actual, '3==1+2');

compareSource

compareSource(expected, actual, description) tests whether the decompilation of a JavaScript object (its conversion back to source code) matches an expected value. Note that tests which use compareSource should be located in the decompilation sub-suite of a suite. For example, to test the decompilation of a simple function you could write:

var f      = (function () { return 1; });
var expect = 'function () { return 1; }';
var actual = f + '';
compareSource(expect, actual, 'decompile simple function');

Handling shell or browser specific features

jstests run both in the browser and in the JavaScript shell.

If your test needs to use browser-specific features, either:

  • make the test silently pass if those features aren't present; or
  • write a mochitest instead (preferred); or
  • at the top of the test, add the comment // skip-if(xulRuntime.shell), so that it only runs in the browser.

If your test needs to use shell-specific features, like gc(), either:

  • make the test silently pass if those features aren't present; or
  • make it a jit-test (so that it never runs in the browser); or
  • at the top of the test, add the comment // skip-if(!xulRuntime.shell), so that it only runs in the shell.
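
For instance, using the comment form described above (the test body here is illustrative), a shell-only test might look like this:

// skip-if(!xulRuntime.shell)
// This test uses the shell-only gc() function, so it never runs in the browser.
var arr = [];
arr[10000] = 'item';
gc();
assertEq(arr[10000], 'item', 'gc must not wipe out sparse array elements');
reportCompare(0, 0, 'ok');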

It is easy to make a test silently pass; anyone who has written JS code for the Web has written this kind of if-statement:

if (typeof gc === 'function') {
    var arr = [];
    arr[10000] = 'item';
    gc();
    assertEq(arr[10000], 'item', 'gc must not wipe out sparse array elements');
} else {
    print('Test skipped: no gc function');
}
reportCompare(0, 0, 'ok');

Handling abnormal test terminations

Some tests can terminate abnormally even though the test has technically passed. Earlier we discussed the deprecated approach of using the -n naming scheme to identify tests whose PASSED/FAILED status is flipped by the post-test processing code in jsDriver.pl and post-process-logs.pl. A different approach is to use the expectExitCode(exitcode) function, which outputs a string:

--- NOTE: IN THIS TESTCASE, WE EXPECT EXIT CODE <exitcode> ---

that tells the post-processing scripts jsDriver.pl or post-process-logs.pl that the test passes if the shell or browser terminates with that exit code. Multiple calls to expectExitCode will tell the post-processing scripts that the test actually passed if any of the exit codes are found when the test terminates.
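
As a sketch, assuming (as noted below) that an uncaught exception terminates the JavaScript shell with exit code 3, a test that is expected to exit abnormally might look like this; the trailing reportCompare is omitted because it would be unreachable:

// Tell the post-processing scripts that exit code 3 means this test passed.
expectExitCode(3);

// An uncaught exception terminates the shell with exit code 3.
throw new Error('deliberate uncaught exception');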

This approach has limited use, however. In the JavaScript shell, an uncaught exception or an out-of-memory error will terminate the shell with an exit code of 3. However, an uncaught error or exception will not cause the browser to terminate with a non-zero exit code. To make the situation even more complex, newer C++ compilers will abort the browser with a typical exit code of 5 by throwing a C++ exception when an out-of-memory error occurs. Simply testing the exit code does not allow you to distinguish among the variety of causes a particular abnormal exit may have.

In addition, some tests pass if they do not crash; however, they may not terminate unless killed by the test driver.

A modification will soon be made to the JavaScript tests to allow an arbitrary string to be output which will be used to post process the test logs to better determine if a test has passed regardless of its exit code.