Basic Concepts and Usage of Unittest
Test and Test Case
Tests are entities marked with the @Test macro and are executed during the testing process. There are two kinds of tests in the Cangjie unittest framework: test classes and test functions. Test functions are simpler: each function contains the complete code of a test run. Test classes are suitable for scenarios that need a deeper test structure or test life cycle behavior.
Each test class consists of several test cases, each marked with the @TestCase macro. Each test case is a function within the test class. The test from the previous section can be rewritten as a test class like this:
func add(a: Int64, b: Int64) {
    a + b
}

@Test
class AddTests {
    @TestCase
    func addTest() {
        @Expect(add(2, 3), 5)
    }

    @TestCase
    func addZero() {
        @Expect(add(2, 0), 2)
    }
}
A test function contains a single test case; in that form, the @TestCase macro is not required. Running this new test class with cjpm test generates the following output:
--------------------------------------------------------------------------------------------------
TP: example/example, time elapsed: 67369 ns, Result:
TCS: AddTests, time elapsed: 31828 ns, RESULT:
[ PASSED ] CASE: addTest (25650 ns)
[ PASSED ] CASE: addZero (4312 ns)
Summary: TOTAL: 2
PASSED: 2, SKIPPED: 0, ERROR: 0
FAILED: 0
--------------------------------------------------------------------------------------------------
cjpm test success
Assertion
Assertions are individual condition checks executed within the body of a test case function to determine whether the code is functioning properly. There are two types of assertions: @Expect and @Assert. Here is an example of a failing test that illustrates the difference:
func add(a: Int64, b: Int64) {
    a + b
}

@Test
func testAddIncorrect() {
    @Expect(add(3, 3), 5)
}
Running this test will fail and generate the following output (only relevant parts displayed):
TCS: TestCase_testAddIncorrect, time elapsed: 4236 ns, RESULT:
[ FAILED ] CASE: testAddIncorrect (3491 ns)
Expect Failed: `(add ( 3 , 3 ) == 5)`
left: 6
right: 5
In this case, replacing @Expect with @Assert would not change much. Add a second check and run the test again:
func add(a: Int64, b: Int64) {
    a + b
}

@Test
func testAddIncorrect() {
    @Expect(add(3, 3), 5)
    @Expect(add(5, 3), 9)
}
Running this test will fail and generate the following output (only relevant parts displayed):
TCS: TestCase_testAddIncorrect, time elapsed: 5058 ns, RESULT:
[ FAILED ] CASE: testAddIncorrect (4212 ns)
Expect Failed: `(add ( 3 , 3 ) == 5)`
left: 6
right: 5
Expect Failed: `(add ( 5 , 3 ) == 9)`
left: 8
right: 9
Both checks are reported in the output. However, if @Expect is replaced with @Assert:
func add(a: Int64, b: Int64) {
    a + b
}

@Test
func testAddIncorrectAssert() {
    @Assert(add(3, 3), 5)
    @Assert(add(5, 3), 9)
}
The output will be:
TCS: TestCase_testAddIncorrectAssert, time elapsed: 31653 ns, RESULT:
[ FAILED ] CASE: testAddIncorrectAssert (30893 ns)
Assert Failed: `(add ( 3 , 3 ) == 5)`
left: 6
right: 5
Here, only the first @Assert check fails, and no further checks are executed. This is because the @Assert macro follows a fail-fast mechanism: once the first assertion fails, the entire test case fails, and subsequent assertions are not evaluated. This matters in large tests with many assertions, especially in loops, because the user is notified of the first failure without waiting for the remaining checks to run.
Choosing between @Assert and @Expect depends on the complexity of the test scenario and on whether the fail-fast mechanism is required. The two assertion macros provided by unittest can be used in the following forms:
- Equality assertions: @Assert(a, b) or @Expect(a, b) checks whether the values of a and b are equal. If a is of type A and b is of type B, A must implement Equatable<B>.
- Boolean assertions: @Assert(c) or @Expect(c) takes a Bool parameter c and checks whether it is true or false.
The second form, @Assert(c), can be considered shorthand for @Assert(c, true).
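For instance, the following minimal sketch uses both forms on a plain String value (the checked string and values are illustrative assumptions):

@Test
func assertionForms() {
    let s = "cangjie"
    // Equality form: the two values are compared for equality.
    @Expect(s.size, 7)
    // Boolean form: shorthand for @Assert(s.size == 7, true).
    @Assert(s.size == 7)
}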
Failure Assertion
Failure assertions cause the test case to fail unconditionally. @Fail follows the fail-fast mechanism: executing this assertion fails the test case immediately, skipping all subsequent assertions. @FailExpect also causes the test case to fail, but subsequent assertions are still checked. The parameter of both macros is a string that describes the cause of the failure. The return type of @Fail is Nothing, and the return type of @FailExpect is Unit.
An example is as follows:
@Test
func validate_even_number_generator() {
    let even = generateRandomEven()
    if (even % 2 == 1) {
        @Fail("Not even number was generated: ${even}")
    }
}
The following error information is output:
[ FAILED ] CASE: validate_even_number_generator (54313 ns)
Assert Failed: `(Not even number was generated: 111)`
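When the test should keep running after a recorded failure, @FailExpect can be used instead. A minimal sketch, reusing the assumed generateRandomEven function from the example above:

@Test
func validate_many_even_numbers() {
    for (_ in 0..10) {
        let even = generateRandomEven()
        if (even % 2 == 1) {
            // Records a failure but continues with the remaining iterations.
            @FailExpect("Not even number was generated: ${even}")
        }
    }
}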
Expected Exception Assertion
If the expected exception type is not thrown at the assertion point, the test case fails. @AssertThrows stops further checks, whereas @ExpectThrows continues checking. The parameters of these macros include a list of expected exception types separated by |. If no type parameter is provided, the base class Exception is expected. The input parameter is an expression or a code block that is expected to throw the exception.
An example is as follows:
// No.1
@AssertThrows(throw Exception())

// Semantically equivalent to No.1
@AssertThrows[Exception](throw Exception())

@AssertThrows[IllegalStateException | NoneValueException](random.seed = 42u64)

@ExpectThrows[OutOfMemoryError](foo())

@ExpectThrows({
    foo()
    boo()
})

@ExpectThrows[OutOfMemoryError]({
    for (i in list) {
        foo(i)
    }
})
Returned Type of @AssertThrows
If no more than one exception is specified, the returned type matches the expected exception type.
let e: NoneValueException = @AssertThrows[NoneValueException](foo())
If more than one exception is specified, the return type is the least common supertype of the expected exception types.
// A <: C
// B <: C
let e: C = @AssertThrows[A | B](foo())
Returned Type of @ExpectThrows
@ExpectThrows continues execution after a failure. If at most one exception type is specified, the return type is Option<T>, where T is the expected exception type.
let e: ?NoneValueException = @ExpectThrows[NoneValueException](foo())
If more than one exception is specified, the return type is ?Exception:
let e: ?Exception = @ExpectThrows[NoneValueException | IllegalMemoryException](foo())
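Because these macros return the caught exception, it can be inspected with further assertions. A minimal sketch, assuming a foo function that throws NoneValueException with a non-empty message:

@Test
func inspectThrownException() {
    let e: NoneValueException = @AssertThrows[NoneValueException](foo())
    // The caught exception object is available for additional checks.
    @Expect(e.message.size > 0)
}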
Test Life Cycle
Test cases sometimes share setup or cleanup code. The test framework supports four life cycle steps, each configured with a corresponding macro. Life cycle steps can be specified only for @Test test classes, not for @Test top-level functions.
Macro | Life cycle |
---|---|
@BeforeAll | Runs before all test cases. |
@BeforeEach | Runs once before each test case. |
@AfterEach | Runs once after each test case. |
@AfterAll | Runs after all test cases are completed. |
These macros must be applied to member or static functions of a @Test test class. The @BeforeAll and @AfterAll functions cannot declare any parameters. The @BeforeEach and @AfterEach functions may declare a single String parameter (or none at all).
@Test
class FooTest {
    @BeforeAll
    func setup() {
        // Code to run before the test is executed
    }
}
Each macro can be applied to multiple functions within a single test class, and multiple life cycle macros can be configured on a single function. However, life cycle macros cannot be applied to functions marked with @TestCase or similar macros.
If multiple functions are marked with the same life cycle step, they are executed in the order in which they are declared in the code (from top to bottom).
The test framework ensures that:
- Steps marked as Before all are executed at least once before all test cases.
- For each test case TC in the test class: (1) steps marked as Before each are executed once before TC; (2) TC is executed; (3) steps marked as After each are executed once after TC.
- Steps marked as After all are executed after all test cases in the test class have completed.
Note:
If no test case is run, the above steps do not apply.
In simple scenarios, steps marked as Before all and After all are executed only once. However, there are exceptions:
- For a type-parameterized test, the steps marked as before/after all will run once for each combination of type parameters.
- If multiple test cases are executed in parallel in different processes, the steps marked as before/after all are executed once in each process.
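Putting the four steps together, here is a minimal sketch of a test class whose life cycle functions simply document when they run:

@Test
class LifecycleTest {
    @BeforeAll
    func openResource() {
        // Runs before all test cases in this class.
    }

    @BeforeEach
    func resetState() {
        // Runs once before each of case1 and case2.
    }

    @AfterEach
    func clearState() {
        // Runs once after each of case1 and case2.
    }

    @AfterAll
    func closeResource() {
        // Runs after all test cases have completed.
    }

    @TestCase
    func case1() {}

    @TestCase
    func case2() {}
}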
@BeforeEach and @AfterEach can access the name of the test case being set up or cleaned up by declaring a String parameter in the corresponding function.
@Test
class Foo {
    @BeforeEach
    func prepareData(testCaseName: String) {
        // The name of the test case function is passed as a parameter.
        // In this example, the name would be "bar".
    }

    @AfterEach
    func cleanup() {
        // Can be used without specifying a parameter.
    }

    @TestCase
    func bar() {}
}
When configuring the life cycle for a parameterized test or a parameterized performance test, note that steps marked as Before each or After each are executed only once for the whole test case or benchmark, not once per parameter. From the perspective of the life cycle, a test body that is executed multiple times with different parameters still counts as a single test case.
If each parameter of a parameterized test requires separate setup or cleanup, the corresponding code needs to be placed in the test case body itself, where the parameters are also accessible, as in the sketch below.
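A minimal sketch, assuming the @TestCase[x in ...] parameterized form:

@Test
class ParamTest {
    @BeforeEach
    func before() {
        // Runs only once before the whole parameterized case below,
        // not once per value of x.
    }

    @TestCase[x in [1, 2, 3]]
    func doubling(x: Int64) {
        // Per-parameter setup or cleanup belongs here, where x is accessible.
        @Expect(x + x, 2 * x)
    }
}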
Test Configuration
Additional configuration may be required for the more advanced features of the unit test framework. There are three ways to configure tests:
- Using the @Configure macro (see the sketch after this list)
- Using command-line arguments directly during test execution or with the cjpm test command
- Using a cjpm configuration file
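A minimal sketch of the first approach, assuming the @Configure[key: value] form and using the randomSeed option mentioned later in this section:

@Test
@Configure[randomSeed: 42]
func randomizedTest() {
    // Test code that depends on the configured random seed.
}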
Running Configuration
Usage
Run the test executable compiled by cjc, passing the desired options:
./test --bench --filter MyTest.*Test,-stringTest
--bench
By default, only the functions marked with @TestCase are executed. When --bench is used, only the cases qualified with the @Bench macro are executed.
--filter
To select a subset of tests by test class and test case, use --filter=<test class name>.<test case name>. For example:
- --filter=* : matches all test classes.
- --filter=*.* : matches all test cases of all test classes (same result as *).
- --filter=*.*Test,*.*case* : matches all test cases ending with Test or containing case in their names, in all test classes.
- --filter=MyTest*.*Test,*.*case*,-*.*myTest : matches all test cases ending with Test or containing case in their names in test classes starting with MyTest, but excludes test cases containing myTest.
In addition, --filter can be used either with or without the = sign.
--timeout-each=timeout
Using the --timeout-each=timeout option is equivalent to applying @Timeout[timeout] to all test classes. If @Timeout[timeout] is already specified in the code, the timeout specified in the code overrides the option value; that is, the timeout configured via the option has a lower priority than the one set in the code.
The timeout value must comply with the following syntax: number ('millis' | 's' | 'm' | 'h'). For example: 10s, 9millis, etc.
- millis: millisecond
- s: second
- m: minute
- h: hour
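For example, the following run fails any test case that takes longer than ten seconds:

./test --timeout-each=10s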
--parallel
The --parallel option allows the test framework to run different test classes in parallel in multiple separate processes. Test classes should be independent of each other and must not rely on shared mutable state. Static initialization of the program may occur multiple times. This option cannot be used together with --bench, because performance test cases are sensitive to underlying resources, so running them in parallel can affect their results.
- --parallel=<BOOL> : <BOOL> can be true or false. If it is true, test classes can run in parallel, with the number of parallel processes controlled by the number of CPU cores on the running system. In addition, --parallel may be used without =true.
- --parallel=nCores : specifies that the number of parallel test processes must be equal to the number of available CPU cores.
- --parallel=NUMBER : specifies the number of parallel test processes. The value must be a positive integer.
- --parallel=NUMBERnCores : specifies the number of parallel test processes as a multiple of the number of available CPU cores. The value must be a positive number (floating point number or integer).
--option=value
Any option provided in the --option=value format that is not listed above is processed and converted into a configuration parameter according to the following rules (similar to how the @Configure macro processes parameters) and applied in sequence:
option and value are custom key-value pairs for runtime configuration. option can consist of any English letters connected by hyphens (-) and is converted to lower camel case when transformed into a @Configure parameter. The rules for formatting value are as follows:
Note: The validity of option and value is not currently checked, and these options have a lower priority than parameters set in the code using @Configure.
- If the =value part is omitted, the option is treated as the Bool value true. For example, --no-color generates the configuration entry noColor = true.
- If value is strictly true or false, the option is treated as a Bool value with the corresponding meaning: --no-color=false generates the configuration entry noColor = false.
- If value is a valid decimal integer, the option is treated as an Int64 value. For example, --random-seed=42 generates the configuration entry randomSeed = 42.
- If value is a valid decimal fraction, the option is treated as a Float64 value. For example, --angle=42.0 generates the configuration entry angle = 42.0.
- If value is a string literal enclosed in quotes ("), the option is treated as a String, and the value is generated by decoding the string between the quotes, with escape symbols such as \n, \t, and \" handled as the corresponding characters. For example, --mode="ABC \"2\"" generates the configuration entry mode = "ABC \"2\"".
- In all other cases, value is treated as a String, with the value taken exactly as provided in the option. For example, --mode=ABC23[1,2,3] generates the configuration entry mode = "ABC23[1,2,3]".
--report-path=path
This option specifies the directory in which the test report is generated after execution. If the option is not explicitly specified, no report is generated.
--report-format=value
This option specifies the format of the report generated after test execution.
Currently, unit tests support only the default XML format.
For benchmark tests, the following formats are supported:

- csv : the CSV report contains statistical data.
- csv-raw : the CSV-raw report contains only the raw batch measurements.

The default format for benchmark tests is csv.
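For example, a benchmark run that writes a raw CSV report into a reports directory could look like this:

./test --bench --report-path=reports --report-format=csv-raw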