To run tests and generate snapshots, run `npm install` and `npm test <path-to-linter-subdir>`.
We ask that all new linter definitions in this repository add some basic testing. This should be a straightforward process with minimal overhead, but let us know if you need help! Please start by following the instructions below:
Please create a directory structure in your linter/formatter definition analogous to the following:
```text
linters/
└─ my-linter/
   ├─ plugin.yaml
   ├─ my_linter.test.ts
   ├─ README.md (optional)
   ├─ my-config.json (optional)
   └─ test_data/
      └─ basic.in.py (with appropriate extension)
```
- Specify a `README.md` if your linter integration requires additional explanation or configuration.
- Specify a `my-config.json` (or whatever `direct_configs` item applies) ONLY if providing this config file is sufficient to enable your linter in ALL cases. This file will be created whenever someone enables your linter.
- Specify a TypeScript test file that calls `linterCheckTest` or `linterFmtTest` with the name of your linter and (optionally) the prefixes of your input files and any special callbacks.
- Inside of `test_data/`, provide at least one input file.
  - For linters, specify a sample input file (with an appropriate file extension). For reference, the tests will run the following command against your input file: `trunk check <input-file> --force --filter=<my-linter> --output=json`
  - For formatters, specify a sample input file (with an appropriate file extension). For reference, the tests will essentially run the following command against your input file: `trunk fmt <input-file> --force --filter=<my-linter>`
Refer to `sqlfluff` or `pragma-once` as testing examples.
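For orientation, a minimal test file might look like the sketch below. This is illustrative only: the import path and exact option names are assumptions, so copy them from an existing test such as `sqlfluff`'s rather than from this snippet.

```typescript
// my_linter.test.ts — hypothetical skeleton; the "tests" import path and the
// options object shape should be copied from an existing test in this repo.
import { linterCheckTest } from "tests";

// Runs `trunk check` against each test_data/<prefix>.in.<ext> input for this
// linter and compares the JSON output to the stored snapshots.
linterCheckTest({ linterName: "my-linter" });
```

For a formatter, the same skeleton would call `linterFmtTest` instead.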
To run all tests, run `npm install` and then run `npm test`. To run an individual test, run `npm test <path-to-linter-subdir>`. Then verify that the generated snapshot file includes the results you would expect (e.g. an Object with several `fileIssues` and no `taskFailures`).
For context, the general test execution is as follows:

- Create a sandbox testing directory by copying a linter's subdirectory and its `test_data`.
- Initialize a base `.trunk/trunk.yaml` in the sandbox with the version specified in the repository's `.trunk/trunk.yaml`.
- Run `trunk check enable <linter>`.
- Run `trunk check` or `trunk fmt` on files with the `<name>.in.<extension>` syntax.
The first time a test runs, it will attempt to run against a linter's `known_good_version`. This snapshot mirrors the behavior in CI and is used to validate that a linter runs as expected across multiple versions. Subsequent test runs will only run against its latest version unless otherwise specified (see Environment Overrides).
If this causes the test to fail when run with the latest version, it is most likely because there are discrepancies in the linter output across versions. Rather than running `npm test -- -u`, run `PLUGINS_TEST_UPDATE_SNAPSHOTS=true npm test <path-to-failing-test>`. This creates an additional snapshot for the latest version, which is used to track historical test behavior and ensure compatibility with trunk across multiple linter versions.
If you need to run tests for all the existing snapshots, run `PLUGINS_TEST_LINTER_VERSION=Snapshots npm test`.
The process of resolving snapshots for asserting output correctness is as follows:
- If the linter being tested has no version (e.g. `pragma-once`), the same snapshot is used in all cases.
- If `PLUGINS_TEST_UPDATE_SNAPSHOTS` is truthy, the enabled version of the linter is used; if a snapshot with this version does not exist, a new snapshot is created.
- Otherwise, use the most recent snapshot version that precedes the enabled version of the linter. If no such snapshot exists, a new snapshot is created with the enabled version of the linter (use debug logging to see which version was enabled).
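The resolution rules above can be sketched as a small function. This is a simplified illustration, not the actual driver code: the function and parameter names are invented here, "precedes" is assumed to include an exact version match, and versions are compared as plain dotted numeric strings.

```typescript
// Simplified sketch of the snapshot-resolution rules described above.
// Compares versions as dotted numeric strings, e.g. "1.4.10" vs "1.12.0".
function compareVersions(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (diff !== 0) return diff;
  }
  return 0;
}

function resolveSnapshotVersion(
  existingSnapshots: string[], // versions that already have snapshot files
  enabledVersion: string | undefined, // version the sandbox actually enabled
  updateSnapshots: boolean, // PLUGINS_TEST_UPDATE_SNAPSHOTS
): string {
  // Unversioned linters (e.g. pragma-once) share a single snapshot.
  if (enabledVersion === undefined) return "unversioned";

  // PLUGINS_TEST_UPDATE_SNAPSHOTS: use (or create) the enabled version's snapshot.
  if (updateSnapshots) return enabledVersion;

  // Otherwise: most recent snapshot at or before the enabled version;
  // if none exists, a new snapshot is created at the enabled version.
  const candidates = existingSnapshots
    .filter((v) => compareVersions(v, enabledVersion) <= 0)
    .sort(compareVersions);
  return candidates.length > 0 ? candidates[candidates.length - 1] : enabledVersion;
}
```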
The reasoning for this setup is threefold:
- Linters occasionally update their arguments or outputs, which can lead to different trunk output. We want to be aware of these changes, particularly if they require trunk to accept an entirely different output format.
- We want to ensure we can support older versions of linters when possible. Thus, when changes are introduced, set `PLUGINS_TEST_UPDATE_SNAPSHOTS` rather than running with the `-u` flag; this preserves the older snapshots.
- We don't want to require a snapshot for every version of every linter. That would be overkill, pollute the test data, and cause friction with in-progress PRs when new linter versions are released. Therefore, by default we resolve to the most recent snapshot version and assume that its output will match, unless otherwise specified.
Trunk is compatible with Linux and macOS, and is in beta on Windows. If your linter only runs on certain operating systems, refer to the example of `stringslint` to skip OS-dependent test runs.
`linterCheckTest` or `linterFmtTest` should be sufficient for most linters and formatters. If your test requires additional setup, follow the example of `preCheck` in `sqlfluff_test.ts`.
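As a rough sketch, a setup callback might look like the following. All names here are assumptions for illustration: the import path, the driver type, and the `copyFileFromRoot` helper should be verified against the actual `preCheck` usage in `sqlfluff_test.ts` before copying.

```typescript
// Hypothetical sketch of a test with extra sandbox setup; verify the callback
// signature and driver helpers against sqlfluff_test.ts in this repo.
import { linterCheckTest } from "tests";

// Copy an extra config file into the sandbox before trunk check runs
// (copyFileFromRoot is assumed here for illustration).
const preCheck = (driver: any) => {
  driver.copyFileFromRoot(".sqlfluff");
};

linterCheckTest({ linterName: "sqlfluff", preCheck });
```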
Additional configuration can be passed by prepending `npm test` with environment variables. Options include:

- `PLUGINS_TEST_CLI_VERSION` replaces the repo-wide `trunk.yaml`'s specified cli-version.
- `PLUGINS_TEST_CLI_PATH` specifies an alternative path to a trunk launcher.
- `PLUGINS_TEST_LINTER_VERSION` specifies a linter version semantic (KnownGoodVersion | Latest | Snapshots | version). Latest is the default.
- `PLUGINS_TEST_UPDATE_SNAPSHOTS`, if `true`, tells tests to use an exact match of the linter version when checking the output. Only set this if a linter has introduced an output variation with a version change.
- `SANDBOX_DEBUG`, if `true`, prevents sandbox test directories from being deleted and logs their paths for additional debugging.
PRs will run 5 types of tests across all platforms, as applicable:

- Enable and test all linters with their `known_good_version`, if applicable. To replicate this behavior, run `PLUGINS_TEST_LINTER_VERSION=KnownGoodVersion npm test`. If the `known_good_version` is different from the version enabled when you defined the linter, you will need to first run this locally to generate a snapshot file.
- Enable and test all linters with their latest version, if applicable. To replicate this behavior, run `npm test`.
- Assert that all linters pass config validation. This is also validated while running `npm test`.
- Assert that all linters have test coverage.
- Assert that all linters are included in the `README.md`.
Individual tests normally complete in less than 1 minute. They may take up to 5 minutes or so if the dependency cache is empty (linters need to be downloaded and installed to run the linter tests). Subsequent runs will not experience this delay.
Errors encountered during test runs are reported through the standard console, but additional debugging is provided using `debug`. The namespace convention used is `<Location>:<linter>:<#>`, where `Location` is `Driver | Tests`, `linter` is the name of the linter being tested (alternatively `test<#>` if no linter is specified), and `#` is a counter used to distinguish between multiple tests with the same linter.
Accordingly, to view debug logs for all sqlfluff tests, set the environment variable `DEBUG=*:sqlfluff*`. To see debug logs from just the test driver, use `DEBUG=Driver:*`.
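To build intuition for which namespaces a `DEBUG` value selects, the wildcard matching can be sketched as below. This is a simplified model of `debug`-style patterns (it treats `*` as "any sequence of characters" and ignores comma-separated lists and `-` exclusions, which the real `debug` package also supports):

```typescript
// Simplified sketch of debug-style namespace matching: "*" is a wildcard for
// any sequence of characters, so "*:sqlfluff*" matches "Tests:sqlfluff:1".
function debugPatternMatches(pattern: string, namespace: string): boolean {
  // Escape regex metacharacters (except "*"), then turn "*" into ".*".
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return regex.test(namespace);
}
```

For example, `*:sqlfluff*` matches both `Driver:sqlfluff:1` and `Tests:sqlfluff:2`, while `Driver:*` matches only driver namespaces.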