.. SPDX-License-Identifier: GPL-2.0

============================
Tips For Running KUnit Tests
============================

Using ``kunit.py run`` ("kunit tool")
=====================================

Running from any directory
--------------------------

It can be handy to create a bash function like:

.. code-block:: bash

        function run_kunit() {
          ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
        }

.. note::
        Early versions of ``kunit.py`` (before 5.6) didn't work unless run from
        the kernel root, hence the use of a subshell and ``cd``.

Running a subset of tests
-------------------------

``kunit.py run`` accepts an optional glob argument to filter tests. The format
is ``"<suite_glob>[.test_glob]"``.

Say we wanted to run the sysctl tests; we could do so via:

.. code-block:: bash

        $ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
        $ ./tools/testing/kunit/kunit.py run 'sysctl*'

We can filter down to just the "write" tests via:

.. code-block:: bash

        $ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
        $ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'

We're paying the cost of building more tests than we need this way, but it's
easier than fiddling with ``.kunitconfig`` files or commenting out
``kunit_suite`` definitions.

However, if we wanted to define a set of tests in a less ad hoc way, the next
tip is useful.

Defining a set of tests
-----------------------

``kunit.py run`` (along with ``build`` and ``config``) supports a
``--kunitconfig`` flag. So if you have a set of tests that you want to run on a
regular basis (especially if they have other dependencies), you can create a
specific ``.kunitconfig`` for them.

E.g. kunit has one for its tests:

.. code-block:: bash

        $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig

Alternatively, if you're following the convention of naming your
file ``.kunitconfig``, you can just pass in the dir, e.g.

.. code-block:: bash

        $ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit
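
For reference, a ``.kunitconfig`` is just a ``.config`` fragment listing the
options the tests need. As an illustrative sketch (the exact options depend on
your tests), a minimal one could contain:

.. code-block:: none

        CONFIG_KUNIT=y
        CONFIG_KUNIT_EXAMPLE_TEST=y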

.. note::
        This is a relatively new feature (5.12+) so we don't have any
        conventions yet about which files should be checked in versus just
        kept around locally. It's up to you and your maintainer to decide if a
        config is useful enough to submit (and therefore have to maintain).

.. note::
        Having ``.kunitconfig`` fragments in a parent and child directory is
        iffy. There's discussion about adding an "import" statement in these
        files to make it possible to have a top-level config run tests from all
        child directories. But that would mean ``.kunitconfig`` files are no
        longer just simple .config fragments.

        One alternative would be to have kunit tool recursively combine configs
        automagically, but tests could theoretically depend on incompatible
        options, so handling that would be tricky.

Setting kernel commandline parameters
-------------------------------------

You can use ``--kernel_args`` to pass arbitrary kernel arguments, e.g.

.. code-block:: bash

        $ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false


Generating code coverage reports under UML
------------------------------------------

.. note::
        TODO(brendanhiggins@google.com): There are various issues with UML and
        versions of gcc 7 and up. You're likely to run into missing ``.gcda``
        files or compile errors.

This is different from the "normal" way of getting coverage information that is
documented in Documentation/dev-tools/gcov.rst.

Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:

.. code-block:: none

        CONFIG_DEBUG_KERNEL=y
        CONFIG_DEBUG_INFO=y
        CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
        CONFIG_GCOV=y


Putting it together into a copy-pastable sequence of commands:

.. code-block:: bash

        # Append coverage options to the current config
        $ ./tools/testing/kunit/kunit.py run --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
        # Extract the coverage information from the build dir (.kunit/)
        $ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

        # From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
        # E.g. can generate an HTML report in a tmp dir like so:
        $ genhtml -o /tmp/coverage_html coverage.info


If your installed version of gcc doesn't work, you can tweak the steps:

.. code-block:: bash

        $ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
        $ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6

Alternatively, LLVM-based toolchains can also be used:

.. code-block:: bash

        # Build with LLVM and append coverage options to the current config
        $ ./tools/testing/kunit/kunit.py run --make_options LLVM=1 --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
        $ llvm-profdata merge -sparse default.profraw -o default.profdata
        $ llvm-cov export --format=lcov .kunit/vmlinux -instr-profile default.profdata > coverage.info
        # The coverage.info file is in lcov-compatible format and can be used to e.g. generate an HTML report
        $ genhtml -o /tmp/coverage_html coverage.info


Running tests manually
======================

Running tests without using ``kunit.py run`` is also an important use case.
Currently it's your only option if you want to test on architectures other than
UML.

As running the tests under UML is fairly straightforward (configure and compile
the kernel, run the ``./linux`` binary), this section will focus on testing
non-UML architectures.


Running built-in tests
----------------------

When setting tests to ``=y``, the tests will run as part of boot and print
results to dmesg in TAP format. So you just need to add your tests to your
``.config``, build and boot your kernel as normal.

So if we compiled our kernel with:

.. code-block:: none

        CONFIG_KUNIT=y
        CONFIG_KUNIT_EXAMPLE_TEST=y

Then we'd see output like this in dmesg signaling the test ran and passed:

.. code-block:: none

        TAP version 14
        1..1
            # Subtest: example
            1..1
            # example_simple_test: initializing
            ok 1 - example_simple_test
        ok 1 - example

Running tests as modules
------------------------

Depending on the tests, you can build them as loadable modules.

For example, we'd change the config options from before to

.. code-block:: none

        CONFIG_KUNIT=y
        CONFIG_KUNIT_EXAMPLE_TEST=m

Then after booting into our kernel, we can run the test via

.. code-block:: none

        $ modprobe kunit-example-test

This will then cause it to print TAP output to stdout.

.. note::
        ``modprobe`` will *not* exit with a non-zero status if any test failed
        (as of 5.13). But ``kunit.py parse`` would; see below.

.. note::
        You can set ``CONFIG_KUNIT=m`` as well; however, some features will not
        work and thus some tests might break. Ideally tests would specify they
        depend on ``KUNIT=y`` in their ``Kconfig``'s, but this is an edge case
        most test authors won't think about.
        As of 5.13, the only difference is that ``current->kunit_test`` will
        not exist.

Pretty-printing results
-----------------------

You can use ``kunit.py parse`` to parse dmesg for test output and print out
results in the same familiar format that ``kunit.py run`` does.

.. code-block:: bash

        $ ./tools/testing/kunit/kunit.py parse /var/log/dmesg

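When no file is given, ``parse`` reads from standard input, so you can also
pipe output straight into it, e.g.

.. code-block:: bash

        $ dmesg | ./tools/testing/kunit/kunit.py parse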

Retrieving per suite results
----------------------------

Regardless of how you're running your tests, you can enable
``CONFIG_KUNIT_DEBUGFS`` to expose per-suite TAP-formatted results:

.. code-block:: none

        CONFIG_KUNIT=y
        CONFIG_KUNIT_EXAMPLE_TEST=m
        CONFIG_KUNIT_DEBUGFS=y

The results for each suite will be exposed under
``/sys/kernel/debug/kunit/<suite>/results``.
So using our example config:

.. code-block:: bash

        $ modprobe kunit-example-test > /dev/null
        $ cat /sys/kernel/debug/kunit/example/results
        ... <TAP output> ...

        # After removing the module, the corresponding files will go away
        $ modprobe -r kunit-example-test
        $ cat /sys/kernel/debug/kunit/example/results
        /sys/kernel/debug/kunit/example/results: No such file or directory

Generating code coverage reports
--------------------------------

See Documentation/dev-tools/gcov.rst for details on how to do this.

The only vaguely KUnit-specific advice here is that you probably want to build
your tests as modules. That way you can isolate the coverage of your tests from
that of other code executed during boot, e.g.

.. code-block:: bash

        # Reset coverage counters before running the test.
        $ echo 0 > /sys/kernel/debug/gcov/reset
        $ modprobe kunit-example-test


Test Attributes and Filtering
=============================

Test suites and cases can be marked with test attributes, such as speed of
test. These attributes will later be printed in test output and can be used to
filter test execution.

Marking Test Attributes
-----------------------

Tests are marked with an attribute by including a ``kunit_attributes`` object
in the test definition.

Test cases can be marked using the ``KUNIT_CASE_ATTR(test_name, attributes)``
macro to define the test case instead of ``KUNIT_CASE(test_name)``.

.. code-block:: c

        static const struct kunit_attributes example_attr = {
                .speed = KUNIT_VERY_SLOW,
        };

        static struct kunit_case example_test_cases[] = {
                KUNIT_CASE_ATTR(example_test, example_attr),
        };

.. note::
        To mark a test case as slow, you can also use ``KUNIT_CASE_SLOW(test_name)``.
        This is a helpful macro as the slow attribute is the most commonly used.
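
If the slow marking is all we need, the array above can be written more
compactly with this helper. A sketch (``example_test`` is the same
hypothetical test function as in the snippet above):

.. code-block:: c

        static struct kunit_case example_test_cases[] = {
                /* Shorthand for KUNIT_CASE_ATTR with a slow speed attribute */
                KUNIT_CASE_SLOW(example_test),
                {}
        };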

Test suites can be marked with an attribute by setting the "attr" field in the
suite definition.

.. code-block:: c

        static const struct kunit_attributes example_attr = {
                .speed = KUNIT_VERY_SLOW,
        };

        static struct kunit_suite example_test_suite = {
                ...,
                .attr = example_attr,
        };

.. note::
        Not all attributes need to be set in a ``kunit_attributes`` object. Unset
        attributes will remain uninitialized and act as though the attribute is set
        to 0 or NULL. Thus, if an attribute is set to 0, it is treated as unset.
        These unset attributes will not be reported and may act as a default value
        for filtering purposes.

Reporting Attributes
--------------------

When a user runs tests, attributes will be present in the raw kernel output (in
KTAP format). Note that attributes will be hidden by default in kunit.py output
for all passing tests but the raw kernel output can be accessed using the
``--raw_output`` flag. This is an example of how test attributes for test cases
will be formatted in kernel output:

.. code-block:: none

        # example_test.speed: slow
        ok 1 example_test

This is an example of how test attributes for test suites will be formatted in
kernel output:

.. code-block:: none

          KTAP version 2
          # Subtest: example_suite
          # module: kunit_example_test
          1..3
          ...
        ok 1 example_suite

Additionally, users can print a full report of tests and their attributes
using the command line flag ``--list_tests_attr``:

.. code-block:: bash

        kunit.py run "example" --list_tests_attr

.. note::
        This report can be accessed when running KUnit manually by passing in the
        module_param ``kunit.action=list_attr``.

Filtering
---------

Users can filter tests using the ``--filter`` command line flag when running
tests. As an example:

.. code-block:: bash

        kunit.py run --filter speed=slow


You can also use the following operations on filters: "<", ">", "<=", ">=",
"!=", and "=". Example:

.. code-block:: bash

        kunit.py run --filter "speed>slow"

This example will run all tests with speeds faster than slow. Note that the
characters < and > are often interpreted by the shell, so they may need to be
quoted or escaped, as above.

Additionally, you can use multiple filters at once. Simply separate filters
using commas. Example:

.. code-block:: bash

        kunit.py run --filter "speed>slow, module=kunit_example_test"

.. note::
        You can use this filtering feature when running KUnit manually by passing
        the filter as a module param: ``kunit.filter="speed>slow, speed<=normal"``.

Filtered tests will not run or show up in the test output. You can use the
``--filter_action=skip`` flag to skip filtered tests instead. These tests will
be shown as skipped in the test output but will not run. To use this feature
when running KUnit manually, use the module param ``kunit.filter_action=skip``.

Rules of Filtering Procedure
----------------------------

Since both suites and test cases can have attributes, there may be conflicts
between attributes during filtering. The process of filtering follows these
rules:

- Filtering always operates at a per-test level.

- If a test has an attribute set, then the test's value is filtered on.

- Otherwise, the value falls back to the suite's value.

- If neither is set, the attribute has a global "default" value, which is used.
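
As a concrete sketch of the fallback (the names are hypothetical, mirroring
the earlier examples), suppose a suite is marked very slow while its one test
case carries no attributes of its own:

.. code-block:: c

        static const struct kunit_attributes suite_attr = {
                .speed = KUNIT_VERY_SLOW,
        };

        static struct kunit_case example_test_cases[] = {
                KUNIT_CASE(example_test),  /* no per-case attributes */
                {}
        };

        static struct kunit_suite example_test_suite = {
                ...,
                .attr = suite_attr,
        };

Since the case sets no speed of its own, filtering falls back to the suite's
value: ``--filter speed=very_slow`` would run ``example_test``, while
``--filter "speed>slow"`` would filter it out.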

List of Current Attributes
--------------------------

``speed``

This attribute indicates the speed of a test's execution (how slow or fast the
test is).

This attribute is saved as an enum with the following categories: "normal",
"slow", or "very_slow". The assumed default speed for tests is "normal". This
indicates that the test takes a relatively trivial amount of time (less than
1 second), regardless of the machine it is running on. Any test slower than
this could be marked as "slow" or "very_slow".

The macro ``KUNIT_CASE_SLOW(test_name)`` can be easily used to set the speed
of a test case to "slow".

``module``

This attribute indicates the name of the module associated with the test.

This attribute is automatically saved as a string and is printed for each suite.
Tests can also be filtered using this attribute.

``is_init``

This attribute indicates whether the test uses init data or functions.

This attribute is automatically saved as a boolean and tests can also be
filtered using this attribute.
