Fix multiprocessing.Process tests not collected by coverage and gcov (#486)

* Fix `multiprocessing.Process` tests not collected by coverage and gcov

* fix --concurrency=multiprocessing
lljbash authored and KevinfromTJ committed Dec 4, 2023
1 parent 0ba8d77 commit 2a3b0c8
Showing 5 changed files with 27 additions and 4 deletions.
12 changes: 12 additions & 0 deletions dipu/tests/python/README.md
@@ -49,3 +49,15 @@
If you need to automatically check output emitted from inside the C++ library, you can use `test.python.utils.stdout_redirector.stdout_redirector` to capture it.

Individual test cases may contain print statements. Note, however, that in the auto-generated unit tests, the output of an individual test case is discarded when the test passes.

#### Coverage collection for subprocesses

Subprocesses created with `multiprocessing.Process` are not counted when coverage runs on CI, so individual test cases that use this approach (e.g. `test_allocator.py`) need some special handling.

##### C++ `gcov`

Before calling `multiprocessing.Process`, you **must** call `multiprocessing.set_start_method('spawn', force=True)` to change multiprocessing's default process start method (see the sketch after this diff).

##### Python `coverage`

The code needs **no** extra modification; instead, an extra flag has to be added to the `coverage run` command. This is already configured in `run_tests.sh`.
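
To make the `C++ gcov` requirement above concrete, here is a minimal sketch (not part of this commit's diff) of how an individual test script switches the start method before spawning a subprocess; the `worker` function and its body are illustrative placeholders:

```python
from multiprocessing import Process, set_start_method


def worker():
    # Placeholder body; the real individual tests run allocator, op-register,
    # or memory-stats checks here.
    pass


if __name__ == "__main__":
    # Change the default start method before creating any Process, as the
    # README section above requires for collecting gcov data from subprocesses.
    set_start_method('spawn', force=True)
    p = Process(target=worker)
    p.start()
    p.join()
```

`force=True` overrides a start method that may already have been set, matching the calls added in the test scripts below.
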
3 changes: 2 additions & 1 deletion dipu/tests/python/individual_scripts/test_allocator.py
@@ -1,5 +1,5 @@
import os
-from multiprocessing import Process
+from multiprocessing import Process, set_start_method


def test_allocator(max_allocate, step, algorithm, log_mask, test_pin_memory=True):
@@ -67,6 +67,7 @@ def test_allocator(max_allocate, step, algorithm, log_mask, test_pin_memory=True


if __name__ == "__main__":
+    set_start_method('spawn', force=True)
    max_allocate = 1 << 15
    p1 = Process(
        target=test_allocator,
@@ -1,5 +1,5 @@
# Copyright (c) 2023, DeepLink.
-from multiprocessing import Process
+from multiprocessing import Process, set_start_method
from tests.python.utils.local_eviron import local_eviron


@@ -15,6 +15,7 @@ def _test_op_register(mode):


if __name__ == "__main__":
+    set_start_method('spawn', force=True)
    p1 = Process(
        target=_test_op_register,
        args=(0,),
3 changes: 2 additions & 1 deletion dipu/tests/python/individual_scripts/test_memory_stats.py
@@ -1,5 +1,5 @@
import os
-from multiprocessing import Process
+from multiprocessing import Process, set_start_method


def test_mem_stats(algorithm, log_mask):
@@ -61,6 +61,7 @@ def test_mem_stats(algorithm, log_mask):


if __name__ == "__main__":
+    set_start_method('spawn', force=True)
    p1 = Process(
        target=test_mem_stats,
        args=("BF", 0),
10 changes: 9 additions & 1 deletion dipu/tests/python/run_tests.sh
@@ -2,7 +2,15 @@

function run_coverage {
if [ "$USE_COVERAGE" == "ON" ]; then
-coverage run --source="$TORCH_DIPU_DIR" -p "$@"
+# We need to add "--concurrency=multiprocessing" to collect coverage data from
+# `multiprocessing.Process`. This flag requires all other configuration to be
+# placed in `.coveragerc`, as command-line options are not passed to subprocesses.
+cat << EOF > .coveragerc
+[run]
+source = ${TORCH_DIPU_DIR}
+parallel = True
+EOF
+coverage run --concurrency=multiprocessing "$@"
else
python "$@"
fi
