RT Testing


Chris Paterson
 

Hello Pavel, Hayashi-san, Jan, Daniel,

Addressing this email to all of you as both RT and CIP Core are involved.

I started to look into RT testing in more detail today.

I've created an RT configuration for the RZ/G1 boards:
https://gitlab.com/patersonc/cip-kernel-config/blob/chris/add_renesas_rt_configs/4.4.y-cip-rt/arm/renesas_shmobile-rt_defconfig
I'll do something similar for the RZ/G2 boards soon.

Built it with linux-4.4.y-cip-rt and ran cyclictest:
https://lava.ciplatform.org/scheduler/job/9828
Times look okay to an rt-untrained eye:
T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33

Compared to a run with linux-4.4.y-cip:
https://lava.ciplatform.org/scheduler/job/9829
T: 0 ( 938) P:98 I:1000 C: 6000 Min: 1618 Act: 9604 Avg: 9603 Max: 14550

Pavel, does the above look okay/useful to you? Or is cyclictest not worth running unless there is some load on the system?
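For anyone skimming the raw numbers, the single summary line cyclictest prints can be pulled apart with a short shell snippet like the one below. This is only a sketch for reading the values by hand, not the parser the LAVA test definitions use:

```shell
# Extract Min/Avg/Max latencies (microseconds) from a cyclictest summary line.
# The format is the one shown above: each label ("Min:", "Max:", ...) is
# followed by its value in the next whitespace-separated field.
line='T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33'

get_field() {
    # $1 = field label (e.g. "Max:"); reads the line on stdin
    awk -v key="$1" '{ for (i = 1; i <= NF; i++) if ($i == key) print $(i + 1) }'
}

min=$(echo "$line" | get_field "Min:")
avg=$(echo "$line" | get_field "Avg:")
max=$(echo "$line" | get_field "Max:")
echo "min=${min}us avg=${avg}us max=${max}us"
```

Running it against the RT line above prints min=13us avg=16us max=33us, which makes the three-orders-of-magnitude gap to the non-RT run easy to eyeball.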

Currently there is an issue with the way that the cyclic test case results are shown (i.e. they aren't) in LAVA due to a change [0] made to Linaro's cyclictest.sh.
That means that the test parsing now depends on Python, which isn't included in the cip-core RFS [1] that is currently being used.

Do either of the CIP Core profiles include Python support?

Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
cyclicdeadline/cyclicdeadline.yaml
pmqtest/pmqtest.yaml
rt-migrate-test/rt-migrate-test.yaml
cyclictest/cyclictest.yaml
svsematest/svsematest.yaml
pi-stress/pi-stress.yaml
signaltest/signaltest.yaml
ptsematest/ptsematest.yaml
sigwaittest/sigwaittest.yaml
hackbench/hackbench.yaml
ltp-realtime/ltp-realtime.yaml

Which of the above would be valuable to run on CIP RT Kernels?

A while back Daniel Wagner also did some work on a Jitterdebugger test [3], but it hasn't been merged yet and I'm not sure what the current status is. Any updates Daniel?

Is anyone able to provide RT config/defconfigs for the x86 and arm boards in the Mentor lab? Or BBB, QEMU etc.? (assuming that the hardware is suitable).


[0] https://github.com/Linaro/test-definitions/commit/4b5c46f275632932b3045f2ee16ad9cae5bb482d#diff-c724b852b75aefda2cc3505c4517828dR50
[1] https://s3-us-west-2.amazonaws.com/download.cip-project.org/cip-core/iwg20m/core-image-minimal-iwg20m.tar.gz
[2] https://github.com/Linaro/test-definitions/blob/master/automated/linux
[3] https://github.com/igaw/test-definitions/blob/preempt-rt/automated/linux/jitterdebugger/jitterdebugger.yaml

Kind regards, Chris


Kazuhiro Hayashi
 

Hello Chris,

Thank you for your updates!

Hello Pavel, Hayashi-san, Jan, Daniel,
[...]

Currently there is an issue with the way that the cyclic test case results are shown (i.e. they aren't) in LAVA due to
a change [0] made to Linaro's cyclictest.sh.
That means that the test parsing now depends on Python, which isn't included in the cip-core RFS [1] that is currently
being used.

Do either of the CIP Core profiles include Python support?
At the moment, we've just started creating the supported package list, so I cannot clearly say yes.
However, both profiles can at least create an image that includes Python just for testing,
because the Python packages are already provided by the upstream projects (isar, meta-debian).

Whether CIP Core provides Python packages or not depends on
what kind of packages are proposed (requested) by the CIP WGs in future.
Currently, several packages which depend on Python packages are likely to be
included in the next proposal from the security WG (under review now).

BTW, it would be better to confirm which Python version (2.7 or 3) cyclictest.sh depends on.
Do you know anything about this?

Kazu


Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
cyclicdeadline/cyclicdeadline.yaml
pmqtest/pmqtest.yaml
rt-migrate-test/rt-migrate-test.yaml
cyclictest/cyclictest.yaml
svsematest/svsematest.yaml
pi-stress/pi-stress.yaml
signaltest/signaltest.yaml
ptsematest/ptsematest.yaml
sigwaittest/sigwaittest.yaml
hackbench/hackbench.yaml
ltp-realtime/ltp-realtime.yaml

Which of the above would be valuable to run on CIP RT Kernels?

A while back Daniel Wagner also did some work on a Jitterdebugger test [3], but it hasn't been merged yet and I'm not
sure what the current status is. Any updates Daniel?

Is anyone able to provide RT config/defconfigs for the x86 and arm boards in the Mentor lab? Or BBB, QEMU etc.? (assuming
that the hardware is suitable).


[0] https://github.com/Linaro/test-definitions/commit/4b5c46f275632932b3045f2ee16ad9cae5bb482d#diff-c724b852b75aefda2cc3505c4517828dR50
[1] https://s3-us-west-2.amazonaws.com/download.cip-project.org/cip-core/iwg20m/core-image-minimal-iwg20m.tar.gz
[2] https://github.com/Linaro/test-definitions/blob/master/automated/linux
[3] https://github.com/igaw/test-definitions/blob/preempt-rt/automated/linux/jitterdebugger/jitterdebugger.yaml

Kind regards, Chris


Daniel Wagner <wagi@...>
 

Hi,

On Thu, Jan 16, 2020 at 06:54:54AM +0000, kazuhiro3.hayashi@toshiba.co.jp wrote:
BTW, it would be better to confirm which Python version (2.7 or 3) cyclictest.sh depends on.
Do you know anything about this?
cyclictest.sh uses automated/lib/parse_rt_tests_results.py to parse
the results. The Python code runs with either Python 2 or Python 3:

$ python3 -m py_compile ./automated/lib/parse_rt_tests_results.py
$ python2 -m py_compile ./automated/lib/parse_rt_tests_results.py

Thanks,
Daniel


Daniel Wagner <wagi@...>
 

Hi Chris,

On Tue, Jan 14, 2020 at 05:01:54PM +0000, Chris Paterson wrote:
Hello Pavel, Hayashi-san, Jan, Daniel,
Addressing this email to all of you as both RT and CIP Core are involved.
I started to look into RT testing in more detail today.
Welcome on board :)

I've created an RT configuration for the RZ/G1 boards:
https://gitlab.com/patersonc/cip-kernel-config/blob/chris/add_renesas_rt_configs/4.4.y-cip-rt/arm/renesas_shmobile-rt_defconfig
I'll do something similar for the RZ/G2 boards soon.
I am using merge_config.sh to build the configuration. There are some
catches, but generally it works well. I've hacked together a small tool to
automate it; with this, the configuration is always generated from scratch
using kconfig.
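As a sketch of that workflow (the fragment contents and paths here are illustrative assumptions, not Daniel's actual tool), an RT fragment can be merged on top of a board defconfig with the kernel's own scripts/kconfig/merge_config.sh:

```shell
# Hypothetical RT config fragment; on a 4.4-rt tree the full-preemption
# symbol is CONFIG_PREEMPT_RT_FULL.
cat > rt.config <<'EOF'
CONFIG_PREEMPT_RT_FULL=y
CONFIG_HIGH_RES_TIMERS=y
EOF

# Inside a checked-out kernel tree one would then run (shown as comments
# since they need the kernel sources present):
#   ./scripts/kconfig/merge_config.sh -m arch/arm/configs/shmobile_defconfig rt.config
#   make olddefconfig   # resolve any dependencies the fragment pulls in
```

The advantage of a fragment over a full RT defconfig is that it can be reapplied mechanically to every board config in cip-kernel-config.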

Built it with linux-4.4.y-cip-rt and ran cyclictest:
https://lava.ciplatform.org/scheduler/job/9828
Times look okay to an rt-untrained eye:
T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33
Compared to a run with linux-4.4.y-cip:
https://lava.ciplatform.org/scheduler/job/9829
T: 0 ( 938) P:98 I:1000 C: 6000 Min: 1618 Act: 9604 Avg: 9603 Max: 14550
Pavel, does the above look okay/useful to you? Or is cyclictest not worth running unless there is some load on the system?
Without load, it's not that interesting.

Currently there is an issue with the way that the cyclic test case
results are shown (i.e. they aren't) in LAVA due to a change [0]
made to Linaro's cyclictest.sh.
My current test suite for LAVA contains these here:

rt_suites = ['0_jd-hackbench',
'0_jd-compile',
'0_jd-stress_ptrace',
'0_cyclicdeadline',
'0_cyclictest',
'0_pi-stress',
'0_pmqtest',
'0_ptsematest',
'0_rt-migrate-test',
'0_signaltest',
'0_sigwaittest',
'0_svsematest']


That means that the test parsing now depends on Python, which isn't included in the cip-core RFS [1] that is currently being used.
Sorry about that. I changed this so that the tests are marked as failed
if a value higher than the maximum is seen. Again, I am using my hack
script to read out the results from the tests, with coloring:

$ srt-build c2d jobs result
0_jd-hackbench t1-max-latency : pass 11.00
0_jd-hackbench t1-avg-latency : pass 2.37
0_jd-hackbench t1-min-latency : pass 1.00
0_jd-hackbench t0-max-latency : pass 10.00
0_jd-hackbench t0-avg-latency : pass 2.31
0_jd-hackbench t0-min-latency : pass 1.00
0_jd-compile t1-max-latency : pass 14.00
0_jd-compile t1-avg-latency : pass 3.37
0_jd-compile t1-min-latency : pass 1.00
0_jd-compile t0-max-latency : pass 14.00
0_jd-compile t0-avg-latency : pass 3.37
0_jd-compile t0-min-latency : pass 1.00
0_jd-stress_ptrace t1-max-latency : pass 7.00
0_jd-stress_ptrace t1-avg-latency : pass 2.19
0_jd-stress_ptrace t1-min-latency : pass 2.00
0_jd-stress_ptrace t0-max-latency : pass 9.00
0_jd-stress_ptrace t0-avg-latency : pass 2.22
0_jd-stress_ptrace t0-min-latency : pass 2.00
0_cyclicdeadline t1-max-latency : fail 3462.00
0_cyclicdeadline t1-avg-latency : pass 1594.00
0_cyclicdeadline t1-min-latency : pass 1.00
0_cyclicdeadline t0-max-latency : fail 3470.00
0_cyclicdeadline t0-avg-latency : pass 1602.00
0_cyclicdeadline t0-min-latency : pass 8.00
0_cyclictest t1-max-latency : pass 11.00
0_cyclictest t1-avg-latency : pass 2.00
0_cyclictest t1-min-latency : pass 1.00
0_cyclictest t0-max-latency : pass 13.00
0_cyclictest t0-avg-latency : pass 3.00
0_cyclictest t0-min-latency : pass 1.00
0_pi-stress pi-stress : pass 0.00
0_pmqtest t3-2-max-latency : pass 13.00
0_pmqtest t3-2-avg-latency : pass 4.00
0_pmqtest t3-2-min-latency : pass 2.00
0_pmqtest t1-0-max-latency : pass 23.00
0_pmqtest t1-0-avg-latency : pass 4.00
0_pmqtest t1-0-min-latency : pass 2.00
0_ptsematest t3-2-max-latency : pass 11.00
0_ptsematest t3-2-avg-latency : pass 3.00
0_ptsematest t3-2-min-latency : pass 2.00
0_ptsematest t1-0-max-latency : pass 14.00
0_ptsematest t1-0-avg-latency : pass 3.00
0_ptsematest t1-0-min-latency : pass 2.00
0_rt-migrate-test t2-p98-avg : pass 11.00
0_rt-migrate-test t2-p98-tot : pass 581.00
0_rt-migrate-test t2-p98-min : pass 9.00
0_rt-migrate-test t2-p98-max : pass 28.00
0_rt-migrate-test t1-p97-avg : pass 13.00
0_rt-migrate-test t1-p97-tot : pass 654.00
0_rt-migrate-test t1-p97-min : pass 8.00
0_rt-migrate-test t1-p97-max : pass 34.00
0_rt-migrate-test t0-p96-avg : pass 19213.00
0_rt-migrate-test t0-p96-tot : pass 960652.00
0_rt-migrate-test t0-p96-min : pass 13.00
0_rt-migrate-test t0-p96-max : pass 20031.00
0_signaltest t0-max-latency : pass 29.00
0_signaltest t0-avg-latency : pass 8.00
0_signaltest t0-min-latency : pass 4.00
0_sigwaittest t3-2-max-latency : pass 16.00
0_sigwaittest t3-2-avg-latency : pass 4.00
0_sigwaittest t3-2-min-latency : pass 2.00
0_sigwaittest t1-0-max-latency : pass 24.00
0_sigwaittest t1-0-avg-latency : pass 4.00
0_sigwaittest t1-0-min-latency : pass 2.00
0_svsematest t3-2-max-latency : pass 13.00
0_svsematest t3-2-avg-latency : pass 4.00
0_svsematest t3-2-min-latency : pass 2.00
0_svsematest t1-0-max-latency : pass 13.00
0_svsematest t1-0-avg-latency : pass 4.00
0_svsematest t1-0-min-latency : pass 2.00
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass


Trying to figure this out from the LAVA web interface is a bit
cumbersome.

Do either of the CIP Core profiles include Python support?
Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
cyclicdeadline/cyclicdeadline.yaml
pmqtest/pmqtest.yaml
rt-migrate-test/rt-migrate-test.yaml
cyclictest/cyclictest.yaml
svsematest/svsematest.yaml
pi-stress/pi-stress.yaml
signaltest/signaltest.yaml
ptsematest/ptsematest.yaml
sigwaittest/sigwaittest.yaml
hackbench/hackbench.yaml
ltp-realtime/ltp-realtime.yaml
See above.

Which of the above would be valuable to run on CIP RT Kernels?
Basically you want to run all of the tests in rt-tests.

A while back Daniel Wagner also did some work on a Jitterdebugger
test [3], but it hasn't been merged yet and I'm not sure what the
current status is. Any updates Daniel?
Yeah, I was holding back a bit until I am happy with my setup and
workflow. One of the major limitations of the current
test-definitions is the difficulty of setting up background workload. To
make it short, I am not too happy with my current version, but it
works.

Is anyone able to provide RT config/defconfigs for the x86 and arm
boards in the Mentor lab? Or BBB, QEMU etc.? (assuming that the
hardware is suitable).
If you are interested in my configs, I have configs for ARMv7 (BBB),
ARMv8 (RPi3) and x86_64 via my hack tool. It also builds the kernel,
because I didn't set up kernelci for it; so in short I get a complete
config, build and test setup via 'srt-build bbb lava'.

Thanks,
Daniel


Chris Paterson
 

Hello,

From: kazuhiro3.hayashi@toshiba.co.jp <kazuhiro3.hayashi@toshiba.co.jp>
Sent: 16 January 2020 06:55

Hello Chris,

Thank you for your updates!

Hello Pavel, Hayashi-san, Jan, Daniel,
[...]

Currently there is an issue with the way that the cyclic test case results are
shown (i.e. they aren't) in LAVA due to
a change [0] made to Linaro's cyclictest.sh.
That means that the test parsing now depends on Python, which isn't included
in the cip-core RFS [1] that is currently
being used.

Do either of the CIP Core profiles include Python support?
At the moment, we've just started creating the supported package list, so I
cannot clearly say Yes.
However, at least, the both profiles can create an image including python only
for testing
because the python packages are already provided in upstream projects (isar,
meta-debian).
Okay. I'm thinking that we aren't going to be able to escape having a separate 'testing' version of CIP-Core, based on ISAR, on the assumption that it has more supported packages than Deby.

Or not use CIP-Core for testing the kernel, as Punit suggested?


Whether CIP Core provides Python packages or not depends on
what kind of packages will be proposed (requested) by CIP WGs in future.
Currently, several packages which depend on Python packages would be
included in the next proposal from security WG (under review now).

BTW, it would be better to confirm which Python version (2.7 or 3) that
cyclictest.sh depends on.
Do you know anything about this?
Either version (thanks Daniel).

Presumably we'd want to target Python 3, though, if that's what the world is moving to?

Kind regards, Chris


Kazu


Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
cyclicdeadline/cyclicdeadline.yaml
pmqtest/pmqtest.yaml
rt-migrate-test/rt-migrate-test.yaml
cyclictest/cyclictest.yaml
svsematest/svsematest.yaml
pi-stress/pi-stress.yaml
signaltest/signaltest.yaml
ptsematest/ptsematest.yaml
sigwaittest/sigwaittest.yaml
hackbench/hackbench.yaml
ltp-realtime/ltp-realtime.yaml

Which of the above would be valuable to run on CIP RT Kernels?

A while back Daniel Wagner also did some work on a Jitterdebugger test [3],
but it hasn't been merged yet and I'm not
sure what the current status is. Any updates Daniel?

Is anyone able to provide RT config/defconfigs for the x86 and arm boards in
the Mentor lab? Or BBB, QEMU etc.? (assuming
that the hardware is suitable).


[0] https://github.com/Linaro/test-definitions/commit/4b5c46f275632932b3045f2ee16ad9cae5bb482d#diff-c724b852b75aefda2cc3505c4517828dR50
[1] https://s3-us-west-2.amazonaws.com/download.cip-project.org/cip-core/iwg20m/core-image-minimal-iwg20m.tar.gz
[2] https://github.com/Linaro/test-definitions/blob/master/automated/linux
[3] https://github.com/igaw/test-definitions/blob/preempt-rt/automated/linux/jitterdebugger/jitterdebugger.yaml

Kind regards, Chris


Kazuhiro Hayashi
 

Hello Daniel,

Hi,

On Thu, Jan 16, 2020 at 06:54:54AM +0000,
kazuhiro3.hayashi@toshiba.co.jp wrote:
BTW, it would be better to confirm which Python version (2.7 or 3) cyclictest.sh depends on.
Do you know anything about this?
cyclictest.sh uses automated/lib/parse_rt_tests_results.py to parse
the results. The Python code runs with either Python 2 or Python 3:

$ python3 -m py_compile ./automated/lib/parse_rt_tests_results.py
$ python2 -m py_compile ./automated/lib/parse_rt_tests_results.py
Thank you for the information.
Relieved to know that there's no dependency on a specific Python version :)

Thanks,
Kazu


Thanks,
Daniel


Chris Paterson
 

Hello Daniel,

From: Daniel Wagner <wagi@monom.org>
Sent: 16 January 2020 09:25

Hi Chris,

On Tue, Jan 14, 2020 at 05:01:54PM +0000, Chris Paterson wrote:
Hello Pavel, Hayashi-san, Jan, Daniel,

Addressing this email to all of you as both RT and CIP Core are involved.

I started to look into RT testing in more detail today.
Welcome on board :)

I've created an RT configuration for the RZ/G1 boards:
https://gitlab.com/patersonc/cip-kernel-config/blob/chris/add_renesas_rt_configs/4.4.y-cip-rt/arm/renesas_shmobile-rt_defconfig
I'll do something similar for the RZ/G2 boards soon.
I am using merge_config.sh to build the configuration. There are some
catches, but generally it works well. I've hacked together a small tool to
automate it; with this, the configuration is always generated from scratch
using kconfig.
Not a bad idea. I'll give it a go. Do you have an 'RT' config fragment you can share?
I guess it depends on what approach the Kernel maintainers would rather take.

Maintainers: Do we want dedicated RT configurations for CIP? Or just try and apply RT to all configurations (or a sub-selection) in cip-kernel-configs?


Built it with linux-4.4.y-cip-rt and ran cyclictest:
https://lava.ciplatform.org/scheduler/job/9828
Times look okay to an rt-untrained eye:
T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33

Compared to a run with linux-4.4.y-cip:
https://lava.ciplatform.org/scheduler/job/9829
T: 0 ( 938) P:98 I:1000 C: 6000 Min: 1618 Act: 9604 Avg: 9603 Max: 14550

Pavel, does the above look okay/useful to you? Or is cyclictest not worth
running unless there is some load on the system?

Without load, it's not that interesting.

Currently there is an issue with the way that the cyclic test case
results are shown (i.e. they aren't) in LAVA due to a change [0]
made to Linaro's cyclictest.sh.
My current test suite for LAVA contains these here:

rt_suites = ['0_jd-hackbench',
'0_jd-compile',
'0_jd-stress_ptrace',
'0_cyclicdeadline',
'0_cyclictest',
'0_pi-stress',
'0_pmqtest',
'0_ptsematest',
'0_rt-migrate-test',
'0_signaltest',
'0_sigwaittest',
'0_svsematest']


That means that the test parsing now depends on Python, which isn't included
in the cip-core RFS [1] that is currently being used.

Sorry about that. I changed this so that the tests are marked as failed
if a value higher than the maximum is seen. Again, I am using my hack
script to read out the results from the tests, with coloring:

$ srt-build c2d jobs result
0_jd-hackbench t1-max-latency : pass 11.00
0_jd-hackbench t1-avg-latency : pass 2.37
0_jd-hackbench t1-min-latency : pass 1.00
0_jd-hackbench t0-max-latency : pass 10.00
0_jd-hackbench t0-avg-latency : pass 2.31
0_jd-hackbench t0-min-latency : pass 1.00
0_jd-compile t1-max-latency : pass 14.00
0_jd-compile t1-avg-latency : pass 3.37
0_jd-compile t1-min-latency : pass 1.00
0_jd-compile t0-max-latency : pass 14.00
0_jd-compile t0-avg-latency : pass 3.37
0_jd-compile t0-min-latency : pass 1.00
0_jd-stress_ptrace t1-max-latency : pass 7.00
0_jd-stress_ptrace t1-avg-latency : pass 2.19
0_jd-stress_ptrace t1-min-latency : pass 2.00
0_jd-stress_ptrace t0-max-latency : pass 9.00
0_jd-stress_ptrace t0-avg-latency : pass 2.22
0_jd-stress_ptrace t0-min-latency : pass 2.00
0_cyclicdeadline t1-max-latency : fail 3462.00
0_cyclicdeadline t1-avg-latency : pass 1594.00
0_cyclicdeadline t1-min-latency : pass 1.00
0_cyclicdeadline t0-max-latency : fail 3470.00
0_cyclicdeadline t0-avg-latency : pass 1602.00
0_cyclicdeadline t0-min-latency : pass 8.00
0_cyclictest t1-max-latency : pass 11.00
0_cyclictest t1-avg-latency : pass 2.00
0_cyclictest t1-min-latency : pass 1.00
0_cyclictest t0-max-latency : pass 13.00
0_cyclictest t0-avg-latency : pass 3.00
0_cyclictest t0-min-latency : pass 1.00
0_pi-stress pi-stress : pass 0.00
0_pmqtest t3-2-max-latency : pass 13.00
0_pmqtest t3-2-avg-latency : pass 4.00
0_pmqtest t3-2-min-latency : pass 2.00
0_pmqtest t1-0-max-latency : pass 23.00
0_pmqtest t1-0-avg-latency : pass 4.00
0_pmqtest t1-0-min-latency : pass 2.00
0_ptsematest t3-2-max-latency : pass 11.00
0_ptsematest t3-2-avg-latency : pass 3.00
0_ptsematest t3-2-min-latency : pass 2.00
0_ptsematest t1-0-max-latency : pass 14.00
0_ptsematest t1-0-avg-latency : pass 3.00
0_ptsematest t1-0-min-latency : pass 2.00
0_rt-migrate-test t2-p98-avg : pass 11.00
0_rt-migrate-test t2-p98-tot : pass 581.00
0_rt-migrate-test t2-p98-min : pass 9.00
0_rt-migrate-test t2-p98-max : pass 28.00
0_rt-migrate-test t1-p97-avg : pass 13.00
0_rt-migrate-test t1-p97-tot : pass 654.00
0_rt-migrate-test t1-p97-min : pass 8.00
0_rt-migrate-test t1-p97-max : pass 34.00
0_rt-migrate-test t0-p96-avg : pass 19213.00
0_rt-migrate-test t0-p96-tot : pass 960652.00
0_rt-migrate-test t0-p96-min : pass 13.00
0_rt-migrate-test t0-p96-max : pass 20031.00
0_signaltest t0-max-latency : pass 29.00
0_signaltest t0-avg-latency : pass 8.00
0_signaltest t0-min-latency : pass 4.00
0_sigwaittest t3-2-max-latency : pass 16.00
0_sigwaittest t3-2-avg-latency : pass 4.00
0_sigwaittest t3-2-min-latency : pass 2.00
0_sigwaittest t1-0-max-latency : pass 24.00
0_sigwaittest t1-0-avg-latency : pass 4.00
0_sigwaittest t1-0-min-latency : pass 2.00
0_svsematest t3-2-max-latency : pass 13.00
0_svsematest t3-2-avg-latency : pass 4.00
0_svsematest t3-2-min-latency : pass 2.00
0_svsematest t1-0-max-latency : pass 13.00
0_svsematest t1-0-avg-latency : pass 4.00
0_svsematest t1-0-min-latency : pass 2.00
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass
0_smoke-tests linux-posix-lsb_release: pass
0_smoke-tests linux-posix-lscpu : pass
0_smoke-tests linux-posix-ifconfig: pass
0_smoke-tests linux-posix-vmstat : pass
0_smoke-tests linux-posix-uname : pass
0_smoke-tests linux-posix-pwd : pass


Trying to figure this out from the LAVA web interface is a bit
cumbersome.

Do either of the CIP Core profiles include Python support?

Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
cyclicdeadline/cyclicdeadline.yaml
pmqtest/pmqtest.yaml
rt-migrate-test/rt-migrate-test.yaml
cyclictest/cyclictest.yaml
svsematest/svsematest.yaml
pi-stress/pi-stress.yaml
signaltest/signaltest.yaml
ptsematest/ptsematest.yaml
sigwaittest/sigwaittest.yaml
hackbench/hackbench.yaml
ltp-realtime/ltp-realtime.yaml
See above.

Which of the above would be valuable to run on CIP RT Kernels?
Basically you want to run all of the tests in rt-tests.
Does the RT project currently do (or have plans to do) the same?
https://ci-rt.linutronix.de/RT-Test/ seems to indicate that only cyclictest is run (with and without hackbench).


A while back Daniel Wagner also did some work on a Jitterdebugger
test [3], but it hasn't been merged yet and I'm not sure what the
current status is. Any updates Daniel?
Yeah, I was holding back a bit until I am happy with my setup and
workflow. One of the major limitations of the current
test-definitions is the difficulty of setting up background workload. To
make it short, I am not too happy with my current version, but it
works.
Okay.


Is anyone able to provide RT config/defconfigs for the x86 and arm
boards in the Mentor lab? Or BBB, QEMU etc.? (assuming that the
hardware is suitable).
If you are interested in my configs, I have configs for ARMv7 (BBB),
ARMv8 (RPi3) and x86_64 via my hack tool. It also builds the kernel,
because I didn't set up kernelci for it; so in short I get a complete
config, build and test setup via 'srt-build bbb lava'.
Maybe just the 'RT' config fragment will be enough.

Thanks, Chris


Thanks,
Daniel


Pavel Machek
 

Hi!

Addressing this email to all of you as both RT and CIP Core are involved.

I started to look into RT testing in more detail today.

I've created an RT configuration for the RZ/G1 boards:
https://gitlab.com/patersonc/cip-kernel-config/blob/chris/add_renesas_rt_configs/4.4.y-cip-rt/arm/renesas_shmobile-rt_defconfig
I'll do something similar for the RZ/G2 boards soon.

Built it with linux-4.4.y-cip-rt and ran cyclictest:
https://lava.ciplatform.org/scheduler/job/9828
Times look okay to an rt-untrained eye:
T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33

Compared to a run with linux-4.4.y-cip:
https://lava.ciplatform.org/scheduler/job/9829
T: 0 ( 938) P:98 I:1000 C: 6000 Min: 1618 Act: 9604 Avg: 9603 Max: 14550

Pavel, does the above look okay/useful to you? Or is cyclictest not worth running unless there is some load on the system?
Even basic cyclictest is good to have, but it would be really good to
have some kind of background load. I'd expect latency to rise by a
factor of 3 under a 'make -j 5' kernel build.

I tried to come up with a simpler test reproducing comparable latencies,
but I did not get there. This one creates roughly a factor of two for me:

cat /dev/urandom | head -c 10000000 | gzip -9 - > delme.random.gz
echo "Initial phase done"
for A in `seq 222`; do
    ( zcat delme.random.gz | gzip -9 - | zcat > /dev/null; echo -n "[$A done]" ) &
done

What is easily available on the test systems? gzip? python? bzip2?

Best regards,
Pavel
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany


Daniel Wagner <wagi@...>
 

On Thu, Jan 16, 2020 at 02:25:27PM +0100, Pavel Machek wrote:
Even basic cyclictest is good to have, but it would be really good to
have some kind of background load. I'd expect latency to rise by a
factor of 3 under a 'make -j 5' kernel build.
I've been experimenting with compiling the kernel as a workload to
trigger latency spikes. So far this test hasn't triggered one. It is
quite good for spotting performance regressions (the build suddenly
takes longer). As long as there is enough space on the device and all
the tools are available, it's okay to run it.

I found it a bit difficult to use in LAVA because the rootfs
doesn't include all the tools, and setting up the kernel source takes
ages.

I tried to come up with a simpler test reproducing comparable latencies,
but I did not get there. This one creates roughly a factor of two for me:

cat /dev/urandom | head -c 10000000 | gzip -9 - > delme.random.gz
echo "Initial phase done"
for A in `seq 222`; do
    ( zcat delme.random.gz | gzip -9 - | zcat > /dev/null; echo -n "[$A done]" ) &
done

What is easily available on the test systems? gzip? python? bzip2?
I am using rt-tests and plan to run more stressors from
stress-ng. This seems to give a good reproducible workload that really
stresses the system.
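A minimal sketch of that combination is below. The stressor mix and durations are illustrative choices, not a recommendation from the thread, and the script skips gracefully if the tools are not installed:

```shell
# Run cyclictest while stress-ng generates CPU/memory load in the background.
if command -v stress-ng >/dev/null 2>&1 && command -v cyclictest >/dev/null 2>&1; then
    stress-ng --cpu 4 --vm 2 --vm-bytes 128M --timeout 120s &
    load_pid=$!
    # -m: lock memory, -S: one thread per CPU, -p98: priority 98,
    # -i1000: 1000us interval, -D 120: run for 120 seconds, -q: summary only
    cyclictest -m -Sp98 -i1000 -D 120 -q
    kill "$load_pid" 2>/dev/null
    wait "$load_pid" 2>/dev/null
    result=ran
else
    result=skipped
fi
echo "cyclictest-under-load: $result"
```

The interesting number is the Max latency in the summary: without load it tells you little, as Pavel noted, while under a stressor mix it approximates the worst case.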

Thanks,
Daniel


punit1.agrawal@...
 

Hi Chris,

Just thinking out loud below so please bear with me if I'm just stating
the obvious.

Chris Paterson <Chris.Paterson2@renesas.com> writes:

Hello,

From: kazuhiro3.hayashi@toshiba.co.jp <kazuhiro3.hayashi@toshiba.co.jp>
Sent: 16 January 2020 06:55

Hello Chris,

Thank you for your updates!

Hello Pavel, Hayashi-san, Jan, Daniel,
[...]

Currently there is an issue with the way that the cyclic test case results are
shown (i.e. they aren't) in LAVA due to
a change [0] made to Linaro's cyclictest.sh.
That means that the test parsing now depends on Python, which isn't included
in the cip-core RFS [1] that is currently
being used.

Do either of the CIP Core profiles include Python support?
At the moment, we've just started creating the supported package list, so I
cannot clearly say yes.
However, both profiles can at least create an image that includes Python just
for testing,
because the Python packages are already provided by the upstream projects (isar,
meta-debian).
Okay. I'm thinking that we aren't going to be able to escape having a
separate 'testing' version of CIP-Core, based on ISAR, on the
assumption that it has more supported packages than Deby.

Or not use CIP-Core for testing the kernel, as Punit suggested?
I think CIP-core shouldn't be thought of as a complete filesystem, but
the base components that should exist as part of any CIP system.

As it is, it's unlikely that any real product will need only the
functionality provided by CIP core.

If that makes sense, then we should make it easy to create derived
images / filesystems that include additional software as needed for the
task at hand, e.g., testing CIP kernels.

In addition, there is a need to test the functionality provided by the
core components, i.e., CIP-core packages along with relevant kernels on
supported devices. As yet, I don't think these tests exist (or even
defined).

But I'm new to the project so am very likely missing things.

Thanks,
Punit

[...]


Kazuhiro Hayashi
 

Hello Chris,

Hello,
[...]
Do either of the CIP Core profiles include Python support?
At the moment, we've just started creating the supported package list, so I
cannot clearly say yes.
However, both profiles can at least create an image that includes Python just
for testing,
because the Python packages are already provided by the upstream projects (isar,
meta-debian).
Okay. I'm thinking that we aren't going to be able to escape having a separate 'testing' version of CIP-Core, based on
ISAR, on the assumption that it has more supported packages than Deby.
The IRC log of yesterday: https://irclogs.baserock.org/cip/%23cip.2020-01-16.log.html

Starting from isar-cip-core would be better if we need to install
various packages for testing, like build-essential.
Deby can also generate an image with enough dependencies
for simple basic suites like LTP, cyclictest, etc., though.

Also, I guess LAVA can install additional packages for testing
(from the Debian apt repository or package directories on S3)
into the standard image after it boots on the target device.
We wouldn't need to provide a separate testing image in that case.
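For what it's worth, the Linaro test-definition format already has a hook for this: an "install: deps:" section lists Debian packages for LAVA to install on the target after boot. A hedged sketch of what a wrapper definition could look like (the name and description are illustrative, not an existing file):

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: cyclictest-with-python
  description: "Example only: pull in python3 so result parsing works"
install:
  deps:
    - python3
run:
  steps:
    - cd automated/linux/cyclictest
    - ./cyclictest.sh
```

This only works when the image has a working apt and network access, so it complements rather than replaces a testing image with the packages pre-installed.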


Or not use CIP-Core for testing the kernel, as Punit suggested?
Personally, I would like to use CIP-Core image(s) for kernel testing.
I have no concern about providing packages only for testing,
as long as CIP is not responsible for maintaining them.



Whether CIP Core provides Python packages or not depends on
what kind of packages will be proposed (requested) by CIP WGs in future.
Currently, several packages which depend on Python packages would be
included in the next proposal from security WG (under review now).

BTW, it would be better to confirm which Python version (2.7 or 3) that
cyclictest.sh depends on.
Do you know anything about this?
Either version (thanks Daniel).
Thank you for your confirmation.


Presumably we'd want to target Python 3 though if that's what the world is moving to?
I think this is one of the important topics to be discussed in CIP Core.
I agree with your idea (targeting Python 3), but there are still
some packages in Debian buster that have a run-time dependency on Python 2.7, unfortunately.
I would like to create another thread to think about this topic later :)

Best regards,
Kazu



Kind regards, Chris


Kazu


Linaro test-definitions [2] have the following tests marked within the preempt-rt scope:
cyclicdeadline/cyclicdeadline.yaml
pmqtest/pmqtest.yaml
rt-migrate-test/rt-migrate-test.yaml
cyclictest/cyclictest.yaml
svsematest/svsematest.yaml
pi-stress/pi-stress.yaml
signaltest/signaltest.yaml
ptsematest/ptsematest.yaml
sigwaittest/sigwaittest.yaml
hackbench/hackbench.yaml
ltp-realtime/ltp-realtime.yaml

Which of the above would be valuable to run on CIP RT Kernels?

A while back Daniel Wagner also did some work on a Jitterdebugger test [3],
but it hasn't been merged yet and I'm not sure what the current status is.
Any updates, Daniel?

Is anyone able to provide RT configs/defconfigs for the x86 and Arm boards in
the Mentor lab? Or BBB, QEMU, etc. (assuming that the hardware is suitable)?
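For boards that don't have an RT defconfig yet, one possible starting point is a small fragment applied on top of the existing non-RT defconfig. The symbols below are what a 4.4-based -rt tree would typically need; treat them as an assumption to be verified against the actual kernel sources:

```
# Hypothetical RT config fragment for a v4.4-based -rt tree
# (verify the exact symbols against the target kernel):
CONFIG_PREEMPT=y
CONFIG_PREEMPT_RT_FULL=y
CONFIG_HIGH_RES_TIMERS=y
```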


[0] https://github.com/Linaro/test-definitions/commit/4b5c46f275632932b3045f2ee16ad9cae5bb482d#diff-c724b852b75aefda2cc3505c4517828dR50
[1] https://s3-us-west-2.amazonaws.com/download.cip-project.org/cip-core/iwg20m/core-image-minimal-iwg20m.tar.gz
[2] https://github.com/Linaro/test-definitions/blob/master/automated/linux
[3] https://github.com/igaw/test-definitions/blob/preempt-rt/automated/linux/jitterdebugger/jitterdebugger.yaml

Kind regards, Chris


Chris Paterson
 

Hello Punit,

From: Punit Agrawal <punit1.agrawal@toshiba.co.jp>
Sent: 17 January 2020 01:02

Hi Chris,

Just thinking out loud below so please bear with me if I'm just stating
the obvious.
Please do :)


Chris Paterson <Chris.Paterson2@renesas.com> writes:

Hello,

From: kazuhiro3.hayashi@toshiba.co.jp <kazuhiro3.hayashi@toshiba.co.jp>
Sent: 16 January 2020 06:55

Hello Chris,

Thank you for your updates!

Hello Pavel, Hayashi-san, Jan, Daniel,
[...]

Currently there is an issue with the way that the cyclictest test case results
are shown (i.e. they aren't) in LAVA, due to a change [0] made to Linaro's
cyclictest.sh.
That means that the test parsing now depends on Python, which isn't included
in the cip-core RFS [1] that is currently being used.
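For reference, the summary line cyclictest prints (and which the parsing has to handle) looks like the results quoted earlier in this thread. A minimal, hypothetical Python 3 sketch of the kind of parsing involved (not the actual cyclictest.sh parser):

```python
import re

# Example cyclictest summary line, taken from earlier in this thread.
LINE = "T: 0 ( 1169) P:98 I:1000 C: 59993 Min: 13 Act: 16 Avg: 16 Max: 33"

def parse_cyclictest(line):
    """Extract latency figures (in microseconds) from a cyclictest summary line."""
    m = re.search(r"Min:\s*(\d+)\s+Act:\s*(\d+)\s+Avg:\s*(\d+)\s+Max:\s*(\d+)", line)
    if not m:
        return None
    return dict(zip(("min", "act", "avg", "max"), map(int, m.groups())))

print(parse_cyclictest(LINE))  # {'min': 13, 'act': 16, 'avg': 16, 'max': 33}
```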

Do either of the CIP Core profiles include Python support?
At the moment, we've just started creating the supported package list, so I
cannot clearly say yes.
However, at least, both profiles can create an image including Python just
for testing, because the Python packages are already provided in the upstream
projects (isar, meta-debian).
Okay. I'm thinking that we aren't going to be able to escape having a
separate 'testing' version of CIP-Core, based on ISAR on the
assumption that it has more supported packages than Deby.

Or should we not use CIP-Core for testing the kernel, as Punit suggested? :)
I think CIP-Core shouldn't be thought of as a complete filesystem, but as
the base components that should exist as part of any CIP system.

As it is, it's unlikely that any real product will need only the
functionality provided by CIP core.
True.


If that makes sense, then we should make it easy to create derived
images / filesystems that include additional software as needed for the
task at hand, e.g., testing CIP kernels.
Good point, agreed.


In addition, there is a need to test the functionality provided by the
core components, i.e., CIP-Core packages along with the relevant kernels on
supported devices. As yet, I don't think these tests exist (or have even
been defined).
This is something we do indeed need to do.

Thanks, Chris


But I'm new to the project, so I am very likely missing things.

Thanks,
Punit

[...]


Chris Paterson
 

Hello Hayashi-san,

From: kazuhiro3.hayashi@toshiba.co.jp <kazuhiro3.hayashi@toshiba.co.jp>
Sent: 17 January 2020 01:41

Hello Chris,

Hello,
[...]
Do either of the CIP Core profiles include Python support?
At the moment, we've just started creating the supported package list, so I
cannot clearly say yes.
However, at least, both profiles can create an image including Python just
for testing, because the Python packages are already provided in the upstream
projects (isar, meta-debian).
Okay. I'm thinking that we aren't going to be able to escape having a separate
'testing' version of CIP-Core, based on ISAR on the assumption that it has
more supported packages than Deby.
Yesterday's IRC log: https://irclogs.baserock.org/cip/%23cip.2020-01-16.log.html

Starting from isar-cip-core would be better if we need to install
various packages for testing, such as build-essential.
That said, Deby can also generate an image that includes enough
dependencies for simple basic suites like LTP, cyclictest, etc.

Also, I guess LAVA can install additional packages for testing
(from a Debian apt repository or package directories on S3)
into its standard image after it boots on the target device.
In that case we wouldn't need to provide a separate testing image.
This is an option. I guess I've spent too long in the OE world without apt.
A key dependency here, though, is making sure that the boards have a direct connection to the internet.
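For what it's worth, the Linaro test-definition format already has a mechanism along these lines: a definition can declare Debian packages to be installed (via apt on Debian-based images) before the test runs. A hedged sketch of such a definition header (field names as used in test-definitions; the values are illustrative, not taken from an actual file):

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: cyclictest-example    # hypothetical name
install:
  deps:
    - rt-tests               # provides cyclictest on Debian
run:
  steps:
    - cyclictest -q -p 98 -D 1m
```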



Or should we not use CIP-Core for testing the kernel, as Punit suggested? :)
Personally, I would like to use CIP-Core image(s) for the kernel testing.
I have no concern about providing packages only for testing,
as long as CIP is not responsible for maintaining them.
Okay.

I'll try and run a test using CIP-Core + apt and go from there.

Kind regards, Chris






punit1.agrawal@...
 

Chris Paterson <Chris.Paterson2@renesas.com> writes:

Hello Punit,

From: Punit Agrawal <punit1.agrawal@toshiba.co.jp>
Sent: 17 January 2020 01:02

Hi Chris,

Just thinking out loud below so please bear with me if I'm just stating
the obvious.
Please do :)
Apologies for the confusion - I was referring to the inline comments
further down in the previous mail.

[...]