
Re: script for health check (or any script which involves uploading kernel to BBB)

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi,

On 10/05/17 14:20, Robert Marshall wrote:
The kernelci-build script uploads the kernel (for a BBB build) to

http://localhost:8010/cip-bbb1/cip_v4.4.27/v4.4.27/arm/omap2plus_defconfig/

So that's

http://localhost:8010/$TREE_NAME/$BRANCH/$TAG/$ARCH/omap2plus_defconfig/

I think the ARCH can be embedded in the script (otherwise we'd need to
embed omap2plus_defconfig too..), so we just need a script that uses the
other three values. It can then sed a base yaml file to create the
correct URLs, write the result to a new file, and run lava-tool on
that to upload and test that kernel build on the BBB.

TREE_NAME can be taken from the environment variable. If it's not set
(for example because the kernel wasn't built on this login), ask the
user to set it before running the script.

BRANCH and TAG can be retrieved via git commands much as build.py does

So my intention is to create a script in /vagrant/scripts which is run as

/vagrant/scripts/create_test.sh input.yaml output.yaml

It takes a yaml file, which could be the one in the release or a new one
created by a B@D user, and outputs a file (the second parameter) which
can then be used in a lava-tool command.


Thoughts? Counter suggestions?
In phase two of the CIP testing project, we will need to sign the builds and provide them in a repo (or a place easily downloadable) so any tester can pick it up and test it, sending the report to a specific mailing list.

So the script will need to consider that repo as the external source for the build for "validation purposes".

At the same time, kernel maintainers will probably want the capability to create their own builds locally to be tested in their own machine, taking full advantage of the VM we are about to release.

Hence I believe these two scenarios will need to be considered in phase two. It would be good if you prepare for this future by documenting the script for this release, so it is easier to adapt in the near future.



Robert
_______________________________________________
cip-dev mailing list
cip-dev@...
https://lists.cip-project.org/mailman/listinfo/cip-dev
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


script for health check (or any script which involves uploading kernel to BBB)

Robert Marshall <robert.marshall@...>
 

The kernelci-build script uploads the kernel (for a BBB build) to

http://localhost:8010/cip-bbb1/cip_v4.4.27/v4.4.27/arm/omap2plus_defconfig/

So that's

http://localhost:8010/$TREE_NAME/$BRANCH/$TAG/$ARCH/omap2plus_defconfig/

I think the ARCH can be embedded in the script (otherwise we'd need to
embed omap2plus_defconfig too..), so we just need a script that uses the
other three values. It can then sed a base yaml file to create the
correct URLs, write the result to a new file, and run lava-tool on
that to upload and test that kernel build on the BBB.

TREE_NAME can be taken from the environment variable. If it's not set
(for example because the kernel wasn't built on this login), ask the
user to set it before running the script.

BRANCH and TAG can be retrieved via git commands much as build.py does

So my intention is to create a script in /vagrant/scripts which is run as

/vagrant/scripts/create_test.sh input.yaml output.yaml

It takes a yaml file, which could be the one in the release or a new one
created by a B@D user, and outputs a file (the second parameter) which
can then be used in a lava-tool command.
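A rough sketch of how the substitution could work, assuming a placeholder convention (__TREE_NAME__ and friends) that is purely illustrative and not a settled format; the template contents and default values here are made up to mirror the example URL above:

```shell
#!/bin/sh
# Sketch of the proposed create_test.sh flow: substitute TREE_NAME, BRANCH,
# TAG and a fixed ARCH into a base LAVA job template with sed.
# The __PLACEHOLDER__ naming is an assumption for illustration only.
set -eu
cd "$(mktemp -d)"

# In the real script, TREE_NAME would come from the environment and
# BRANCH/TAG from git, much as build.py does, e.g.:
#   : "${TREE_NAME:?set TREE_NAME before running this script}"
#   BRANCH=$(git rev-parse --abbrev-ref HEAD)
#   TAG=$(git describe --abbrev=0)
TREE_NAME=${TREE_NAME:-cip-bbb1}
BRANCH=${BRANCH:-cip_v4.4.27}
TAG=${TAG:-v4.4.27}
ARCH=arm                      # embedded in the script, as suggested

# Stand-in for input.yaml, the base template:
cat > input.yaml <<'EOF'
actions:
  - deploy:
      kernel:
        url: http://localhost:8010/__TREE_NAME__/__BRANCH__/__TAG__/__ARCH__/omap2plus_defconfig/
EOF

sed -e "s|__TREE_NAME__|$TREE_NAME|g" \
    -e "s|__BRANCH__|$BRANCH|g" \
    -e "s|__TAG__|$TAG|g" \
    -e "s|__ARCH__|$ARCH|g" input.yaml > output.yaml

# output.yaml can then be passed to a lava-tool submit command.
cat output.yaml
```

Using | as the sed delimiter avoids having to escape the slashes in the URLs.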


Thoughts? Counter suggestions?


Robert


CIP TSC meeting minutes (08 May 2017)

Yoshitake Kobayashi
 

Hi all,

I have uploaded meeting minutes for CIP TSC conference call on 8th May.
https://wiki.linuxfoundation.org/civilinfrastructureplatform/tsc-meetings/tsc_mm_may082017

As Agustin announced in this mail list, CIP workshop will be held on 30th May.
https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipconferences/cipwsossj2017

This workshop is open to all of you.
If you plan to attend the workshop, please feel free to contact us by email.
If you aren't a CIP member, I will update the above wiki page for you.

Best regards,
Yoshi


Re: CIP open workshop at OSSJ: May 30th from 15:00 to 17:00 hours

Annie Fisher <afisher@...>
 

Greetings all,

The Room Location has been updated on the wiki.

Cheers,
Annie

Annie Fisher, MPA CSM
Program Manager, The Linux Foundation
Location & Time-zone: San Francisco, CA, PST
email: afisher@...


On Tue, May 9, 2017 at 8:11 AM, Agustin Benito Bethencourt <agustin.benito@...> wrote:
Hi,


On 03/05/17 11:16, Agustin Benito Bethencourt wrote:
Dear CIP friends,

the day before the OSSJ starts, on May 30th, CIP will have a full day of
meetings.

We are organising an open workshop to talk, present, demo and work on
technical topics that might be of interest to this Initiative.

The workshop will take place from 15:00 hours to 17:00 hours on May 30th
in a room (to be determined) at the OSSJ venue.

Please follow the news about this activity on our wiki page[1]

++ Open workshop. Who can participate?

The workshop is open to all of you who are interested, no matter if you
are CIP Members or not. For those of you who are not yet part of CIP, it
will be a great opportunity to know a little more about who we are and
what we do.

++ How can I register?

If you are attending, it is very important that you tell us in advance
since we need to plan the room capacity accordingly. There are two ways
in which you can confirm your participation:

1.- Through this mailing list, by providing the following information:
    * Name.
    * Company.
    * Position/role.
    * Topics you are interested in.

2.- Directly adding your name to the participants table on the wiki[1].

Please check the wiki page anyway for updates. Again, this is an open
workshop. Feel free to join.

++ Can I propose topics?

Yes. Please send your suggestions to this cip-dev mailing list so we
create the agenda as soon as possible.

[1]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipconferences#workshops-and-meetings

[1] https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipconferences/cipwsossj2017

The content of the OSSJ workshop has been moved.




Best Regards


--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Re: CIP open workshop at OSSJ: May 30th from 15:00 to 17:00 hours

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi,

On 03/05/17 11:16, Agustin Benito Bethencourt wrote:
Dear CIP friends,

the day before the OSSJ starts, on May 30th, CIP will have a full day of
meetings.

We are organising an open workshop to talk, present, demo and work on
technical topics that might be of interest to this Initiative.

The workshop will take place from 15:00 hours to 17:00 hours on May 30th
in a room (to be determined) at the OSSJ venue.

Please follow the news about this activity on our wiki page[1]

++ Open workshop. Who can participate?

The workshop is open to all of you who are interested, no matter if you
are CIP Members or not. For those of you who are not yet part of CIP, it
will be a great opportunity to know a little more about who we are and
what we do.

++ How can I register?

If you are attending, it is very important that you tell us in advance
since we need to plan the room capacity accordingly. There are two ways
in which you can confirm your participation:

1.- Through this mailing list, by providing the following information:
* Name.
* Company.
* Position/role.
* Topics you are interested in.

2.- Directly adding your name to the participants table on the wiki[1].

Please check the wiki page anyway for updates. Again, this is an open
workshop. Feel free to join.

++ Can I propose topics?

Yes. Please send your suggestions to this cip-dev mailing list so we
create the agenda as soon as possible.

[1]
https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipconferences#workshops-and-meetings
[1] https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipconferences/cipwsossj2017

The content of the OSSJ workshop has been moved.



Best Regards
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


[Lava-announce] 2017.5 production release

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi Don and Robert,

please check below the updates from the LAVA team. Since we are about to release, we will inspect these updates and their impact on B@D after the release.


-------- Forwarded Message --------
Subject: [Lava-announce] 2017.5 production release
Date: Mon, 8 May 2017 14:15:38 +0100
From: Neil Williams <neil.williams@...>
Reply-To: linaro-validation@...
To: Lava Announce Mailman List <lava-announce@...>

jessie-backports
============

New package update required from jessie-backports.

sudo apt -t jessie-backports install python-voluptuous

This package is used to check the submission schema of V2 test jobs.
It is already installed for V2 support but 2017.5 requires an updated
version which already exists in jessie-backports. For a smooth upgrade
ensure this package is upgraded as above before installing 2017.5.

As with previous releases since 2016.12, the 2017.5 release is only
available from the lava repositories:
https://validation.linaro.org/static/docs/v2/installing_on_debian.html#lava-repositories

New device-type templates
===================

2017.5 includes a lot of new device-type templates and a number of V1
device-type configurations. These have been contributed to LAVA, as
part of the support for KernelCI, by community members. Many thanks to
all those who have contributed device-type support for this release.

Action names and effects on timeouts
===========================

2017.5 makes the individual names of actions in the source code
consistent across all classes and deployments. Where previously
power_off was available as an action name which could be specified in
timeouts and protocols, that action name now uses a hyphen instead of
the underscore, power-off. The same change has been made for all
classes across the V2 lava-dispatcher codebase. Future changes will
only be accepted using a hyphen to separate words, not underscores or
other characters.

This change affects V2 jinja2 templates as well as V2 test jobs (if
those jobs override the action timeout or call an action using
protocols).
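As an illustration, a timeout override in a V2 test job would now need the hyphenated name; this fragment is a sketch (the value is made up):

```yaml
timeouts:
  actions:
    power-off:        # formerly power_off; underscores are no longer accepted
      seconds: 60
```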

The full list of changes is below:

lava-server
========

8ec2d6736 Fix jinja2 templates for default string handling
eb39ec0fd Extend bbb template check for ssh_host support.
55a251b87 Fix documentation example test job and remove unused
7769be718 Update docs for change in submit behaviour.
c10bf6197 Add more index items and detail on namespaces
b9686e5b1 Mark V1 XML-RPC functions as deprecated.
a0ac9894b Prevent health check warning when disabled
78f796eee more silencing of unit test logging
064bfabb9 Add a template for frdm-kw41z and delete a duplicate for k64f
a08df108d Silence logging in more unit tests
97dbd95b1 Extend power-off timeout for b2260
0e39966d4 device-types: add Hardkernel meson8b-odroidc1 board
b6151b003 Add support for aliases in device-type management
15589b8d1 Fix some typos in development documentation.
f476a351c device-types: base-uboot: use run bootcmd
6911b8502 Expand notes on reviews
cbbdcfcf4 Expand notes on code analysis around reviews
970e52230 Drop confirmation page on job submit for V2 jobs.
4a5050691 Device commands are allowed to be lists
cc60805db Adjust hikey template to allow target_mac and ip support
9d4878c40 Avoid forcing the date path immediately
68497b557 Remove the character_delay block override for d03
5aeed4aef Tags: fix HTML syntax errors
a0ae8a62d Update doc for adding a pipeline worker.
2f6287e71 Add Raspberry pi devices
69a8f40c4 Add collection of Exynos 4 and 5 devices
e31953519 Add more Tegra124 based devices
b843bfaec Add more r-car generation 3 devices
24d21cdb0 Add a note on https repositories and apt-transport-https package.
c3b852798 Expand notes on portability
c57c9f111 Fix doc to explain unprivileged containers and DUT interaction.
2018f705c Show the requested device tags in the job log page
96999537e Fix Action names (use - instead of _)
f0a272b4b Extend recent job support to requested device type
9040de9ca Add XML-RPC call to obtain job level metadata
a04086dd7 api: add get_recent_jobs_for_device
8bbd298f7 Set the documented flash_cmds_order for hikey
c9b9453d1 Migrate many U-boot devices to v2 configuration
613d17fb5 Tweak the developer workflow to skip devices/
c59cc6c31 Add a unit test for some of the new UBoot support.
e43e244b0 Remove unused imports and unused variables.
e2b3784f4 Fix pep8 error
af3cc6d42 Schema: Allow boolean variables in parameters
4d5861607 Allow is_valid check to operate correctly.
294bb62bd ensure device_type is checked
b8ce5ba49 Add a note on developer branches
11725bc3e doc: fix a small gap about test suites
6280b94d7 Add "sd" for removable media
21a5ecf6e Add "command" action to schema and device template
8db43278a Add schema validation for test/monitor/name in job definition.
0a5b1eaa3 tweak gitignore
e3f003f63 templates: remove duplicated blocks
f1331ef2e Exclude retired devices from Device Health table
790b39b9d master: use yaml.CLoader that is way faster
749a1081c Add notes on load balancing different bootloaders
26b822d67 Add note on how pyudev replaces / with _
cc9e4ea2c Allow to override U-boot bootcmd command
9a007665c Fix 500 when output.yaml is invalid
b3c2d162b Make it easier to spot incomplete test jobs
fe6af63a7 Improve job and device schema validation
4a568e261 Fix directory and file permissions
f33661ae4 env: fix comments about default values
fd6fe12b6 Fix scheduling when putting a device into looping
99b35ba79 Export the full lava-server version
da87efaa6 base-uboot.jinja2: add support for append_dtb and use_xip
babaef51d lava_scheduler_app: api: Add pipeline information to get_device_status
abe787872 Add a note on installing lava-dev
2b6ff1dca MASTER_CERT of lava-master should use secret key
d9e7e2c08 add recipients in notifications.yaml
44eb1b75b Move job outputs to sub-dirs based on submit-time
839b3ff19 Create directories with 0o755 by default
9459aae36 lava-master: call job.output_dir to get the path
8c9af4897 Fix description for devices and workers
541478930 Fix health-check tests by testing None and ''
68b33d572 Use job.output_dir whenever possible
d93df5f4f Add a management command to remove old jobs
bfd57121d Move unnecessary constants into base jinja template.

lava-dispatcher
===========

3be27df4 lxc protocol: simplify the tests
a5d9ebad Fix detection of missing ssh_host value in validate
234eef91 Fix calling of protocols after LXC change
477a36b7 tests: make ShellCommand.logger a DummyLogger
62867f49 Use images.validation.linaro.org files for unit tests
1868936c Silence logging in more unit tests
609c783c Fix download action name
b1701978 More tweaks to silence messages from the unit tests
29efa263 Adjust for pep8 checks in jessie
0733f89f Make sure the test_character_delay is used for all
commands in test shell
78d2509f Account for empty environment string.
ea09b19a Take namespace into account when counting test stages
734688c5 Declare namespace of the test suite in results
49a6b92c Drop noisy info log message.
4fd5d4af Drop unused imports and unused variables
678b216c Rework the removable action to allow sd cards
3b42122a Fix bug #2975 after qemu-nfs introduction
7462ea62 Expand LXC support to add devices from all sybsystems
b853865f Fix missing check for u-boot commands parameters
8293693c Replace invalid characters in test_case_id.
a94e0f24 Fix preseed/late_command appending
db03483a Fix Action names (use - instead of _)
9e836e64 Add a Command Action
bbd4ba25 Make the parser stricter about the block names
65b3a563 Allow to pass integers to run_command
b41326bc Raise JobError instead of NotImplementedError
073f3b34 update gitignore
00f5d33f log: limit the length of lines send other zmq
5b2b0746 LAVA-889 Fix handling of multiple test blocks
3bfdcdb6 Add support for use_xip
8df17dd7 Add support for append_dtb
3c110cae Remove unused imports
176a4d49 Add new utility function infrastructure_error_multi_paths
dc008eb5 Export the full version string.
daebfad1 Extend secondary connection fix to support primary
fae0ced5 Move unnecessary constants into base jinja template.
18f745bc device-types: add kirkwood-db-88f6282.conf
815e518d device-types: add at91sam9m10g45ek.conf
6ca0731f device-types: add at91rm9200ek.conf
003b53dc device-types: add at91-sama5d4_xplained.conf
9b2371cf device-types: add armada-3720-db.conf
5a767331 device-types: add armada-xp-gp.conf
8b050938 device-types: add kirkwood-openblocks_a7.conf
f5801d38 device-types: add alpine-v2-evp.conf
19e20276 device-types: add sama5d34ek.conf
49da704f device-types: add armada-385-db-ap.conf
ed7aa6b1 device-types: add armada-370-db.conf
9a104327 device-types: add at91-sama5d2_xplained.conf
ddb34cea device-types: add alpine-db.conf
f7642f58 device-types: add armada-375-db.conf
0d70c7dd device-types: add at91sam9x35ek.conf
a642d451 device-types: add armada-xp-db.conf
2a5ff9e4 device-types: add armada-7040-db.conf
1a439995 device-types: add orion5x-rd88f5182-nas.conf
ad855a62 device-types: add armada-xp-linksys-mamba.conf
58308ca1 device-types: add armada-388-gp.conf
6a2898bf device-types: add armada-370-rd.conf
cd1b4f47 device-types: add armada-398-db.conf
51925fb1 device-types: add sun8i-a83t-allwinner-h8homlet-v2.conf
4976d068 device-types: add sun8i-a33-sinlinx-sina33.conf
681480d5 device-types: add sama5d35ek.conf
db74d60e device-types: add sun5i-r8-chip.conf
40736b95 device-types: add sama5d36ek.conf
433be0a6 device-types: add imx6q-nitrogen6x.conf
ed4c82e6 device-types: add at91sam9x25ek.conf
ce7a488e device-types: add at91sam9261ek.conf
f87cdb49 device-types: add armada-xp-openblocks-ax3-4.conf
634442ca device-types: add armada-388-clearfog.conf


--

Neil Williams
=============
neil.williams@...
http://www.linux.codehelp.co.uk/
_______________________________________________
Lava-announce mailing list
Lava-announce@...
https://lists.linaro.org/mailman/listinfo/lava-announce

--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Creating a Linux kernel from the stock linux-cip repo

Don Brown <don.brown@...>
 

Hi Ben,

As I was working with the Beaglebone Black trying to get the health check to pass, it occurred to me that I don't know how to create a kernel specifically for the BBB using just the stock CIP Kernel from https://gitlab.com/cip-project/linux-cip

The linux-cip kernel doesn't have a defconfig for the Beaglebone Black, or for the AM335x CPU as far as I can tell.

The Linaro BBB Health Check pulls the am335x-boneblack.dtb from their repository, but it isn't clear what we need to add to the linux-cip tree to have KernelCI build the kernel from scratch. Can you please take a look and let us know?

The official repo is at:
https://github.com/beagleboard/linux

It has a bb.org_defconfig here: https://github.com/beagleboard/linux/blob/4.4/arch/arm/configs/bb.org_defconfig
Is that all we need?


Thank you!



--
Don Brown
Codethink, Ltd.
Software Engineering Consultant
Indianapolis, IN USA
Email: don.brown@...
Mobile: +1 317-560-0513


CIP open workshop at OSSJ: May 30th from 15:00 to 17:00 hours

Agustin Benito Bethencourt <agustin.benito@...>
 

Dear CIP friends,

the day before the OSSJ starts, on May 30th, CIP will have a full day of meetings.

We are organising an open workshop to talk, present, demo and work on technical topics that might be of interest to this Initiative.

The workshop will take place from 15:00 hours to 17:00 hours on May 30th in a room (to be determined) at the OSSJ venue.

Please follow the news about this activity on our wiki page[1]

++ Open workshop. Who can participate?

The workshop is open to all of you who are interested, no matter if you are CIP Members or not. For those of you who are not yet part of CIP, it will be a great opportunity to know a little more about who we are and what we do.

++ How can I register?

If you are attending, it is very important that you tell us in advance since we need to plan the room capacity accordingly. There are two ways in which you can confirm your participation:

1.- Through this mailing list, by providing the following information:
* Name.
* Company.
* Position/role.
* Topics you are interested in.

2.- Directly adding your name to the participants table on the wiki[1].

Please check the wiki page anyway for updates. Again, this is an open workshop. Feel free to join.

++ Can I propose topics?

Yes. Please send your suggestions to this cip-dev mailing list so we create the agenda as soon as possible.

[1] https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipconferences#workshops-and-meetings

Best Regards

--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Re: Update 2017wk17

Robert Marshall <robert.marshall@...>
 

Agustin Benito Bethencourt <agustin.benito@...> writes:


* Instructions for downloading the VM box #74
https://gitlab.com/cip-project/testing/issues/74 ---> Feel free to
download the VM and test it!
....
* Prior to Health Check, you must telnet into the BBB as root, then
quit telnet without logging out. Fixed #28
https://gitlab.com/cip-project/testing/issues/28
I've updated https://gitlab.com/cip-project/testing/issues/74 with a
comment so that the VM box can be issued with the fix for #28 as the
current box was exported before the fix was made.

Robert


Update 2017wk17

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi,

++ CIP testing

Release plan

* The tentative release date is May 16th.
** There is a label called "release" that will make easier to follow the release specific tasks.

* High level release plan #37 https://gitlab.com/cip-project/testing/issues/37

* Currently Robert Marshall and Don Brown are focusing on the release. I will take care of some non-technical tasks together with the management tasks.

* Some improvements needed in the wiki have been identified before the release:
** Wiki content improvements #44 https://gitlab.com/cip-project/testing/issues/44
** v0.9 release: feature page #38 https://gitlab.com/cip-project/testing/issues/38
** v0.9 release: Technical documentation #39 https://gitlab.com/cip-project/testing/issues/39
** Version numbers of software in wiki incorrect #72 https://gitlab.com/cip-project/testing/issues/72

Download service

* We have an issue with the certificates of the S3 service. Codethink operations team is looking into it.
** The service works but it would be ideal to fix the issue before the release #69 https://gitlab.com/cip-project/testing/issues/69

* Instructions for downloading the VM box #74 https://gitlab.com/cip-project/testing/issues/74 ---> Feel free to download the VM and test it!

kernelci update

* As reported, we updated kernelci to match upstream, which impacted us. We have fixed all the issues, so we will ship the VM with the latest version, including a fix for the blocker described in the previous report:
** Web server issues after the update fixed #65 https://gitlab.com/cip-project/testing/issues/65

Vagrant / VirtualBox

* Lachaln joined the team for a few weeks to help us with some issues related to VirtualBox and Vagrant (provisioning/deployment).

* Some issues fixed by the team in this area have been:
** Import of box fails if vagrant instance already running #71 https://gitlab.com/cip-project/testing/issues/71
** Write script to create the supporting files necessary to run the vm from the downloaded machine #50 https://gitlab.com/cip-project/testing/issues/50
** Vagrant bring-up of VM slow or stalls #66 https://gitlab.com/cip-project/testing/issues/66

ser2net and health check: connection between LAVA and the board

* Integration script fixed #58 https://gitlab.com/cip-project/testing/issues/58

* Prior to Health Check, you must telnet into the BBB as root, then quit telnet without logging out. Fixed #28 https://gitlab.com/cip-project/testing/issues/28

* Different Shutdown Messages between sysvinit and systemd #73 https://gitlab.com/cip-project/testing/issues/73

Testing

We are starting to face the last stage which is running tests on the board. We are working on this issue:

* Integrate lava-ci tool into board-at-desk-single-dev VM #20 https://gitlab.com/cip-project/testing/issues/20

Decision to take

* We have been discussing which version of Vagrant/VirtualBox we will recommend to users.
** We had conflicting requirements between what Ben H. recommends, based on what is available in Debian, and what the rest of the team considers ideal in order to support other users.
** The discussion is being held in #34 https://gitlab.com/cip-project/testing/issues/34
** We will come to a conclusion during our next team meeting this coming Thursday May 4th.

OSSJ

We will work on demoing the VM at OSSJ
* High level overview of the work needed for the demo #70 https://gitlab.com/cip-project/testing/issues/70

* A new group label has been created in gitlab to take care about events/conferences related topics, like demos: event: https://gitlab.com/groups/cip-project/labels

++ Kernel maintenance

* cip-kernel-sec project created to keep track of relevant CVEs across mainline and stable branches: https://gitlab.com/cip-project/cip-kernel-sec

* Keep reviewing features from kernel configs sent by members.

* Reviewing 4.4.60 changes before upgrading the CIP kernel.

Best Regards
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Re: -rt for CIP kernel

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi,

On 28/04/17 08:22, Daniel Wagner wrote:
Hi,

I would like to announce that I am starting to get my hands dirty with
maintaining the -rt series for the CIP kernel. I've done a few things
with the -rt patch set in the past, so I am not completely lost. Though I
haven't done any maintenance work so far.
This is very good news.
This is very good news.

Best Regards

--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Re: [Fuego] Discussion about Fuego unified results format

Milo Casagrande <milo.casagrande@...>
 

On Fri, Apr 28, 2017 at 5:08 AM, Daniel Sangorrin
<daniel.sangorrin@...> wrote:

Currently most of the documentation is outdated and we are still fixing
the original proof-of-concept code written by Cogent Embedded.

For the topic of this conversation, the closest to the current status is the results.json section at
http://bird.org/fuego/Unified_Results_Format_Project
Thanks! That is a good starting point for me.

Ok, so my understanding now is that there are multiple schemas (batch, boot, boot_regressions,
build, build_logs, build_logs_summary, compare, job, lab, report, send, token), some of them
containing two sub-schemas (GET and POST) but for non kernel build/boot tests we would only
need to care about the 3+3+3 schemas at https://api.kernelci.org/schema-test.html.
Is that correct?
Yes, you are correct.
For the general tests, you would only need those schemas.

The POST and GET schemas are slightly different because when you POST
something, we extract data from the database and include what we can
infer into the data you GET. That is true if the test data includes
references to build and/or boot reports.

How many individual JSON files would need to be generated/POST'ed for a multi-test-case test suite like LTP?
# For example, suppose 1 testsuite made of 6 test sets with 100 test cases each
# Note: in Fuego we only generate 1 JSON file.
You can send 1, 2 or 3: it depends on how you want to send the data.
You should have a more general overview here of how it can work:
https://api.kernelci.org/collection-test.html

In case of just 1 JSON payload, everything else is "embedded": you
would include into the test_set array everything else (and the
test_set will itself include all the other tests in the test_case
array). Or you could split it up into 3 separate payloads.
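As a sketch, a single embedded payload for an LTP-style run could look like the following; the field names follow my reading of the kernelci test schemas linked above, and the lab name, endpoint path and results are made-up placeholders:

```shell
# Build one "embedded" test_suite payload: the test_set array carries the
# sets, and each set embeds its own test_case array, so a single POST can
# carry everything. Field names are assumptions from the schema docs;
# all values here are illustrative.
cd "$(mktemp -d)"
cat > test-suite.json <<'EOF'
{
  "name": "LTP",
  "lab_name": "lab-fuego-local",
  "test_set": [
    {
      "name": "syscalls",
      "test_case": [
        { "name": "open01", "status": "PASS" },
        { "name": "open02", "status": "FAIL" }
      ]
    }
  ]
}
EOF

# Check the payload is well-formed JSON before POSTing it, e.g. with
# something like (endpoint path and port are assumptions):
#   curl -XPOST -H "Authorization: $TOKEN" -H "Content-Type: application/json" \
#        -d @test-suite.json http://localhost:8888/test/suite
python3 -m json.tool < test-suite.json > /dev/null && echo "payload is valid JSON"
```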

If we make Fuego and KernelCI interact together, Fuego would mainly POST results but
the reporting tool would also GET them.
Right now though, to extract the same data, you would need at least 2
GET requests: once you upload everything, in "embedded" mode or even
with 3 separate POSTs, we only include the references to the other
data.

Due to lack of support on the database side, we couldn't do it the other
way around and re-embed the data back. Now the database supports that,
but we haven't got around to actually implementing it.

By the way, where can I find more information about the "non special" tests?
# I can only see kernel build/boot test tabs at https://kernelci.org.
That's the missing part on kernelci.org: we never had the time to
implement a visualization for the general tests.
I have a branch on github where I started to play around with that,
but it never saw the light of day.

I have prepared a virtual machine with KernelCI and I want to start making experiments
by POST'ing Fuego results from different tests (not just build/boot tests) to KernelCI.
Is that supposed to work out of the box?
More or less yes: you would need to create a token for yourself first
in order to be able to POST the results into the database.

It's good to have that decoupling and possibly (?) default to localhost when
the user doesn't have a separate storage server.
It is possible yes, but that is a detail on the visualization side, at
least from our POV.

If nothing is defined, we default to storage.kernelci.org, but if you
are running locally, you can tweak the frontend config files and make
it so that it will point to "localhost". It will probably need more
logic to build the correct path, but when using the kernelci API to
upload files it has a fixed location.

I think the CIP project with their VMs are already doing something similar.

"kvm_guest": {
"type": "string",
"description": "The name of the KVM guest this test case has been executed on"
},

Do you think it could be changed to something more generic such as "the board" or "the node"?
There should already be a "board" field, at least at the "test_suite" level.

We were asked to include those as well in the "test_set" and
"test_case" schemas, but never got around doing it: we don't have that
much traffic on the test data.

We only included it at the "test_suite" level because for us, a suite
with all its sets and test cases will be run for (almost) each
successfully booted board.

By the way, is KernelCI a community project, with for example a mailing list where I can send patches and a reviewer, etc.?
We don't have a mailing list, but the code is publicly available on
github [1] [2], and you can submit pull requests there.
We will look at them.

That's great.

Actually, it's already virtualized here (the previous link was outdated).
https://gitlab.com/cip-project/board-at-desk-single-dev
Yeah, I know the project. :-)
They are doing a slightly different thing from what we have in mind
(plus they include LAVA in the VM). What we wanted to achieve is a
"docker pull" style of packaging.

We have a basic wireframe of how this should be done here:
https://github.com/kernelci/kernelci-docker
We never really tested it, and it might be a little bit more complex
than a simple "docker pull" command: there are many moving parts, but
with Docker compose that should be achievable.

Let me summarize some action items
- I will try POST'ing Fuego's kernel_build results to KernelCI (I will use CIP's board-at-desk-single-dev VM)
- Is the generic test interface ready to use out of the box?
It should be, yes, but it is definitely not free of bugs or quirks.
The more we use it, the more we will discover things to tweak and fix.

+ If not, is the KernelCI project willing to (or have time/resources for) patching or reviewing patches?
We are willing to review patches, though it might take a little bit
of time.

+ if yes, I will try POST'ing Fuego's Dhrystone and Bonnie results
- Will the KernelCI project collaborate on the board-at-desk-single-dev VM or create a new container?
+ If creating a new one, do you have enough resources or can you give us an approximate date?
We don't really collaborate with them on that, strictly speaking, but
we try to help them whenever we can and whenever they need help.

Our approach with the container is slightly different compared to the
VM one: we are looking at how to "containerize" the system in order to
deploy it, but more "on the cloud" side than "on the desk" side
(although, with Docker, it should be fairly portable).

Ciao.

[1] https://github.com/kernelci/kernelci-frontend
[2] https://github.com/kernelci/kernelci-backend

--
Milo Casagrande
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs


-rt for CIP kernel

Daniel Wagner <daniel.wagner@...>
 

Hi,

I would like to announce that I am starting to get my hands dirty with
maintaining the -rt series for the CIP kernel. I've done a few things
with the -rt patch set in the past, so I am not completely lost,
though I haven't done any maintenance work so far.
My first goal is to get familiar with Steven's scripts for doing -rt
stable releases and to redo what Steven is currently doing for the
4.4-rt stable tree. If this works out, I might even propose myself as
maintainer of 4.4-rt, if Steven agrees. First, though, I need to get
those scripts under control :)

There is certainly a need to discuss what the cip-rt tree should
contain. I stress this because -rt stable trees follow the latest -rt
stable tree: that is, everything which goes into the latest stable is
backported to every stable -rt tree. I guess this is not what cip-rt
is targeting.

Thanks,
Daniel


Re: [Fuego] Discussion about Fuego unified results format

Daniel Sangorrin <daniel.sangorrin@...>
 

Hi Milo,

-----Original Message-----
From: Milo Casagrande [mailto:milo.casagrande@...]
Sent: Thursday, April 27, 2017 5:02 PM
Hi Daniel,

Kevin pointed me to this discussion and I wanted to reply to a few of
the points below.
Thanks to both of you.

As a little bit of background: I'm one of the developers behind
kernelci.org, and I've done most of the work on the API and web UI.
I might be lacking some information or misunderstanding some terms,
so please bear with me; in that case I would appreciate some pointers
to specifications/schemas/docs/READMEs that can help me out.
Currently most of the documentation is outdated, and we are still fixing
the original proof-of-concept code written by Cogent Embedded.

For the topic of this conversation, the closest to the current status is the results.json section at
http://bird.org/fuego/Unified_Results_Format_Project

On Fri, Apr 21, 2017 at 4:37 AM, Daniel Sangorrin
<daniel.sangorrin@...> wrote:

Thanks, I checked it a few months ago, but not in depth yet. At the time I came
to the conclusion that there was a separate schema for each type of test (build,
boot, ...). Has that changed, or is it a misunderstanding on my side?
Ref: https://api.kernelci.org/schema.html
Ref: https://api.kernelci.org/schema-boot.html
Ref: https://api.kernelci.org/schema-build.html

[Note] I think we would rather have a single generic format for all tests.
For kernelci.org, builds and boots are a special kind of "test",
that's why we have always been keeping them separate from everything
else. Builds and boots are what we started building kernelci.org on.
After the build and boot phase, a "test" can be reduced to whatever
else can be run - and gives an output - on a board after it booted
successfully.
OK, so my understanding now is that there are multiple schemas (batch, boot, boot_regressions,
build, build_logs, build_logs_summary, compare, job, lab, report, send, token), some of them
containing two sub-schemas (GET and POST), but for tests other than kernel builds/boots we would
only need to care about the 3+3+3 schemas at https://api.kernelci.org/schema-test.html.
Is that correct?

How many individual JSON files would need to be generated/POST'ed for a multi-test-case test suite like LTP?
# For example, suppose 1 test suite made of 6 test sets with 100 test cases each
# Note: in Fuego we only generate 1 JSON file.
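To make the counting question concrete, here is a rough sketch of how many separate documents a one-document-per-record model would produce for that example. The endpoint paths and field names below are my assumptions for illustration, not the verified KernelCI API:

```python
# Rough sketch: count the documents a one-document-per-record model
# would need for 1 test suite with 6 test sets of 100 test cases each.
# Endpoint paths and field names are assumptions, not the real API.

def build_payloads(suite_name, n_sets=6, n_cases=100):
    payloads = [("/test/suite", {"name": suite_name})]
    for s in range(n_sets):
        payloads.append(("/test/set", {"name": "set-%d" % s,
                                       "test_suite_id": "<suite-id>"}))
        for c in range(n_cases):
            payloads.append(("/test/case", {"name": "case-%d-%d" % (s, c),
                                            "status": "PASS"}))
    return payloads

payloads = build_payloads("Functional.LTP")
print(len(payloads))  # 1 suite + 6 sets + 600 cases = 607 documents
```

Compared with the single JSON file Fuego emits today, that would be 607 documents (and potentially as many POSTs) per run, which is what the question above is getting at.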

If we make Fuego and KernelCI interact together, Fuego would mainly POST results but
the reporting tool would also GET them.

By the way, where can I find more information about the "non special" tests?
# I can only see kernel build/boot test tabs at https://kernelci.org.
I have prepared a virtual machine with KernelCI and I want to start making experiments
by POST'ing Fuego results from different tests (not just build/boot tests) to KernelCI.
Is that supposed to work out of the box?

Actually, the current JSON output goes as follows:

testsuite (e.g.: Functional.LTP)
--board (e.g. Beaglebone black)
----kernel version (e.g.: CIP kernel 4.4.55 ...)
------spec (e.g.: default or quick)
--------build number (like KernelCI build id)
----------groupname <-- we do have groups! (e.g.: 2048b_sector_size)
------------test1 (e.g.: reads)
-------------- measurement
-------------- reference value (e.g. a threshold of Mb/s)
------------test2 (e.g. writes)
------------test3 (e.g.: re-writes)

[Note] We also have the concept of testplans where you can group testsuites
and their specs for a specific board. This is quite useful.
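The tree above, rendered as JSON, might look roughly like this. A sketch only: every name and value here is made up, and this is not the actual Fuego schema.

```python
import json

# Illustrative only: a rough JSON rendering of the Fuego results tree
# sketched above. Every name and value is made up for the example.
results = {
    "Functional.LTP": {                                    # testsuite
        "beaglebone-black": {                              # board
            "4.4.55-cip": {                                # kernel version
                "default": {                               # spec
                    "build-1": {                           # build number
                        "2048b_sector_size": {             # groupname
                            "reads":     {"measurement": 310.2, "reference": 300.0},
                            "writes":    {"measurement": 289.7, "reference": 300.0},
                            "re-writes": {"measurement": 301.4, "reference": 300.0},
                        }
                    }
                }
            }
        }
    }
}

group = (results["Functional.LTP"]["beaglebone-black"]
                ["4.4.55-cip"]["default"]["build-1"]["2048b_sector_size"])
print(json.dumps(group["reads"]))   # one test: measurement plus reference value
```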

Apart from this information we also store timestamps, the test raw output,
fuego variables (this needs improvements but it will be useful for "replaying" tests),
and a developers log (including syslog errors, cat /proc/cpuinfo, ps output etc..).
We don't store raw outputs or logs directly in the schema, if that's
what you meant.
Yeah, we don't either. We just package them (something like LAVA's bundles) and
(proof-of-concept work) send them to a central server.

The test schema includes an "attachment" sub-schema that can be used
to define where those outputs/files are stored. We have a separate
system (storage.kernelci.org) that is used to "archive" artifacts from
the build/boot and potentially from the test phases.
It's good to have that decoupling, and possibly (?) to default to the
local host when the user doesn't have a separate storage server.

We don't rely on the build/boot/test system (Jenkins in this case) to
handle that: we extract what we need and store it where we need it.
You could even store it somewhere else and point the attachment to the
correct URL, then it's up to a visualization implementation to handle
that.

I am checking Kernel CI's output schema(s) from the link you sent:

1) parameters: seems to be the equivalent to our specs
I'm not sure what the "spec" is for Fuego, but the "parameters" for us
is used to store something like the environment variables set and
their values, command line options passed...
Yes, exactly the same. But we are not storing the spec in the results yet, just
the name of the spec. We will have to send the spec as well somehow
when we want to share the results with a centralized server.

2) minimum, maximum, number of samples, samples_sum, samples_sqr_sum: we don't store
information that can be inferred from the data at the moment, we just calculate it when making a report.
I don't remember when we introduced those (keep in mind that they are
not required fields), but the idea was to store some statistical
analysis directly into the tests.
I think the "samples_sqr_sum" description is a little bit off though.

5) kvm_guest: this would be just another board name in Fuego, so we don't include such specific parameter.
It's not a required field, but it is needed for us since we might run
tests on KVM and need to keep track of where exactly they ran.
"kvm_guest": {
"type": "string",
"description": "The name of the KVM guest this test case has been executed on"
},

Do you think it could be changed to something more generic such as "the board" or "the node"?
By the way, is KernelCI a community project with for example a mailing list where I can send patches and there is a reviewer etc..?

6) definition_uri: the URI is inferred from the data in our case right now. In other words, the folder where the
data is stored is a combination of the board's name, testname, spec, build number etc..
7) time: this is stored by jenkins, but not in the json output. We will probably have to analyze the
Jenkins build output XML, extract such information and add it to the JSON output. I think this work is already
done by Cai Song, so I want to merge that.
From what I could see and understand, Fuego is tightly coupled with
Jenkins: kernelci.org is not (or at least tries not to as much as it
can).
kernelci.org doesn't know where the builds are running, nor where the
boots are happening and which systems are being used to do all that.
The same can be extended to the test phase: they can be run anywhere
on completely different systems.

Potentially we can swap Jenkins out and use another build system,
that's why we need to keep track of measurements like this one because
we don't rely on the other systems.
Sorry, I was wrong about '7', we already measure build duration and the
whole test duration in our scripts.

From the architecture point of view, Fuego does not depend on Jenkins anymore.
There are some quirks that need to be fixed in the implementation, but basically
we are going to be decoupled like KernelCI. In fact we will be able to run and
report results from the command line without GUIs or web applications.

8) attachments: we have something similar (success_links and fail_links in the spec) that are used to present a link on
the jenkins interface. This way the user can download the results (e.g.: excel file, a tar.gz file, a log file, a png..).
See above for the "attachment". I'm not sure it's the same as
"[success|fail]_links", but I'm lacking some background info here.
It's kind of similar, but at the moment we are assuming that the files are stored on the host, so the links
are to local files. This is fine for most people, but we should probably support external links in the
future, like KernelCI does with the storage server.

9) metadata: we don't have this at the moment, but I think it's covered by the testlog, devlog, and links.
10) kernel: we have this as fwver (we use the word firmware since it doesn't need to be the linux kernel)
11) defconfig: we do not store this at the moment. In the kernel_build test the spec has a "config" parameter that
has similar functionality though.
12) arch: this is stored as part of the board parameters (the board files contain other variables
such as the toolchain used, path used for tests, etc..)
We extract all those values from either the build or the boot data,
it's mostly needed for searching/indexing capabilities.
The test schemas are probably a little bit tightly coupled with our
concepts of build and boot.

13) created_on: this information is probably stored inside jenkins.
Actually, this will be stored in the variable FUEGO_HOST.

14) lab_name: this seems similar to the information that Tim wants to add for sharing tests.
15) test_set: this looks similar to fuego's groupnames.
16) test_case: we have test case support (called "test" in Fuego, although there is a naming
inconsistency issue in Fuego at the moment). However, I want to add the ability to define or
"undefine" which test cases need to be run.
Hmmm... not sure I get what you meant here.
Sorry, the "undefine" thing was somehow unrelated. I was just mentioning that
specs (Kernel CI's parameters) should allow blacklisting some of the test cases.
Nothing related to the schema.

All the test schemas in kernelci.org are used to provide the results
from test runs, not to define which tests need to be run or not.
In our case that is up to the test definitions, the test runners or
whoever runs the tests. What we are interested in, at least for
kernelci.org, are the results of those runs.

The reporting part in Fuego needs to be improved as well, I will be working on this soon.
I think that reports should be based on templates, so that the user can prepare his/her
own template (e.g.: in Japanese) and Fuego will create the document filling the gaps
with data.
The email we send out are based on some custom templates (built with
Jinja) that potentially could be translated into different languages:
we are using gettext to implement plurals/singular, and most of the
strings in the email template are marked for translation.

We never had the use case for that (nor the time/resources to do
that), but with some work - and some translations - it could be done.
Actually, what you have is fine I think. KernelCI has a GET interface, so the local reporting
tool would just download the necessary results and create the report according
to a template provided by the user. If we have the same GET interface, we can share
the reporting tool.

+ Option 2: Use the KernelCI web app
-> KernelCI web app is a strong option but we may need to extend
some parts. In that case, I would like to work with you and the KernelCI
maintainers because it is too complex to maintain a fork.
Fuego could have a command like "ftc push -f kernelci -s 172.34.5.2" where the
internal custom format would be converted to KernelCI schema and POST'ed
-> The web app must be portable and easy to deploy. We don't want only one
single server on the whole Internet. This work at the CIP project is very
valuable in this sense: https://github.com/cip-project/cip-kernelci
We have plans to move everything to a container based approach, and
that should be more portable than it is now.
That's great.

Actually, it's already virtualized here (the previous link was outdated).
https://gitlab.com/cip-project/board-at-desk-single-dev

I already got KernelCI working with that, but I had to make a small modification to
KernelCI because I work behind a proxy.
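A first cut of the hypothetical "ftc push -f kernelci" command mentioned above could be a small converter plus a POST. The field mapping, endpoint path, and token header in this sketch are invented for illustration; they are not the actual Fuego or KernelCI interfaces.

```python
import json
import urllib.request

# Hypothetical sketch of an "ftc push -f kernelci" style converter.
# Field mapping, endpoint path, and token header are invented for
# illustration only.
def fuego_to_kernelci(result):
    return {
        "name": result["testsuite"],      # e.g. Functional.LTP
        "board": result["board"],
        "kernel": result["fwver"],        # Fuego stores the kernel version as "fwver"
        "lab_name": "lab-fuego-local",
    }

def push(server, token, result, dry_run=True):
    doc = fuego_to_kernelci(result)
    if dry_run:                           # this sketch never touches the network
        return doc
    req = urllib.request.Request(
        server + "/test/suite",
        data=json.dumps(doc).encode(),
        headers={"Authorization": token, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

doc = push("http://172.34.5.2:8888", "secret-token",
           {"testsuite": "Functional.LTP", "board": "bbb", "fwver": "4.4.55-cip"})
print(doc["name"])
```

Whether one document per suite is enough, or the sets and cases need their own POSTs, depends on the schema discussion earlier in the thread.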

Let me summarize some action items
- I will try POST'ing Fuego's kernel_build results to KernelCI (I will use CIP's board-at-desk-single-dev VM)
- Is the generic test interface ready to use out of the box?
+ If not, is the KernelCI project willing to (or have time/resources for) patching or reviewing patches?
+ if yes, I will try POST'ing Fuego's Dhrystone and Bonnie results
- Will the KernelCI project collaborate on the board-at-desk-single-dev VM or create a new container?
+ If creating a new one, do you have enough resources or can you give us an approximate date?

Thanks,
Daniel


CIP testing team meetings

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi,

since we are geographically distributed, the guys from Codethink involved in the Board at Desk development meet through Google Hangouts on Tuesdays and Thursdays.

If any of you is interested in asking questions to the developers directly, or simply in following up on where we are, feel free to join on Thursdays at 11:00 UK time (12:00 CEST). That is in a few minutes today.

Link to the hangout: https://plus.google.com/hangouts/_/codethink.co.uk/team-event-cip?hceid=YWd1c3Rpbi5iZW5pdG9AY29kZXRoaW5rLmNvLnVr.6nlrt0lrh4sf9dd71qruh3qbm4&authuser=1

We will approve your join request right away.

Best Regards

--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Re: [Fuego] Discussion about Fuego unified results format

Milo Casagrande <milo.casagrande@...>
 

Hi Daniel,

Kevin pointed me to this discussion and I wanted to reply to a few of
the points below.

As a little bit of background: I'm one of the developers behind
kernelci.org, and I've done most of the work on the API and web UI.
I might be lacking some information or misunderstanding some terms,
so please bear with me; in that case I would appreciate some pointers
to specifications/schemas/docs/READMEs that can help me out.

On Fri, Apr 21, 2017 at 4:37 AM, Daniel Sangorrin
<daniel.sangorrin@...> wrote:

Thanks, I checked it a few months ago, but not in depth yet. At the time I came
to the conclusion that there was a separate schema for each type of test (build,
boot, ...). Has that changed, or is it a misunderstanding on my side?
Ref: https://api.kernelci.org/schema.html
Ref: https://api.kernelci.org/schema-boot.html
Ref: https://api.kernelci.org/schema-build.html

[Note] I think we would rather have a single generic format for all tests.
For kernelci.org, builds and boots are a special kind of "test",
that's why we have always been keeping them separate from everything
else. Builds and boots are what we started building kernelci.org on.
After the build and boot phase, a "test" can be reduced to whatever
else can be run - and gives an output - on a board after it booted
successfully.

Actually, the current JSON output goes as follows:

testsuite (e.g.: Functional.LTP)
--board (e.g. Beaglebone black)
----kernel version (e.g.: CIP kernel 4.4.55 ...)
------spec (e.g.: default or quick)
--------build number (like KernelCI build id)
----------groupname <-- we do have groups! (e.g.: 2048b_sector_size)
------------test1 (e.g.: reads)
-------------- measurement
-------------- reference value (e.g. a threshold of Mb/s)
------------test2 (e.g. writes)
------------test3 (e.g.: re-writes)

[Note] We also have the concept of testplans where you can group testsuites
and their specs for a specific board. This is quite useful.

Apart from this information we also store timestamps, the test raw output,
fuego variables (this needs improvements but it will be useful for "replaying" tests),
and a developers log (including syslog errors, cat /proc/cpuinfo, ps output etc..).
We don't store raw outputs or logs directly in the schema, if that's
what you meant.

The test schema includes an "attachment" sub-schema that can be used
to define where those outputs/files are stored. We have a separate
system (storage.kernelci.org) that is used to "archive" artifacts from
the build/boot and potentially from the test phases.

We don't rely on the build/boot/test system (Jenkins in this case) to
handle that: we extract what we need and store it where we need it.
You could even store it somewhere else and point the attachment to the
correct URL, then it's up to a visualization implementation to handle
that.

I am checking Kernel CI's output schema(s) from the link you sent:

1) parameters: seems to be the equivalent to our specs
I'm not sure what the "spec" is for Fuego, but the "parameters" for us
is used to store something like the environment variables set and
their values, command line options passed...

2) minimum, maximum, number of samples, samples_sum, samples_sqr_sum: we don't store
information that can be inferred from the data at the moment, we just calculate it when making a report.
I don't remember when we introduced those (keep in mind that they are
not required fields), but the idea was to store some statistical
analysis directly into the tests.
I think the "samples_sqr_sum" description is a little bit off though.

5) kvm_guest: this would be just another board name in Fuego, so we don't include such specific parameter.
It's not a required field, but it is needed for us since we might run
tests on KVM and need to keep track of where exactly they ran.

6) definition_uri: the URI is inferred from the data in our case right now. In other words, the folder where the
data is stored is a combination of the board's name, testname, spec, build number etc..
7) time: this is stored by jenkins, but not in the json output. We will probably have to analyze the
Jenkins build output XML, extract such information and add it to the JSON output. I think this work is already
done by Cai Song, so I want to merge that.
From what I could see and understand, Fuego is tightly coupled with
Jenkins: kernelci.org is not (or at least tries not to as much as it
can).
kernelci.org doesn't know where the builds are running, nor where the
boots are happening and which systems are being used to do all that.
The same can be extended to the test phase: they can be run anywhere
on completely different systems.

Potentially we can swap Jenkins out and use another build system,
that's why we need to keep track of measurements like this one because
we don't rely on the other systems.

8) attachments: we have something similar (success_links and fail_links in the spec) that are used to present a link on
the jenkins interface. This way the user can download the results (e.g.: excel file, a tar.gz file, a log file, a png..).
See above for the "attachment". I'm not sure it's the same as
"[success|fail]_links", but I'm lacking some background info here.

9) metadata: we don't have this at the moment, but I think it's covered by the testlog, devlog, and links.
10) kernel: we have this as fwver (we use the word firmware since it doesn't need to be the linux kernel)
11) defconfig: we do not store this at the moment. In the kernel_build test the spec has a "config" parameter that
has similar functionality though.
12) arch: this is stored as part of the board parameters (the board files contain other variables
such as the toolchain used, path used for tests, etc..)
We extract all those values from either the build or the boot data,
it's mostly needed for searching/indexing capabilities.
The test schemas are probably a little bit tightly coupled with our
concepts of build and boot.

13) created_on: this information is probably stored inside jenkins.
14) lab_name: this seems similar to the information that Tim wants to add for sharing tests.
15) test_set: this looks similar to fuego's groupnames.
16) test_case: we have test case support (called "test" in Fuego, although there is a naming
inconsistency issue in Fuego at the moment). However, I want to add the ability to define or
"undefine" which test cases need to be run.
Hmmm... not sure I get what you meant here.

All the test schemas in kernelci.org are used to provide the results
from test runs, not to define which tests need to be run or not.
In our case that is up to the test definitions, the test runners or
whoever runs the tests. What we are interested in, at least for
kernelci.org, are the results of those runs.

The reporting part in Fuego needs to be improved as well, I will be working on this soon.
I think that reports should be based on templates, so that the user can prepare his/her
own template (e.g.: in Japanese) and Fuego will create the document filling the gaps
with data.
The email we send out are based on some custom templates (built with
Jinja) that potentially could be translated into different languages:
we are using gettext to implement plurals/singular, and most of the
strings in the email template are marked for translation.

We never had the use case for that (nor the time/resources to do
that), but with some work - and some translations - it could be done.

+ Option 2: Use the KernelCI web app
-> KernelCI web app is a strong option but we may need to extend
some parts. In that case, I would like to work with you and the KernelCI
maintainers because it is too complex to maintain a fork.
Fuego could have a command like "ftc push -f kernelci -s 172.34.5.2" where the
internal custom format would be converted to KernelCI schema and POST'ed
-> The web app must be portable and easy to deploy. We don't want only one
single server on the whole Internet. This work at the CIP project is very
valuable in this sense: https://github.com/cip-project/cip-kernelci
We have plans to move everything to a container based approach, and
that should be more portable than it is now.
Ciao.

--
Milo Casagrande
Linaro.org <www.linaro.org> │ Open source software for ARM SoCs


CVE tracking for the kernel

Ben Hutchings <ben.hutchings@...>
 

This week I have done more work on scripts to track CVEs across mainline
and stable branches. I now have scripts to incrementally import data
from Debian and Ubuntu's trackers (descriptions, comments, commit
hashes) and to combine this with git commit logs to work out which
issues affect each branch. I've had to make some manual fixes to the
data, but mostly this Just Works - out of 164 issues imported, I
corrected errors in 3 and added missing information to 32.

I thought about using a database and web forms for this, but that would
complicate hosting. For now, the issues are stored in the git
repository in YAML format, one issue per file. There's validation code
that checks the format quite strictly, so it should be possible to
convert to a database schema at some later date.
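As a sketch of what such strict per-file validation can look like (the field names here are guesses for illustration, not the actual cip-kernel-sec schema):

```python
# Sketch of strict per-issue validation; field names are guesses for
# illustration, not the actual cip-kernel-sec schema.
REQUIRED = {"description", "references", "fixed-by"}

def validate_issue(issue):
    missing = REQUIRED - issue.keys()
    if missing:
        raise ValueError("missing fields: %s" % sorted(missing))
    if not isinstance(issue["fixed-by"], dict):   # branch name -> commit hashes
        raise ValueError("fixed-by must map branch names to commit hashes")
    return True

issue = {
    "description": "Example issue (illustrative, not a real CVE)",
    "references": ["https://example.org/advisory"],
    "fixed-by": {"mainline": ["0123abc"]},
}
print(validate_issue(issue))
```

Checking every field this strictly up front is what makes a later conversion to a database schema tractable.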

The scripts and issue data are now at:
https://gitlab.com/cip-project/cip-kernel-sec

Ben.

--
Ben Hutchings
Software Developer, Codethink Ltd.


Re: Problems installing "Board at desk"

Robert Marshall <robert.marshall@...>
 

Daniel

Thanks for this - comments below...

"Daniel Sangorrin" <daniel.sangorrin@...> writes:

Hi,

I am trying the "board-at-desk-single-dev" project (sorry to do it so late).
I managed to build the CIP kernel and see the job results using KernelCI.
However, I am having problems with LAVA health checks (see at the end).

First, I would like to report on a few problems I had to solve to get the kernel built:

*************************************************
1) Although now I know that the current development occurs at
https://gitlab.com/cip-project/board-at-desk-single-dev.git
googling "CIP kernelci" gives also the following outdated (?) sites which can
be confusing:
https://github.com/cip-project/cip-kernelci.git
https://gitlab.com/cip-project/kernelci-debian.git

Q: are they necessary? or is it some misunderstanding from my side?
I think this is largely addressed by Wolfgang's comment

2) Problems behind a proxy

a) I did the following to setup proxy settings for vagrant on Ubuntu 16.04 Xenial:

$ sudo apt-get remove vagrant <-- gives errors when installing vagrant-proxyconf
$ dpkg -i vagrant_1.9.4_x86_64.deb
$ vagrant plugin install vagrant-proxyconf
$ vi Vagrantfile
+ if Vagrant.has_plugin?("vagrant-proxyconf")
+ config.proxy.http = "http://xxx:yyyy/"
+ config.proxy.https = "https://xxx:yyy/"
+ config.proxy.no_proxy = "127.0.0.1,localhost,xxxx."
+ end

Q: maybe it would be good to add this to the tutorial
The tutorial recommends 1.8.1 but that's the default version for 16.04
so any tutorial change needs to address both those concerns


b) I got an error during vagrant up

==> default: fatal: [kernel-ci-backend]: FAILED! => {"changed": false, "cmd": "/usr/bin/apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv EA312927", "failed": true,
- Solved it by adding port 80 for apt-key
$ vagrant ssh
guest$ vi kernelci-backend/roles/install-deps/tasks/install-mongodb.yml
- hkp://keyserver.ubuntu.com
+ hkp://keyserver.ubuntu.com:80
[Alt] sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

Q: if that works without proxies, maybe it should be set to 80 by default?
Will test and see, if so there'll probably need to be a change in install_backend.sh

c) I got a lot of warnings like these ones

- Warning 1 (ignore)
"GetPassWarning: Can not control echo on the terminal." or "Warning: Password input may be echoed." - These do not affect the operation of the KernelCI VM.
- Warning 2 (ignore)
==> default: lava_scheduler_app.Notification.job_status_trigger: (fields.W901) CommaSeparatedIntegerField has been deprecated. Support for it (except in historical migrations) will be removed in Django 2.0.
==> default: HINT: Use CharField(validators=[validate_comma_separated_integer_list]) instead.

Q: The tutorial mentions Warning 1, but not Warning 2. Maybe adding that would be a good idea.
The tutorial is going to be reorganised before the release so we will
take this on board.


3) Modifying the 8080 port (very commonly used port, e.g. Fuego ;_+)

I solved this by
$ vi Vagrantfile
+ config.vm.network :forwarded_port, guest: 8081, host: 8081
$ sudo vi /etc/apache2/ports.conf
-> change to 8081
$ sudo vi /etc/apache2/sites-enabled/lava-server.conf
-> change to 8081
$ sudo service apache2 restart
$ sudo /etc/init.d/lava-server restart

Q: maybe this could be automated (?)
This would be the right solution - for the moment the best approach is
to edit integration-scripts/install_lava.sh on the host before creating
the VM and change 8080 to the desired port there (as well as the
Vagrantfile mod)

*************************************************

Second, regarding LAVA health checks: I think this is again a problem with being behind a
proxy, but I'm not sure how to debug it. These are the error messages that I get
with QEMU's health check (/vagrant/tests/qemu-health-check.yaml):

- log:
Root tmp directory created at /var/lib/lava/dispatcher/tmp/7
start: 0 validate
Validating that https://images.validation.linaro.org/kvm/standard/stretch-2.img.gz exists
no device environment specified
Invalid job definition
Invalid job data: ["HTTPSConnectionPool(host='images.validation.linaro.org', port=443): Max retries exceeded with url: /kvm/standard/stretch-2.img.gz (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f430c9b9c50>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))"]
validate duration: 0.02
Cleanup: removing /var/lib/lava/dispatcher/tmp/7
- traceback
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line 88, in run_pipeline_job
job.validate(simulate=validate_only)
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/job.py", line 173, in validate
self.pipeline.validate_actions()
File "/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/action.py", line 205, in validate_actions
raise JobError("Invalid job data: %s\n" % self.errors)
JobError: Invalid job data: ["HTTPSConnectionPool(host='images.validation.linaro.org', port=443): Max retries exceeded with url: /kvm/standard/stretch-2.img.gz (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f430c9b9c50>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))"]

Looking at this at the moment


If someone has a clue about this please let me know.
# http_proxy-like variables are all defined in the VM (/etc/environment), and I can use wget and download
stretch-2.img.gz without problems.
Thanks for your report and suggestions

Robert


Re: Problems installing "Board at desk"

Agustin Benito Bethencourt <agustin.benito@...>
 

Hi Daniel,

On 26/04/17 10:04, Wolfgang Mauerer wrote:
Hi Yoshi,

On 26.04.2017 11:00, Yoshitake Kobayashi wrote:
Hi Daniel and Wolfgang,

Daniel>
As Wolfgang mentioned, please use Gitlab repositories.

I kept cip-project on Github to create a mirror from Gitlab, for
network bandwidth reasons.
Currently, only the linux-cip repository is automatically synchronized
with the CIP official repository on Gitlab.
If others are OK with it, I will work to create exactly the same
repository set from Gitlab.
that's of course also fine for me.
Check this wiki page: https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptesting

There is a link to this repo in gitlab.com: https://gitlab.com/cip-project/board-at-desk-single-dev/tree/master

Download page: https://wiki.linuxfoundation.org/civilinfrastructureplatform/cipdownload

Feature page: https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptestingboardatdesksingledevfeaturepage

Hopefully tomorrow I will send the report of the previous couple of weeks, including the link to the first VM for testing.


Thanks, Wolfgang

Best regards,
Yoshi

Wed, Apr 26, 2017 17:36 Wolfgang Mauerer <wolfgang.mauerer@...
<mailto:wolfgang.mauerer@...>>:

Hi Daniel,

On 26.04.2017 10:27, Daniel Sangorrin wrote:
> Hi,
>
> I am trying the "board-at-desk-single-dev" project (sorry to do it
so late).
> I managed to build the CIP kernel and see the job results using
KernelCI.
> However, I am having problems with LAVA health checks (see at the
end).
>
> First, I would like to report on a few problems I had to solve to
get the kernel built:
>
> *************************************************
> 1) Although now I know that the current development occurs at
> https://gitlab.com/cip-project/board-at-desk-single-dev.git
> googling "CIP kernelci" gives also the following outdated (?)
sites which can
> be confusing:
> https://github.com/cip-project/cip-kernelci.git
the github resource is kept for historic reasons. Unless anyone
disagrees, I'm going to remove it.

Thanks, Wolfgang

> https://gitlab.com/cip-project/kernelci-debian.git
>
> Q: are they necessary? or is it some misunderstanding from my side?
>
> 2) Problems behind a proxy
>
> a) I did the following to setup proxy settings for vagrant on
Ubuntu 16.04 Xenial:
>
> $ sudo apt-get remove vagrant <-- gives errors when installing
vagrant-proxyconf
> $ dpkg -i vagrant_1.9.4_x86_64.deb
> $ vagrant plugin install vagrant-proxyconf
> $ vi Vagrantfile
> + if Vagrant.has_plugin?("vagrant-proxyconf")
> + config.proxy.http = "http://xxx:yyyy/"
> + config.proxy.https = "https://xxx:yyy/"
> + config.proxy.no_proxy = "127.0.0.1,localhost,xxxx."
> + end
>
> Q: maybe it would be good to add this to the tutorial
>
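For the tutorial, a small pre-flight check along these lines could catch the proxy setup issues before `vagrant up` (a sketch; the messages are illustrative, and nothing here modifies any configuration):

```shell
#!/bin/sh
# Sketch: verify the proxy plugin and host proxy settings before 'vagrant up'.
if command -v vagrant >/dev/null 2>&1 \
        && vagrant plugin list 2>/dev/null | grep -q vagrant-proxyconf; then
    PLUGIN_MSG="vagrant-proxyconf installed"
else
    PLUGIN_MSG="vagrant-proxyconf missing: run 'vagrant plugin install vagrant-proxyconf'"
fi
echo "$PLUGIN_MSG"

# The plugin reads config.proxy.* from the Vagrantfile; warn if the host
# itself has no proxy configured, since the guest then inherits nothing.
if [ -n "${http_proxy:-}" ]; then
    echo "host proxy: $http_proxy"
else
    echo "no http_proxy on the host; config.proxy.* must be set by hand"
fi
```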
> b) I got an error during vagrant up
>
> ==> default: fatal: [kernel-ci-backend]: FAILED! => {"changed":
false, "cmd": "/usr/bin/apt-key adv --keyserver
hkp://keyserver.ubuntu.com --recv EA312927", "failed": true,
> - Solved it by adding port 80 for apt-key
> $ vagrant ssh
> guest$ vi kernelci-backend/roles/install-deps/tasks/install-mongodb.yml
> - hkp://keyserver.ubuntu.com
> + hkp://keyserver.ubuntu.com:80
> [Alt] sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
>
> Q: if that works without proxies, maybe it should be set to 80 by
default?
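Defaulting to port 80 does look safe: HKP normally uses port 11371, which many corporate proxies block, while hkp://…:80 also works without a proxy. A sketch of the pinned command (the key ID is the one from the failing task; it is only echoed here, since the real call needs root and network access):

```shell
#!/bin/sh
# Sketch: pin the keyserver to port 80 so it works behind proxies too.
KEYSERVER="hkp://keyserver.ubuntu.com:80"
KEY_ID="EA312927"    # the MongoDB signing key from the failing Ansible task

CMD="apt-key adv --keyserver $KEYSERVER --recv $KEY_ID"
echo "would run: $CMD"    # echoed only; run with sudo on the guest
```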
>
> c) I got a lot of warnings like these ones
>
> - Warning 1 (ignore)
> “GetPassWarning: Can not control echo on the terminal.” or
“Warning: Password input may be echoed.” - These do not affect the
operation of the KernelCI VM.
> - Warning 2 (ignore)
> ==> default:
lava_scheduler_app.Notification.job_status_trigger: (fields.W901)
CommaSeparatedIntegerField has been deprecated. Support for it
(except in historical migrations) will be removed in Django 2.0.
> ==> default: HINT: Use
CharField(validators=[validate_comma_separated_integer_list])
instead.
>
> Q: The tutorial mentions Warning 1, but not Warning 2. Maybe
adding that would be a good idea.
>
> 3) Modifying the 8080 port (very commonly used port, e.g. Fuego
;_+)
>
> I solved this by
> $ vi Vagrantfile
> + config.vm.network :forwarded_port, guest: 8081, host:
8081
> $ sudo vi /etc/apache2/ports.conf
> -> change to 8081
> $ sudo vi /etc/apache2/sites-enabled/lava-server.conf
> -> change to 8081
> $ sudo service apache2 restart
> $ sudo /etc/init.d/lava-server restart
>
> Q: maybe this could be automated (?)
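The port move could indeed be scripted. A minimal sketch, run against a scratch directory so it is safe to execute as-is; pointing `CONF_DIR` at `/etc/apache2` on the guest (plus updating the `forwarded_port` line in the Vagrantfile and restarting apache2 and lava-server) would do it for real:

```shell
#!/bin/sh
# Sketch: switch every occurrence of the old port to the new one in the
# Apache configuration files, demonstrated on stand-in files.
set -e
OLD=8080
NEW=8081
CONF_DIR=$(mktemp -d)

# Stand-ins for ports.conf and sites-enabled/lava-server.conf
printf 'Listen %s\n' "$OLD" > "$CONF_DIR/ports.conf"
printf '<VirtualHost *:%s>\n</VirtualHost>\n' "$OLD" > "$CONF_DIR/lava-server.conf"

# \b keeps the substitution to whole port numbers (GNU sed)
for f in "$CONF_DIR"/*.conf; do
    sed -i "s/\b$OLD\b/$NEW/g" "$f"
done

CHANGED=$(grep -h "$NEW" "$CONF_DIR"/*.conf)
echo "$CHANGED"
rm -rf "$CONF_DIR"
```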
> *************************************************
>
> Second, regarding the LAVA health checks, I think this is again a
problem with being behind a
> proxy but I'm not sure how to debug it. These are the error
messages that I get
> with QEMU's health check (/vagrant/tests/qemu-health-check.yaml)
>
> - log:
> Root tmp directory created at
/var/lib/lava/dispatcher/tmp/7
> start: 0 validate
> Validating that
https://images.validation.linaro.org/kvm/standard/stretch-2.img.gz
exists
> no device environment specified
> Invalid job definition
> Invalid job data:
["HTTPSConnectionPool(host='images.validation.linaro.org', port=443): Max retries
exceeded with url: /kvm/standard/stretch-2.img.gz (Caused by

NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection

object at 0x7f430c9b9c50>: Failed to establish a new connection:
[Errno -5] No address associated with hostname',))"]
> validate duration: 0.02
> Cleanup: removing /var/lib/lava/dispatcher/tmp/7
> - traceback
> Traceback (most recent call last):
> File
"/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line
88, in run_pipeline_job
> job.validate(simulate=validate_only)
> File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/job.py",
line 173, in validate
> self.pipeline.validate_actions()
> File

"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/action.py",
line 205, in validate_actions
> raise JobError("Invalid job data: %s\n" % self.errors)
> JobError: Invalid job data:
["HTTPSConnectionPool(host='images.validation.linaro.org', port=443): Max retries
exceeded with url: /kvm/standard/stretch-2.img.gz (Caused by

NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection

object at 0x7f430c9b9c50>: Failed to establish a new connection:
[Errno -5] No address associated with hostname',))"]
>
> If someone has a clue about this please let me know.
> # http_proxy-like variables are all defined in the VM
(/etc/environment), and I can use wget and download
> stretch-2.img.gz without problems.
>
> Best regards,
> Daniel
>
>
>
> _______________________________________________
> cip-dev mailing list
> cip-dev@...
<mailto:cip-dev@...>
> https://lists.cip-project.org/mailman/listinfo/cip-dev
>
--
Agustin Benito Bethencourt
Principal Consultant - FOSS at Codethink
agustin.benito@...


Re: Problems installing "Board at desk"

Mauerer, Wolfgang
 

Hi Yoshi,

On 26.04.2017 11:00, Yoshitake Kobayashi wrote:
Hi Daniel and Wolfgang,

Daniel>
As Wolfgang mentioned, please use Gitlab repositories.

I kept cip-project on Github as a mirror of Gitlab for network bandwidth
reasons.
Currently, only the linux-cip repository is automatically synchronized with
the official CIP repository on Gitlab.
If others are OK with it, I will work on creating exactly the same repository
set from Gitlab.
that's of course also fine for me.

Thanks, Wolfgang

Best regards,
Yoshi

Wed, 26 Apr 2017 17:36, Wolfgang Mauerer <wolfgang.mauerer@...>:

Hi Daniel,

On 26.04.2017 10:27, Daniel Sangorrin wrote:
> Hi,
>
> I am trying the "board-at-desk-single-dev" project (sorry to do it
so late).
> I managed to build the CIP kernel and see the job results using
KernelCI.
> However, I am having problems with LAVA health checks (see at the
end).
>
> First, I would like to report on a few problems I had to solve to
get the kernel built:
>
> *************************************************
> 1) Although now I know that the current development occurs at
> https://gitlab.com/cip-project/board-at-desk-single-dev.git
> googling "CIP kernelci" gives also the following outdated (?)
sites which can
> be confusing:
> https://github.com/cip-project/cip-kernelci.git
the github resource is kept for historic reasons. Unless anyone
disagrees, I'm going to remove it.

Thanks, Wolfgang

> https://gitlab.com/cip-project/kernelci-debian.git
>
> Q: are they necessary? or is it some misunderstanding from my side?
>
> 2) Problems behind a proxy
>
> a) I did the following to setup proxy settings for vagrant on
Ubuntu 16.04 Xenial:
>
> $ sudo apt-get remove vagrant <-- gives errors when installing
vagrant-proxyconf
> $ dpkg -i vagrant_1.9.4_x86_64.deb
> $ vagrant plugin install vagrant-proxyconf
> $ vi Vagrantfile
> + if Vagrant.has_plugin?("vagrant-proxyconf")
> + config.proxy.http = "http://xxx:yyyy/"
> + config.proxy.https = "https://xxx:yyy/"
> + config.proxy.no_proxy = "127.0.0.1,localhost,xxxx."
> + end
>
> Q: maybe it would be good to add this to the tutorial
>
> b) I got an error during vagrant up
>
> ==> default: fatal: [kernel-ci-backend]: FAILED! => {"changed":
false, "cmd": "/usr/bin/apt-key adv --keyserver
hkp://keyserver.ubuntu.com --recv EA312927", "failed": true,
> - Solved it by adding port 80 for apt-key
> $ vagrant ssh
> guest$ vi kernelci-backend/roles/install-deps/tasks/install-mongodb.yml
> - hkp://keyserver.ubuntu.com
> + hkp://keyserver.ubuntu.com:80
> [Alt] sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
>
> Q: if that works without proxies, maybe it should be set to 80 by
default?
>
> c) I got a lot of warnings like these ones
>
> - Warning 1 (ignore)
> “GetPassWarning: Can not control echo on the terminal.” or
“Warning: Password input may be echoed.” - These do not affect the
operation of the KernelCI VM.
> - Warning 2 (ignore)
> ==> default:
lava_scheduler_app.Notification.job_status_trigger: (fields.W901)
CommaSeparatedIntegerField has been deprecated. Support for it
(except in historical migrations) will be removed in Django 2.0.
> ==> default: HINT: Use
CharField(validators=[validate_comma_separated_integer_list]) instead.
>
> Q: The tutorial mentions Warning 1, but not Warning 2. Maybe
adding that would be a good idea.
>
> 3) Modifying the 8080 port (very commonly used port, e.g. Fuego ;_+)
>
> I solved this by
> $ vi Vagrantfile
> + config.vm.network :forwarded_port, guest: 8081, host: 8081
> $ sudo vi /etc/apache2/ports.conf
> -> change to 8081
> $ sudo vi /etc/apache2/sites-enabled/lava-server.conf
> -> change to 8081
> $ sudo service apache2 restart
> $ sudo /etc/init.d/lava-server restart
>
> Q: maybe this could be automated (?)
> *************************************************
>
> Second, regarding the LAVA health checks, I think this is again a
problem with being behind a
> proxy but I'm not sure how to debug it. These are the error
messages that I get
> with QEMU's health check (/vagrant/tests/qemu-health-check.yaml)
>
> - log:
> Root tmp directory created at /var/lib/lava/dispatcher/tmp/7
> start: 0 validate
> Validating that
https://images.validation.linaro.org/kvm/standard/stretch-2.img.gz
exists
> no device environment specified
> Invalid job definition
> Invalid job data:
["HTTPSConnectionPool(host='images.validation.linaro.org', port=443): Max retries
exceeded with url: /kvm/standard/stretch-2.img.gz (Caused by
NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection
object at 0x7f430c9b9c50>: Failed to establish a new connection:
[Errno -5] No address associated with hostname',))"]
> validate duration: 0.02
> Cleanup: removing /var/lib/lava/dispatcher/tmp/7
> - traceback
> Traceback (most recent call last):
> File
"/usr/lib/python2.7/dist-packages/lava/dispatcher/commands.py", line
88, in run_pipeline_job
> job.validate(simulate=validate_only)
> File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/job.py",
line 173, in validate
> self.pipeline.validate_actions()
> File
"/usr/lib/python2.7/dist-packages/lava_dispatcher/pipeline/action.py",
line 205, in validate_actions
> raise JobError("Invalid job data: %s\n" % self.errors)
> JobError: Invalid job data:
["HTTPSConnectionPool(host='images.validation.linaro.org', port=443): Max retries
exceeded with url: /kvm/standard/stretch-2.img.gz (Caused by
NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection
object at 0x7f430c9b9c50>: Failed to establish a new connection:
[Errno -5] No address associated with hostname',))"]
>
> If someone has a clue about this please let me know.
> # http_proxy-like variables are all defined in the VM
(/etc/environment), and I can use wget and download
> stretch-2.img.gz without problems.
>
> Best regards,
> Daniel
>
>
>
> _______________________________________________
> cip-dev mailing list
> cip-dev@... <mailto:cip-dev@...>
> https://lists.cip-project.org/mailman/listinfo/cip-dev
>
