Re: [C-safe-secure-studygroup] [SystemSafety] Critical systems Linux
This is very helpful, thank you. I'm going to re-add the other lists, for the same reason as before, and hope you're OK with that. Please see my comments inline below...
On 2018-11-22 10:26, Clive Pygott wrote:
> I'll have a go at your question - FYI my background is system safety [...]

Understood, and I'd be surprised if anyone would challenge this reasoning.
> However, a standard like 61508 takes a layered approach to safety. [...]

I'm not sure I understand "layered approach", since I've heard it used in multiple contexts outside safety (security for one, and general architecture/abstraction for another). Are you saying that the aim is redundant/overlapping safety measures, to avoid a single point of failure, or something else?
> The topmost [...]

OK, understood. In previous discussions I've been trying to understand whether there's any fundamental reason that such requirements need to exist before the software, or whether they could be originated for a specific system and then considered/applied to pre-existing code. Is there a hard and fast argument one way or the other?
> From the software perspective, under this is the requirement to show [...]

OK, makes sense.
> * showing the code is well behaved under all circumstances

Here I fall off the horse. I don't believe we can be 100% certain about "all circumstances", except for small/constrained/simple systems. So I distrust claims of certainty about the behaviour of modern COTS multicore microprocessors, for example.
> If you are doing full formal semantic verification, the second step is [...]

It's not fair to single out any individual project/system/community, but as an example, seL4's formal proof of correctness proved insufficient in the context of Spectre/Meltdown. I'd be (pleasantly) surprised if any semantic proof can withstand misbehaviour of the hardware/firmware/OS/tooling underneath it.
> However, in practice formal proof is [...]

Understood, and agreed.
> The validation part aims to improve the (albeit flawed) belief in [...]

OK. If I understand correctly, this is mainly:

- 'coding standards lead to tests we can run' - and once we are into tests, we have to consider the applicability, correctness and completeness of the tests, in addition to the "flawed belief in contiguous behaviour"
- and possibly some untestable guidance/principles, which may or may not be relevant/correct
> To get back to the original discussion, it is staggeringly naive to [...]

I've also seen that attitude (in part, that's why I'm here), but I certainly wouldn't dream of suggesting it.
> What the certified kernel (or any other [...]

OK in principle. However, from some of the discussion (which I won't rehash here) it seemed to me that some of the safety folks were expressly not confident in some of the certified/advertised/claimed behaviours.
> As I read your original message you are asking 'why can't a wide user [...]

If that's the question I seemed to be asking, I apologise; I certainly wouldn't count a wide user base as evidence of correctness. It is evidence of something, though, and that evidence may be part of what could be assessed when considering the usefulness of software.
> The short answer is, do [...]

I totally agree - which brings us back to the need for required/architected behaviours/properties.
> Is my project about to use some [...]

Makes sense. Thanks again, Clive!