Critical Systems
Topics covered
A simple safety-critical system
System dependability
Availability and reliability
Safety
Security
Critical Systems
Safety-critical systems
• Failure results in loss of life, injury or damage to the
environment;
• Chemical plant protection system;
Mission-critical systems
• Failure results in failure of some goal-directed activity;
• Spacecraft navigation system;
Business-critical systems
• Failure results in high economic losses;
• Customer accounting system in a bank;
System dependability
For critical systems, it is usually the case that the
most important system property is the
dependability of the system.
The dependability of a system reflects the user’s
degree of trust in that system. It reflects the
extent of the user’s confidence that it will operate
as users expect and that it will not ‘fail’ in normal
use.
Importance of dependability
Systems that are not dependable, whether unreliable, unsafe or insecure, may be rejected by their users.
The costs of system failure may be very high.
Undependable systems may cause information
loss with a high consequent recovery cost.
Socio-technical critical systems
Hardware failure
• Hardware fails because of design and manufacturing
errors or because components have reached the end
of their natural life.
Software failure
• Software fails due to errors in its specification, design
or implementation.
Operational failure
• Human operators make mistakes. Now perhaps the
largest single cause of system failures.
A software-controlled insulin pump
Used by diabetics to simulate the function of the
pancreas which manufactures insulin, an
essential hormone that metabolizes blood
glucose.
Measures blood glucose (sugar) using a micro-
sensor and computes the insulin dose required to
metabolize the glucose.
Insulin pump organisation
[Block diagram: needle assembly, sensor, two displays, alarm, pump, clock, controller, power supply, insulin reservoir.]
Insulin pump data-flow
[Data-flow diagram: the blood sugar sensor reads blood parameters from the blood; blood sugar analysis derives the blood sugar level; insulin requirement computation produces the insulin requirement; the insulin delivery controller issues pump control commands to the insulin pump, which delivers the insulin.]
Dependability requirements
The system shall be available to deliver insulin
when required to do so.
The system shall perform reliably and deliver the
correct amount of insulin to counteract the current
level of blood sugar.
The essential safety requirement is that
excessive doses of insulin should never be
delivered as this is potentially life threatening.
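A minimal sketch of how this safety requirement might be enforced in the controller, independently of the dose computation; the limits and function below are illustrative assumptions, not the pump's actual algorithm.

```python
# Illustrative sketch only: the limits below are invented for
# exposition; a real pump's values are clinically determined.

SINGLE_DOSE_LIMIT = 4.0    # assumed maximum units per delivery
DAILY_DOSE_LIMIT = 25.0    # assumed maximum cumulative units per day

def safe_dose(computed_dose: float, delivered_today: float) -> float:
    """Clamp the computed dose so that an excessive dose can never
    be delivered, even if the upstream computation is faulty."""
    dose = min(computed_dose, SINGLE_DOSE_LIMIT)
    dose = min(dose, DAILY_DOSE_LIMIT - delivered_today)
    return max(dose, 0.0)    # never command a negative delivery
```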
Dependability
The dependability of a system equates to its
trustworthiness.
A dependable system is a system that is trusted
by its users.
Principal dimensions of dependability are:
• Availability;
• Reliability;
• Safety;
• Security
Dimensions of dependability
Availability
• The ability of the system to deliver services when requested.
Reliability
• The ability of the system to deliver services as specified.
Safety
• The ability of the system to operate without catastrophic failure.
Security
• The ability of the system to protect itself against accidental or deliberate intrusion.
Other dependability properties
Repairability
• Reflects the extent to which the system can be repaired in the
event of a failure
Maintainability
• Reflects the extent to which the system can be adapted to new
requirements;
Survivability
• Reflects the extent to which the system can deliver services
whilst under hostile attack;
Error tolerance
• Reflects the extent to which user input errors can be avoided
and tolerated.
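As an illustration of error tolerance, a small sketch of input handling that corrects or rejects likely user mistakes before they enter the system; the value range is an assumed example.

```python
def read_glucose_reading(raw: str) -> float:
    """Tolerate common input errors: stray whitespace and a comma
    used as a decimal separator are corrected; garbage and
    implausible values are rejected. The plausible range of
    1-30 mmol/L is an assumed example."""
    value = float(raw.strip().replace(",", "."))  # ValueError on garbage
    if not 1.0 <= value <= 30.0:
        raise ValueError(f"implausible glucose reading: {value}")
    return value
```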
Maintainability
A system attribute that is concerned with the ease
of repairing the system after a failure has been
discovered or changing the system to include
new features
Very important for critical systems as faults are
often introduced into a system because of
maintenance problems
Survivability
The ability of a system to continue to deliver its
services to users in the face of deliberate or
accidental attack
This is an increasingly important attribute for
distributed systems whose security can be
compromised
Survivability subsumes the notion of resilience: the ability of a system to continue in operation in spite of component failures
Dependability vs performance
Untrustworthy systems may be rejected by their users
System failure costs may be very high
It is very difficult to tune systems to make them more
dependable
It may be possible to compensate for poor performance
Untrustworthy systems may cause loss of valuable
information
Dependability costs
Dependability costs tend to increase exponentially as
increasing levels of dependability are required
There are two reasons for this
• The use of more expensive development techniques and
hardware that are required to achieve the higher levels of
dependability
• The increased testing and system validation that is required to
convince the system client that the required levels of
dependability have been achieved
Costs of increasing dependability
[Graph: cost climbs steeply, roughly exponentially, as the required dependability rises through low, medium, high, very high and ultra-high.]
Availability and reliability
Reliability
• The probability of failure-free system operation over a
specified time in a given environment for a given
purpose
Availability
• The probability that a system, at a point in time, will
be operational and able to deliver the requested
services
Availability and reliability
It is sometimes possible to subsume system
availability under system reliability
• Obviously if a system is unavailable it is not delivering
the specified system services
However, it is possible to have systems with low
reliability that must be available. So long as
system failures can be repaired quickly and do
not damage data, low reliability may not be a
problem
Availability takes repair time into account
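The distinction can be made concrete with the standard steady-state availability formula A = MTTF / (MTTF + MTTR), which includes repair time explicitly. A small sketch with invented figures:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is
    operational, given mean time to failure and mean time to repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that fails every 100 hours but is repaired in 6 minutes
# has low reliability yet is about 99.9% available.
print(availability(100.0, 0.1))    # 0.9990...
```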
Reliability terminology
System failure
• An event that occurs at some point in time when the system does not deliver a service as expected by its users.
System error
• An erroneous system state that can lead to system behaviour that is unexpected by system users.
System fault
• A characteristic of a software system that can lead to a system error. For example, failure to initialise a variable could lead to that variable having the wrong value when it is used.
Human error or mistake
• Human behaviour that results in the introduction of faults into a system.
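The variable-initialisation example from the table, sketched in code: the fault stays latent until the faulty path executes, at which point it becomes an error and then a visible failure (the function and values are hypothetical).

```python
def classify(score):
    # Fault: `label` is not initialised on every path.
    if score >= 50:
        label = "pass"
    # For score < 50, `label` is never set.
    return label

classify(70)    # fault present but dormant: this path sets label
# classify(30)  # error becomes failure: raises UnboundLocalError
```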
Faults and failures
Failures are usually a result of system errors that are derived from faults in the system
However, faults do not necessarily result in system errors
• The faulty system state may be transient and ‘corrected’ before
an error arises
Errors do not necessarily lead to system failures
• The error can be corrected by built-in error detection and
recovery
• The failure can be protected against by built-in protection
facilities. These may, for example, protect system resources
from system errors
Reliability achievement
Fault avoidance
• Development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults
Fault detection and removal
• Verification and validation techniques that increase the
probability of detecting and correcting errors before the system
goes into service are used
Fault tolerance
• Run-time techniques are used to ensure that system faults do
not result in system errors and/or that system errors do not lead
to system failures
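As one illustration of a fault-tolerance tactic, a minimal sketch of majority voting over redundant versions (triple modular redundancy); the three versions stand in for independently developed implementations.

```python
from collections import Counter

def tmr(x, v1, v2, v3):
    """Run three independently developed versions and return the
    majority result, so a fault in any one version does not
    become a system failure."""
    value, votes = Counter([v1(x), v2(x), v3(x)]).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: fault cannot be masked")
    return value

# e.g. tmr(4, lambda n: n * n, lambda n: n ** 2, lambda n: pow(n, 2)) == 16
```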
Reliability modelling
You can model a system as an input-output
mapping where some inputs will result in
erroneous outputs
The reliability of the system is the probability that a particular input will not lie in the set of inputs that cause erroneous outputs
Different people will use the system in different
ways so this probability is not a static system
attribute but depends on the system’s
environment
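This view suggests a simple estimation method: draw inputs from one user's operational profile and count how many avoid the erroneous-input set. A sketch in which the program, the profile and the error check are all hypothetical stand-ins:

```python
import random

def estimate_reliability(program, operational_profile, is_erroneous, n=10_000):
    """Estimate the probability that an input drawn from this user's
    profile does not produce an erroneous output; a different
    profile gives a different figure for the same program."""
    failures = sum(is_erroneous(program(operational_profile()))
                   for _ in range(n))
    return 1 - failures / n

# Hypothetical example: a program that fails on negative inputs,
# used by someone whose inputs are uniform on [-1, 9].
print(estimate_reliability(
    program=lambda x: x if x >= 0 else None,       # None marks a bad output
    operational_profile=lambda: random.uniform(-1, 9),
    is_erroneous=lambda out: out is None))         # prints roughly 0.9
```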
Input/output mapping
IeInput set
OeOutput set
Program
Inputs causing
erroneous outputs
Erroneous
outputs
Reliability perception
[Diagram: three users each exercise a different subset of the possible inputs; each subset overlaps differently with the erroneous inputs, so each user perceives a different reliability.]
Reliability improvement
Removing X% of the faults in a system will not necessarily
improve the reliability by X%. A study at IBM showed that
removing 60% of product defects resulted in a 3%
improvement in reliability
Program defects may be in rarely executed sections of
the code so may never be encountered by users.
Removing these does not affect the perceived reliability
A program with known faults may therefore still be seen
as reliable by its users
Safety
Safety is a property of a system that reflects the
system’s ability to operate, normally or
abnormally, without danger of causing human
injury or death and without damage to the
system’s environment
It is increasingly important to consider software
safety as more and more devices incorporate
software-based control systems
Safety criticality
Primary safety-critical systems
• Embedded software systems whose failure can cause the associated hardware to fail and directly threaten people.
Secondary safety-critical systems
• Systems whose failure results in faults in other systems which can threaten people.
Discussion here focuses on primary safety-critical systems.
Safety and reliability
Safety and reliability are related but distinct.
Reliability is concerned with conformance to a given specification and delivery of service.
Safety is concerned with ensuring that the system cannot cause damage, irrespective of whether or not it conforms to its specification.
Safety terminology
Accident (or mishap)
• An unplanned event or sequence of events which results in human death or injury, or damage to property or to the environment. A computer-controlled machine injuring its operator is an example of an accident.
Hazard
• A condition with the potential for causing or contributing to an accident. A failure of the sensor that detects an obstacle in front of a machine is an example of a hazard.
Damage
• A measure of the loss resulting from a mishap. Damage can range from many people killed as a result of an accident to minor injury or property damage.
Hazard severity
• An assessment of the worst possible damage that could result from a particular hazard. Hazard severity can range from catastrophic, where many people are killed, to minor, where only minor damage results.
Hazard probability
• The probability of the events occurring which create a hazard. Probability values tend to be arbitrary but range from probable (say a 1/100 chance of the hazard occurring) to implausible (no conceivable situations are likely where the hazard could occur).
Risk
• A measure of the probability that the system will cause an accident. The risk is assessed by considering the hazard probability, the hazard severity and the probability that a hazard will result in an accident.
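The Risk entry can be made concrete. One common assessment, assumed here for illustration, multiplies the hazard probability, the probability that the hazard leads to an accident, and a severity score; all figures below are invented.

```python
def risk(p_hazard, p_accident_given_hazard, severity):
    """Risk as hazard probability x probability the hazard results
    in an accident x severity on a project-defined damage scale.
    The multiplicative form and all inputs are illustrative."""
    return p_hazard * p_accident_given_hazard * severity

# e.g. a 1/100 hazard, a 1/10 chance it causes an accident,
# severity scored 1000 on some agreed scale:
print(risk(0.01, 0.1, 1000))    # 1.0
```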
Safety achievement
Hazard avoidance
• The system is designed so that some classes of hazard simply
cannot arise.
Hazard detection and removal
• The system is designed so that hazards are detected and
removed before they result in an accident
Damage limitation
• The system includes protection features that minimise the
damage that may result from an accident
Security
The security of a system is a system property that
reflects the system’s ability to protect itself from
accidental or deliberate external attack
Security is becoming increasingly important as
systems are networked so that external access to
the system through the Internet is possible
Security is an essential pre-requisite for
availability, reliability and safety
Fundamental security
If a networked system is insecure, statements about its reliability and its safety are unreliable.
These statements depend on the executing
system and the developed system being the
same. However, intrusion can change the
executing system and/or its data
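One practical consequence is to check that the executing system is still the system that was developed and verified. A minimal sketch of such an integrity check using Python's standard hashlib; the file path and reference digest would come from the release process and are placeholders here.

```python
import hashlib

def matches_release(path: str, expected_sha256: str) -> bool:
    """Return True if the installed binary still has the digest
    recorded at release time; an intrusion that modified the
    executable would change the digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```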
Security terminology
Exposure
• Possible loss or harm in a computing system. This can be loss of or damage to data, or a loss of time and effort if recovery is necessary after a security breach.
Vulnerability
• A weakness in a computer-based system that may be exploited to cause loss or harm.
Attack
• An exploitation of a system vulnerability. Generally, this comes from outside the system and is a deliberate attempt to cause some damage.
Threat
• Circumstances that have the potential to cause loss or harm. You can think of a threat as a system vulnerability that is subjected to an attack.
Control
• A protective measure that reduces a system vulnerability. Encryption is an example of a control that reduces the vulnerability of a weak access control system.
Damage from insecurity
Denial of service
• The system is forced into a state where normal services are
unavailable or where service provision is significantly degraded
Corruption of programs or data
• The programs or data in the system may be modified in an
unauthorised way
Disclosure of confidential information
• Information that is managed by the system may be exposed to
people who are not authorised to read or use that information
Security assurance
Vulnerability avoidance
• The system is designed so that vulnerabilities do not occur. For
example, if there is no external network connection then
external attack is impossible
Attack detection and elimination
• The system is designed so that attacks on vulnerabilities are
detected and neutralised before they result in an exposure. For
example, virus checkers find and remove viruses before they
infect a system
Exposure limitation
• The system is designed so that the adverse consequences of a
successful attack are minimised. For example, a backup policy
allows damaged information to be restored