Verification allows us to predict how machines will behave in all possible circumstances, irrespective of their inputs and of other interacting machines, even before they are deployed. The challenge lies in the number of possible states of an autonomous system: whilst a relatively simple, automatic system such as a washing machine has around 10^2 possible states, an autonomous system can have 10^90 (by comparison, there are about 10^80 atoms in the observable universe). Model checking has been adopted by computer scientists at Imperial College to tame this unmanageably large space of possible behaviours. The technique's pioneers have won the coveted Turing Award, and symbolic model checking can now verify state spaces of around 10^20 states; it is mature technology, used in industry for applications such as verifying computer chips, communications protocols and aircraft autopilot systems. However, it has not yet been successfully applied to systems of greater complexity, such as smartphones or IoT networks. The difficulty compounds in messy, ‘real-world’ applications with an unknown number of autonomous agents that collaborate while still acting independently, where the space to be verified becomes very large.
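To make the idea concrete, the following is a minimal sketch of explicit-state model checking: exhaustively exploring every reachable state of a system and checking that no ‘bad’ state is ever reached. This is an illustrative toy, not the Imperial toolkit; the function names and the counter example are invented for this sketch.

```python
from collections import deque

def check_safety(initial, transitions, is_bad):
    """Breadth-first exploration of the reachable state space.

    Returns a counterexample path to a 'bad' state, or None if the
    safety property holds in every reachable state.
    """
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_bad(state):
            return path  # counterexample trace for the engineer to inspect
        for succ in transitions(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, path + [succ]))
    return None  # property verified for all reachable states

# Toy system: a counter that must never exceed 3.
def transitions(n):
    return [n + 1] if n < 3 else []

assert check_safety(0, transitions, lambda n: n > 3) is None
```

The state explosion problem is visible even here: `visited` must hold every reachable state, which is why explicit enumeration stops scaling long before 10^90 states, and why symbolic and abstraction-based methods are needed.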
The Imperial researchers have successfully developed a software toolkit based on model checking that can be used to ‘close the trustworthiness gap’ with autonomous systems. The toolkit comprises three methods:
1) Predicate Abstraction: Starting from a complex system, automatically building a simplified model that preserves its core components and behaviour.
2) Parameterised Model Checking: Tackling verification of enormous swarms; it can give guarantees about millions of agents by analysing just two or three representative agents.
3) Epistemic Verification: Instead of verifying only machine characteristics, we verify human-like attributes, e.g. beliefs, intentions, and compliance with rules and regulations.
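Epistemic verification can be sketched with a small worked example. In epistemic logic, an agent ‘knows’ a fact if that fact holds in every world the agent cannot distinguish from the actual one. The rover scenario, world names, and fact labels below are invented for illustration and are not drawn from the Imperial toolkit.

```python
# Possible worlds the system might be in; each world lists the facts true there.
valuation = {
    "w1": {"obstacle_ahead"},
    "w2": {"obstacle_ahead"},
    "w3": set(),
}

# For each agent, the set of worlds it considers possible from each actual world
# (its sensors cannot tell w1 and w2 apart, but w3 is distinguishable).
accessible = {
    "rover": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
}

def knows(agent, fact, world):
    """K_agent(fact): true at `world` iff `fact` holds in every world
    the agent considers possible from there."""
    return all(fact in valuation[w] for w in accessible[agent][world])

# In w1 the rover cannot rule out w2, but the obstacle exists in both,
# so the rover genuinely knows about it; in w3 there is no obstacle.
assert knows("rover", "obstacle_ahead", "w1")
assert not knows("rover", "obstacle_ahead", "w3")
```

Verifying a property such as ‘the rover never moves forward unless it knows the path is clear’ then amounts to checking a knowledge formula of this kind at every reachable state of the model.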
The use of these methods will enable the risks posed by autonomous systems to be better understood and verified, and as a result reduced. A proof-of-concept has been deployed in autonomous submarines, and the toolkit can be customised for a range of applications.
- Better prediction of insurance risk for autonomous systems
- Reduced risk of expensive product recalls and damage claims
- Fewer systems blocked from entering production due to unknown output states
- Verification leading to certification of autonomous systems
Requires tailoring to a specific application.
- Certification bodies and regulators
- Transport (inc. automotive, aircraft, aerospace)
- Health and Ambient Assisted Living
- Energy (inc. decommissioning)
- Autonomous space exploration