Imperial College London

Imperial College London is a world-class university with a mission to benefit society through excellence in science, engineering, medicine and business. It is consistently ranked among the top ten universities worldwide.

South Kensington Campus London SW7 2AZ, UK

Autonomous System Safety: Model-Checker Toolkit

managed by Imperial College London


Verification allows us to predict how machines will behave in all possible circumstances, irrespective of the inputs and of other interacting machines, even before the machines are deployed. The challenge arises from the sheer number of possible states of an autonomous system: whilst a relatively simple automatic system such as a washing machine has around 10^2 possible states, an autonomous system can have 10^90 (by comparison, there are about 10^80 atoms in the observable universe). Computer scientists at Imperial College have adopted model checking to overcome the challenge of an unmanageably large and unpredictable state space. Model checking, whose pioneers received the coveted Turing Award, can verify state spaces of around 10^20 states.
Symbolic model checking is now a mature technology used in industry for applications such as verifying computer chips, communications protocols and aircraft autopilot systems. However, it has not yet been successfully applied to systems of greater complexity, such as smartphones or IoT networks. The difficulty compounds in messy, ‘real-world’ applications with an unknown number of autonomous agents that work collaboratively yet act independently; the space to be verified becomes very large.
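To make the idea concrete, the sketch below shows the simplest form of the technique: an explicit-state model checker that exhaustively explores every reachable state of a small transition system and checks a safety property in each one. This is a toy illustration of the general principle only, not the Imperial toolkit; the example system and property are invented for the sketch.

```python
from collections import deque

def check_safety(initial, transitions, is_safe):
    """Exhaustively explore every reachable state (explicit-state
    model checking) and report the first unsafe state found, if any."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not is_safe(state):
            return False, state        # counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None                  # property holds in all reachable states

# Hypothetical toy system: two agents each incrementing a shared counter;
# safety property: the combined count never exceeds 4.
def transitions(state):
    a, b = state
    return [(a + 1, b), (a, b + 1)] if a + b < 4 else []

ok, witness = check_safety((0, 0), transitions, lambda s: s[0] + s[1] <= 4)
```

Because every reachable state must be visited, the running time grows with the size of the state space, which is exactly why autonomous systems with 10^90 states defeat naive exploration and demand the symbolic and abstraction-based methods described below.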
The Imperial researchers have developed a software toolkit based on model checking that can be used to ‘close the trustworthiness gap’ for autonomous systems. The toolkit combines three methods:
1) Predicate Abstraction: automatically abstracts a complex system into a model of its core components.
2) Parameterised Model Checking: tackles verification of enormous swarms; it can give guarantees for millions of agents by analysing only two to three representative agents.
3) Epistemic Verification: instead of verifying low-level machine characteristics, it verifies human-like attributes such as beliefs, intentions, and compliance with rules and regulations.
These methods enable the risks of autonomous systems to be better understood and verified, and as a result the dangers they pose can be reduced. A proof of concept has been deployed in autonomous submarines, and the toolkit can be customised for a range of applications.
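The first of the three methods above, predicate abstraction, can be sketched in a few lines. The idea is to collapse a huge concrete state space into a handful of abstract states, each recording only which chosen predicates hold, so the checker has far fewer states to explore. The predicates and state space below are invented for illustration and do not come from the Imperial toolkit.

```python
def abstract_states(concrete_states, predicates):
    """Simplified predicate abstraction: map each concrete state to a
    tuple of predicate truth values, and keep only the distinct tuples.
    The result is the (much smaller) abstract state space."""
    return {tuple(p(s) for p in predicates) for s in concrete_states}

# 10,000 concrete states ...
concrete = range(10_000)
# ... abstracted by two hypothetical predicates of interest.
preds = [lambda s: s % 2 == 0,      # "state is even"
         lambda s: s >= 5_000]      # "state is in the upper half"

abstract = abstract_states(concrete, preds)
```

Here the 10,000 concrete states collapse to just 4 abstract ones (one per combination of predicate values), which is the essence of how abstraction makes otherwise intractable state spaces checkable.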


- Help with prediction of insurance risk for autonomous systems
- Reduced risk of expensive product recalls and damage claims
- Fewer systems blocked from being put in production due to unknown output states
- Verification leading to certification of autonomous systems


Requires tailoring to a specific application.


Certification bodies and regulators
Transport (inc. automotive, aircraft, aerospace)
Health and Ambient Assisted Living
Energy (inc. decommissioning)
Autonomous space exploration

Technology Readiness Level (TRL)

Technology validated in relevant environment.

Technology Readiness Levels (TRL) are a method of estimating the technology maturity of Critical Technology Elements (CTE) of a programme during the acquisition process. The use of TRLs enables consistent, uniform discussions of technical maturity across different types of technology.

Patent Grading Report

The Patent Grading Report evaluates and grades US patents.



Current status



Available for



Alessio Lomuscio

EU-Japan Centre
European Commission / METI