
Safeguarded AI
Backed by £59m, this programme sits within the Mathematics for Safe AI opportunity space and aims to develop the safety standards we need for transformational AI.
There are currently no open funding calls for this programme.
Previous funding calls in this programme
- Watch the solicitation presentation
- Read the call for proposals
The first solicitation for this programme focused on TA1.1 Theory, where we sought R&D Creators – individuals and teams that ARIA will fund and support – to research and construct computationally practicable mathematical representations and formal semantics to support world-models, specifications about state-trajectories, neural systems, proofs that neural outputs validate specifications, and “version control” (incremental updates or “patches”) thereof.
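To make the TA1.1 objects concrete, here is a minimal, hypothetical Python sketch of the kinds of things the theory concerns: a world-model, a specification over state trajectories, and a certificate that a controller's behaviour meets the specification. All names and the toy dynamics are our own illustration, not the programme's formalism, and the "certificate" here is brute-force simulation rather than the machine-checkable proofs TA1.1 targets.

```python
# A toy sketch (not ARIA's formalism) of the objects TA1.1 studies:
# world-models, specifications over state trajectories, and certificates
# that a controller's behaviour satisfies a specification.
from dataclasses import dataclass
from typing import Callable, List

State = float            # stand-in for a much richer state space
Trajectory = List[State]

@dataclass
class WorldModel:
    """A (possibly coarse) model of how a state evolves under an action."""
    step: Callable[[State, float], State]

@dataclass
class Specification:
    """A property every admissible state trajectory must satisfy."""
    holds: Callable[[Trajectory], bool]

@dataclass
class Certificate:
    """Evidence that a controller's rollouts satisfy the specification.
    Here: brute-force simulation; in the programme, a checkable proof."""
    model: WorldModel
    spec: Specification

    def check(self, controller: Callable[[State], float],
              initial_states: List[State], horizon: int) -> bool:
        for s0 in initial_states:
            traj, s = [s0], s0
            for _ in range(horizon):
                s = self.model.step(s, controller(s))
                traj.append(s)
            if not self.spec.holds(traj):
                return False
        return True

# Example: a leaky integrator whose state must stay within [-1, 1].
model = WorldModel(step=lambda s, a: 0.9 * s + a)
spec = Specification(holds=lambda traj: all(abs(s) <= 1.0 for s in traj))
cert = Certificate(model, spec)
print(cert.check(lambda s: -0.1 * s, initial_states=[-0.5, 0.0, 0.5], horizon=50))
```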
- Watch the solicitation presentation
- Read the call for proposals
The second funding call sought individuals or organisations interested in using our gatekeeper AI to build safeguarded products for domain-specific applications, such as optimising energy networks, clinical trials and telecommunication networks.
Safeguarded AI's success will depend on showing that our gatekeeper AI actually works in a safety-critical domain. The research teams selected for TA3 will work with our other programme teams, global AI experts, academics and entrepreneurs to lay the groundwork for deploying Safeguarded AI in one or more areas.
In this first phase of TA3 funding, we intend to allocate an initial £5.4m to elicit requirements, source datasets and establish evaluation benchmarks for relevant cyber-physical domains.
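As a rough illustration of the gatekeeper idea underpinning TA3, the sketch below (all names and the toy safety rule are hypothetical) only releases an AI-proposed action into a cyber-physical system if it passes a check against an explicit safety specification; real deployments would substitute the domain requirements, datasets and benchmarks this funding phase is meant to establish.

```python
# An illustrative gatekeeper (names and the toy rule are hypothetical):
# an AI-proposed action reaches the cyber-physical system only if it
# passes a check against an explicit safety specification.
from typing import Callable, Optional

def gatekeeper(propose: Callable[[float], float],
               is_safe: Callable[[float, float], bool],
               state: float) -> Optional[float]:
    """Return the proposed action if the safety check passes, else None."""
    action = propose(state)
    return action if is_safe(state, action) else None

# Toy energy-network rule: never adjust load by more than 10% per step.
is_safe = lambda state, action: abs(action) <= 0.1 * abs(state)

print(gatekeeper(lambda s: -0.05 * s, is_safe, state=100.0))  # -5.0, allowed
print(gatekeeper(lambda s: -0.50 * s, is_safe, state=100.0))  # None, blocked
```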
- Watch the solicitation presentation
- Read the call for proposals
The third solicitation of this programme sought teams from the economic, social, legal and political sciences to consider the sound socio-technical integration of Safeguarded AI systems.
These teams will work on problems that are plausibly critical to ensuring that the technologies developed as part of the programme are used in the best interest of humanity at large, and that they are designed in a way that makes them governable through representative processes of collective deliberation and decision-making.
- Read the call for proposals
ARIA launched a multi-phased solicitation for Technical Area 2 (TA2) to support the development of a general-purpose Safeguarded AI workflow. The programme aims to demonstrate that frontier AI techniques can be harnessed to create AI systems with verifiable safety guarantees. In TA2, we will award £18m to a non-profit entity to develop critical machine learning capabilities, requiring strong organisational governance and security standards.
Phase 1, backed by £1m, will fund up to five teams for 3.5 months to develop full Phase 2 proposals. Phase 2, which will open on 25 June 2025, will fund a single group, with £18m, to deliver the research agenda.

TA2 will explore leveraging securely-boxed AI to train autonomous control systems that can be verified against mathematical models, improving performance and robustness. The workflow will involve forking and fine-tuning mainstream pre-trained frontier AI models to create verifiably safeguarded AI solutions.
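A hedged sketch of that workflow follows, under the strong simplification that "verification" is exhaustive simulation of a toy linear plant rather than a formal proof: candidate controllers from a mocked learning step are accepted only if the verifier passes them. Function names and dynamics are illustrative, not part of TA2's actual stack.

```python
# A mocked "train, then verify" loop for the TA2 workflow described above.
# All names and the linear toy plant are illustrative; in TA2 the verifier
# would produce machine-checkable proofs, not simulation-based evidence.
import random
from typing import List

def verifies(gain: float, horizon: int = 100) -> bool:
    """Simulate the toy plant s' = s + gain * s from boundary initial
    states and require |s| <= 1 throughout (a stand-in for a real proof)."""
    for s0 in (-1.0, 1.0):
        s = s0
        for _ in range(horizon):
            s = s + gain * s
            if abs(s) > 1.0:
                return False
    return True

def train_with_verification(rounds: int, seed: int = 0) -> List[float]:
    """Propose candidate controller gains (the 'fine-tuning' step, mocked
    as random search) and keep only those the verifier accepts."""
    rng = random.Random(seed)
    return [g for g in (rng.uniform(-1.5, 0.5) for _ in range(rounds))
            if verifies(g)]

print(train_with_verification(rounds=20))
```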
- Read the call for proposals
Backed by £14.2m, ARIA (the UK’s Advanced Research and Invention Agency) is looking to fund teams of software developers to build the scaffolding needed for the success of the Safeguarded AI programme. For TA1.2, we are looking for Creators to develop the computational implementation of the theoretical frameworks being developed as part of TA1.1 (‘Theory’). This implementation will involve version control, type checking, proof checking, security-by-design, and flexible paradigms for interaction between humans and AI assistants, among other capabilities. For TA1.3, Creators will work on the ‘Human-Computer Interfaces’ that facilitate interaction between diverse human users and the systems being built in TA1.2 and TA2 (‘Machine Learning’). Examples of HCI use cases include AI assistants helping to author and review world models and safety specifications, or helping to review guarantees and sample trajectories for spot/sense-checking or more comprehensive red-teaming.
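To give a flavour of what TA1.2 scaffolding might involve, here is a small hypothetical sketch of a content-addressed, version-controlled store in which artifacts (models, specifications, proofs) are only marked trusted after a checker validates them. The "checker" here is a toy arithmetic validator standing in for real type and proof checking; everything in it is our own assumption, not a description of the systems Creators will build.

```python
# A hypothetical sketch of TA1.2 scaffolding: a content-addressed store
# (git-like) where artifacts are only marked trusted once a checker
# validates them. The checker here is a toy arithmetic validator standing
# in for real type and proof checking.
import hashlib
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Store:
    objects: Dict[str, str] = field(default_factory=dict)   # hash -> artifact
    trusted: Dict[str, bool] = field(default_factory=dict)  # hash -> verdict

    def put(self, artifact: str) -> str:
        """Store an artifact under its content hash and return the key."""
        key = hashlib.sha256(artifact.encode()).hexdigest()[:12]
        self.objects[key] = artifact
        return key

    def verify(self, key: str, checker: Callable[[str], bool]) -> bool:
        """Run a checker over the artifact; record and return the verdict."""
        ok = checker(self.objects[key])
        self.trusted[key] = ok
        return ok

def arith_checker(claim: str) -> bool:
    """Toy 'proof checker': validate claims of the form '<expr>=<int>'.
    (eval is acceptable only because this is a self-contained toy.)"""
    lhs, rhs = claim.split("=")
    return eval(lhs) == int(rhs)

store = Store()
print(store.verify(store.put("2+2=4"), arith_checker))  # True
print(store.verify(store.put("2+2=5"), arith_checker))  # False
```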
The Creator experience
What you can expect as an ARIA R&D creator.
Applicant guidance
Discover the process of applying for ARIA funding and find key resources.