
Safeguarded AI

Backed by £59m, this programme within the Mathematics for Safe AI opportunity space aims to develop the safety standards we need for transformational AI.

TA2: Machine Learning


The Safeguarded AI programme is seeking a single founding team or entity to lead Technical Area 2 (TA2) Phase 2, focusing on the machine learning (ML) elements required to integrate frontier AI capabilities into a secure, general-purpose Safeguarded AI workflow.

We are looking to catalyse the creation of a new organisation that can not only push the boundaries of ML research but also uphold the highest standards of organisational governance and security. An £18 million grant will be awarded to the successful applicant to host the entire TA2 research agenda from early 2026 to the end of 2027.

TA2 will explore leveraging securely-boxed AI to train autonomous control systems that can be verified against mathematical models, improving performance and robustness. The workflow will involve forking and fine-tuning mainstream pre-trained frontier AI models to create verifiably safeguarded AI solutions. Key objectives of TA2 include:

  • World-modelling ML (TA2(a)): Develop formal representations of human knowledge, enabling explicit reasoning and uncertainty accounting, to create auditable and predictive mathematical models.

  • Coherent reasoning ML (TA2(b)): Implement efficient reasoning methods, such as amortised inference or neural network-guided algorithms, to derive reliable conclusions from world models.

  • Safety verification ML (TA2(c)): Create mechanisms to verify the safety of actions and plans against safety specifications, using techniques like proof certificates or probabilistic bounds.

  • Policy training (TA2(d)): Train agent policies that balance task performance with finite-horizon safety guarantees, including backup policies for safety failure scenarios.

 

Eligibility 

We welcome applications from exceptional and ambitious researchers, organisational leaders, or experienced founders driven to create an alternative R&D pathway for safe and transformative AI. 

Candidates must be based in, or prepared to relocate to, the UK. Eligible applicants include:

  • New founding teams establishing a UK-based non-profit institution.

  • Leading AI companies creating a UK-based affiliated non-profit entity.

  • Established companies with critical-infrastructure businesses forming a UK-based affiliated non-profit to become a pioneering supplier of guaranteed-safe AI capabilities.

  • Established academic institutions creating, or partnering to create, a new UK-based affiliated non-profit entity.

Please note: For-profit companies and universities aiming to directly host TA2 will not be eligible for this funding.


Submit a 'late' application to Phase 1 – deadline: 17 August 2025 (13:00 BST)


Phase 1 (open from 2 – 30 April) sought to fund teams to spend time developing full Phase 2 proposals. If you didn’t meet the Phase 1 application deadline, we are accepting shortened Phase 1 proposals until 17 August 2025 to make TA2 Phase 2 funding as accessible as possible to strong applicant teams. These late proposals will not be eligible for the original Phase 1 funding, but successful teams will be invited to meet with the Safeguarded AI programme team, including the Scientific Director, to discuss their thinking in preparation for the Phase 2 application.

When writing your application, please follow the Phase 1 instructions in the call for proposals, but limit the submission to 3 pages (instead of 4). Applications will be reviewed against the same Phase 1 evaluation criteria.

To apply, email clarifications@aria.org.uk to get an individual application link.

 

Apply to Phase 2 – deadline: 1 October 2025 (13:00 BST)

Your proposal should be 30-50 pages and comprehensively address both technical and organisational aspects, as outlined in the call for proposals. 

Apply now

 

Resources

Answering your questions

Ahead of submitting your application, we encourage you to look at our funding resources. If you have questions related to Safeguarded AI, please reach out to clarifications@aria.org.uk.

NB: clarification questions should be submitted no later than 4 days prior to the relevant deadline; questions received after this cut-off will not be reviewed.

Previous funding calls in this programme

The Creator experience

What you can expect as an ARIA R&D creator.

Learn more

Applicant guidance

Discover the process of applying for ARIA funding and find key resources.

Find out more