
Safeguarded AI

Backed by £59m, this programme aims to develop the safety standards we need for transformational AI.


TA1 Scaffolding

We can build an extendable, interoperable language and platform to maintain formal world models and specifications, and check proof certificates.
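To make "formal world models, specifications, and proof certificates" concrete, here is a minimal sketch in Lean 4 (assuming Mathlib). The clipping controller, the safety property, and the proof are illustrative placeholders of our own, not part of the TA1 platform itself:

    import Mathlib

    -- Illustrative sketch only: a toy "world model" in which a controller
    -- clips a requested actuation value into a safe operating range.
    def clip (lo hi x : Int) : Int := max lo (min hi x)

    -- A formal specification: whenever lo ≤ hi, the clipped output
    -- always stays within the safe range [lo, hi].
    theorem clip_within_bounds (lo hi x : Int) (h : lo ≤ hi) :
        lo ≤ clip lo hi x ∧ clip lo hi x ≤ hi := by
      unfold clip
      exact ⟨le_max_left _ _, max_le h (min_le_left _ _)⟩

    -- The elaborated proof term is the certificate: a small trusted
    -- kernel can re-check it independently of how it was found.
    #check @clip_within_bounds

The point of the sketch is the division of labour: the specification states what "safe" means, and the proof certificate is an artefact that any independent checker can verify without trusting the system that produced it.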


Meet the TA1.4 (Sociotechnical Integration) Creators

We’re funding six teams to address the essential link between advanced AI technology and society. They will explore how the technical aspects of Safeguarded AI, such as specifications and verification, can be aligned with societal values, ethical considerations, and effective governance mechanisms.

The Creators in TA1.4 come from a range of backgrounds spanning economics, law, policy, the social sciences, and practical philosophy. They will work together, and with teams in the programme's other technical areas, to develop deliberation procedures, legal frameworks, and mathematical constructions that are directly applicable to building and deploying Safeguarded AI in a responsible and beneficial manner.


Law-following AI

Cullen O'Keefe, Director of Research, Institute for Law & AI

Field Building for Better Formal Models of Society

Joe Edelman & Ryan Lowe, Meaning Alignment Institute

AI-enabled Governance Models for Advanced AI R&D Organisations

Alex Petropoulos, Centre for Future Generations

Privacy-preserving AI Safety Verification

Pascal Berrang, Mirco Giacobbe, University of Birmingham | Yang Zhang, CISPA Helmholtz Center for Information Security

Aggregating Safety Preferences for AI Systems: A Social Choice Approach

Markus Brill, University of Warwick

Deliberative AI Specifications and Infrastructure

Aviv Ovadya, AI & Democracy Foundation

The Creator experience

What you can expect as an ARIA R&D Creator


Applicant guidance

Discover the process of applying for ARIA funding and find key resources
