
Scaling Compute

Backed by nearly £50m, this programme looks to redefine our current compute paradigm.

Our goal: to accelerate + open up new vectors of progress in the field of computing by bringing the cost of AI hardware down by >1000x

Why this programme

Our current mechanisms for training AI systems rely on a narrow set of algorithms and hardware building blocks, which require significant capital to develop and manufacture. The combination of this capital intensity and scarcity has far-reaching economic, geopolitical, and societal implications.

What we’re shooting for

We see an opportunity to draw inspiration from natural processing systems, which innately process complex information several orders of magnitude more efficiently than today's largest AI systems.

Success would unlock a new technological lever for next-generation AI hardware, alleviate dependence on leading-edge chip manufacturing, and open up new avenues to scale AI hardware – an industry worth trillions of pounds.

Read the programme thesis

Meet the R&D Creators

The digital electronics industry has transformed our lives in immeasurable ways, and it is defined by a simple fact: for 60+ years, we have benefited from exponentially more computing power at a continually lower cost.

That is no longer the case. For the first time, increased performance requires increased cost, and this shift coincides with an explosion in demand for compute driven by AI.

We're bringing together expertise across three critical technology domains (AI systems design, mixed-signal CMOS circuits, and advanced networking) and a strong institutional mix (spanning academia, non-profit R&D organisations, startups and multinational companies) to pull novel ideas through to prototypes and into real-world applications.

1 | Charting the Course

We’re funding two projects to develop software simulators to help the research community map the expected performance/power/cost for any future combination of algorithm, hardware, componentry, and system scale. The goal is to quantify the bottlenecks from different components in the stack, and enable agile adaptation to a fast-paced algorithms research community.

Heterogeneous Scale-out Platform Simulator

James Myers, Imec

Team: Nathan Laubeuf, Debjyoti Bhattacharjee, Abubakr Nada, Arjun Singh, Jonas Svedas and Diksha Moolchandani

This project imagines a future where compute infrastructure systems are designed with a heterogeneous mix of logic, memory and interconnect technology options, to mitigate the different scaling challenges. James and the team at Imec are building a software framework to estimate system efficiency and cost in these systems.

Breaking Down the Compute Graph Step by Step: A Scalable and Modular Simulation

Aaron Zhao, Imperial College London; Luo Mai, University of Edinburgh; Robert Mullins, University of Cambridge

Team: George Constantinides, Wayne Luk, Imperial College London; Michael O’Boyle, University of Edinburgh; Timothy Jones + Rika Antonova, University of Cambridge

There is widespread demand for performance estimation of the systems used to train frontier AI models. This project, co-led by teams from the University of Edinburgh, Imperial College London, and the University of Cambridge, focuses on developing a scalable and modular performance-simulation framework for future systems.
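To make the intent of these simulator projects concrete, here is a minimal roofline-style estimator sketch in Python. Every field, component and number below is an illustrative placeholder assumption; it is not the framework either team is building.

```python
# Minimal, illustrative roofline-style estimator. All fields and numbers
# are placeholder assumptions, not part of the funded frameworks.
from dataclasses import dataclass

@dataclass
class Workload:
    flops: float          # total floating-point operations
    bytes_moved: float    # total off-chip bytes moved

@dataclass
class System:
    peak_flops: float     # FLOP/s
    mem_bandwidth: float  # bytes/s
    power_w: float        # average power draw (W)
    cost_gbp: float       # capital cost of the system
    lifetime_s: float     # amortisation window (s)

def estimate(w: Workload, sys_: System) -> dict:
    """Estimate runtime, energy and amortised cost for one workload on one system."""
    compute_time = w.flops / sys_.peak_flops
    memory_time = w.bytes_moved / sys_.mem_bandwidth
    runtime_s = max(compute_time, memory_time)        # roofline: the binding resource
    return {
        "runtime_s": runtime_s,
        "bound": "memory" if memory_time > compute_time else "compute",
        "energy_j": runtime_s * sys_.power_w,
        "amortised_cost_gbp": sys_.cost_gbp * runtime_s / sys_.lifetime_s,
    }

# Hypothetical training step on a hypothetical accelerator node.
step = Workload(flops=1e15, bytes_moved=2e12)
node = System(peak_flops=1e15, mem_bandwidth=3e12, power_w=700,
              cost_gbp=30_000, lifetime_s=3 * 365 * 24 * 3600)
print(estimate(step, node))
```

Even at this toy level of detail, the estimator illustrates the kind of question the funded simulators will answer far more faithfully: whether a given algorithm on a given system is compute-bound or memory-bound, and what that implies for energy and amortised cost.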


2 | Advanced Networking and Interconnect

We know the movement of data has become as critical as raw computational power, so we’re funding two projects to interrogate system-level and advanced network design opportunities.

Connectivity Technology for Sustainable AI Scaling

Tony Chan Carusone, Alphawave Semi

Team: Behzad Dehlaghi, Alphawave Semi

This project will develop and demonstrate the next generation of connectivity technologies for sustainable AI scaling. The Alphawave Semi team are reaching for hardware solutions that will enable tens of thousands of AI accelerator chips to be interconnected across distances of up to 150m, at low cost and power consumption, without limiting performance.

Scalable AI Systems

Noa Zilberman, University of Oxford

Team: Amro Awad, Martin Booth, Nick McKeown, Dominic O’Brien, Patrick Salter

Noa and the team are aiming to introduce a new interconnect for scalable AI systems that solves the communication bottleneck. By rethinking communication at multiple levels and bringing together different disciplines, the project aims to revolutionise the design of AI systems.


3 | New Computational Primitives

While the computing industry continues to progress an established path for improved performance, a variety of alternative ideas have emerged which harness noise, statistics, or unique physics in existing mass-manufacturable circuits to perform specific computing primitives. We’re funding seven teams to develop new technologies with the potential to open up new vectors of progress for the field of computing, with a targeted relevance for modern AI algorithms.

Training Analogue Electrical Networks with Equilibrium Propagation

Benjamin Scellier, Rain AI UK

Ben’s team from Rain AI aims to demonstrate, through simulations, the feasibility of training analogue hardware with Equilibrium Propagation at the scale of modern deep learning architectures.
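For readers unfamiliar with Equilibrium Propagation, the sketch below illustrates the core two-phase idea (a free relaxation, a weakly nudged relaxation, and a purely local contrastive weight update) on a tiny Hopfield-style network in NumPy. The network size, activation, toy task and hyperparameters are illustrative assumptions; this is not Rain AI's implementation, and real analogue hardware would perform the relaxation physically rather than by numerical gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(s):                    # hard-sigmoid activation
    return np.clip(s, 0.0, 1.0)

def rho_prime(s):
    return ((s > 0.0) & (s < 1.0)).astype(float)

n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
W = rng.normal(0.0, 0.1, (n, n))
W = 0.5 * (W + W.T)                       # EqProp requires symmetric weights
np.fill_diagonal(W, 0.0)
free = slice(n_in, n)                      # hidden + output units relax; inputs stay clamped
out = slice(n_in + n_hid, n)

def relax(s, y=None, beta=0.0, steps=60, eps=0.1):
    """Gradient descent on the (optionally nudged) Hopfield energy over free units."""
    for _ in range(steps):
        grad = s - rho_prime(s) * (W @ rho(s))        # dE/ds for the Hopfield energy
        if beta != 0.0:
            grad[out] += beta * (s[out] - y)           # weak nudge toward the target
        s[free] -= eps * grad[free]
    return s

def eqprop_step(x, y, beta=0.5, lr=0.05):
    s = np.zeros(n)
    s[:n_in] = x
    s_free = relax(s.copy())                           # free phase
    s_nudged = relax(s_free.copy(), y=y, beta=beta)    # weakly clamped phase
    # Contrastive, purely local weight update
    dW = (np.outer(rho(s_nudged), rho(s_nudged))
          - np.outer(rho(s_free), rho(s_free))) / beta
    W[:] += lr * dW
    np.fill_diagonal(W, 0.0)
    return 0.5 * np.sum((s_free[out] - y) ** 2)        # loss before the update

# Hypothetical toy task: regress two targets from four random inputs.
for _ in range(50):
    x = rng.uniform(0.0, 1.0, n_in)
    y = np.array([x.mean(), x.max()])
    loss = eqprop_step(x, y)
print("toy loss after training:", loss)
```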

CMOS Digital Thermodynamic Hardware Accelerator for Linear Algebra

Phillip Stanley-Marbell, Signaloid

Team: Bilgesu Bilgin, Apostolos Vailakis

Building on insights from analogue thermodynamic computing and randomised linear algebra, Phillip and the Signaloid team are looking to develop digital hardware to accelerate approximate matrix inversion.
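As a point of reference for what "approximate matrix inversion" can mean in practice, here is a minimal sketch of a truncated Neumann series, a classical building block that randomised and physics-inspired inversion schemes often start from. It is purely illustrative and not the Signaloid design; the step-size rule and the symmetric positive-definite test matrix are assumptions.

```python
import numpy as np

def neumann_inverse(A, num_terms=100):
    """Approximate A^{-1} with a truncated Neumann series.

    Assumes A is symmetric positive-definite; the step size alpha is chosen
    from the largest absolute row sum (a crude Gershgorin-style bound) so that
    the series converges.
    """
    n = A.shape[0]
    alpha = 1.0 / np.abs(A).sum(axis=1).max()
    M = np.eye(n) - alpha * A            # spectral radius of M is < 1 by construction
    term = np.eye(n)
    acc = np.eye(n)
    for _ in range(num_terms):
        term = term @ M
        acc += term
    return alpha * acc

rng = np.random.default_rng(1)
# Hypothetical well-conditioned test matrix.
A = rng.normal(size=(50, 50))
A = A @ A.T / 50 + np.eye(50)
approx = neumann_inverse(A)
print("relative error:", np.linalg.norm(approx @ A - np.eye(50)) / np.sqrt(50))
```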

Low-Precision Training using Backpropagation and Backpropagation-free Algorithms

Peter McMahon, Cornell University

This project aims to develop training methods that can leverage low-precision hardware, and to develop neural-network architectures that are better-suited to being trained with low-precision hardware. If successful, Peter’s team at Cornell will explore how well hardware designed for accelerating neural-network inference can be applied to training.
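One widely used ingredient in low-precision training is the straight-through estimator: compute with quantised weights, but accumulate gradient updates in a full-precision master copy. The toy linear-regression sketch below illustrates that ingredient only; the bit width, task and update rule are illustrative assumptions, not the methods Peter's team is developing.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantise(w, bits=4):
    """Uniform symmetric quantisation to the given bit width."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(w / scale) * scale

# Toy linear regression: quantised weights in the forward pass, full-precision
# gradient applied to a master copy (the straight-through estimator trick).
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w

w_master = np.zeros(8)
lr = 0.05
for step in range(200):
    w_q = quantise(w_master, bits=4)      # low-precision weights used for compute
    pred = X @ w_q
    grad = X.T @ (pred - y) / len(X)      # gradient w.r.t. the quantised weights
    w_master -= lr * grad                 # straight-through: update the master copy
print("4-bit-forward weight error:", np.linalg.norm(quantise(w_master, 4) - true_w))
```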

Thermodynamic Matrix Inversion

Patrick Coles, Normal Computing UK

Team: Gavin Crooks, Maxwell Aifer, Kaelan Donatella, Denis Melanson, Zachary Belateche, Samuel Duffield, Vincent Cheung

Patrick + the team at Normal Computing will build physics-based computing chips to invert matrices and explore applications in training large-scale AI models, targeting ~1000x energy savings over GPUs.
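The thermodynamic route rests on a simple statistical fact: an Ornstein-Uhlenbeck process dx = -Ax dt + sqrt(2) dW has stationary distribution N(0, A^{-1}), so the covariance of its samples estimates the inverse. The sketch below integrates that SDE numerically to show the principle; a physical chip would generate the samples natively, and the step size, sample counts and test matrix here are illustrative assumptions rather than Normal Computing's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def thermodynamic_inverse(A, dt=0.01, burn_in=2000, samples=100_000):
    """Estimate A^{-1} as the stationary covariance of an Ornstein-Uhlenbeck process.

    The SDE dx = -A x dt + sqrt(2) dW has stationary distribution N(0, A^{-1})
    for symmetric positive-definite A, so averaging x x^T over the trajectory
    estimates the inverse. Here we just integrate it with Euler-Maruyama.
    """
    n = A.shape[0]
    x = np.zeros(n)
    cov = np.zeros((n, n))
    count = 0
    for t in range(burn_in + samples):
        x += -A @ x * dt + np.sqrt(2 * dt) * rng.normal(size=n)
        if t >= burn_in:
            cov += np.outer(x, x)
            count += 1
    return cov / count

# Hypothetical small symmetric positive-definite test matrix.
B = rng.normal(size=(4, 4))
A = B @ B.T + 4 * np.eye(4)
est = thermodynamic_inverse(A)
print("max abs error vs numpy:", np.abs(est - np.linalg.inv(A)).max())
```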

Better Analogue in Memory Matrix-vector Multiplication

Walter Goodwin, Fractile

Walter and the team at Fractile aim to demonstrate that analogue in-memory compute can drive the world’s highest-density and most efficient matrix-vector multiply operations. Their goal is to apply the approaches developed here in Fractile’s inference acceleration hardware, to run frontier models orders of magnitude faster than the current state of the art. Whether sufficient precision can be achieved for these techniques to extend, in principle, to large-scale training systems remains an open question.
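A rough way to reason about analogue in-memory compute is to model its two dominant non-idealities, finite conductance resolution and readout noise, and ask how they distort a matrix-vector product. The sketch below does exactly that with placeholder precision and noise figures; it is an illustrative model, not Fractile's device characterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def analogue_mvm(W, x, weight_bits=6, read_noise=0.01):
    """Idealised model of an analogue in-memory matrix-vector multiply.

    Weights are quantised to the conductance resolution of the array and the
    analogue readout adds Gaussian noise; both are illustrative placeholders.
    """
    scale = np.max(np.abs(W)) / (2 ** (weight_bits - 1) - 1) + 1e-12
    W_q = np.round(W / scale) * scale                  # finite conductance levels
    y = W_q @ x
    return y + read_noise * np.abs(y).max() * rng.normal(size=y.shape)

W = rng.normal(size=(64, 64)) / 8
x = rng.normal(size=64)
exact = W @ x
approx = analogue_mvm(W, x)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```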

Neuromorphic Matrix Multiplication

Bipin Rajendran, King’s College London

Team: Osvaldo Simeone, Kai Xu

Combining the multiplicative advantages that arise from (i) event-driven, backprop-free learning algorithms, (ii) stochastic computing, and (iii) in-memory computing based on Si CMOS technology, Bipin and the team will design and demonstrate a neuromorphic framework to reduce the cost of developing AI models.
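Of the three ingredients, stochastic computing is perhaps the least familiar: values are encoded as random bitstreams and multiplication reduces to a bitwise AND. The snippet below illustrates just that ingredient with an assumed stream length; it is not the team's neuromorphic framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p, length=4096):
    """Encode a value in [0, 1] as a random bitstream with P(bit = 1) = p."""
    return rng.random(length) < p

def sc_multiply(a, b, length=4096):
    """Stochastic-computing multiply: AND two independent bitstreams, then decode."""
    stream = to_bitstream(a, length) & to_bitstream(b, length)
    return stream.mean()                  # fraction of ones approximates a * b

print(sc_multiply(0.6, 0.5), "vs exact", 0.6 * 0.5)   # close to 0.30, up to sampling noise
```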

Energy-Efficient SRAM-Based Analogue Accelerator for Second-Order Optimisation

Jack Kendall, Rain AI UK

Jack’s team from Rain AI are looking to develop a novel accelerator architecture for performing fast vector-matrix inverse multiplication using digitally-programmable transistor arrays with feedback control.

Shortening the Salesman’s Travel: Massive Parallelism for Combinatorial Optimisation Problems

Marian Verhelst, Wim Dehaene, KU Leuven

Team: Toon Bettens, Sofie De Weer

Marian and the team at KU Leuven are targeting a new class of mixed-signal processors that are specifically conceived to solve combinatorial optimisation problems.
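Such processors typically minimise an Ising-style energy in hardware. As a software stand-in, the sketch below runs simulated annealing on a small max-cut instance expressed in that form; the random graph, cooling schedule and step count are illustrative assumptions, not the KU Leuven architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_ising(J, steps=20000, T0=2.0, T1=0.01):
    """Simulated annealing on the Ising energy E(s) = -0.5 * s^T J s (zero diagonal)."""
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    for t in range(steps):
        T = T0 * (T1 / T0) ** (t / steps)      # geometric cooling schedule
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s)             # energy change from flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

# Hypothetical random graph; a max-cut instance maps to J = -adjacency.
n = 40
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
s = anneal_ising(-A)
cut = np.sum(A[s[:, None] != s[None, :]]) / 2   # edges crossing the partition
print("edges cut:", cut, "of", A.sum() / 2)
```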


Funding: Benchmarking

Our Creators are driving towards one goal – dropping the hardware costs required to train large AI models by >1000x – but as AI hardware and techniques advance rapidly, our baseline metrics and the computational cost of MLPerf benchmark workloads shift, requiring a constant recalibration of our targets. 

Earlier this year, we launched a call for a team who can help track these moving targets and publish their findings to the research community. Through this work, we'll create an accurate (and open) source of ground truth for programme targets, and ensure the ambitious technologies developed by our Creators are measured against the most up-to-date advances in the field.
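The arithmetic behind that recalibration is simple, as the sketch below shows with purely hypothetical placeholder figures (none of these numbers are programme data): the same candidate technology achieves a different reduction factor whenever the baseline cost of the reference workload moves.

```python
def cost_reduction_factor(baseline_cost_gbp: float, candidate_cost_gbp: float) -> float:
    """Reduction factor of a candidate technology relative to the current baseline,
    measured on the same reference training workload."""
    return baseline_cost_gbp / candidate_cost_gbp

# Hypothetical placeholder figures only: if the baseline cost of the reference
# workload halves while the candidate stays fixed, the achieved factor halves
# too, which is why targets need periodic re-baselining.
old_baseline, new_baseline, candidate = 1_000_000.0, 500_000.0, 800.0
print(cost_reduction_factor(old_baseline, candidate))  # 1250.0 against the old baseline
print(cost_reduction_factor(new_baseline, candidate))  # 625.0 against the new baseline
```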

We'll continue to share updates on programmatic targets and benchmarks; click below to be notified when we do.

Sign up for updates

Meet the programme team

Suraj Bramhavar is an electrical engineer. His work focuses on redefining the way computers process information in order to build dramatically more efficient machines. Suraj joined ARIA from Sync Computing, a company that optimises the use of modern cloud computing resources, where he was co-founder and CTO. The company was spun out from his research at MIT Lincoln Laboratory. Suraj previously worked at Intel Corp, helping transition silicon photonics technology from an R&D effort into a business now worth over $1BN.

A photo of Suraj Bramhavar in front of a blue curtain.

"Modern AI algorithms offer an opportunity to exploit principles found in nature to dramatically reduce the cost of the underlying computational hardware. If we can do this, we can fundamentally alter the existing supply chain, improve access to an increasingly important technology, and open up new use cases with massive societal benefit."

Suraj Bramhavar, Programme Director


Paolo is an electronic engineer by training and has spent the majority of his professional career in technical R&D roles at large high-tech companies such as HP and Alcatel-Lucent. He returned to academia to earn a PhD in Machine Learning, then joined Graphcore, a startup that develops innovative AI hardware.

A photo of Paolo Toccaceli smiling in front of a white wall.
Paolo Toccaceli, Technical Specialist


David trained as a chemist at University College London before transitioning to work in materials science at Imperial College London, where he developed nanocarbon devices for sensing, photovoltaics, and energy storage. Prior to joining ARIA, David built sales and operations functions at early stage startups, focusing on physics-based software for the automotive and consumer electronics industries. David supports ARIA as an Operating Partner from Pace.

A photo of David Stringer smiling in front of a grey wall.
David Stringer, Programme Specialist

Discover more

A graphic saying 'IEEE Spectrum: Fixing the Future'.
Insights, 01 May 2024

The UK's ARIA is searching for better AI tech, ft Suraj Bramhavar

IEEE Spectrum — Fixing the Future podcast

Listen now
A graph showing peak performance vs power scatter plot for publicly announced AI accelerators and processors.
Insights, 25 October 2024

A deep dive on Scaling Compute creators

ARIA's substack

Read more
A photo of Ilan Gur outside.
News, 13 March 2024

UK researchers seek to slash AI computing costs by a factor of 1,000

Financial Times

Read more
A photo of Programme Directors Jacques, davidad, Mark, Suraj, Gemma and Jenny.
Insights, 18 September 2024

The UK’s bet to create technologies that change the world

Nature

Read more
Announcements, 09 October 2024

Meet our Activation Partners

Driving progress through science entrepreneurship

Read more
A photo of Ilan Gur smiling and sitting down in front a cream background.
News, 23 August 2024

ARIA's Ilan Gur on backing big breakthroughs

Startup Europe – The Sifted Podcast

Listen now
A photo of Matt Clifford, Ilan Gur and Angie Burnett looking at the camera. Matt and Angie are sat down and Ilan is standing.
News, 02 October 2024

Can ARIA put the UK back on the scientific map?

Wired UK

Read more