Backed by nearly £50m, this programme looks to redefine our current compute paradigm
The digital electronics industry that has transformed our lives in immeasurable ways is defined by a simple fact: for 60+ years, we have benefited from exponentially more computing power at continually lower cost.
This fact is no longer true. For the first time in history, increased performance requires increased cost, and this coincides with an explosion in demand for compute driven by AI.
Our current mechanisms for training AI systems rely on a narrow set of algorithms and hardware building blocks, which require significant capital to develop and manufacture. The combination of this capital intensity and scarcity has far-reaching economic, geopolitical, and societal implications.
We see an opportunity to draw inspiration from natural processing systems, which innately process complex information several orders of magnitude more efficiently than today’s largest AI systems.
Our goal: to accelerate progress in the field of computing, and open up new vectors for it, by bringing the cost of AI hardware down by >1000x.
We’re bringing together expertise across three critical technology domains (AI systems design, mixed-signal CMOS circuits, and advanced networking) and a strong institutional mix (spanning academia, non-profit R&D organisations, startups and multinational companies), as we look to pull novel ideas through to prototypes and into real-world applications.
If successful, this programme will unlock a new technological lever for next-generation AI hardware, alleviate dependence on leading-edge chip manufacturing, and open up new avenues to scale AI hardware – an industry which is worth trillions of pounds.
We’re also delighted that the programme’s ambition has drawn international organisations to establish or expand their UK operations – a crucial step in building the UK’s capabilities in this field.
This project imagines a future where compute infrastructure systems are designed with a heterogeneous mix of logic, memory and interconnect technology options, to mitigate the different scaling challenges. James and the team at Imec are building a software framework to estimate system efficiency and cost in these systems.
There is widespread demand for performance estimation of the systems used to train frontier AI models. This project, co-led by teams from the University of Edinburgh, Imperial College London, and the University of Cambridge, focuses on the development of a scalable and modular performance simulation framework for future systems.
Noa and the team are aiming to introduce a new interconnect for scalable AI systems that solves the communication bottleneck. By rethinking communication at multiple levels and bringing together different disciplines, the project aims to revolutionise the design of AI systems.
This project will develop and demonstrate the next generation of connectivity technologies for sustainable AI scaling. The Alphawave Semi team are pursuing hardware solutions that will enable 10,000s of AI accelerator chips to be interconnected across distances of up to 150m, at low cost and power consumption, without limiting performance.
This project aims to develop training methods that can leverage low-precision hardware, and to develop neural-network architectures that are better-suited to being trained with low-precision hardware. If successful, Peter’s team at Cornell will explore how well hardware designed for accelerating neural-network inference can be applied to training.
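As a rough illustration of the general idea (not the Cornell team's actual method), the sketch below trains a linear model whose forward pass sees only 8-bit fake-quantised weights, applying the gradient "straight through" to a full-precision copy; the `fake_quantize` helper and all parameters are assumptions made for this example.

```python
import numpy as np

def fake_quantize(x, bits=8):
    """Round x onto a signed fixed-point grid with the given bit width."""
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale * levels), -levels, levels)
    return q / levels * scale

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w

w = np.zeros(4)
lr = 0.1
for _ in range(200):
    wq = fake_quantize(w, bits=8)   # forward pass uses low-precision weights
    err = X @ wq - y
    grad = X.T @ err / len(X)       # straight-through: apply gradient to the full-precision copy
    w -= lr * grad

loss = np.mean((X @ fake_quantize(w) - y) ** 2)
```

The residual loss is bounded by the quantisation step rather than going to zero, which is exactly the trade-off such training methods and architectures aim to manage.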
Building on insights from analogue thermodynamic computing and randomised linear algebra, Phillip and the Signaloid team are looking to develop digital hardware to accelerate approximate matrix inversion.
Patrick and the team at Normal Computing will build physics-based computing chips to invert matrices and explore applications in training large-scale AI models, targeting ~1000x energy savings over GPUs.
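Neither team's design is public here, but one standard way to invert a matrix using only the multiply operations such hardware accelerates is the Newton–Schulz iteration; the sketch below is a generic illustration, with the initialisation and iteration count chosen for this example rather than taken from either project.

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Approximate A^-1 using only matrix multiplies:
    X_{k+1} = X_k (2I - A X_k), quadratically convergent when ||I - A X_0|| < 1."""
    n = A.shape[0]
    # Classic safe initialisation: guarantees the initial residual is a contraction
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50)) + 50 * np.eye(50)   # well-conditioned test matrix
X = newton_schulz_inverse(A)
err = np.linalg.norm(A @ X - np.eye(50))
```

Because each step is just matrix multiplication, the iteration maps naturally onto hardware built to do dense multiplies cheaply, whether digital or physics-based.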
Walter and the team at Fractile aim to demonstrate that analogue in-memory compute can drive the world’s highest density and most efficient matrix-vector multiply operations. Their goal is to leverage the developed approaches in Fractile’s inference acceleration hardware to run frontier models orders of magnitude faster than current state of the art. Whether sufficient precision can be achieved for application, in theory, to large scale training systems remains an open question.
Combining the multiplicative advantages that arise from (i) event-driven, backprop-free learning algorithms, (ii) stochastic computing, and (iii) in-memory computing based on Si CMOS technology, Bipin and the team will design and demonstrate a neuromorphic framework to reduce the cost of developing AI models.
Jack’s team from Rain AI are looking to develop a novel accelerator architecture for performing fast vector-matrix inverse multiplication using digitally-programmable transistor arrays with feedback control.
Marian and the team at KU Leuven are targeting a new class of mixed-signal processors that are specifically conceived to solve combinatorial optimisation problems.
Ben’s team from Rain AI aims to demonstrate, through simulations, the feasibility of using Equilibrium Propagation to train analogue hardware at the scale of modern deep-learning architectures.
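On a one-weight toy network, the Equilibrium Propagation estimate can be written in closed form and checked against the analytic gradient; the quadratic energy and cost below are standard textbook choices for illustration, not Rain AI's models.

```python
def eqprop_grad(w, x, y, beta=1e-4):
    """Equilibrium Propagation on a one-weight toy network.
    Energy E(s) = 0.5*s**2 - w*x*s; cost C(s) = 0.5*(s - y)**2.
    Both equilibria have closed forms, so no relaxation loop is needed."""
    s_free = w * x                                 # argmin_s E(s)
    s_nudged = (w * x + beta * y) / (1 + beta)     # argmin_s E(s) + beta*C(s)
    dE_dw_free = -x * s_free                       # dE/dw at each equilibrium
    dE_dw_nudged = -x * s_nudged
    # Two-phase contrast estimates dC/dw as beta -> 0
    return (dE_dw_nudged - dE_dw_free) / beta

w, x, y = 0.3, 2.0, 1.0
g_hat = eqprop_grad(w, x, y)
g_true = x * (w * x - y)   # analytic gradient of 0.5*(w*x - y)**2
```

The appeal for analogue hardware is that the update needs only local measurements at two physical equilibria, with no explicit backward pass; the scaling question is whether this contrast remains measurable at modern network sizes.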
Location: Leuven, Belgium
Start date: Immediate/ASAP
Deadline: 1 December 2024
Location: UK/Remote
Start date: Immediate/ASAP
Deadline: Rolling
Location: UK/Remote
Start date: Immediate/ASAP
Deadline: Rolling
Location: UK/Remote
Start date: Immediate/ASAP
Deadline: Rolling
Location: London, UK/New York City, USA
Start date: Immediate/ASAP
Deadline: Rolling
Location: London, UK/New York City, USA
Start date: Immediate/ASAP
Deadline: Rolling
Location: London, UK
Start date: Immediate/ASAP
Deadline: Rolling
Location: London, UK
Start date: Immediate/ASAP
Deadline: Rolling
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: Rolling
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: Rolling
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: Rolling
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: 3 December 2024
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: 3 December 2024
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: 3 December 2024
Location: Oxford, UK
Start date: Immediate/ASAP
Deadline: 3 December 2024
Location: Flexible
Start date: Immediate/ASAP
Deadline: Rolling
Location: Flexible
Start date: Immediate/ASAP
Deadline: Rolling
Suraj aims to redefine the way computers process information, directing funding towards building more efficient computers using principles found ubiquitously in nature.
Prior to ARIA, Suraj was co-founder and CTO of Sync Computing, a VC-backed startup optimising the use of modern cloud computing resources, spun out from his research at MIT Lincoln Laboratory. Before that, Suraj worked at Intel Corp, helping transition silicon photonics technology from an R&D effort into what is now a >$1bn business.