
Scaling Compute
Backed by nearly £50m, this programme looks to redefine our current compute paradigm.
Our goal: to accelerate progress + open up new vectors in the field of computing by bringing the cost of AI hardware down by >1000x
Why this programme
Our current mechanisms for training AI systems utilise a narrow set of algorithms and hardware building blocks, which require significant capital to develop and manufacture. The combination of this capital intensity and scarcity has far-reaching economic, geopolitical, and societal implications.
What we’re shooting for
We see an opportunity to draw inspiration from natural processing systems, which innately process complex information several orders of magnitude more efficiently than today's largest AI systems.
Success would unlock a new technological lever for next-generation AI hardware, alleviate dependence on leading-edge chip manufacturing, and open up new avenues to scale AI hardware – an industry worth trillions of pounds.
Meet the R&D Creators
The digital electronics industry has transformed our lives in immeasurable ways. It is defined by the simple fact that, for 60+ years, we have benefited from exponentially more computing power at a continually lower cost.
This is no longer the case. For the first time in history, increased performance requires increased cost – and this shift coincides with an explosion in demand for compute power, driven by AI.
We're bringing together expertise across three critical technology domains (AI systems design, mixed-signal CMOS circuits, and advanced networking) and a strong institutional mix (spanning academia, non-profit R&D organisations, startups, and multinational companies) to pull novel ideas through to prototypes and into real-world applications.
1 | Charting the Course
2 | Advanced Networking and Interconnect
3 | New Computational Primitives
Funding: Benchmarking
Our Creators are driving towards one goal – dropping the hardware costs required to train large AI models by >1000x. But as AI hardware and techniques advance rapidly, our baseline metrics and the computational cost of MLPerf benchmark workloads shift, requiring constant recalibration of our targets.
Earlier this year, we launched a call for a team to help track these moving targets and publish their findings to the research community. Through this work, we'll create an accurate (and open) source of ground truth for programme targets, and ensure the ambitious technologies developed by our Creators are measured against the most up-to-date advances in the field.
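To make the recalibration concrete, here is a minimal sketch in Python of how a moving baseline shifts the programme target. The figures, function name, and quarterly snapshots are illustrative assumptions for this page, not the programme's actual benchmarking methodology or data.

    # Minimal sketch: the >1000x goal is relative, so the absolute target
    # moves whenever the baseline cost-to-train moves. All numbers and names
    # below are hypothetical, for illustration only.

    def target_cost(baseline_cost_gbp: float, reduction_factor: float = 1000.0) -> float:
        """Hardware cost a new technology must reach to meet the programme goal,
        given the current baseline cost of training a reference workload."""
        return baseline_cost_gbp / reduction_factor

    # Hypothetical baseline snapshots for a reference MLPerf-style training
    # workload, expressed as hardware cost in GBP (illustrative figures only).
    baselines = {
        "2024-Q1": 50_000_000,
        "2025-Q1": 30_000_000,  # cheaper hardware and better algorithms shift the baseline
    }

    for period, baseline in baselines.items():
        print(f"{period}: baseline £{baseline:,.0f} -> >1000x target £{target_cost(baseline):,.0f}")

As the baseline cost falls, the absolute target falls with it – which is why an open, regularly updated source of ground truth matters.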
We'll continue to share updates on programmatic targets and benchmarks; click below to be notified when we do.