
PERSPECTIVES

Flipping AI Compute Upside Down With Light: Why we’re backing Arago

As AI transforms every industry, new computing technologies are emerging to power the next generation of applications. Read more from Earlybird’s Dr. Moritz Belling and Ferdinand Dansard about why Arago's photonic chips could reshape the future of AI hardware with 10x energy efficiency gains.

Aug 7, 2025

8 Min Read

Portfolio News


Over the past few years, AI’s energy and computational demands have skyrocketed, putting a growing strain on our global infrastructure. As of today, data centers already consume between 1% and 2% of global electricity, similar to the entire airline industry. With the AI boom accelerating, that number is poised to climb sharply. Some researchers predict the energy demand of data centers could reach 21% of global power consumption by the end of the decade.

The Bottleneck of Electrons

Why is AI so power-hungry? The answer lies in how we compute. Today’s AI runs on electronic chips, primarily GPUs, packed with billions of tiny transistors. You can think of each transistor as a microscopic light switch flipping on and off billions of times per second. Every time it switches, it leaks a bit of energy as heat. A cutting-edge GPU contains around 80 billion transistors switching at 2–3 GHz, which means trillions of on/off actions every second. The result, as Arago’s co-founder Nicolas Muller quips, is a processor “more dense than the heart of a nuclear power plant” in terms of heat generation. In practical terms, a single advanced AI chip can draw as much power as a household oven (around 2 kilowatts) running nonstop. Multiply that by thousands of chips in a data center, and you see the problem: our current electronic computing approach wastes enormous amounts of power as heat.
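For the curious, here is a rough back-of-the-envelope sketch of where that heat comes from. Every number below (activity factor, per-transistor capacitance, supply voltage) is an illustrative assumption, not a measured figure for any particular GPU:

```python
# Back-of-the-envelope dynamic switching power: P ~ alpha * C * V^2 * f * N.
# All values are illustrative assumptions, not datasheet figures for any chip.

num_transistors = 80e9    # ~80 billion transistors on a cutting-edge GPU
clock_hz        = 2.5e9   # ~2.5 GHz clock
activity_factor = 0.1     # fraction of transistors switching each cycle (assumed)
capacitance_f   = 1e-16   # effective capacitance per transistor, farads (assumed)
voltage_v       = 0.8     # supply voltage, volts (assumed)

switch_energy_j = capacitance_f * voltage_v ** 2            # energy lost per switching event
events_per_sec  = activity_factor * num_transistors * clock_hz
dynamic_power_w = events_per_sec * switch_energy_j

print(f"Switching events per second: {events_per_sec:.1e}")
print(f"Estimated dynamic power: {dynamic_power_w:.0f} W")
```

With these assumed values the estimate lands around a kilowatt of switching power alone, which is the right order of magnitude for today’s largest accelerators once static losses and memory traffic are added on top.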

For decades, the tech industry rode the wave of Moore’s Law, i.e., squeezing more transistors onto chips to boost performance. But physics is catching up. Transistors can’t shrink much further without quantum effects and overheating coming into play. Dennard scaling (which made transistors more power-efficient as they got smaller) has faltered, meaning more compute now almost directly equals more energy burned. In AI, this is especially problematic. Matrix multiplication, essentially multiplying and adding huge grids of numbers, forms the backbone of AI computation, accounting for over 90% of the compute in large neural networks. Doing these operations electronically requires vast numbers of transistor flips, which in turn means lots of electricity and heat. As models scale up, even today’s best chips are approaching their physical limits and struggling to keep pace. We’re basically throwing more and more electrons at the problem and hitting a wall of diminishing returns.
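To see just how lopsided the workload is, here is a toy FLOP count for a single transformer block. The dimensions are illustrative assumptions, not any specific model:

```python
# Rough FLOP count for one transformer block, illustrating why matrix
# multiplication dominates AI compute. Dimensions are illustrative assumptions.

d_model, seq_len, d_ff = 4096, 2048, 16384

def matmul_flops(m, k, n):
    """Approximate FLOPs (multiplies + adds) for an (m x k) @ (k x n) product."""
    return 2 * m * k * n

# Matrix-multiply work: attention projections, attention scores, feed-forward
attn_proj = 4 * matmul_flops(seq_len, d_model, d_model)   # Q, K, V, output projections
attn_core = 2 * matmul_flops(seq_len, d_model, seq_len)   # scores and weighted sum
ffn       = 2 * matmul_flops(seq_len, d_model, d_ff)      # two feed-forward layers
matmul_total = attn_proj + attn_core + ffn

# Non-matmul work (softmax, layer norms, activations) scales with the number
# of activations rather than their products -- far smaller (generous estimate).
elementwise_total = 20 * seq_len * d_model

share = matmul_total / (matmul_total + elementwise_total)
print(f"Matmul FLOPs: {matmul_total:.2e}, other ops: {elementwise_total:.2e}")
print(f"Matrix-multiply share of compute: {share:.2%}")
```

Even with a generous allowance for everything that is not a matrix multiply, the matmul share comes out above 99% in this toy setup, consistent with the 90%+ figure quoted above.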

A New Light For The Future of AI: Photons

What if instead of electricity, we computed with light? It sounds almost magical: computing at the speed of light. But that’s exactly the approach photonic computing takes. Photons (light particles) have innate advantages over electrons for moving and processing information. For one, photons don’t carry an electric charge, so they generate virtually no heat as they propagate. No heat means far less energy wasted, plus no need for giant cooling systems: a fundamental leap in efficiency. Secondly, photons travel at light speed, and many signals can propagate in parallel through the same medium. Think of how fiber-optic internet outclasses old copper cabling; in the same way, a photonic chip can shuttle data internally at incredible rates.

Figure: Computing speed and energy efficiency of photonic vs. electronic architectures. Source: Xu, D., Ma, Y., Jin, G., & Cao, L. (2025). Intelligent Photonics: A Disruptive Technology to Shape the Present and Redefine the Future. Engineering, 46(3), 186–213.

Crucially, photonic computing aligns perfectly with the needs of matrix math. Rather than crunching additions and multiplications sequentially with billions of transistors, an optical system can perform matrix multiplications using the physics of light itself. It’s a bit like shining two flashlights and instantly getting a brightness pattern that represents the product of two matrices, all happening in a single optical pass rather than in millions of tiny digital steps. Photonic chips can thus execute the core math of AI faster and with far less energy than traditional chips. The promise is computation that scales without a corresponding explosion in power consumption.
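As a toy illustration of the idea (a simplified crossbar-style model, not a description of Arago’s actual architecture), here is how an analog optical matrix-vector product can be sketched numerically:

```python
import numpy as np

# Toy sketch of analog optical matrix-vector multiplication.
# Each input value is encoded in the optical power of one light channel,
# each weight as a transmission coefficient, and each output is the total
# power collected on one photodetector. The multiply-accumulate happens
# "for free" in the physics of attenuation and detection.
# This is an illustrative model, not Arago's architecture.

rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(4, 8))   # transmission coefficients (assumed 0..1)
x       = rng.uniform(0.0, 1.0, size=8)        # input vector as optical powers

# Ideal analog result: every detector sums all its weighted contributions at once
ideal = weights @ x

# Real analog hardware is noisy and quantized; model that crudely
detector_noise = rng.normal(0.0, 0.01, size=4)
measured = ideal + detector_noise

print("ideal   :", np.round(ideal, 3))
print("measured:", np.round(measured, 3))
```

The point of the sketch is the shape of the computation: the entire row-by-column accumulation collapses into a single physical measurement per output, instead of millions of sequential transistor flips.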

Of course, photonic computing isn’t entirely new. However, past attempts at all-optical computers hit roadblocks around precision, size of optical components, and integrating with existing systems. This is where Arago enters the story, with a novel approach that harnesses light’s advantages while sidestepping previous pitfalls.

Introducing Arago

When we first met the Arago team, their vision was bold and clear: build an AI processor that runs on light, not electricity, to break AI’s energy-performance bottleneck. Fast forward to today, and Arago has already produced a working prototype of their photonic chip, less than 12 months after the company’s founding. This isn’t vaporware or a science project; it’s a real processor that fits in the palm of your hand and runs AI models today. Arago’s Optical Processing Unit (OPU) takes a hybrid “multiphysics” approach, which sets it apart from prior optical computing efforts.

In CEO Nicolas Muller’s words, “instead of having thousands, if not hundreds of thousands, of transistor switches to represent one number, we use one laser to encode that value,” handling the heavy matrix math in the analog domain. The hybrid design leverages light for what it does best, i.e., high-throughput math at low energy consumption, resulting in a unique photonic processor that delivers a step-change in energy efficiency.

The early results are astonishing. Arago aims to deliver more than 10x lower energy consumption than today’s leading GPUs, at equivalent performance and cost. That’s an order-of-magnitude leap in efficiency, essentially doing the same AI workload with a tenth of the electricity. To put that in perspective, if the world’s data centers adopted technology like this, those dire energy projections we started with could be radically reduced. Running on light could help AI scale beyond the limits of our energy infrastructure.

As Earlybird Partner & Co-founder Dr. Hendrik Brandis remarked:

“Arago’s technology has the potential to defy the laws of AI compute, using only a fraction of the resources. The Arago team is executing towards this moment with unseen velocity and frugality.”

It’s exactly the kind of step-change innovation we need to sustain AI’s trajectory.

A chip is only useful if it works with today’s tools. Arago designed its photonic processor to be fully compatible with modern AI software stacks such as PyTorch and TensorFlow. Thanks to their full-stack runtime and compiler, developers don’t need to change their code.

Its chiplet-ready design means it can plug into hybrid systems alongside CPUs and GPUs, and Arago builds on existing silicon photonics processes, manufacturing in the U.S. and Europe on affordable, mature nodes. The chip integrates cleanly into today’s systems: a true drop-in solution that requires minimal effort to adopt.
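To make the “drop-in” idea concrete, here is an illustrative sketch. The `arago` device string and compiler backend named in the comments are hypothetical placeholders to show where such an accelerator typically hooks in, not Arago’s published API:

```python
import torch
import torch.nn as nn

# Standard PyTorch model definition -- nothing photonics-specific here.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
x = torch.randn(32, 784)

# On a GPU today, the familiar pattern is:
#   model = model.to("cuda"); y = model(x.to("cuda"))
#
# A drop-in accelerator would typically expose itself through the same hooks,
# e.g. a device plugin or a torch.compile backend. The names below are
# hypothetical placeholders, not Arago's published API:
#   model = model.to("arago")                       # hypothetical device plugin
#   model = torch.compile(model, backend="arago")   # hypothetical compiler backend
#
# The unmodified forward pass below is the code that would stay the same.
y = model(x)
print(y.shape)  # torch.Size([32, 10])
```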

Execution at Light Speed: The Team Behind the Breakthrough

Big visions are nothing new in DeepTech; execution is what distinguishes the winners. What made it almost easy to build conviction on Arago? The team’s ability to deliver extraordinary results at light speed with scarce resources. Converging around the shared insight that AI was “boxed in by today’s hardware,” Nicolas Muller, Eliott Sarrey, and Ambroise Müller founded Arago in 2024.

Eliott and Ambroise bring a rare mix of deep expertise across all key domains (optics, physics, electronics, mathematics, AI) from leading French and Swiss universities, while Nicolas has sharp business instincts paired with ML expertise from MIT and experience in DeepTech startups. Together they imagined a radically different kind of processor. Within a year of founding, they had built a prototype photonic chip that proved their claims. 

For context, developing a new processor often takes several years and tens, if not hundreds, of millions of dollars, even for experienced teams. Yet Arago’s lean team of under 20 people got its prototype up and running in less than 12 months. That speaks to exceptional technical prowess and focus. As Pierre Boudier, a former NVIDIA Fellow who joined Arago as an advisor, noted:

“I’ve worked with Nvidia CEO Jensen Huang for nearly a decade. I know what it takes to succeed in this industry, and I see a lot of those same qualities in the Arago team.”

The team blends seasoned engineering talent with bright young researchers unafraid to rethink fundamentals. It’s a potent mix of experience and first-principles thinking.

When Arago’s $26m seed round came together, it attracted an elite roster of angels, including Bertrand Serlet, former VP of Software at Apple; Christophe Frey, GM at Arm; Olivier Pomel, CEO of Datadog; Thomas Wolf, co-founder of Hugging Face; and several industry veterans from NVIDIA, Intel, and others. Their vote of confidence speaks volumes about the credibility of what this team has achieved and, more importantly, will accomplish in the future. The next 12 months will be a race, but if anyone can win that race against the giants, it’s this team.

The Road Ahead: A Massive Market and Mission

The timing for Arago could not be better. The world’s demand for AI hardware is exploding, creating one of the largest market opportunities in history. The global AI chip market is projected to exceed $500bn by 2033, growing at a 35% CAGR. However, the traditional approach of cramming more power-hungry GPUs into servers will not sustain AI’s growth to that scale, at least not without unacceptable costs and environmental impact. Hence, governments, hyperscalers, and enterprises alike are desperately searching for more sustainable and scalable AI compute alternatives. We need a transformational leap in energy efficiency without compromises on performance, and that’s exactly what Arago offers.
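For a quick sense of what that compounding implies (the starting value below is derived from the projection, not an independently sourced figure):

```python
# Sanity check of the compounding behind a 35% CAGR projection.
# The 2033 figure (> $500bn) and the growth rate come from the cited projection;
# the implied 2025 starting value is derived, not an independent estimate.

target_2033_bn = 500.0
cagr = 0.35
years = 2033 - 2025

implied_2025_bn = target_2033_bn / (1 + cagr) ** years
print(f"Growth multiple over {years} years: {(1 + cagr) ** years:.1f}x")
print(f"Implied 2025 market size: ~${implied_2025_bn:.0f}bn")
```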

Photonic computing is reaching its breakthrough moment, and Arago is at the forefront of this wave. The technology risk has been retired to a large extent (a prototype exists, the core principles are proven), and what lies ahead is execution and scale-up. Of course, Arago won’t be alone in chasing optical AI computing, but they have a meaningful head start and a defensible approach. We also believe Arago’s impact will extend beyond data centers: as the tech matures, imagine energy-efficient AI accelerators in every laptop, phone, and edge device, bringing powerful AI models closer to the user.

Why We Invested In Arago

At Earlybird, we never shy away from investments that are truly visionary in solving the biggest problems of our time. Arago is tackling perhaps the defining technological bottleneck of our era, the energy efficiency of AI, with a solution that is as elegant as it is bold: using light to literally outshine what electronics can do. Their approach promises at least 10× energy efficiency improvements (performance-per-watt). They’ve integrated that solution into the current ecosystem by design, lowering adoption barriers for customers. And they’ve done so with blistering lightspeed execution, assembling a world-class team and delivering a working prototype in under a year. The conviction in their mission, and their ability as a team to make it real, gave us confidence that Arago isn’t just building a chip; they are flipping the world of AI compute upside down.

This investment is more than just capital for us; it’s a partnership in building the future of AI hardware infrastructure. We’re stoked to support Arago as they bring their photonic computing platform from prototype to product, and we’re already working together on the road to commercialization. The next milestones, from tape-out to scaling up manufacturing and demonstrating deployments with pilot customers, will be incredibly challenging, but the upside is uncapped (sorry not sorry). Solving AI’s energy and performance crisis is a mission of both economic and societal importance. It will enable AI developers to innovate faster without worrying about an exponential energy bill. It will help data centers meet growing demand without doubling their carbon footprint. It might even allow cutting-edge AI to run in robots, everyday devices, etc. The future of AI will be written not just in code, but in hardware.

It’s rare to find a startup that checks every box: a rockstar team executing relentlessly and harnessing revolutionary tech in a huge and high-potential market. Yet when we met Arago, it was immediately clear this was one of those moments. 

Earlybird is proud to welcome Arago to our portfolio and to support their mission in building an AI-powered world that neither consumes too much energy nor compromises on performance. We couldn’t be more excited to be on this journey together, helping to light the path forward for a more sustainable and scalable AI.


Earlybird’s Dr. Moritz Belling and Ferdinand Dansard signing the term sheet with Arago’s co-founders

We’ve been all in since day one! Check out the news coverage linked here: Sifted, EE Times, and Tech.eu.