David vs. Goliath: Can MatX Outperform Nvidia in the AI Race?
Fresh & Hot curated AI happenings in one snack. Never miss a byte 🍔

This snack byte will take approx 4 minutes to consume.
MatX, a chip startup, is making waves in the AI hardware sector. It has just secured a hefty $80 million in a Series A funding round, less than a year after raising $25 million in seed funding.
Led by Spark Capital, the deal values MatX at a pre-money valuation in the mid-$200 million range, with a post-money valuation creeping into the low $300 million territory. For a company barely two years old, these numbers speak volumes about its potential to shake up the AI landscape.
Co-founded by Mike Gunter and Reiner Pope—both alumni of Google’s TPU (Tensor Processing Unit) team—MatX has set its sights on solving a critical pain point in the industry: the shortage of chips tailored for AI workloads.
If the AI industry is a fast-paced marathon, Gunter and Pope are designing the running shoes everyone wants. Their chips are optimized for handling models with at least 7 billion parameters and, ideally, over 20 billion.
And they’re not just chasing raw performance; these chips promise to deliver robust results at a fraction of the cost, making them a tempting alternative to Nvidia’s GPUs, which currently dominate the AI chip market.
What makes MatX’s offering unique is its advanced interconnect system—the digital "superhighways" that allow AI chips to talk to each other efficiently. This innovation makes their chips particularly effective in scaling to large clusters, a must for training and running cutting-edge large language models (LLMs).
MatX’s goal?
To outperform Nvidia’s GPUs by a factor of ten when it comes to training LLMs and delivering inference results. Ambitious, yes—but also potentially transformative for an industry grappling with sky-high computational demands.
The startup’s rapid ascent has attracted serious backing. Its initial seed round was supported by tech heavyweights Nat Friedman, former CEO of GitHub, and Daniel Gross, a seasoned AI entrepreneur. Gross, who previously led search and AI at Apple, has since co-founded Safe Superintelligence alongside Ilya Sutskever, OpenAI’s former chief scientist. Clearly, MatX has the vote of confidence from some of the sharpest minds in AI.
This surge in investor interest isn't happening in a vacuum. Global demand for GPUs and AI-specific chips has skyrocketed, with Nvidia's processors becoming both the gold standard and a scarce commodity.
The chip sector is having its moment in the spotlight, and startups like Groq—founded by another former TPU engineer—are seeing their valuations soar. Groq’s valuation, for instance, nearly tripled to $2.8 billion this year. The stakes in this arena couldn’t be higher, and MatX is sprinting to carve out its share of the market.
MatX’s journey reflects a broader trend in the AI hardware race: the hunger for specialized solutions that balance performance, scalability, and cost. The company’s bold claims of delivering chips that are not just incrementally better but orders of magnitude more efficient could be a game-changer.