
Elon Musk reconfirms '50 million plan' for xAI; says: 'Having thought about it…'

Elon Musk has announced that his artificial intelligence (AI) startup, xAI, aims to deploy the equivalent of 50 million Nvidia H100 GPUs over the next five years. The move appears aimed at competing with OpenAI, the company Musk himself co-founded before departing. His statement underscores his commitment to building massive GPU clusters to scale xAI's operations quickly. The announcement follows a surge of interest in artificial intelligence since the launch of ChatGPT. In a recent post on the social media platform X (formerly Twitter), Musk wrote: “Having thought about it some more, I think the 50 million H100 equivalent number in 5 years is about right. Eventually, billions.”

He said this in response to an xAI employee named Andree Jacobson. While replying to a Musk post from July, Jacobson wrote: “Older post but for those that are wondering about what @xai will be up to over the next few years, let’s just say that we’ll be busy…”

In the original post from last month, Musk wrote: “The @xAI goal is 50 million in units of H100 equivalent-AI compute (but much better power-efficiency) online within 5 years.”

What does Elon Musk mean by 'H100 equivalent-AI compute'?

According to a report by Tom’s Hardware, Musk’s specific phrase “H100 equivalent-AI compute” signals that the planned cluster would match the performance of 50 million H100 GPUs while being far more power-efficient. Nvidia’s Blackwell-based B200 GPUs are currently considered the most efficient AI accelerators available, but Musk’s phrasing suggests that xAI may eventually move away from Nvidia. This could signal a shift toward AMD, or even the development of custom accelerators in collaboration with a partner like Broadcom, which already designs custom ASICs for other companies.

With his latest post, Musk has doubled down on his AI goals, suggesting that xAI could eventually reach computing power equivalent to billions of H100 GPUs. Ambitious as the plan is, it raises concerns about environmental impact, high energy demands, and the strain that large data centres place on local communities.

Meanwhile, OpenAI CEO Sam Altman has outlined plans for over a million H100 GPUs by the end of the year, with a longer-term vision of 100 million GPUs, which would require funding on the scale of the UK’s GDP. In contrast, xAI currently operates about 200,000 H200 GPUs, far below Musk’s targets.

Meta, led by Mark Zuckerberg, is pursuing similar ambitions with its “Hyperion” data centre project, which will consume around 5GW of power and aims to surpass a million AI GPUs by year-end. Meta is also moving toward developing its own chips to reduce reliance on outside suppliers.

Alongside these developments, xAI has open-sourced Grok 2.5, with Grok 3 set to follow in six months. However, Grok 4 remains under scrutiny due to recent controversies, raising questions about the challenges of scaling while maintaining responsible use.

