Will Nvidia's AI gold rush continue?

The meteoric rise of the California-based chipmaker's valuation has brought flashbacks of the dot-com era, when the internet was booming the way AI is today

Naandika Tripathi
Published: Jun 24, 2024 01:50:46 PM IST
Updated: Jul 4, 2024 12:35:20 PM IST

Nvidia CEO Jensen Huang presents their Blackwell platform at an event ahead of the COMPUTEX forum, in Taipei, Taiwan June 2, 2024. Image: Reuters/Ann Wang

Nvidia’s stock has multiplied tenfold since late 2022 and almost tripled so far this year. The chipmaker became the world's most valuable company for the first time on June 18, when its market capitalisation briefly surpassed those of Microsoft and Apple. It took Microsoft and Apple about six years each to climb from a $1 trillion to a $3 trillion market value; Nvidia did it in roughly one year.

The stock has gained about 591,000 percent in value since the company went public in 1999. This meteoric rise was driven, in part, by its first-mover advantage in graphics processing units (GPUs) and, more recently, by the AI revolution.

However, this astronomical growth has raised questions about whether the current AI [artificial intelligence] boom is a bubble. The rise of Nvidia's valuation has brought flashbacks of the dot-com era, when the internet was booming the way AI is today. Things ended on a bitter note then: after peaking in March 2000, the Nasdaq index fell more than 50 percent as Cisco and other tech stocks collapsed.

This does not necessarily mean that the current AI boom led by Nvidia will meet a similar fate, but doubts have arisen and they will linger. The company's revenue and earnings are growing fast, and experts say that growth justifies the stock's rise in value. Revenue increased about 260 percent year on year in the fiscal first quarter of 2025, which ended in April 2024. Over the last few years, demand for Nvidia's most powerful and advanced chips, essential for artificial intelligence, has boosted sales and profits. Google, Meta, Microsoft, Amazon, and OpenAI buy billions of dollars' worth of Nvidia's GPUs.

The semiconductor company’s largest and most important business is selling the parts it makes for data centres. This includes its AI chips as well as many of the additional components needed to run big AI servers. Nvidia said its data centre category rose 427 percent from the year-ago quarter to $22.6 billion in revenue, led by shipments of the company’s Hopper graphics processors, which include the H100 GPU. A big buyer was Meta, which used 24,000 H100 GPUs to train Llama 3, its latest large language model (LLM).

The market is betting on a much bigger story than generative AI itself: Maybe the AI boom will be the critical catalyst for the ultimate digital transformation of all industries and economies, explains Winston Ma, adjunct professor at the NYU School of Law. “Then the demand for Nvidia chips will explode exponentially. The jury is still out because the market is searching for AI killer apps. 2024-2025 would be the years of AI implementation. The real test for Nvidia has just begun.”

Cyclical ups and downs are expected, and they may create some space for other companies to catch up. Yet no one comes close to the computational power of Nvidia’s GPUs, says Robert Quinn, a semiconductor engineer and computing expert with over two decades of experience in information technology. “We're going to see a huge demand for servers and a need for AI facilities. Apparently, the US government is selling off their smart computers right now because they're going to replace them with Nvidia supercomputers.”

Nvidia’s GPUs were initially designed for video games, but demand broadened post-pandemic as wider use cases emerged, from mining cryptocurrency to self-driving cars to training AI models. Nvidia hardware, for instance, is found in many Tesla vehicles.

The California-based chip designer is rolling out new AI chips at lightning speed, pledging to release new models on a “one-year rhythm" after previously operating on a slower two-year update timeline. In June, Nvidia CEO Jensen Huang announced a new AI chip architecture, dubbed “Rubin”, just three months after the announcement of the “Blackwell” platform, which is still in production and is expected to ship to customers later in 2024.

The B200 Blackwell AI chip can perform some computational tasks 30 times faster than the current bestseller, the H100 Hopper, the chip that has helped the company capture an 80 percent market share. Nvidia's H100 GPUs lead the data centre chip market and are in short supply because of high demand. The world’s top tech companies, including Microsoft and Meta, are already queued up for the Blackwell chip. All these chips are manufactured by Taiwan’s TSMC.

According to an IDC report, the Asia-Pacific region is witnessing an unprecedented surge in generative AI adoption, including software, services, and hardware for AI-centric systems. The region is likely to see its GenAI spending soar to $26 billion by 2027, at a compound annual growth rate (CAGR) of 95.4 percent over the period. China is projected to maintain its position as the dominant market for GenAI, while Japan and India are set to become the most rapidly expanding markets in the forthcoming years.
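To put the IDC projection in perspective, a CAGR of 95.4 percent means spending roughly doubles every year. The $26 billion figure and the growth rate come from the report; the five-year horizon with a 2022 base year used below is an assumption for illustration, as the report's exact period is not stated here.

```python
# Sketch of the compound-growth arithmetic behind the IDC projection.
# The $26 bn 2027 figure and 95.4% CAGR are from the report; the
# five-year horizon (2022 base year) is an assumption for illustration.

def implied_base(end_value: float, cagr: float, years: int) -> float:
    """Back out the starting value consistent with an end value and CAGR."""
    return end_value / (1 + cagr) ** years

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between start and end over `years` years."""
    return (end / start) ** (1 / years) - 1

base_2022 = implied_base(26.0, 0.954, 5)  # roughly $0.91 bn
print(f"implied 2022 spending: ${base_2022:.2f} bn")
print(f"check CAGR: {cagr(base_2022, 26.0, 5):.1%}")
```

Under these assumptions, regional GenAI spending would have started below $1 billion in 2022, which illustrates how aggressive a 95.4 percent compounding rate is.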

India’s data centre startup Yotta placed an order for 16,000 Nvidia H100 chips in September 2023. The first batch of 4,000 chips, comprising Nvidia H100 Tensor Core GPUs, arrived in March. The Mumbai-based venture will offer managed cloud services along with the ability for enterprises to use Yotta’s cloud to train LLMs and build applications like OpenAI’s ChatGPT. The entire first lot of 4,000 chips is contracted to customers. Global customers account for almost 70 percent of Yotta’s business, and India makes up the remaining 30 percent. This ratio, however, is likely to change significantly in the next twelve months because the Indian ecosystem will grow much faster, Sunil Gupta, co-founder, MD, and CEO of Yotta, told Forbes India recently. Gupta has also placed an additional order for 8,000 of the newly announced Blackwell AI chips.

It’s worth noting that these chips are expensive, and installing them requires huge capital expenditure. With competitors playing catch-up, Nvidia may have to consider reducing prices. Intel has already launched Gaudi 3, which rivals Nvidia’s H100 at half the price.

“The competitive market forces will reduce the prices because technology will become democratised across the world. For instance, Unified Payments Interface (UPI), smartphones, and the internet are easily accessible because they provide a good experience at a low price. Technology suddenly got into the hands of everybody. I think the same thing will happen for AI as well,” adds Gupta.