Jensen Huang confidently stated: the data center business is worth $250 billion a year, and NVIDIA will grab the majority of the "cake"
Jensen Huang said he expects the data center to be a very large business: global annual spending on data center equipment is expected to total $250 billion, with NVIDIA's products taking a larger share than those of any other chipmaker.
Author: Li Dan
Source: Hard AI
At NVIDIA's developer summit for Artificial Intelligence (AI), founder and CEO Jensen Huang confidently stated that NVIDIA will grab the larger slice of a data center market worth hundreds of billions of dollars a year.
At an investor event held on Tuesday the 19th during the GTC conference in California, Huang told attendees that, thanks to the wide variety of chips and software NVIDIA produces, the company is well positioned to capture a large portion of global data center equipment spending.
Huang expects global annual spending on data center equipment to total $250 billion, with NVIDIA's products taking a larger share than those of other chip manufacturers. NVIDIA develops software that lets various industries adopt and use AI technology; it provides AI models and other software, then charges customers based on their computing power and the number of chips in operation.
Huang said, "In my expectation, this (data center) will be a very large business."
On Monday, the opening day of this year's GTC conference, NVIDIA released the Blackwell architecture GPU, touted as the world's most powerful AI chip. The first GPU of this architecture, the B200, reduces cost and energy consumption by up to 25 times compared with the previous-generation H100, with inference capability up to 30 times higher. It features a second-generation Transformer engine and delivers up to 20 petaflops of FP4 compute, five times the H100's 4 petaflops.
The GB200 superchip, which combines two B200 GPUs with a single Grace CPU, provides 30 times the performance on LLM inference workloads while significantly improving efficiency. In a benchmark on the 175-billion-parameter GPT-3 model, the GB200's performance is 7 times that of the H100, and its training speed is 4 times faster. The industry has marveled at the birth of a new Moore's Law, with online comments noting that in just eight years, NVIDIA's AI chip computing power has achieved a historic 1,000-fold increase.
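The "1,000-fold in eight years" figure cited from online comments implies a compounding rate that can be checked with simple arithmetic. A minimal sketch (the 1000x total and eight-year span are taken from the article as given; this is illustrative math, not an NVIDIA benchmark):

```python
# Implied annual growth rate behind the "1000x in eight years" claim.
# Figures come from the online comments quoted in the article.
total_gain = 1000.0
years = 8

annual_factor = total_gain ** (1 / years)  # compound growth per year
print(f"Implied growth: {annual_factor:.2f}x per year")

# For comparison, classic Moore's Law doubling every two years over
# eight years would give 2**4 = 16x total.
moore_total = 2 ** (years // 2)
print(f"Moore's Law over {years} years: {moore_total}x total")
```

The comparison makes the "new Moore's Law" framing concrete: a 1000x gain in eight years corresponds to roughly 2.4x per year, far ahead of the traditional doubling every two years.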
In his keynote at the GTC conference on Monday, Jensen Huang compared Blackwell with Hopper, NVIDIA's previous-generation chip architecture designed for data centers. Training a 1.8-trillion-parameter GPT model, he said, takes approximately three to five months:
If you use the Hopper architecture, it might require 8,000 GPUs and consume 15 megawatts of power. With 8,000 GPUs and 15 megawatts, it would take 90 days, approximately three months.
If you use the Blackwell architecture, only 2,000 GPUs are needed, for the same 90 days. But the amazing part is that it requires only 4 megawatts of power.
Jensen Huang revealed that companies including Amazon Web Services (AWS), Google, Microsoft, and Oracle are all preparing to support Blackwell. NVIDIA will also continue to build out its AI-based ecosystem: NVIDIA Omniverse Cloud, for example, will be able to connect to Apple's mixed-reality headset Vision Pro, and the company is strengthening its models and its general-purpose robotics ecosystem.
Jensen Huang also introduced Project GR00T, a foundation model for humanoid robots, and Jetson Thor, a new computer for humanoid robots, and interacted on stage with a pair of small NVIDIA-powered robots from Disney Research.