
Jensen Huang: We are accelerating everything! Full transcript of NVIDIA's earnings conference call.

On the earnings conference call, Jensen Huang shared his latest views on NVIDIA's performance and the AI revolution.

NVIDIA Corporation (NVDA) 2024 Fourth Quarter Earnings Conference Call Transcript

NVIDIA (NVDA) 2024 fourth quarter earnings conference call, February 21, 2024, 5:00 PM Eastern Time

Company Participants

Simona Jankowski - Vice President, Investor Relations

Colette Kress - Executive Vice President and Chief Financial Officer

Jensen Huang - President and Chief Executive Officer

Conference Call Participants

Toshiya Hari - Goldman Sachs

Joe Moore - Morgan Stanley

Stacy Rasgon - Bernstein Research

Matt Ramsay - TD Cowen

Timothy Arcuri - UBS

Ben Reitzes - Melius Research

C.J. Muse - Cantor Fitzgerald

Aaron Rakers - Wells Fargo

Harsh Kumar - Piper Sandler

Operator

Good afternoon. I'm Rob, your host for today's conference call. I'd like to welcome everyone to the NVIDIA fourth quarter earnings conference call. To minimize background noise, all lines have been muted. There will be a Q&A session following the speakers' remarks. [Operator instructions]

Thank you. Simona Jankowski, you may now begin the meeting.

Simona Jankowski

Thank you. Good afternoon, everyone, and welcome to the NVIDIA fiscal 2024 fourth quarter conference call. Joining me on the call today are NVIDIA's President and CEO, Jensen Huang, and Executive Vice President and CFO, Colette Kress.

I'd like to remind everyone that our conference call is being webcast live on the NVIDIA Investor Relations website. The webcast will be available for replay until our discussion of the first quarter fiscal 2025 financial results. The content of today's call is NVIDIA's property and may not be copied or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These statements are subject to significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that may affect our future financial results and business, please refer to today's earnings press release, our recent 10-K and 10-Q filings, and any 8-K reports we may file with the Securities and Exchange Commission.

All of our statements are made as of today, February 21, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss financial measures that are not in accordance with Generally Accepted Accounting Principles (GAAP). You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in the CFO commentary posted on our website. With that, let me turn the call over to Colette.

Colette Kress

Thank you, Simona. The fourth quarter was another record quarter. Revenue of $22.1 billion was up 22% QoQ and 265% YoY, well above our outlook of $20 billion. Revenue for fiscal 2024 was $60.9 billion, up 126% from the prior year.
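
As a quick sanity check, the growth rates quoted above can be inverted to recover the implied prior-period revenue (an illustrative snippet; the only inputs are the figures just cited):

```python
# Working backward from $22.1B at +22% QoQ and +265% YoY recovers
# the implied prior-period revenue figures.
q4_revenue = 22.1  # $B, fiscal Q4 2024
print(f"implied Q3 FY24 revenue: ${q4_revenue / 1.22:.2f}B")  # ~$18.1B
print(f"implied Q4 FY23 revenue: ${q4_revenue / 3.65:.2f}B")  # ~$6.1B
```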

Let's start with the data center. Data center revenue for fiscal 2024 was $47.5 billion, more than tripling from the prior year. The world has reached the tipping point of a new computing era. The $1 trillion installed base of data center infrastructure is rapidly transitioning from general-purpose computing to accelerated computing.

As Moore's Law slows while computing demand continues to skyrocket, companies are accelerating every workload they can to drive improvements in performance, total cost of ownership, and energy efficiency. At the same time, companies have started to build the next generation of modern data centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI.

In the fourth quarter, data center revenue of $18.4 billion was a record, up 27% QoQ and up 409% YoY, driven by the NVIDIA Hopper GPU computing platform and InfiniBand end-to-end networking. Compute revenue grew more than 5x and networking revenue tripled from last year. We are pleased that supply of Hopper architecture products is improving. Demand for Hopper remains very strong. We expect our next-generation products to be supply constrained, as demand far exceeds supply.

The growth of the data center in the fourth quarter was driven by the training and inference of generative AI and large language models across a wide range of industries, use cases, and regions. The versatility and leading performance of our data center platform make it possible to achieve high return on investment in many use cases, including AI training and inference, data processing, and a wide range of CUDA-accelerated workloads. We estimate that approximately 40% of data center revenue in the past year was used for AI inference.

The building and deployment of AI solutions have reached nearly every industry. Many companies across industries are training and operating their AI models and services at scale. Enterprises access NVIDIA AI infrastructure through cloud providers, including hyperscale, GPU-specialized, and private clouds, or on premises.

NVIDIA's computing stack seamlessly extends to cloud and on-premises environments, allowing customers to adopt a multi-cloud or hybrid cloud strategy. In the fourth quarter, large cloud providers accounted for more than half of our data center revenue, supporting internal workloads and external public cloud customers.

Microsoft recently noted that over 50,000 organizations are using GitHub Copilot to enhance their developers' productivity, helping accelerate GitHub revenue growth to 40% YoY. In its first two months, adoption of Copilot for Microsoft 365 outpaced that of the two previous major Microsoft 365 enterprise suite releases.

Consumer internet companies have been early adopters of AI and represent one of our largest customer categories. From search to e-commerce, social media, news and video services, and entertainment, companies are using AI for deep learning-based recommendation systems. These AI investments have generated strong returns by lifting customer engagement, ad interactions, and click-through rates.

In its latest quarterly report, Meta mentioned that more accurate predictions and improved ad performance have significantly accelerated its revenue. In addition, consumer internet companies are investing in AI to support content creators, advertisers, and customers through automated tools for content and ad creation, online product descriptions, and AI shopping assistants.

Enterprise software companies are applying generative AI to help customers achieve productivity gains. We have collaborated with some early customers for training and inference of generative AI, and they have already seen significant business success.

ServiceNow's generative AI products drove its largest-ever net new annual contract value contribution in its latest quarter. We also partner with many other leading AI and enterprise software platforms, including Adobe, Databricks, Getty Images, SAP, and Snowflake.

The foundational field of large language models is thriving. Companies like Anthropic, Google, Inflection, Microsoft, OpenAI, and xAI continue to make remarkable breakthroughs in generative AI. Exciting companies such as Adept, AI21, Character.ai, Cohere, Mistral, Perplexity, and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to cater to specific languages, cultures, and customs in many regions worldwide.

Other companies are creating foundation models to address entirely different industry problems, such as Recursion Pharmaceuticals and Generate:Biomedicines in biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU-specialized cloud providers. Just this morning, we announced a collaboration with Google to optimize its state-of-the-art Gemma language models, accelerating their inference performance on NVIDIA GPUs in cloud data centers and on PCs.

One of the most notable trends over the past year has been the broad adoption of AI by enterprises in industry verticals such as automotive, healthcare, and financial services. NVIDIA offers multiple application frameworks that help companies adopt AI in vertical domains such as autonomous driving, drug discovery, low-latency machine learning for fraud detection, and robotics, leveraging our full-stack accelerated computing platform.

We estimate that data center revenue from the automotive vertical exceeded $1 billion last year, through cloud and on-premises deployments. NVIDIA DRIVE infrastructure solutions include systems and software for autonomous driving development, covering data ingestion, curation, labeling, and AI training, as well as validation through simulation.

Almost 80 vehicle manufacturers, spanning global OEMs, new energy vehicle makers, trucking companies, robotaxi operators, and tier-one suppliers, are using NVIDIA AI infrastructure to train LLMs and other AI models for autonomous driving and AI cockpit applications. In fact, nearly every automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA's automotive data center processing demand to grow significantly.

In the healthcare field, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging, and wearable devices. Over the past decade, we have built deep domain expertise in healthcare, creating the NVIDIA Clara healthcare platform and NVIDIA BioNeMo, a generative AI service for developing, customizing, and deploying AI foundational models for computer-aided drug discovery.

BioNeMo has an increasing number of pre-trained biological molecular AI models that can be applied to end-to-end drug discovery processes. We announced that Recursion will offer their proprietary AI models through BioNeMo for use in the drug discovery ecosystem. In the financial services sector, customers are using AI for an increasing number of use cases, from trading and risk management to customer service and fraud detection. For example, American Express has improved fraud detection accuracy by 6% using NVIDIA AI.

Turning to the geographic distribution of our data center revenue: growth was strong across all regions except China, where our data center revenue declined significantly after the US government's export control regulations took effect in October. Although we have not received licenses from the US government to ship restricted products to China, we have started shipping alternative products to the Chinese market that do not require a license. China accounted for a mid-single-digit percentage of our data center revenue in the fourth quarter, and we expect it to stay in a similar range in the first quarter.

Outside the US and China, sovereign AI has become an additional demand driver. Countries worldwide are investing in AI infrastructure to support the development of large language models using their own language, local data, and to support local research and business ecosystems. From a product perspective, the majority of revenue is being driven by our Hopper architecture and InfiniBand network. They have become the de facto standards for accelerating computing and AI infrastructure.

We plan to make our first H200 shipments in the second quarter, with strong demand, as the inference performance of the H200 is nearly twice that of the H100. Networking is running at an annualized revenue run rate of over $13 billion. Our end-to-end networking solutions define the modern AI data center. Our Quantum InfiniBand solutions grew more than 5x year-on-year.

NVIDIA Quantum InfiniBand is the standard for the highest-performance AI-dedicated infrastructure. We are now entering the Ethernet networking space with the launch of Spectrum-X, a new end-to-end offering designed specifically for AI-optimized networking in the data center. Spectrum-X introduces new technologies built for AI. Our Spectrum switches, BlueField DPUs, and software stack together deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet.

Major OEMs, including Dell, HPE, Lenovo, and Supermicro, are partnering with us through their global sales channels to bring our AI solutions to enterprises worldwide. We expect to ship Spectrum-X this quarter. Our software and services offerings also made significant progress, reaching an annualized revenue run rate of $1 billion in the fourth quarter. We announced that NVIDIA DGX Cloud will expand its list of partners to include Amazon's AWS, joining Microsoft Azure, Google Cloud, and Oracle Cloud. DGX Cloud is used for NVIDIA's own AI research and custom model development, as well as by NVIDIA developers. It brings the CUDA ecosystem to our CSP partners.

Now, let's talk about gaming. Gaming revenue was $2.87 billion, flat QoQ and up 56% YoY, better than our outlook, on solid consumer demand for NVIDIA GeForce RTX GPUs during the holidays. Fiscal year revenue was $10.45 billion, up 15% YoY. At CES, we announced the GeForce RTX 40 Super series GPU family. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start.

With NVIDIA AI Tensor Cores, these GPUs deliver up to 836 AI TOPS, ideal for powering AI in games and boosting everyday productivity. Our rich software stack further accelerates AI on RTX GPUs. With our DLSS technology, seven out of eight pixels can be generated by AI, accelerating full ray tracing by up to 4x with better image quality. And with TensorRT-LLM for Windows, our open-source library, inference performance for the latest large language models is accelerated by up to 5x on RTX AI PCs.
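
The "seven out of eight pixels" figure follows from simple arithmetic, assuming the commonly described DLSS 3 combination of Super Resolution and Frame Generation (an assumption; the call does not spell out the breakdown):

```python
# Under the assumed DLSS 3 combination: Super Resolution renders 1 of every
# 4 output pixels, and Frame Generation synthesizes every other displayed
# frame entirely.
rendered_pixels = 1 / 4  # one rendered pixel per four output pixels
rendered_frames = 1 / 2  # one rendered frame per two displayed frames
rendered = rendered_pixels * rendered_frames
print(f"rendered fraction: {rendered:.3f}")          # 0.125 -> 1 of 8 pixels
print(f"AI-generated fraction: {1 - rendered:.3f}")  # 0.875 -> 7 of 8 pixels
```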

At CES, we also announced a wave of new RTX 40 series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch and thin-and-light notebooks. With up to 686 TOPS of AI performance, generative AI performance on these next-generation AI PCs is up to 60x higher, making them the ultimate-performance AI PC platform. At CES, we introduced the NVIDIA Avatar Cloud Engine (ACE) microservices, which let developers integrate state-of-the-art generative AI models into digital avatars. ACE won several CES 2024 Best of awards.

NVIDIA has an end-to-end platform for building and deploying generative AI applications on RTX PCs and workstations. This includes libraries, SDKs, tools, and services that developers can integrate into their generative AI workloads. With over 100 million RTX PCs in the installed base and over 500 AI-enabled PC applications and games, NVIDIA is driving the next wave of generative AI applications coming to the PC.

Now, let's talk about professional visualization. Revenue of $463 million was up 11% QoQ and 105% YoY. Fiscal year revenue was $1.55 billion, up 1% YoY. Sequential growth in the quarter was driven by the continued ramp of RTX Ada architecture GPUs. Enterprises are refreshing their workstations to support generative AI-related workloads such as data preparation, LLM fine-tuning, and retrieval-augmented generation.

Key industry verticals driving demand include manufacturing, automotive, and robotics. The automotive industry has also been an early adopter of NVIDIA Omniverse as it digitizes workflows from design and build to simulating, operating, and experiencing its factories and cars. At CES, we announced that creative partners and developers including Brickland, WPP, and ZeroLight are building Omniverse-powered car configurators. Leading automakers such as Lotus are adopting the technology to bring a new level of personalization, realism, and interactivity to the car-buying experience.

Moving to automotive: revenue was $281 million, up 8% QoQ and down 4% YoY. Fiscal year revenue was $1.09 billion, up 21% YoY, crossing the $1 billion mark for the first time, on continued adoption of the NVIDIA DRIVE platform by automakers. NVIDIA DRIVE Orin is the AI car computer of choice for software-defined autonomous vehicle fleets.

Its successor, NVIDIA DRIVE Thor, designed for vision transformers, offers more AI performance and integrates a wide range of intelligent capabilities into a single AI compute platform, including autonomous driving and parking, driver and passenger monitoring, and AI cockpit functionality, and will be available next year. This quarter, several automotive customers announced new vehicles built on NVIDIA, including Li Auto, Great Wall Motors, ZEEKR (Geely's premium EV subsidiary), and Xiaomi.

Now, moving to the rest of the income statement. GAAP gross margin expanded sequentially to 76%, and non-GAAP gross margin to 76.7%, on strong data center growth and mix. Our fourth-quarter gross margins benefited from favorable component costs. Sequentially, GAAP operating expenses were up 6% and non-GAAP operating expenses were up 9%, primarily reflecting higher investments in compute and infrastructure as well as employee growth.

In the fourth quarter, we returned $2.8 billion to shareholders in the form of share repurchases and cash dividends. During fiscal 2024, we used $9.9 billion in cash for shareholder returns, including $9.5 billion in share repurchases.

Now, let me turn to the outlook for the first quarter. Total revenue is expected to be $24 billion, plus or minus 2%. We expect sequential growth in data center and professional visualization, partially offset by a seasonal decline in gaming. GAAP and non-GAAP gross margins are expected to be 76.3% and 77%, respectively, plus or minus 50 basis points. Similar to the fourth quarter, first-quarter gross margins benefit from favorable component costs. Beyond the first quarter, for the remainder of the year, we expect gross margins to return to the mid-70s percent range.
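
Spelled out, the guided ranges work out as follows (all inputs are from the outlook above):

```python
# Revenue $24B +/- 2%; GAAP gross margin 76.3% and non-GAAP 77.0%,
# each +/- 50 basis points.
rev_mid, rev_tol = 24.0, 0.02  # $B
print(f"revenue: ${rev_mid * (1 - rev_tol):.2f}B to ${rev_mid * (1 + rev_tol):.2f}B")
for label, mid in [("GAAP gross margin", 76.3), ("non-GAAP gross margin", 77.0)]:
    print(f"{label}: {mid - 0.5:.1f}% to {mid + 0.5:.1f}%")  # +/- 50 bp
```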

GAAP and non-GAAP operating expenses are expected to be approximately $3.5 billion and $2.5 billion, respectively. Fiscal 2025 GAAP and non-GAAP operating expenses are expected to grow in the mid-30% range as we continue to invest in the large opportunities in front of us.

GAAP and non-GAAP other income and expenses are expected to be an income of approximately $250 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.

Finally, let me highlight some upcoming events for the financial community. We will attend the Morgan Stanley Technology, Media, and Telecom Conference in San Francisco on March 4 and TD Cowen's 44th Annual Healthcare Conference in Boston on March 5. And please join us for our annual GTC conference in San Jose, California, starting March 18, held in person for the first time in five years. GTC will kick off with Jensen's keynote, followed by a Q&A session for financial analysts on the second day, March 19.

Now, we will open the call for questions. Operator, please begin.

Q&A Session

Operator

[Operator Instructions] Your first question comes from Toshiya Hari of Goldman Sachs. Your line is now open.

Toshiya Hari

Hi. Thank you for taking my question, and congratulations on the strong performance. My question is regarding Jen-Hsun's comments on the data center business. Clearly, you have been doing very well in this business. I am curious how your expectations for fiscal years 24 and 25 have evolved over the past 90 days.

In answering this question, I would like you to discuss some of the newer areas within data centers, such as software. Regarding sovereign AI, I think you have been very candid about the long-term view. And recently there was an article suggesting that NVIDIA may participate in the ASIC market. Is there any truth to that, and if so, how should we think about your role in this market over the coming years? Thank you.

Jensen Huang

Thank you, Toshiya. Let's take a look. There are three questions, let me repeat them. The first question is - can you - okay?

Toshiya Hari

I guess how your expectations for the data center have evolved. Thank you.

Jensen Huang

Alright. Yes. Well, we guide one quarter at a time. But fundamentally, the conditions are excellent for sustained growth from fiscal 2024 into fiscal 2025 and beyond. Let me tell you why. We are at the beginning of two industry-wide transitions, and both of them span entire industries.

The first is the transition from general-purpose to accelerated computing. As you know, general-purpose computing is losing steam. You can see it in the CSPs extending the depreciation period of their general-purpose computing data centers from four years to six years.
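
To put the depreciation point in concrete terms, a small illustrative calculation (the fleet cost is a hypothetical figure, not one from the call):

```python
# Stretching straight-line depreciation from four to six years cuts the
# annual charge by a third. The $1B fleet cost is a hypothetical example.
fleet_cost_m = 1_000  # $M of general-purpose servers (hypothetical)
for years in (4, 6):
    print(f"{years}-year schedule: ${fleet_cost_m / years:,.0f}M per year")
# 4 years -> $250M/yr; 6 years -> $167M/yr (a ~33% lower annual charge)
```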

When you can't fundamentally and dramatically increase its throughput, there is no reason to refresh with more CPUs. So you have to accelerate everything. This is what NVIDIA has been pioneering. With accelerated computing, you can dramatically improve energy efficiency. You can reduce the cost of data processing by 20 times. Huge numbers. And of course, speed. The speed is so incredible that we enabled a second industry-wide transition, called generative AI.

Generative AI - I am sure we will discuss it plenty during this call. But remember, generative AI is a new application. It enables a new way of developing software, new kinds of software being created. It is a new way of computing. You can't do generative AI on traditional general-purpose computing. You have to accelerate it.

Third, it is enabling a whole new industry, and this is worth stepping back to look at; it connects to your last question about sovereign AI. A whole new industry in the sense that, for the first time, a data center is not just about computing data, storing data, and serving a company's employees. We now have a new type of data center that is about AI generation - an AI generation factory.

You have heard me describe it as an AI factory. Basically, it takes raw material, which is data, transforms it with these AI supercomputers that NVIDIA builds, and turns it into extremely valuable tokens. These tokens are what power people's amazing experiences on ChatGPT or Midjourney, or even search today. All of your recommender systems are now enhanced by this, along with the hyper-personalization that comes with it.

All of these incredible digital biology startups, generating proteins and chemicals, and so on. So all of these tokens are generated in a very specialized type of data center. We call these data centers AI supercomputers and AI generation factories. But we are seeing diversity - that is another reason. So the foundation is that; the way it manifests into new markets is in all the diversity you see us in.

The amount of inference we do now is off the charts. Almost every time you interact with ChatGPT, we are inferencing. Every time you use Midjourney, we are inferencing. Every time you see those amazing Sora videos being generated, or Runway editing videos, or Firefly, NVIDIA is doing inferencing. The inference part of our business has grown tremendously; we estimate it at about 40%. And the amount of inference keeps rising because the models are getting bigger.

We are also diversifying into new industries. The large CSPs are still expanding; you can see it in their capital expenditures and in their commentary. There is also a new category, GPU-specialized CSPs, who focus specifically on NVIDIA AI infrastructure. You see enterprise software platforms deploying AI: ServiceNow is a very good example, and Adobe, and others such as SAP. You see consumer internet services now augmenting all of their past services with generative AI, which lets them create even more personalized content.

We are talking about industrial generative AI. Our vertical industries - automotive, healthcare, financial services - now represent multibillion-dollar businesses for us in total. And of course, there is also sovereign AI. Sovereign AI is driven by the unique data tied to each region's language, knowledge, history, and culture.

They want to use their own data to train their own digital intelligence and provision it for themselves, harnessing that raw material. It belongs to them, each region around the world. The data belongs to them, and it is most useful to their own society. So they want to protect the data, transform it themselves, add value to it, turn it into AI, and provide those services themselves.

So, we see sovereign AI infrastructure being built in Japan, Canada, France, and many other regions. Therefore, my expectation is that what has been experienced in the US and the West will definitely be replicated worldwide, with these AI generation factories appearing in every industry, every company, and every region. Therefore, I believe that in the past year, we have seen generative AI truly become a brand-new application space, a new way of computing, and a new industry emerging, which is driving our growth.

Operator

Your next question comes from Joe Moore of Morgan Stanley. Your line is open.

Joe Moore

Great. Thank you. I'd like to follow up on the 40% revenue from inference. This number is larger than I expected. Can you give us a sense of what this number might have been a year ago, how much growth you have seen in inference, especially in large language models (LLMs)? How do you measure this? I assume in some cases, you use the same GPU for training and inference. How accurate is this measurement? Thank you.

Jensen Huang

I'll start by saying that the estimate is probably understated. But we estimated it, and let me tell you why. A year ago, the recommender systems that run the internet - the news, videos, music, and products you are recommended - as you know, the internet has trillions of things on it, and your phone is only three inches square. The ability to compress all of that information down to something so small is done through an amazing system called a recommender system.

These recommender systems used to be entirely CPU-based. But recently they moved to deep learning, and now generative AI, putting them squarely on the path of GPU acceleration. They need GPU acceleration for embeddings, GPU acceleration for nearest-neighbor search, GPU acceleration for re-ranking, and GPU acceleration to generate the augmented information for you.

So GPUs are now in every step of a recommender system. And as you know, recommender systems are the single largest software engine on Earth. Almost every major company in the world runs these large recommender systems. Whenever you use ChatGPT, it is doing inference. Whenever you hear about Midjourney and the sheer volume they generate for consumers; when you see Getty, our collaboration with Getty, and Adobe's Firefly - these are all generative models. The list goes on. And as I mentioned, none of these existed a year ago; they are 100% brand new.
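
To make those GPU-accelerated stages concrete, here is a minimal, self-contained PyTorch sketch of a recommender's retrieval path. It is illustrative only: the embedding table is random and the sizes are made up, not anyone's production pipeline:

```python
# Minimal sketch of the recommender stages described above: embedding,
# nearest-neighbor retrieval, and re-ranking, each a batched GPU operation.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_items, dim = 100_000, 64

item_emb = torch.randn(n_items, dim, device=device)  # stand-in for a trained item table
user_emb = torch.randn(dim, device=device)           # stand-in for a user embedding

# Nearest-neighbor search as one batched matmul: score every item at once.
scores = item_emb @ user_emb

# Re-ranking: keep the top candidates. A real system would apply a heavier
# ranking model to this shortlist before generating the final page.
top_scores, top_ids = torch.topk(scores, k=10)
print(top_ids.tolist())
```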

Operator

Your next question comes from Stacy Rasgon of Bernstein Research. Your line is open.

Stacy Rasgon

Hi, everyone. Thank you for taking my question. I'd like to ask Colette - I want to talk about your previous mention that you expect the next generation product - I guess you're referring to Blackwell, to be supply-constrained. Could you elaborate a bit on what's causing this? Why would it be constrained with Hopper supply easing? Do you expect this constraint to last, like do you anticipate the next-gen product to be constrained until 2025? When do you see it easing?

Jensen Huang

Yes. First of all, overall, our supply is improving. Our supply chain has done an incredible job for us - wafers, packaging, memory, all of the power regulators, transceivers, networking, cables, you name it. The list of components we ship goes on. People think of an NVIDIA GPU as a chip, but the NVIDIA Hopper GPU has 35,000 parts and weighs 70 pounds. These are really complex things we build. It is called an AI supercomputer for good reason. If you ever look at the back of a data center, the systems and the cabling are mind-boggling. It is the densest, most complex networking cabling in the world.

Our InfiniBand business has grown five times YoY. The supply chain is really supporting us well. So overall, the supply is improving. We expect demand to continue to outstrip our supply - throughout the year, we will do our best. Lead times are improving, and we will continue to do our best. However, whenever we have a new product, as you know, it goes from zero to a very large number. It's not something that can be done overnight. Everything is gradually improving, not all at once.

So whenever we have a new generation of products - right now, we are ramping the H200 - we cannot reasonably keep up with demand in the short term. We are also ramping Spectrum-X, and we are doing very well with it.

Spectrum-X is our brand-new entry into the Ethernet world. InfiniBand is the standard for AI-dedicated systems. Ethernet, with Spectrum-X - Ethernet by itself is not a very good scale-out system.

But with Spectrum-X, we have added essential new capabilities on top of Ethernet - adaptive routing, congestion control, and noise or traffic isolation - so we can optimize Ethernet for AI. InfiniBand will be our AI-dedicated infrastructure, and Spectrum-X will be our AI-optimized networking; it is ramping now. For all new products, demand exceeds supply - that is the nature of new products - and we are working as fast as we can to catch up with demand. But overall, our supply is increasing very nicely.

Operator

Your next question comes from TD Cowen's Matt Ramsay. Your line is open.

Matt Ramsay

Good afternoon, Colette and Jensen. Congratulations on the results. I have a two-part question that follows on from what Stacy just asked, about your demand significantly exceeding your supply even as supply improves. The first part, I guess for Colette, is how you are thinking about product allocation in light of customers' deployment readiness, and whether there is any potential buildup of product that has not yet been stood up?

Then, I guess for Jensen: I would love to hear your and the company's thoughts on allocation, given that many of your customers compete with one another - across industries, from small startups to healthcare to government. This is a very unique technology you are enabling, and I would like to hear how you think about so-called fair distribution, both for the good of your company and for the industry as a whole. Thank you.

Colette Kress

Let me start with your question, thank you, about how we collaborate with customers as they consider how to build their GPU instances and our allocation process. Many of the providers we work with have been working with us for many years, and we have been assisting them, whether it's the content they set up in the cloud or internally.

Many of these providers are working on multiple products at once to meet the varied needs of their end customers as well as their internal needs, so of course they plan ahead for the new clusters they will require. Our discussions with them cover not only our Hopper architecture but also help them understand the next wave, gauge their interest, and capture their outlook on demand.

So it's always a dynamic process about what they will purchase, what is still under construction, and what is being used for our end customers. But the relationships we have built and their understanding of the complexity of building indeed help us with allocation and communication with them.

Jensen Huang

First, our CSPs (cloud service providers) have a very clear view of our product roadmap and our transitions. That transparency gives them confidence about which products to place where and when. So they know as much as we know about the timing and the quantities. And of course, there is allocation. We allocate fairly - we do our best to allocate fairly and to avoid allocating unnecessarily.

As you mentioned earlier, why allocate something when the data center isn't ready yet? Nothing is harder than having anything sit idle. So: allocate fairly, and avoid allocating unnecessarily. As for your question about the end market: we have an excellent ecosystem of OEMs, ODMs, and CSPs and, very importantly, end markets. What truly sets NVIDIA apart is that we bring customers to our partners, the CSPs and the OEMs.

Biotech companies, healthcare companies, financial services companies, AI developers, large language model developers, autonomous vehicle companies, robot companies. There's a large number of emerging robot companies. From warehouse robots to surgical robots to humanoid robots, all sorts of interesting robot companies, agricultural robot companies. All these startups, large companies, healthcare, financial services, and automotive companies are working on NVIDIA's platform. We directly support them.

And often, we can achieve a win-win situation by simultaneously allocating to CSPs and bringing customers to them. So, this ecosystem, you're absolutely right, it's vibrant. But at its core, we aim to distribute fairly, avoid waste, and look for opportunities to connect partners and end-users. We've always been looking for these opportunities.

Operator

Your next question comes from Timothy Arcuri of UBS. Your line is open.

Timothy Arcuri

Thank you very much. I wanted to ask about how the backlog converts into revenue. Obviously, the lead times on your products have come down a lot. Colette, you didn't mention inventory purchase commitments. But if I add up your inventory, your purchase commitments, and your prepaid supply, the total supply number actually came down slightly. How should we read that? Is it just that you don't need to make as many financial commitments to suppliers because lead times are shorter, or have you perhaps reached some steady state where you are closer to filling your order book and backlog? Thank you.

Colette Kress

Yes. Let me walk through how we view these three different areas with our suppliers. You are correct: with the inventory we have on hand, given the allocation process we run, we work to ship product to customers immediately as it arrives in inventory. I think our customers appreciate that we are able to meet the schedules they are looking for.

The second part is our purchase commitments. Our purchase commitments have many different components. We need components for manufacturing, but we are also often procuring capacity we may need. The durations vary: some capacity or components are for the next two quarters, but some are for multiple years.

I can say the same about our prepayments. Our prepayments are strategically designed to secure the capacity we need at several manufacturing suppliers over the coming years. So don't read anything into the roughly similar totals, because we are ramping our supply; the commitments simply differ in duration, since some items require long lead times or capacity to be built for us.

Operator

Your next question is from Ben Reitzes at Melius Research. Your line is open.

Ben Reitzes

Yes. Thank you. Congratulations on your achievements. Colette, I'd like to discuss your comments on gross margins, where you mentioned they should return to the mid-70% range. If you don't mind elaborating. Also, is this due to the inclusion of HBM content in new products? What do you think is driving this outlook? Thank you very much.

Colette Kress

Thanks for the question. We highlighted in our opening remarks both our fourth-quarter results and our first-quarter outlook. Both quarters are unique in their gross margins because they include a benefit from favorable component costs in the supply chain, spanning both our compute and networking businesses as well as several different stages of our manufacturing process.

Going forward, we have visibility into a mid-70s gross margin, bringing us back to where we were before the peak of this fourth quarter and first quarter. So we are really looking at mix: mix will always be the biggest driver of what we ship for the rest of the year. Those are really just the drivers.

Operator

Your next question is from C.J. Muse at Cantor Fitzgerald. Your line is open.

C.J. Muse

Good afternoon, thank you for taking my question. Jensen Huang, I have a broader question for you. With the million-fold improvement in GPU computing over the past decade and the expectations for similar improvements in the future, how do your customers view the long-term sustainability of their NVIDIA investments today? Will today's training clusters become tomorrow's inference clusters? How do you see this evolving? Thank you.

Jensen Huang

Hey, CJ. Thanks for the question. Yes, that's the really cool part. If you look at why we can achieve such big performance improvements, it's because our platform has two characteristics. One, it's accelerated. Two, it's programmable. It's not brittle. NVIDIA is the only architecture that has been there from the very beginning - from the moment Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton first revealed AlexNet with CNNs, through RNNs, LSTMs, RL, deep RL, transformers, and every version that followed.

Every version and every species of AI that came after - vision transformers, multimodal transformers, and now things with time series - every variant, we have been able to support, optimize our stack for, and deploy across our installed base.

This is the truly amazing part. On one hand, we can invent new architectures and technologies, like our Tensor Cores and the Transformer Engine within our Tensor Cores, and improve new numerical formats and processing structures, just as we have done across generations of Tensor Cores, while at the same time supporting the installed base.

As a result, all of the new software algorithm inventions and all of the new model inventions run on our installed base on the one hand. On the other hand, whenever we see something revolutionary, like the transformer, we can create something entirely new, like the Hopper Transformer Engine, and implement it in future generations. So we have the ability to bring software to the installed base and keep improving it, so that our customers' installed base is enriched over time with our new software.

And then, separately, the ability to create revolutionary new technology. Don't be surprised if, in our future generations of products, there are suddenly amazing breakthroughs in large language models. Some of those breakthroughs will be in software, because they run on CUDA, and they will be made available to the installed base. So on one hand, we carry everyone along with us; on the other hand, we make giant breakthroughs.

Operator

Your next question comes from Aaron Rakers of Wells Fargo. Your line is open.

Aaron Rakers

Yes. Thank you for taking my question. I wanted to ask about your business in China. I know in your prepared remarks you mentioned that you have started shipping some alternative solutions to China. You also noted that you expect this contribution to continue to represent a mid-single-digit percentage of your total data center business. So my question is, how significant is the range of products you are shipping to the Chinese market today, and why shouldn't we expect perhaps other alternative solutions to enter the market and expand the breadth of opportunities you have there? Thank you.

Jensen Huang

Well, fundamentally, remember that the U.S. government wants to restrict the latest capabilities of NVIDIA's accelerated computing and AI from entering the Chinese market, and at the same time the government would like to see us be as successful in China as possible. Within those two constraints - those two pillars of restrictions, if you will - we had to pause when the new restrictions were announced. We paused immediately, so that we could understand what the restrictions were and reconfigure our products in a way that could not be software-hacked into exceeding them. That took some time. So we reset our product offering for China, and now we are sampling to customers in China.

We will strive to compete in that market and succeed in that market within the constraints of the restrictions. That's how it is. Last quarter, our business significantly declined because we paused in the market. We stopped shipping in the market. We expect this quarter to be the same. But afterwards, we hope we can compete in our business, do our best, and see how the results turn out.

Operator

Your next question comes from Harsh Kumar at Piper Sandler. Your line is now open.

Harsh Kumar

Hey, Jensen, Colette, and the NVIDIA team. First of all, congratulations on the incredible quarter and guidance. I'd like to talk about your software business. It's great to hear that it has surpassed a $1 billion run rate. Could Jensen or Colette help us understand the different parts and pieces of the software business - just break it down for us a bit so we can better understand where the growth is coming from?

Jensen Huang

Let me take a step back and explain the fundamental reason why NVIDIA will be very successful in software. First, as you know, accelerated computing really grew up in the cloud. In the cloud, the cloud service providers have very large engineering teams, and we work with them in a way that lets them operate and manage their own businesses. Whenever there is an issue, we have large teams assigned to them. Their engineering teams work directly with ours, and we enhance, fix, maintain, and patch the complicated software stack that accelerated computing involves.

As you know, accelerated computing is very different from general-purpose computing. You don't just write a program in C++, compile it, and have it run on every CPU. The stack of software needed is different for every domain: from data processing of SQL and other structured data, to images, text, and PDFs, which are unstructured, to classical machine learning, to computer vision, to speech, to large language models, to recommender systems. All of these require different software stacks. That is why NVIDIA has hundreds of libraries. Without software, you can't open new markets. Without software, you can't unlock and enable new applications.
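
As one concrete instance of the "different stack per domain" point, the sketch below uses RAPIDS cuDF, which mirrors the pandas API for structured-data processing on the GPU. It assumes a CUDA-capable machine with RAPIDS installed, and the data is made up:

```python
# A domain-specific GPU library exposing a familiar API: cuDF runs
# dataframe/SQL-style operations on the GPU rather than the CPU.
import cudf

df = cudf.DataFrame({
    "customer": ["a", "b", "a", "c", "b", "a"],
    "spend": [10.0, 20.0, 15.0, 5.0, 25.0, 30.0],
})
# The group-by aggregation executes on the GPU.
print(df.groupby("customer")["spend"].sum().sort_index())
```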

Software is essential for accelerated computing. This is the fundamental difference between accelerated computing and general-purpose computing that most people took a long time to understand. Now, people realize that software is really crucial. The way we collaborate with CSPs, that's really easy. We have large teams working with their large teams.

However, generative AI is now letting every enterprise and every enterprise software company embrace accelerated computing. Embracing accelerated computing is now essential, because it is no longer possible, or at least no longer cost-effective, to keep increasing throughput through general-purpose computing alone. And all of these enterprise software companies and enterprises don't have large engineering teams to maintain and optimize their software stacks to run across the world's clouds, private clouds, and on premises.

So we will manage, optimize, patch, and tune every one of their software stacks, and install the optimized base for them. We containerize it all into our stack; we call it NVIDIA AI Enterprise. We position NVIDIA AI Enterprise as a runtime, like an operating system - it is the operating system of artificial intelligence.

And we charge $4,500 per GPU per year. My guess is that every enterprise in the world, and every software company deploying software across all the clouds, private clouds, and on premises, will run on NVIDIA AI Enterprise, especially on our GPUs. This will likely become a very significant business over time. We are off to a great start; Colette mentioned it has already reached a $1 billion run rate, and we are really just getting started.
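
The per-GPU pricing makes for simple back-of-envelope math; the sketch below is illustrative, with hypothetical server and cluster sizes:

```python
# NVIDIA AI Enterprise at $4,500 per GPU per year, from the quote above.
# The 8-GPU server and 1,000-GPU cluster sizes are hypothetical examples,
# not figures from the call.
price_per_gpu_year = 4_500  # $
for gpus in (8, 1_000):
    print(f"{gpus:>5} GPUs -> ${gpus * price_per_gpu_year:,} per year")
# 8 GPUs (one typical server) -> $36,000/yr; 1,000 GPUs -> $4,500,000/yr
```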

Operator

Thank you. I will now turn the call back over to CEO Jensen Huang for closing remarks.

Jensen Huang

The computing industry is undergoing two platform shifts simultaneously. The trillion-dollar installed base of data centers is shifting from general-purpose computing to accelerated computing. Every data center will be accelerated so the world can keep up with computational demands while managing costs and energy. NVIDIA's incredible acceleration has opened up a new computing paradigm, generative AI, where software can learn, understand, and generate any information, from human language to biological structures and the 3D world.

We are now at the beginning of a new industry, where AI-specific data centers process vast amounts of raw data, refining it into digital intelligence. Like the alternating current power plants of the last industrial revolution, NVIDIA's AI supercomputers are essentially the AI generation factories of this industrial revolution. The foundation of every company and every industry is their proprietary business intelligence, which in the future will be their proprietary generative AI.

Generative AI has kicked off a whole new investment cycle to build the next trillion dollars of infrastructure - AI generation factories. We believe these two trends will drive a doubling of the world's data center infrastructure installed base over the next five years, representing an annual market opportunity in the hundreds of billions of dollars. This new AI infrastructure will open up a whole new world of applications that were not possible before. We started the AI journey with the hyperscale cloud providers and consumer internet companies. Now, every industry is getting involved, from automotive to healthcare to financial services to industrial to telecommunications, media, and entertainment.
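
The closing arithmetic, made explicit (a rough illustrative calculation using only the trillion-dollar installed base and five-year doubling cited above):

```python
# Doubling a ~$1T installed base over five years adds another ~$1T of build,
# or roughly $200B per year on average before any replacement demand, i.e.,
# an annual opportunity in the hundreds of billions.
installed_base_b = 1_000  # $B, the "trillion-dollar installed base" above
added = installed_base_b  # doubling adds one more installed base
print(f"average new build: ${added / 5:,.0f}B per year")
print(f"implied installed-base CAGR: {2 ** (1 / 5) - 1:.1%}")  # ~14.9%
```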

NVIDIA's full-stack computing platform, with its industry-specific application frameworks and huge developer and partner ecosystem, gives us the speed, scale, and reach to help every company in every industry become an AI company. We have a lot to share with you at GTC in San Jose next month, so be sure to join us. We look forward to updating you on our progress next quarter.