Wallstreetcn
2024.08.29 04:16

The full text is here! NVIDIA conference call: Hopper shipments will increase in the second half of the year, and Blackwell will be "in short supply" until next year

NVIDIA's earnings call stated that Blackwell is scheduled to begin mass production in the fourth quarter, with fourth-quarter Blackwell revenue expected in the billions of dollars. Hopper shipments are expected to increase in the second half of fiscal year 2025. The Chinese market is a significant contributor, the sovereign AI business is growing, and NVLink will change the game in the switching and interconnect field.

NVIDIA "lives up to expectations" as its performance shines again in the second quarter, with revenue continuing to exceed expectations due to strong demand in data centers. However, revenue guidance for the third quarter is lower than the most optimistic expectations, with expectations of the first slowdown to double digits in six quarters, causing NVIDIA's stock price to plummet by over 6% after hours.

During the conference call following the earnings release, CEO Jensen Huang remained optimistic about the future of AI, saying the momentum of generative AI development is accelerating. He noted that the company is just at the beginning of reshaping the world's data centers, a trillion-dollar opportunity.

Regarding the highly anticipated progress and demand for Blackwell, NVIDIA stated that the market demand for the Hopper chip remains "strong," with high expectations for Blackwell as well. With Blackwell set to begin mass production in the fourth quarter, it is expected to generate billions of dollars in revenue, but it was not clarified whether this revenue is incremental.

Here are the key points from the conference call:

Strong Demand for Blackwell and Hopper Will Coexist: NVIDIA delivered Blackwell architecture samples to customers in the second quarter and made a change to the Blackwell GPU mask to improve production yields. Blackwell is scheduled to begin its production ramp in the fourth quarter and continue through fiscal year 2026, and is expected to bring in "billions of dollars" of revenue in the fourth quarter.

Shipments of Hopper are expected to increase in the second half of fiscal year 2025, with improvements in Hopper's supply and availability. Demand for the Blackwell platform far exceeds supply, and this situation is expected to continue into next year.

Significant Contribution from the Chinese Market: Data center revenue in the Chinese market continued to grow in the second quarter, becoming a significant contributor to data center revenue.

Growth in the Sovereign AI Business: As countries recognize that AI expertise and infrastructure are urgent priorities for their societies and industries, NVIDIA's sovereign AI opportunities are expanding, with sovereign AI revenue expected to reach the low double-digit billions of dollars this year.

NVLink Will Change the Game in the Interconnect Field: A Blackwell system connects 144 GPUs in 72 GB200 packages into a single NVLink domain, with 259 TB/s of total NVLink bandwidth in one rack, roughly 10 times that of Hopper. For inference, NVLink is crucial for low-latency, high-throughput large-model token generation.

Impact of the Product Transition on Gross Margin: Gross margin guidance for the third quarter is around 75%, which implies a fourth-quarter gross margin between 71% and 72%, with the full-year gross margin expected to be in the mid-70% range, mainly reflecting the transition and the different cost structures of new product introductions.

On the Question of Returns on Capital Expenditure: Accelerated computing speeds up applications, which directly lowers cost and energy consumption. Many companies are racing to build AI businesses, and capacity is leased out as soon as it is built, so the return on investment is very high. AI infrastructure is currently the best computing infrastructure to deploy.

Below is the full text of NVIDIA's Q2 FY2025 earnings call

Senior Director of Investor Relations at NVIDIA: Stewart Stecker

Thank you, good afternoon everyone, and welcome to NVIDIA's Q2 FY2025 earnings conference call. Joining me today on the NVIDIA call are CEO Jensen Huang and CFO Colette Kress.

In this call, we will discuss financial metrics that are not in accordance with Generally Accepted Accounting Principles (GAAP). You can find a reconciliation of non-GAAP and GAAP financial metrics in the CFO commentary posted on our website. Let me highlight an upcoming financial event: we will attend the Goldman Sachs Communacopia + Technology Conference in San Francisco on September 11, where Jensen Huang will participate in a fireside chat.

The earnings conference call for NVIDIA's Q3 FY2025 performance is scheduled for Wednesday, November 20, 2024. Now, I will turn it over to Colette.

CFO, Executive Vice President: Colette M. Kress

It was another record-breaking quarter in Q2, with revenue reaching $30 billion, a 15% increase sequentially and a 122% increase year-over-year, significantly exceeding our expected $28 billion.

First, the data center segment. Driven by strong demand for the NVIDIA Hopper GPU computing and networking platforms, data center revenue reached a record $26.3 billion, up 16% sequentially and 154% year-over-year. Compute revenue grew more than 2.5 times and networking revenue more than doubled year-over-year. Approximately 45% of our data center revenue came from cloud service providers, with more than 50% from consumer internet and enterprise companies.

Customers continue to accelerate purchases of Hopper architecture while preparing for Blackwell adoption. Key workloads driving our data center growth include: AI model training and inference; video, image, and text data preprocessing and post-processing using CUDA and AI workloads; synthetic data generation; AI-driven recommendation systems; SQL and vector database processing.

Next-generation models will require 10-20 times the computing power to handle more data, and this trend is expected to continue. In the past four quarters, we estimate that inference business accounts for over 40% of data center revenue, with CSPs, consumer internet companies, and enterprises benefiting from NVIDIA's remarkable inference platform throughput and efficiency. Demand for NVIDIA's inference platform comes from leading model makers, consumer internet services, and tens of thousands of companies and startups building AI applications for consumers, advertising, education, enterprise, healthcare, and robotics. Developers are eager for NVIDIA's rich ecosystem and availability across various clouds.

NVIDIA is broadly adopted by CSPs, and NVIDIA GPU usage continues to expand where demand is high. The NVIDIA H200 platform began ramping in the second quarter and is shipping to large CSPs, consumer internet, and enterprise companies. Based on the Hopper architecture, the NVIDIA H200 offers more than 40% greater memory bandwidth than the H100.

In the second quarter, our data center revenue in China continued to grow and became a significant contributor to our overall data center revenue. We continue to anticipate intense competition in the Chinese market going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's leadership in inference, with the NVIDIA Hopper and Blackwell platforms taking top marks across all tasks. At this year's Computex, NVIDIA, together with leading computer manufacturers, unveiled a series of Blackwell architecture systems and NVIDIA networking for building AI factories and data centers.

With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems for CSPs. The NVIDIA Blackwell platform combines multiple GPUs, CPUs, DPUs, NVLink, NVLink Switch, networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across industries and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink allows all 72 GPUs to act as a single GPU, delivering up to 30 times faster inference for LLM workloads and enabling trillion-parameter models to run in real time.

Demand for Hopper is strong, and Blackwell is being sampled extensively. We executed a change to the Blackwell GPU mask to improve production yields. Blackwell's production ramp is scheduled to begin in the fourth quarter and continue into fiscal year 2026. In the fourth quarter, we expect several billion dollars of Blackwell revenue, with Hopper shipments projected to increase in the second half of fiscal year 2025. Hopper supply and availability have improved, but demand for the Blackwell platform is well above supply, and we expect that situation to persist into next year.

Networking revenue grew 16% sequentially. With hundreds of customers adopting our Ethernet products, our AI Ethernet revenue, which includes the Spectrum-X end-to-end Ethernet platform, doubled sequentially. Spectrum-X has broad support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI, which used it to connect the world's largest GPU compute cluster. Spectrum-X supercharges Ethernet for AI processing, delivering 1.6 times the performance of traditional Ethernet. We plan to launch new Spectrum-X products annually to support the scaling of compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is on track to become a multibillion-dollar product line within a year.

As countries recognize that AI expertise and infrastructure are crucial to their societies and industries, our sovereign AI opportunities continue to expand. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer in collaboration with NVIDIA. We believe sovereign AI revenue will reach the low double-digit billions of dollars this year.

The enterprise AI wave has begun, driving revenue growth this quarter. We are working with most Fortune 100 companies on AI initiatives across industries and regions. A range of applications is driving our growth, including AI-powered chatbots, generative AI copilots, and agents that build new, monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for its smart agents, transforming the customer experience and reducing customer service costs by 30%.

ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build enterprise applications. Cohesity is using NVIDIA to build its generative AI agent and lower generative AI development costs. Snowflake, which serves more than 3 billion queries per day for over 10,000 enterprise customers, is working with NVIDIA on enterprise applications.

Lastly, they are using NVIDIA AI and Omniverse to cut end-to-end factory cycle times by 50%. Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in its data centers. Automotive will drive billions of dollars of revenue across on-premises and cloud consumption, and that revenue will grow as next-generation AV models require significantly more compute. With AI innovation in medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery, healthcare is also on its way to becoming a multibillion-dollar business.

This quarter, we announced a new NVIDIA AI foundry service, leveraging Meta's Llama 3.1 model set to provide supercharged generative AI for global enterprises, marking a watershed moment for enterprise AI. For the first time, enterprises can harness the power of cutting-edge open-source models to develop custom AI applications, encoding their institutional knowledge into an AI flywheel for business automation and acceleration. Accenture is the first company to adopt the new service to build custom Llama 3.1 models, which can be used internally and assist clients in deploying generative AI applications.

NVIDIA NIM microservices accelerate and simplify model deployment, and companies in healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIM, including Aramco, Lowe's, and Uber. AT&T achieved 70% cost savings and an eightfold latency reduction after moving to NIM for generative AI, call transcription, and classification. More than 150 partners are embedding NIM at every layer of the AI ecosystem.
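For a concrete sense of what adopting a NIM looks like, below is a minimal sketch of querying a self-hosted NIM microservice. NIM containers expose an OpenAI-compatible chat completions API; the endpoint URL, port, and model name here are illustrative assumptions rather than details from the call.

```python
# Minimal sketch: querying a self-hosted NVIDIA NIM microservice.
# NIMs expose an OpenAI-compatible HTTP API; the URL, port, and model
# name below are assumed for illustration.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # hypothetical deployed model
    "messages": [
        {"role": "user",
         "content": "Classify this call transcript as billing, outage, or other: ..."}
    ],
    "max_tokens": 64,
}

resp = requests.post(NIM_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the API surface is OpenAI-compatible, existing client code can typically be pointed at a NIM endpoint by changing only the base URL and model name.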

We announced NIM Agent Blueprints, a catalog of customizable reference applications that includes a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can progressively refine their AI applications and create a data-driven AI flywheel.

The first NIM Agent Blueprints include workflows for customer service, computer-aided drug discovery, and retrieval-augmented generation for enterprise search. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises.

NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which is gaining strong momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate by the end of this year, with NVIDIA AI Enterprise a significant contributor to that growth.
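For a rough sense of scale, the sketch below relates the per-GPU subscription price Jensen Huang quotes in his closing remarks ($4,500 per GPU per year) to the roughly $2 billion run rate mentioned here. The fleet sizes are hypothetical, and the run rate also includes SaaS and support, not just AI Enterprise licenses.

```python
# Back-of-envelope only: relating per-GPU licensing to an annual run rate.
# The $4,500/GPU/year price is quoted later in this call; the fleet
# sizes are hypothetical illustrations.

price_per_gpu_year = 4_500            # NVIDIA AI Enterprise, per GPU per year
run_rate = 2_000_000_000              # ~$2B software/SaaS/support run rate

# GPU subscriptions implied if the entire run rate were AI Enterprise
# (it is not; the figure mixes software, SaaS, and support):
print(f"{run_rate / price_per_gpu_year:,.0f} GPU subscriptions")   # ~444,444

# Annual software cost of a hypothetical 10,000-GPU deployment:
print(f"${10_000 * price_per_gpu_year:,} per year")                # $45,000,000
```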

Turning to gaming and AI PCs: gaming revenue was $2.88 billion, up 9% sequentially and 16% year-over-year. We saw continued growth in consoles, laptops, and desktops, with strong and growing demand and healthy channel inventory. Every PC with RTX is an AI PC. RTX PCs deliver up to 1,300 AI TOPS, and leading PC manufacturers have designed more than 200 RTX AI laptops. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to fundamentally change consumer experiences through generative AI.

NVIDIA ACE is a suite of generative AI technologies for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model Nemotron 4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow and expand. Recently added RTX and DLSS titles include "Fortnite," "Dune: Awakening," and "Dragon Age: The Veilguard." The GeForce NOW library continues to expand and now totals more than 2,000 titles, the largest catalog of any cloud gaming service. Turning to professional visualization, revenue was $454 million, up 6% sequentially and 20% year-over-year. Demand is driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads.

Automotive and manufacturing were among the key vertical industries driving growth this quarter, with companies competing to digitize workflows to improve overall operational efficiency. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to support the digital twin of the physical factory producing the NVIDIA Blackwell system. Several large global companies, including Mercedes-Benz, have signed multi-year contracts with NVIDIA Omniverse Cloud to build industrial digital twins of their factories.

We released new NVIDIA USD NIMs and connectors, opening Omniverse to new industries and enabling developers to bring generative AI copilots and agents into USD workflows, accelerating our ability to build highly accurate virtual worlds.

WPP is implementing USD NIM microservices in its generative AI content creation pipeline for clients such as Coca-Cola. Turning to automotive and robotics, revenue was $346 million, up 5% sequentially and 37% year-over-year.

Year-over-year growth was driven by new customers ramping on our self-driving platform and growing demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the End-to-End Driving at Scale track of the Autonomous Grand Challenge, beating more than 400 entries from around the world. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robotic arms, humanoid robots, and mobile robots.

Now to the rest of the income statement. GAAP gross margin was 75.1% and non-GAAP gross margin was 75.7%. The sequential decline reflects a higher mix of new products within the data center segment and inventory provisions for low-yielding Blackwell material. GAAP and non-GAAP operating expenses both rose 12% sequentially, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion, and in the second quarter we used $7.4 billion to return value to shareholders through share repurchases and cash dividends, reflecting the increase in our dividend per share. Our board recently approved a $50 billion share repurchase authorization, supplementing the $7.5 billion remaining at the end of the second quarter.

Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of the Hopper architecture and sampling of Blackwell products, and we expect the Blackwell production ramp in the fourth quarter. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our data center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025, with full-year gross margins in the mid-70% range.

GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we focus on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.

Q&A Session

Q1 Analyst Vivek Arya: Thank you for taking my question. Jensen, you mentioned a change to the Blackwell GPU mask in your remarks. I am curious, are there any incremental changes in back-end packaging or elsewhere? Relatedly, you have said that despite the design change, Blackwell shipments in the fourth quarter will still reach several billion dollars. Is that because all of these issues will be resolved by then? Please help us assess any change in Blackwell timing, what overall impact it has, what it means for your revenue picture, and how customers are reacting to it.

President and CEO: Jensen Huang

Alright, thank you. The change to the Blackwell mask is complete; there were no functional changes. We are sampling functional Blackwell, Grace Blackwell, and a variety of system configurations. Roughly 100 different types of Blackwell-based systems were shown at Computex, and we are beginning to make those samples available to our ecosystem. Blackwell's functionality is unchanged, and we expect the production ramp to begin in the fourth quarter.

Q2 Goldman Sachs Toshiya Hari: I have a relatively long-term question. You may be aware that there is quite a debate in the market about the return on investment for your clients and your clients' clients, and what this means for the sustainability of future capital expenditures.

Internally at NVIDIA, what are you focused on? What is on your dashboard when you measure customer returns and their impact on capital expenditures? Then a follow-up for Colette: I believe full-year sovereign AI revenue may increase by several billion dollars. What is driving the improved outlook, and how should we think about fiscal year 2026?

President and CEO: Jensen Huang

First of all, when I said production begins in the fourth quarter, I meant shipping begins, not the start of production. On the longer-term question, let's take a step back. You have heard me say that we are going through two platform transitions at the same time.

The first is the transition from general-purpose computing to accelerated computing. The reason is well understood: CPU scaling has been slowing for some time and has slowed to a crawl, yet demand for computing continues to grow significantly, by some estimates doubling every year.

So if we do not adopt a new approach, computational inflation will drive up costs for every company and drive up energy consumption in data centers worldwide. In fact, you have already seen the answer, and it is accelerated computing.

We know that accelerated computing speeds up applications. It also enables computing at a much larger scale, for example scientific simulation or database processing, and it translates directly into lower cost and lower energy consumption. In fact, a blog post this week covered a whole range of new libraries we offer. This is at the heart of the transition from general-purpose to accelerated computing: it is not surprising that you can save 90% of computing costs when you have just sped an application up 50 times. You would expect computing costs to fall significantly.

Secondly, accelerated computing is what made generative AI possible: we drove the cost of training large language models, of deep learning, down to such an incredible degree that it is now possible to build models of enormous scale, with trillions of parameters, pre-train them on essentially the entire world's corpus of knowledge, and let the models figure out how to represent human language, how to encode knowledge into their neural networks, and how to learn reasoning. That triggered the generative AI revolution. Looking back, why did we go so deep on generative AI? Because it is not just a feature or a capability; it is an entirely new way of developing software. Instead of algorithms designed by humans, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are: here are our prior observations.

Then it figures out what the algorithm is, what the function is. A general AI is a universal function approximator: it can learn functions. So it can learn the function of almost anything, anything predictable, anything with structure, anything you have prior examples of. Now we have generative AI, a fundamentally new form of computer science. From CPU to GPU, from human-designed algorithms to machine-learned algorithms, it is affecting every layer of computing, and the kinds of applications you can now develop and produce are fundamentally amazing.

Generative AI is driving several things. First, the scale of frontier models keeps expanding, and we are still seeing the benefits of scaling. Every time you double the size of a model, you must also more than double the size of the dataset used to train it, so the compute required to create the model roughly quadruples. It is therefore not surprising that next-generation models may require 10, 20, or 40 times the compute of the previous generation, as illustrated in the sketch below.
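To make that arithmetic concrete, here is a small sketch of the scaling argument. It uses the common approximation that training compute scales with parameters times tokens (roughly 6·N·D FLOPs); the rule of thumb and the model sizes are assumptions for illustration, not figures from the call.

```python
# Sketch of the scaling argument: doubling parameters while doubling
# training tokens roughly quadruples training compute. Uses the common
# C ~ 6 * N * D FLOPs rule of thumb (an assumption, not from the call).

def train_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

base = train_flops(params=70e9,  tokens=2e12)   # hypothetical current model
nxt  = train_flops(params=140e9, tokens=4e12)   # 2x parameters, 2x tokens

print(f"one generation:  {nxt / base:.0f}x compute")          # 4x
print(f"two generations: {(nxt / base) ** 2:.0f}x compute")   # 16x
# Compounding a couple of such steps lands in the "10, 20, or 40 times"
# range cited above for next-generation models.
```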

So we must keep increasing generational performance significantly in order to drive down energy consumption and cost. First, frontier models trained on more modalities are larger. And surprisingly, there are more frontier model makers than last year. So you have more and more of this dynamic; it is one of the driving forces of generative AI.

Secondly, and this is still just the tip of the iceberg, what we see are things like ChatGPT, image generators, and coding. At NVIDIA we use generative AI extensively for coding. Of course we also have many digital designers and similar roles. But these are just the tip of the iceberg. Beneath the surface is the largest computing system in the world today, which, as I have discussed before, is the recommender system moving from CPUs to generative AI.

So recommender systems, ad generation, custom ad generation targeting at very large scale and with quite high precision, search, and user-generated content: these very large-scale applications have now evolved into generative AI applications. And of course, generative AI startups are creating billions of dollars of cloud rental opportunities for our cloud partners.

Then there is sovereign AI: countries are now realizing that their data is a natural and national resource, and that they must use AI and build their own AI infrastructure so that they can have their own digital intelligence. As Colette mentioned earlier, enterprise AI is just beginning, and you may have seen our announcement that the world's leading IT companies will join us to bring the NVIDIA AI Enterprise platform to enterprises worldwide.

Many of the companies we speak with are extremely excited about improving company productivity. Then there is general robotics. A significant shift came in the last year with the ability to learn physical AI from watching video and human demonstration and from synthetic data generated by reinforcement learning systems such as Omniverse. We can now work with nearly every robotics company to start thinking about and building general robotics. So you can see that generative AI is developing in many different directions.

Therefore, we are actually seeing an acceleration in the development momentum of generative AI.

Chief Financial Officer, Executive Vice President: Colette M. Kress

On the question of sovereign AI and our goals for growth and revenue: it is certainly a unique and growing opportunity. With the emergence of generative AI, countries around the world want their own generative AI that incorporates their own language, culture, and data. So there is more and more enthusiasm around these models and what they can do for these countries, and we see a number of opportunities developing.

Q3 Morgan Stanley Joe Moore: In the press release you described expectations for Blackwell as incredible, but it seems demand for Hopper is also very strong. Without Blackwell, you would still be guiding to a very strong October quarter. So how long do you think strong demand for both can coexist? Can you talk about the transition to Blackwell? Do you expect people to intermix clusters? Do you think most of the Blackwell activity will be new clusters? I would like to understand the transition period.

President and CEO: Jensen Huang

Thank you. Indeed, demand for Hopper is very strong, and demand for Blackwell is also very high.

There are several reasons for this. The first is that if you look at the world's cloud service providers, the amount of GPU capacity available is basically zero. That is because they are either deploying it internally to accelerate their own workloads, data processing for example. Data processing, we hardly ever talk about it because it is mundane.

It is not cool because it does not generate images or text, but almost every company in the world processes data in the background, and NVIDIA GPUs are the only accelerators on the planet that process and accelerate data: SQL data, Pandas data, data science toolkits such as Pandas and the newly launched Polars. These are the most popular data processing platforms in the world, and as I have said before, beyond CPUs, NVIDIA accelerated computing is really the only way to get more performance. The first major use case, long before generative AI appeared, was migrating applications one by one to accelerated computing, as the sketch below illustrates for Pandas.
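The Pandas acceleration referred to here is shipped today as RAPIDS cuDF, which can run existing pandas code on an NVIDIA GPU with no code changes. A minimal sketch, assuming a machine with a supported GPU and the cudf package installed; the input file is a hypothetical example.

```python
# Minimal sketch: GPU-accelerated pandas via RAPIDS cuDF's
# zero-code-change mode. Requires an NVIDIA GPU and the cudf package;
# "transactions.parquet" is a hypothetical input file.
import cudf.pandas
cudf.pandas.install()   # must be called before importing pandas

import pandas as pd     # unchanged pandas code now runs on the GPU,
                        # falling back to CPU pandas where unsupported

df = pd.read_parquet("transactions.parquet")
top_customers = (
    df.groupby("customer_id")["amount"]
      .agg(["sum", "mean", "count"])
      .sort_values("sum", ascending=False)
)
print(top_customers.head(10))
```

The same mode can also be enabled without touching the script at all by launching it as `python -m cudf.pandas script.py`.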

The second is, of course, renting: they rent capacity out to model makers and to startups. An AI company will invest the vast majority of its funding into infrastructure so it can use AI to help it create products. So these companies need the capacity now. They simply cannot wait: you just raised the money, your investors want you to put it to use now, so you do what you have to do.

You cannot do it next year; you have to do it today. That is one fair reason. The second reason for the strong demand for Hopper is the race to the next plateau. The first to reach the next plateau gets to introduce a revolutionary level of AI; the second to get there is incrementally better, or about the same. So systematically and consistently racing to the next plateau and being the first there is how you establish leadership. NVIDIA has always done this, and we show it to the world in the GPUs we make, the AI factories we make, the networking systems, the SoCs.

I mean, we want to lead, to always remain number one in the world; that is why we work so hard. Of course, we also want the dreams to come true, all the capabilities ahead of us and the benefits we can bring to society. The model makers want the same: they want to be the best in the world, to be number one. Although Blackwell will begin shipping at the end of this year, in the billions of dollars, standing up the capacity could still take weeks to a month or so. So between now and then, the generative AI market is very dynamic.

Everyone is in a great hurry. Either they need it for operational reasons, they need accelerated computing, and they do not want to build any more general-purpose computing infrastructure. Even Hopper, and of course H200 is state of the art: if you have to choose today between building CPU infrastructure for your business and building Hopper infrastructure for your business, that decision is relatively clear. I think people are simply eager to convert the $1 trillion of installed infrastructure into modern infrastructure, and Hopper is state of the art.

Q4 TD Cowen analyst Matt Ramsay: I want to go back to an earlier question, the investor debate about the return on investment on all of this capital expenditure. You see how many people are spending so much money hoping to push the frontier toward AGI, as you just said, reaching a new level of capability. They will spare no effort to get to that level of capability, because it opens many doors for the industry and for their companies.

CEO Jensen Huang:

Customers who invest in NVIDIA infrastructure see returns right away. It is the best return-on-investment computing infrastructure you can deploy today. So one way to think about it, perhaps the simplest, is to go back to first principles.

You have $1 trillion of general-purpose computing infrastructure already on the ground. The question is, do you want to build more of it? For every $1 billion of general-purpose CPU infrastructure you stand up, you probably rent it out for less than $1 billion, because it is commoditized; there is already $1 trillion on the ground, so what is the point of more? So the people clamoring for this infrastructure, once they stand up Hopper-based infrastructure, and soon Blackwell-based infrastructure, start saving money.

That is a tremendous return on investment. The reason they start saving money is that data processing gets cheaper, and data processing is probably just one part of it. Recommender systems save money, and so on and so forth, and that is how the savings begin.

The second thing is that everything you build will be rented out, because so many companies are building generative AI; your capacity gets rented immediately, and the return on investment is very high.

The third reason is your own business. Do you want to create the next frontier yourself, or do you want your internet services to benefit from the next generation of advertising systems, recommendation systems, or search systems? Therefore, for your own services, your own stores, your own user-generated content, social media platforms, your own services, generative artificial intelligence also offers a rapid return on investment. So, you can consider this issue from many perspectives.

But at its core, it is the best computing infrastructure currently available for deployment. The world of general computing is moving towards accelerated computing. The software world designed by humans is moving towards generative artificial intelligence software. If you want to build infrastructure to modernize cloud computing and data centers, then use NVIDIA's accelerated computing technology to build it, as it is the best way.

Q5 UBS Timothy Arcuri:

Thank you very much. I would like to ask about near-term and longer-term revenue growth. I know you have meaningfully increased operating expenses this year, and if I look at the increase in your purchase commitments and supply obligations, that is also quite encouraging.

On the other hand, there is a view that not many customers seem ready for liquid cooling, and I know some racks can be air-cooled. Is that something Blackwell has to contend with as it ramps? And then, looking beyond next year, which will obviously be a good year, into fiscal 2026: are you worried about other constraints, such as power or the supply chain, or about models starting to get smaller at some point?

CEO Jensen Huang

Let's start from the end. Thank you very much for your question. Please remember, the world is shifting from general computing to accelerated computing, with a global effort to build data centers worth about $1 trillion.

In a few years, all $1 trillion worth of data centers will be using accelerated computing. In the past, data centers only had CPUs without GPUs, but in the future, every data center will be equipped with GPUs. The reason for this is clear: we need to accelerate workloads so that we can continue to sustainably develop, reduce computing costs, and avoid computational inflation as we do more computations.

Furthermore, we need GPUs to enable a new computing model, generative AI, which we all know will bring significant change to computing. So I believe the next $1 trillion of global infrastructure will be clearly different from the last $1 trillion and will be vastly accelerated. On the shape of the ramp: we offer Blackwell in multiple configurations. There is the classic Blackwell in the HGX form factor that we pioneered with Volta, and there is Grace Blackwell.

I believe it was Volta. The HGX form factor has been shipping for some time now and is air-cooled. Grace Blackwell is liquid-cooled, and we expect a considerable number of data centers to adopt liquid cooling. The reason is that in a liquid-cooled data center, whatever its power limit and whatever scale of data center you choose, you can install and deploy three to five times the AI throughput of the past. So liquid cooling costs less; it lowers our total cost of ownership (TCO).
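The power-limited point is simple arithmetic: when the facility's power budget is fixed, deployable throughput is governed by performance per watt rather than by chip count. A sketch with purely hypothetical numbers:

```python
# Illustrative arithmetic for the power-limited data center argument:
# with total power fixed, deployable AI throughput scales with
# performance per watt. All numbers are hypothetical placeholders.

budget_mw = 20.0                       # fixed facility power budget (MW)

# (relative perf per GPU, watts per GPU including cooling overhead)
air_cooled    = {"perf": 1.0, "watts": 1_200}
liquid_cooled = {"perf": 4.0, "watts": 1_600}   # denser, higher-perf parts

def throughput(cfg: dict, budget_w: float) -> float:
    gpus = budget_w // cfg["watts"]    # how many GPUs fit in the budget
    return gpus * cfg["perf"]

watts = budget_mw * 1e6
ratio = throughput(liquid_cooled, watts) / throughput(air_cooled, watts)
print(f"throughput at fixed power: {ratio:.1f}x")   # ~3x with these numbers
```

With these made-up figures the ratio lands near the low end of the three-to-five-times range quoted above; the actual gain depends on the real perf-per-watt of the parts deployed.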

Liquid cooling also lets you benefit from what we call NVLink, which allows us to scale up to 72 Grace Blackwell packages, essentially 144 GPUs. Imagine 144 GPUs connected over NVLink. We will increasingly show you the benefits of that, and the obvious next step is very low-latency, very high-throughput large language model inference, where a large NVLink domain is going to be a game changer. So I believe people will find it very easy to deploy both products.

So, almost all CSPs we work with are deploying these two technologies. Therefore, I am confident that we will smoothly increase our capacity. The second question is, looking ahead, next year will be a great year, and we expect significant growth in our data center business next year.

Blackwell will be a complete game changer for the industry, and Blackwell will carry into the following year. As I said earlier, reasoning from first principles, remember that computing is going through two platform transitions at the same time. That is very, very important to keep front of mind: general-purpose computing is moving to accelerated computing, and human-engineered software is moving to generative AI, or AI-learned software.

Q6: Bernstein's Stacy Rasgon: I have two brief questions for Colette. The several billion dollars of Blackwell revenue in the fourth quarter, is that incremental? You also expect Hopper demand to strengthen in the second half. Does that mean Hopper strengthens from the third quarter to the fourth quarter as well, on top of Blackwell adding several billion dollars?

The second question is about gross margins. If the full-year gross margin is in the mid-70% range, that puts the fourth quarter somewhere around 71% to 72%. Is that the exit rate you expect? How should we think about gross margin evolution next year as Blackwell ramps? I mean, I hope revenue, the inventory reserves, and everything else improve.
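The implied-Q4 math in this question can be reconstructed as a revenue-weighted average. In the sketch below, only the Q2 margin and Q3 guidance come from this call; the Q1 figures were reported the previous quarter, and the Q4 revenue is a hypothetical assumption needed to close the equation.

```python
# Rough reconstruction of the analyst's margin math. The full-year margin
# is the revenue-weighted average of quarterly margins, so a "mid-70s"
# full-year guide pins down the implied Q4 margin.

q = {  # (revenue $B, non-GAAP gross margin)
    "Q1": (26.0, 0.789),   # reported the previous quarter (not in this call)
    "Q2": (30.0, 0.757),   # reported this quarter
    "Q3": (32.5, 0.750),   # guidance midpoint
}
q4_rev = 35.0              # hypothetical Q4 revenue assumption
fy_margin = 0.75           # "mid-70% range" full-year guide, taken as 75%

total_rev = sum(r for r, _ in q.values()) + q4_rev
gp_first_three = sum(r * m for r, m in q.values())
q4_margin = (fy_margin * total_rev - gp_first_three) / q4_rev
print(f"implied Q4 gross margin: {q4_margin:.1%}")   # ~71.5%
```

That lands inside the 71% to 72% range the question cites, which is exactly the arithmetic behind it.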

Chief Financial Officer, Executive Vice President Colette M. Kress:

Let me first address your question about Hopper and Blackwell. We believe Hopper will continue to grow in the second half of the year. We have many new products for Hopper; our existing Hopper products will continue to grow over the coming quarters, including the third quarter, and those new products will move into the fourth quarter. So Hopper, versus the first half, is a growth opportunity. Additionally, we have Blackwell on top of that, starting its ramp in the fourth quarter. I hope those two points help.

The second part is gross margin. We provided gross margin guidance for the third quarter of approximately 75% on a non-GAAP basis. We will manage the various transitions under way, but we believe we can hold about 75% in the third quarter, and for the full year we also expect gross margins in the mid-70% range. We may see some slight differences in the fourth quarter, again related to the transitions and the different cost structures of our new product introductions.

However, my numbers may not be the same as yours. We have not given exact guidance, but I believe your expectation is lower than where we are.

Q7: Melius Research analyst Ben Reitzes: I would like to ask about the geographic mix. In the 10-Q that was released, the United States declined sequentially while several regions in Asia grew significantly.

Just want to know how things are going there. Obviously, China's performance is very good, as you mentioned in your remarks. What impact does this have? And given all these favorable revenue dynamics, does this mean that the overall revenue growth rate for the company will accelerate in the fourth quarter?

Chief Financial Officer, Executive Vice President Colette M. Kress

Let me speak to the disclosures in our 10-Q, where we are required to report revenue by geography. Getting those disclosures right can be quite challenging, because we have to pick one key dimension. What we disclose reflects the customer we invoice, so what you see are our invoicing entities, which are not necessarily the final destination of the product or even the end customer.

In our product portfolio, most products move through our OEM and ODM partners and system integrators, so who completes the full configuration before the products reach data centers, laptops, and other devices can shift quickly. That shift happens in gaming, in data center, and even in automotive.

Returning to your question about gross margin and the revenue trajectory of Hopper and Blackwell: Hopper will continue to grow in the second half of the year, and we will continue to grow from current levels.

We are currently unable to determine the specific composition for the third and fourth quarters. As for the fourth quarter, we are unable to provide guidance at this time. However, we do see demand expectations and growth opportunities in the fourth quarter. In addition, we have the Blackwell architecture.

Q8: Cantor Fitzgerald analyst C. J. Muse: You have embarked on an impressive annual product cadence, and given the growing complexity of the advanced packaging world, you may face more and more challenges.

Taking a step back, how does that backdrop change your thinking about possibly deeper vertical integration and supply chain partnerships, and what is the corresponding impact on your margin profile?

President and CEO Jensen Huang

On your first question: I believe we can move at this pace because the complexity of the models keeps increasing and we want to keep driving their cost down.

Model scale keeps growing, and we want to keep scaling with it. We believe that by continuing to scale AI models we will reach an extraordinary level of capability and usher in, and realize, the next industrial revolution. We firmly believe this, so we will spare no effort to keep scaling up.

We have a unique ability to integrate and design an AI factory because we have all the parts. It is not possible to come up with a new AI factory every year unless you have all the parts. So next year we will ship more CPUs and more GPUs than at any time in the company's history, as well as NVLink switches, CX DPUs, ConnectX for east-west traffic, BlueField DPUs for north-south traffic, and InfiniBand and Ethernet data and storage processing for supercomputing centers. The fact that we can build all of this, that we have the entire architectural stack, lets us bring new capabilities to market as they are completed.

Otherwise, you would ship components and look for customers to sell them to, and then someone else would have to build the AI factory, and an AI factory is buried in software. So it is not about who does the integration. We love the fact that our supply chain is disaggregated and that we can serve companies like Quanta, Foxconn, HP, Dell, Lenovo, and Supermicro. We used to serve ZT Systems, which was recently acquired. We have a huge number of ecosystem partners, such as Gigabyte, that can adopt our architecture. The scale and reach required of our ODMs, integrators, and the integrated supply chain are enormous, because the world is enormous. So we do not want to do that part of the work, and we are not good at it. But we know how to design the AI infrastructure, provide it the way customers want it, and let the ecosystem integrate it.

Q9: Wells Fargo analyst Aaron Rakers: I would like to return to the Blackwell product cycle. One question we often get, considering NVLink, the GB200 NVL72, and the dynamics of the Blackwell product cycle, is how you see the mix of rack-scale systems evolving as the Blackwell cycle progresses.

President and CEO Jensen Huang

The Blackwell rack system is designed and architected as a rack, but it is sold as disaggregated system components. We do not sell whole racks.

The reason is that everyone's rack is a little different. Some follow OCP standards, some do not; some are enterprise-grade, and everyone's power limits may differ. The choice of CDUs, the power busbars, the configuration, and the integration into the data center all differ. So in designing it, we architected the entire rack, and the software runs perfectly across the whole rack, and then we provide the system components.

For example, the CPU and GPU compute boards are integrated into MGX, a completely modular system architecture, and there are MGX ODMs, integrators, and OEMs all around the world.

So almost any configuration is possible. But nobody wants to ship a 3,000-pound rack over a long distance; it has to be integrated and assembled close to the data center because it is quite heavy. So from the moment we ship the GPUs, CPUs, switches, and NICs, everything downstream in the supply chain happens close to the CSPs and the data centers. You can imagine how many data centers there are in the world and how many logistics hubs we have scaled out to with our ODM partners.

So I think that because we present it as a rack, and always show and describe it that way, we may have given customers the impression that we do the integration. Our customers hate that we do integration; the supply chain hates us doing integration; they want to do the integration.

That is their value-add. There is the final design fit-in, if you will. It is not as simple as shipping something into a data center; the design, fit-in, and installation are genuinely complex. So the design fit-in, installation, bring-up, repair, and replacement, that entire cycle happens all over the world.

We have a vast network of ODM and OEM partners, and they do this very well. So integration is not the reason we build racks; it is the opposite: we do not want to be an integrator, we want to be a technology provider.

Closing Remarks by Jensen Huang

Thank you, please allow me to reiterate a few comments I made earlier. Data centers worldwide are working tirelessly to modernize the entire computing stack using accelerated computing and generative artificial intelligence. The demand for Hopper remains strong, and the expectations for Blackwell are also incredible.

Allow me to highlight five key points about our company. First, accelerated computing has reached a tipping point. CPU scaling is slowing, so developers must accelerate everything they possibly can.

Accelerated computing began with the CUDA-X libraries, and new libraries open new markets for NVIDIA. We have released many new libraries, including CUDA-X-accelerated Polars, Pandas, and Spark, the leading data science and data processing libraries, as well as cuVS for vector databases, which are very hot right now.

Aerial brings CUDA acceleration to 5G wireless base stations, a whole class of data center that is now open to us. CUDA acceleration is also now available in Parabricks for gene sequencing and AlphaFold2 for protein structure prediction. We are at the very beginning of modernizing $1 trillion of data centers from general-purpose computing to accelerated computing. That is the first point.

Second, Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. Blackwell is also the name of our GPU, but it is an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's leadership will become apparent.

The Blackwell vision took nearly five years to realize and is built from seven one-of-a-kind chips: the Grace CPU, the Blackwell dual-die GPU in a CoWoS package, the ConnectX DPU for east-west traffic, the BlueField DPU for north-south and storage traffic, the NVLink switch for all-to-all GPU communication, and Quantum and Spectrum-X for InfiniBand and Ethernet, which can carry massive AI traffic. The Blackwell AI factory is a building-sized computer. NVIDIA designed and optimized the Blackwell platform full-stack and end to end, from chips, systems, and networking to structured cabling, power, cooling, and mountains of software, so that customers can build AI factories quickly.

These are very capital-intensive infrastructures, and customers want to deploy them as soon as they receive them and get the best performance and total cost of ownership. Blackwell delivers three to five times more AI throughput than Hopper in a power-limited data center.

The third point is NVLink. It is a very big deal, a game changer for GPU switching. The Blackwell system connects 144 GPUs in 72 GB200 packages into one NVLink domain, with 259 TB/s of aggregate NVLink bandwidth in one rack. To put that in perspective, it is about 10 times that of Hopper. 259 TB/s makes sense because you are training multi-trillion-parameter models on trillions of tokens, so an enormous amount of data has to move from GPU to GPU.
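A quick sanity check on those NVLink figures, using only the numbers quoted in this call:

```python
# 144 GPUs (72 GB200 packages x 2 GPUs each) share one NVLink domain
# with 259 TB/s of aggregate bandwidth per rack.

gpus_per_domain = 144
aggregate_tb_s = 259

per_gpu = aggregate_tb_s / gpus_per_domain
print(f"~{per_gpu:.1f} TB/s of NVLink bandwidth per GPU")   # ~1.8 TB/s
```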

For inference, NVLink is vital for low-latency, high-throughput large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much larger than before, and the momentum of generative AI is accelerating.

Makers of frontier generative AI models are racing to the next plateau of AI, to raise the safety and intelligence of their models. We are also scaling to understand more modalities, from text, images, and video to 3D, physics, chemistry, and biology. Chatbots, AI coders, and image generators are growing fast, but they are just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems.

AI startups consume tens of billions of dollars of CSP cloud capacity each year, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. NVIDIA AI and NVIDIA Omniverse are opening the next era of AI, the era of general robotics. The enterprise AI wave has begun, and we are ready to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIMs, NIM Agent Blueprints, and AI Foundry. Our ecosystem partners and the world's leading IT companies use these platforms to help enterprises customize AI models and build bespoke AI applications.

Enterprises can then deploy on NVIDIA AI Enterprise, which at $4,500 per GPU per year is remarkably low-cost for deploying AI anywhere. As the installed base of CUDA-compatible GPUs grows from millions to tens of millions, the TAM for NVIDIA software can become substantial. And as Colette mentioned, NVIDIA software will approach a $2 billion annual run rate this year. Thank you all for joining today's call.