Who can put an end to NVIDIA?
Will NVIDIA continue to rise or will it come to a halt?
From the Hong Kong Stock Market.
As one of the most watched companies in the global capital market, NVIDIA (NASDAQ: NVDA) attracts market attention with every move it makes.
Especially in recent days, with huge swings, massive trading volume, and continued declines, some believe NVIDIA's bubble has burst, that it has peaked and begun to fall.
I won't venture to say whether it has peaked; that would require more evidence. What is a fact, however, is the pullback after the rapid run-up. It is simply unclear whether this correction will last a week or a month; for now, every prediction is just speculation based on technical analysis.
Given the company's star status, the controversy over NVIDIA's stock price swings has sparked intense debate between bulls and bears.
Will NVIDIA continue to rise or fizzle out?
01 What views are widely accepted in the market?
Weighing the various opinions, I believe two points are widely accepted and are backed by sound reasoning, data, and evidence.
First, the AI era is just beginning, and applications have not yet emerged at scale. Almost all industry giants, and many institutions and investors, hold this view. Even Cathie Wood, who has been bearish on NVIDIA, shares it.
In reality, apart from Microsoft's Copilot, which has a complete profit loop, you have no idea how the other giants are making money from AI. Take Alphabet's and Meta's advertising: you can say AI helps them cut costs and improve efficiency, but how much cost has been cut, how much efficiency gained, and which line item of the income statement reflects it? How many advertising clients have they taken from competitors, and how much has advertising ASP risen?
As for startups, there are quite a few, such as Pika, founded by a young Chinese woman. But these companies are still in the market-introduction phase, and their business models and profitability remain to be proven.
For a "shovel seller" like NVIDIA, that is good news: as long as people keep going into the mountains to dig for gold, why would the shovel business come to an end?
Second, the overall U.S. stock market is not in a bubble. Today's U.S. stock market is completely different from the tech stock bubble in 2000. How crazy was it back then?
You could register a .com company with nothing behind it, list it on Nasdaq, and watch funds bid it up. That is no longer the case. Take Pika again: if it met the listing requirements it could go public on Nasdaq today, but after listing, funds would be unlikely to speculate on it blindly, because they are not as naive as they were twenty-odd years ago. They demand a clear profit model and pay far more attention to sales and financial results.
Why are large-cap stocks performing well in the U.S. while some small and mid-cap stocks are not?
Because the earnings of the large caps are still decent, unlike those of the small caps. History won't simply repeat itself, but for Wall Street the lesson of the 2000 tech stock collapse is still vivid, and it was a painful one.
How do you keep that tragedy from repeating?
It's simple: don't release the eagle until you see the rabbit. In concrete terms, deliver the corresponding earnings and I'll consider adding to my position; otherwise, no deal.
NVIDIA has delivered. In February, after releasing its earnings report, the stock surged 16% in a single day, outperforming even Maotai. Broadcom, by contrast, whose results were weaker, immediately fell 7%.
Why has Microsoft been trading sideways for over a month? And why have Apple and Tesla been excluded from the "Seven Sisters"?
It all comes down to lackluster earnings, with some even expected to decline. Moreover, in terms of overall valuation, the NASDAQ today is lower than during the 2000 dot-com bubble, which itself shows that the market as a whole is rational, far from the mania of that era.
As for the NVIDIA-versus-Cisco comparison the media keeps making, it doesn't hold up. NVIDIA's forward valuation over the next two years is only about 30 times earnings, while Cisco's back then exceeded 100 times, and Cisco's market-cap expansion was driven mainly by mergers and acquisitions and by a flood of liquidity rather than by exceptional operating performance.
Therefore, simply applying Cisco's logic to NVIDIA is not appropriate.
02 Is NVIDIA the cause or the effect of AI?
Another controversial topic is the relationship between NVIDIA and AI - who is driving whom?
Even before the earnings were announced, a widely circulated image showed tiny AI demand propping up a clearly overheated stock market, with the bubble about to burst.
However, if we look at it from a different perspective, what do we find?
It is not the little ant of AI demand holding up the market; rather, it is the elephant, NVIDIA, continuously incubating all kinds of AI applications.
An example makes this clearer.
If the iPhone had not been introduced, would there be as many mobile applications today?
In 2005, you could not have gone to Steve Jobs with a list of mobile-internet needs and asked him to build the iPhone around them. The correct sequence was the reverse: Jobs built the iPhone first, and the iPhone then spawned a vast array of mobile internet applications.
The same principle applies to NVIDIA: its technological advancement speed to a certain extent determines the overall development of AI technology and the speed of the explosion of AI applications.
When Sora emerged, many people were busy calculating how much demand for computing power would increase and how many more A100s the big companies would have to buy. But why not run the logic the other way: if GPU computing power rose 10x or 100x while costs fell to 1/10 or 1/100, how many applications even more powerful than Sora would emerge, and how much commercial value would they generate?
For any technology to be massively commercialized, performance and cost are the two most crucial factors. In AI, computational power is the deciding factor behind these two, the "big boss" behind the scenes.
Let's take a look at a technology roadmap:
If the interconnect between chips reaches 4 TB/s, roughly 4.5 times the current H100's 900 GB/s NVLink bandwidth, then even without counting other architectural innovations, training speed will increase by 20 times.
Looking ahead, today's Sora will seem like a piece of cake tomorrow. Who knows what disruptive applications will emerge next?
A product that iterates every year, with performance rising tenfold every year and a half, cannot be judged by simple supply-and-demand math along the industry chain: because upgrade demand arrives every year, old capacity can be displaced by new capacity at any time.
Cathie Wood has long compared NVIDIA to the Cisco of the internet era, arguing that once the large-scale hardware build-out is complete, hardware demand will collapse and NVIDIA, like Cisco, will cool off after the hype.
Many people simply apply hardware thinking to NVIDIA. But if NVIDIA's technological progress determines the pace at which AI applications as a whole develop, then it is not mere hardware infrastructure; it is the leader out in front pulling the whole field forward.
For downstream players, not getting NVIDIA's latest and fastest GPUs means their model training and inference will lag competitors, and they will be left behind by this high-speed AI train.
With the competition so fierce now, once you fall off, it's not so easy to climb back up.
This also means that NVIDIA's products may be in short supply for a long time, entering a state of what can be called a super upward spiral: constantly selling out, constantly launching more powerful products, and constantly leading peers, consolidating market dominance in the process.
This is actually a kind of super monopoly.
03 NVIDIA also has a killer move
Why does hardware thinking so badly misjudge NVIDIA?
Because the software ecosystem is NVIDIA's real killer move. At the earnings call, Jensen Huang said:
Accelerated computing and generative AI have reached a tipping point.
How to understand the term "tipping point"?
It means two things: first, demand for inference is about to explode; second, the long-awaited GPT-5 may greatly accelerate the commercialization and deployment of AI applications. In other words, everyone will soon feel a surge of AI applications, and that is a very important turning point.
If NVIDIA is viewed through the old hardware-infrastructure lens, it is easy to see it as a "dumb pipe" like China Mobile, or like Cisco back then: the business model is to set up the stage, let others perform, and collect a fee, while the big money the stars make on that stage seemingly has nothing to do with you.
But combine that with NVIDIA's announced move into ASICs, chips designed for specific AI applications, and the expectations look different.
Globally, the software with the greatest commercial value for large models is the recommendation system, which can draw on finer-grained features, on the relationship graphs of different user groups, or on accumulated knowledge to give more comprehensive answers.
Meta CEO Mark Zuckerberg mentioned in the last earnings call:
The next generation of services needs to be built on top of AGI. In the past, I thought that because so many of our tools are social, commercial, or media-oriented, only a small part of the AI problem needed to be solved to deliver those products. But it is now clear that to deliver the best version of the services we envision, our models will need to have reasoning.

NVIDIA talking about ASICs at this moment essentially means introducing a "GPU built specifically for inference." Clearly it has identified the coming surge in inference demand in the market, and cloud service providers will certainly build out their own inference capabilities. If NVIDIA can capture a large share of this market, it breaks the risk of being reduced to a "pipe" and penetrates directly into downstream applications.
In essence, it means that besides collecting the stage fee, NVIDIA can also take a cut of the box office from the stars' performances.
During the earnings call, NVIDIA used the keyword "inference" 16 times, twice as often as "training." Jensen Huang also stressed that roughly 40% of revenue currently comes from inference demand. At a Stanford forum over the weekend, Huang again explained why the company is moving into ASICs.
This gives the market a sense of certainty about growth in 2025 and eases concerns that training demand is becoming ever more concentrated among a few top customers. It also hints at a new and potentially larger growth curve for NVIDIA. As long as the company keeps proving itself with results, the support under its stock price will only strengthen.
When discussing NVIDIA's core strengths, Huang made two points: NVIDIA's computing platform is both accelerated and programmable, and NVIDIA is the only company that has followed along from the very beginning, from the Hinton team's AlexNet, through every subsequent neural network and deep-learning wave, all kinds of vision transformers and multimodal transformers, and now models for temporal sequences.
The implication is that NVIDIA has kept pace with changes in AI algorithms: no matter how the algorithms evolve, NVIDIA can adapt through software. In fact, NVIDIA is currently the only vendor with the flexibility to switch freely between inference and training.
Why only NVIDIA?
This brings us to NVIDIA's CUDA computing platform: if the GPU is the computer, then CUDA is its Windows operating system.
Introduced in 2007, this general-purpose GPU computing framework began with the original CUDA C language and later added CUDA APIs for C++, Fortran, and more.
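To make the "CUDA as Windows" analogy a bit more concrete, here is a minimal, illustrative sketch of what programming a GPU through CUDA C looks like: a toy vector-addition kernel. The kernel name, array size, and launch configuration are arbitrary choices for illustration, not taken from any NVIDIA sample.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Toy kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                  // 1M elements, an arbitrary illustrative size
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) memory.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check one value (expect 3.0).
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The point is not this particular kernel but everything around it: the compiler, libraries, profilers, and nearly two decades of developer habits built on this programming model are what the article means by the stickiness of the software ecosystem.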
This is one of Huang's most visionary strategic bets. After years of spare-no-expense promotion and iteration in its early days, CUDA has grown into the world's largest GPU software computing ecosystem. Even if a competitor manages to match or beat NVIDIA on hardware specs, its odds are slim once the comparison turns to software ecosystems.
Just as PC makers would not abandon Windows for another operating system with merely comparable specs, GPU users will not abandon CUDA either. This is what is known as user stickiness.
04 Who can end NVIDIA's dominance?
As things stand, NVIDIA's future still looks very promising, which is why so many remain bullish, and it is genuinely hard to find evidence of an NVIDIA bubble bursting. At most, the bears offer a crude analogy to Cisco, or note that competitors and even customers are developing alternatives to NVIDIA's GPUs. But when the discussion turns to the core drivers of profit, NVIDIA's order book, TSMC's wafer capacity, A100/H100 prices, and the actual profit figures, they are left speechless.
Is NVIDIA flawless then?
Not necessarily. Don't forget that NVIDIA has always been a cyclical stock. With high growth and high margins, 2025 and 2026 may not pose much of a problem. But 2027, 2028, and beyond?
No one knows. And there are plenty of cautionary tales about hyping cyclical stocks as long-term growth stocks, lithium miners being one.
So, who has the ability to end NVIDIA?
Probably the day when competitors' GPUs arrive and start taking NVIDIA's market share, or when corporate demand for AI chips genuinely slows, or something else along those lines.
In plainer terms: the products stop selling well, prices can't be raised, and the money no longer comes in as fast or as plentifully as before.
But that day seems quite far off. (End of the article)