Jensen Huang explains NVIDIA's understanding of AI and its strategy

Wallstreetcn
2024.11.15 08:40

Jensen Huang believes AI will develop along two lines: the scaling of capabilities and multimodality, and a "Cambrian explosion" of applications. AI agents and robotics will become the two most popular forms of artificial intelligence. NVIDIA's strategy centers on AI technology and ecosystem collaboration: accelerating computing by augmenting CPUs with GPUs, providing rich software and library support for AI development, and driving a new industrial revolution through corporate collaboration that enables every company to produce its own AI.

Author: Bu Shuqing

Source: Hard AI

Regarding the future direction of AI development, NVIDIA CEO Jensen Huang has offered new judgments. At the recent NVIDIA AI Summit Japan, he identified two AI trends:

Scalability and multimodal capabilities of AI. This means that AI can process and understand various types of data, such as text, speech, images, and videos, and relate these data types to each other, thus functioning in various application scenarios.

The "Cambrian explosion" of AI applications. The explosive growth of AI applications will create many new industries and companies. Huang compared this period to the "Cambrian explosion," a geological period marked by a dramatic increase in biodiversity.

He emphasized that although chips are one of the core components of AI systems, the true value and potential of AI lie in the comprehensive capabilities of the entire system and its broad application prospects.

Huang believes that in the future, there will be two types of AI that will be very popular: digital AI workers (AI agents) and physical AI (robotics). Digital AI workers can perform various tasks, such as marketing and customer support, operating like digital employees. Physical AI is embodied in mechanical systems, such as autonomous vehicles and industrial robots, which can perform complex tasks in the real world.

Huang views NVIDIA as a simulation technology company focused on simulating physics, virtual worlds, and intelligence; by helping predict the future through simulation, it is more like building a time machine. He stated that NVIDIA's strategy centers on AI technology and ecosystem collaboration, accelerating computing by augmenting CPUs with GPUs while providing rich software and libraries to support AI development.

He also mentioned that a new industry, artificial intelligence manufacturing, is emerging, and that every company will become an AI manufacturer.

Highlights from Huang's speech:

  • Essentially, NVIDIA is a simulation technology company. We simulate physics, virtual worlds, and intelligence. Through our simulation technology, we help you predict the future. Therefore, in many ways, NVIDIA is like a time machine.

  • NVIDIA invented accelerated computing but did not replace the CPU. In fact, we are almost the only company in the computing field that does not want to replace the CPU. Our goal is to free up CPU capabilities by shifting compute-intensive workloads to GPUs.

  • Over the past decade, we have scaled artificial intelligence and machine learning by a million times. That million-fold scaling produced a huge breakthrough, and it is precisely this breakthrough that gave rise to today's ChatGPT, the arrival of artificial intelligence.

  • Software 1.0 is code written to run on CPUs. Now we have entered the era of Software 2.0 because computer speeds are very fast, allowing you to provide it with a large amount of sample data for it to learn and predict functions on its own. We call this Software 2.0.

  • Current software is no longer handwritten code but neural networks running on GPUs. These neural networks running on GPUs are forming a new operating system, a new way of using computers; this is the operating system of modern computers, above all large language models.

  • This machine learning method has proven to have incredible scalability. You can do a wide variety of things with it, including digitizing text, sound, speech, images, and video. It can be multimodal. You can teach it amino acid sequences, teach it to understand anything with a large amount of observational data. AI applications are now exploding at a Cambrian-like pace, and we are just getting started.

  • AI is not just a chip issue. GPU systems cannot work alone; even the most advanced computer in the world cannot run artificial intelligence by itself. Sometimes it must work with thousands of other computers, functioning as one computer. Sometimes they must work separately, because they are responding to different customers and different queries, sometimes individually, sometimes as a whole.

  • I believe two types of artificial intelligence will be very popular. One is digital: what we call AI agents, which you can use in the office to collaborate with your employees. The second is physical AI: robots. These physical AIs will be products built by companies. So companies will use AI to enhance employee productivity, and we will use AI to drive and empower the products we sell. In the future, car manufacturers will have two factories, one for making cars and one for producing the AI that runs in the cars.

  • Now we have a new industry that has never existed before, AI manufacturing. AI sits at the top of the computer industry, but it is used and created by every industry. Every industry, every company, every country must produce its own AI; this is a new industrial revolution.

  • To achieve robotics, we need to build three computers. The first computer trains the AI, just like all the examples we gave you earlier. The second simulates the AI: you need to give the AI a place to practice and learn, a gym where it can receive synthetic data to learn from. The Omniverse platform lets you create that AI. Ultimately, the AI you want can see the world, recognizing video, the surrounding environment, and your needs, and generating corresponding actions.

Jensen Huang's full speech (AI translation) is as follows:

Welcome to the NVIDIA AI Summit. Everything you just saw is simulated. There are no animations.

Essentially, NVIDIA is a simulation technology company. We simulate physics, virtual worlds, and intelligence. Through our simulation technology, we help you predict the future. Therefore, in many ways, NVIDIA is like a time machine manufacturer.

Today, we will share some of our latest breakthroughs with you. But most importantly, this is an event about the Japanese ecosystem. We have many partners here, including 350 startups, 250,000 developers, and hundreds of companies. We have been here for a long time.

Since the company's inception, the Japanese market has been very important to us. In Japan, we have had many "firsts." The first game developer to work with us was Yu Suzuki of Sega, the renowned 3D game developer, who first collaborated with us to bring Sega's stunning 3D games to NVIDIA's graphics processors. Tokyo Institute of Technology was the first to use NVIDIA CUDA to build a supercomputer, TSUBAME 1.2, enabling our graphics processors to drive scientific computing. Japan was first in many ways. It was also where we first built mobile processors, which gave rise to one of our very important projects, the Nintendo Switch. So many "firsts."

Now we are at the beginning of a new era, the AI revolution, a new industry, and extraordinary technological transformation. This is a very exciting time, but also very critical. Therefore, we are here to collaborate with excellent companies in the Japanese ecosystem to bring AI to Japan so that we can fully capitalize on this extraordinary opportunity before us.

Today we have many partners here, and I want to thank platinum sponsors such as GMO Internet Group, HP, Microsoft Azure, and Mitsui Group. I want to thank all of you.

There are also 56 other sponsors. Thank you all for coming and for your support.

NVIDIA invented accelerated computing, but did not replace the CPU. In fact, we are almost the only company in the computing field that does not want to replace the CPU. Our goal is to free up the CPU's capabilities by offloading compute-intensive workloads to the GPU.

This is the GPU, and this is the CPU. By combining the two, we can leverage the best features of both processors: the CPU is very good at sequential processing, while the GPU excels at parallel processing. I will discuss this in more detail later, but this is accelerated computing—not just parallel computing, but the collaboration between the CPU and GPU.
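
As a concrete sketch of this division of labor, here is a minimal example assuming the open-source CuPy library (my illustration, not something named in the speech): the CPU runs the sequential control flow while the GPU does the parallel math.

```python
# Sketch: the CPU handles sequential, branchy logic; the GPU handles the
# compute-intensive parallel work. Assumes CuPy (pip install cupy);
# the array sizes are illustrative.
import cupy as cp

def update_params_cpu(params):
    # Sequential control flow stays on the CPU.
    return {"scale": params["scale"] * 1.01}

params = {"scale": 1.0}
x = cp.random.random((2048, 2048))     # data lives in GPU memory

for step in range(10):
    params = update_params_cpu(params)                           # CPU: decide
    x = cp.abs(cp.fft.ifft2(cp.fft.fft2(x) * params["scale"]))   # GPU: crunch

print(f"mean after 10 steps: {float(cp.asnumpy(x).mean()):.4f}")
```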

This computing model is entirely new to the world. In fact, the CPU has existed since 1964, the second year of my life; it has been around for 60 years. The vast majority of software running on computers today runs on the CPU. But now the computing model is undergoing a fundamental change. To achieve it, you cannot simply take sequentially running CPU software and run it in parallel on the GPU. We must create whole new sets of algorithms: just as OpenGL connected computer graphics applications to GPU acceleration, we must create domain-specific libraries for many different applications. Here are some of the most important among the 350 different libraries our company maintains.

cuLitho is a library that accelerates computational lithography, a step in the chip manufacturing process. Computational lithography is a complex process that typically takes weeks to compute the patterns for many layers; with cuLitho, this can be reduced to a few hours.

Of course, this shortens the chip manufacturing cycle, but equally important, it lets us make lithography algorithms more complex, which means we can push semiconductor technology to higher precision, such as 2 nanometers, 1 nanometer, or even smaller scales. So the computational lithography process is accelerated by cuLitho. Another library, Aerial, I will discuss in more detail today.

This newly developed library, Aerial, is remarkable: it runs the entire 5G radio technology stack in software. In simple terms, it allows a radio to run in real time on NVIDIA's CUDA accelerators. CUDA is also used for quantum simulation, such as simulating quantum circuits; there are libraries for gene sequencing; and there is cuVS, a library for storing vector data and for indexing and querying vector databases, which are particularly useful in the field of artificial intelligence.

NumPy is a library for numerical computation, one of the most popular in the world, used by roughly five million developers and downloaded 30 million times last month alone, an astonishing number. With cuPyNumeric, NumPy code now fully supports accelerated computing across multiple GPUs and multiple computing nodes, making it far more powerful on large-scale data. I recommend you look into this library; its capabilities are incredible.
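
The drop-in idea can be sketched in a couple of lines. The module name follows NVIDIA's cuPyNumeric announcement, so treat the exact import as an assumption if your version differs.

```python
# Same NumPy-style code, with only the import changed, runs on one or
# many GPUs. Module name per the cuPyNumeric announcement (assumption).
import cupynumeric as np   # instead of: import numpy as np

a = np.random.rand(2_000, 2_000)
b = np.random.rand(2_000, 2_000)

c = a @ b                  # matrix multiply, accelerated transparently
print(c.sum())             # reductions work the same way as in NumPy
```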

cuDF is a library for handling data frames and structured data, accelerating data processing technologies like SQL, pandas, and Polars; a companion library, cuOpt, tackles complex combinatorial optimization such as the traveling salesman problem (TSP). These libraries greatly accelerate such workloads, with speed improvements of hundreds of times.
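
A minimal sketch of cuDF's pandas-style API (the file and column names below are hypothetical):

```python
# cuDF mirrors pandas idioms while running on the GPU.
import cudf

df = cudf.read_csv("sales.csv")            # loads straight into GPU memory
top = (
    df.groupby("region")["revenue"]        # hypothetical columns
      .sum()
      .sort_values(ascending=False)
      .head(5)
)
print(top)
```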

Next is cuDNN, one of the most important libraries we ever created: the CUDA Deep Neural Network library. It handles the layers of data inside deep learning models. By creating cuDNN and driving the popularization and acceleration of deep learning, we have scaled artificial intelligence and machine learning by a million times over the past decade. That million-fold scaling produced the breakthrough that gave rise to today's ChatGPT, the arrival of artificial intelligence. In short, cuDNN played a key role in advancing AI.

cuDNN did something special: it changed the way we write and use software. Software 1.0 is code written to run on CPUs. We have now entered the era of Software 2.0: because computers are so fast, you can provide a large amount of sample data and let the machine learn on its own and predict what the function is. That is machine learning, and we call it Software 2.0.
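
To make the Software 2.0 idea concrete, here is a minimal sketch using plain NumPy: instead of hand-coding the function y = 3x + 2 (Software 1.0), we give the machine samples and let gradient descent learn it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 256)
y = 3.0 * x + 2.0 + rng.normal(0, 0.05, 256)   # noisy observed samples

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    w -= lr * 2 * np.mean((pred - y) * x)      # d(MSE)/dw
    b -= lr * 2 * np.mean(pred - y)            # d(MSE)/db

print(f"learned w={w:.2f}, b={b:.2f}")         # close to w=3.00, b=2.00
```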

So, modern software is no longer handwritten code but neural networks running on GPUs. These neural networks running on GPUs are forming a new operating system, a new way of using computers, which is the operating system of modern computers, especially large language models.

This machine learning approach has proven to have incredible scalability. You can use it for a wide variety of tasks, including digitizing text, sound, speech, images, and video. It can be multimodal. You can teach it amino acid sequences and help it understand anything with a large amount of observational data.

The first step is understanding the meaning of data: by studying a large amount of text from the internet and finding patterns and relationships, a model can learn words, vocabulary, grammar, and even the meanings of words. Using the same approach, we can now connect the meanings of different data types across modalities, such as the relationship between words and images (the word "cat," for example, is now connected to the image of a cat). Having learned multimodally, the model can even translate between modalities and generate all kinds of intelligent information.
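
As a toy illustration of that shared embedding space (all vectors below are invented; no real encoder is involved):

```python
# Text and images are mapped into one embedding space, where the word
# "cat" lands near a cat photo. Vectors are made up for illustration.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

text_vecs = {
    "cat": np.array([0.9, 0.1, 0.0]),   # pretend text-encoder outputs
    "car": np.array([0.0, 0.2, 0.9]),
}
cat_photo = np.array([0.8, 0.2, 0.1])   # pretend image-encoder output

for word, vec in text_vecs.items():
    print(f"{word!r} vs cat photo: {cosine(vec, cat_photo):.2f}")
# 'cat' scores far higher, so the word and the image are "connected".
```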

If you look at all the amazing emerging companies and the applications they create, you will find they can be laid out from one modality to another. There are text-to-text applications, including text summarization, question-answering systems, text generation, and storytelling. There are video-to-text applications, such as generating subtitles for videos; image-to-text applications, such as image recognition; and text-to-image applications, such as image generation services like Midjourney. There are also text-to-video creations, such as Runway ML.

All these different combinations are real breakthroughs. You can even go from text to proteins, explaining their functions, or from text to chemicals, describing the properties of a chemical that could become a successful drug. For drug discovery, you can even have applications spanning video and text to robot actions. Each of these combinations represents a new industry, a new company, and a new use case. AI applications are now exploding at a Cambrian pace, and we are just getting started.

Of course, one characteristic of machine learning is that the bigger the brain, the more data we can teach it, and the smarter it becomes. We call this the scaling law. There is ample evidence that as we scale up model size and the quantity, effectiveness, and quality of training data, the performance of the intelligence improves year over year. The industry is scaling model sizes by about 2x each generation, which requires roughly 2x the data; therefore we need 4x the computational power. The amount of compute required to push AI to the next level is extraordinary. This is the training scaling law. Pre-training is part of it, while post-training includes reinforcement learning from human feedback, reinforcement learning from AI feedback, and many methods that use synthetic data generation. Both pre-training and post-training are scaling significantly, and we continue to see outstanding results.
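
The arithmetic behind "2x the model and 2x the data means 4x the compute," using the common rule of thumb that training FLOPs scale with parameters times tokens (the 6·N·D constant is a standard estimate, not a figure from the talk):

```python
# Training FLOPs ~ 6 * parameters * tokens (rule-of-thumb estimate).
params, tokens = 1e9, 2e10            # hypothetical baseline model
baseline = 6 * params * tokens

scaled = 6 * (2 * params) * (2 * tokens)
print(scaled / baseline)              # -> 4.0: compute demand quadruples
```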

Well, when OpenAI's o1, codenamed Strawberry, was announced, it showed the world a new type of reasoning. Interacting with ChatGPT is one-shot: whatever question you ask, whatever prompt you provide, it produces an answer in a single pass. But we know thinking is often not one-shot. Thinking requires planning, weighing multiple potential answers, and choosing the best one, just as we may reflect on an answer before giving it, or break a question down into a step-by-step chain of thought. We have invented many different techniques here, and as we apply more and more compute to inference, the quality of reasoning improves significantly.
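
One simple way to spend more compute at inference time, in the spirit of what Huang describes, is best-of-N sampling: generate several candidates and keep the one a verifier scores highest. A toy sketch with stand-in functions (an illustration of the general idea, not OpenAI's actual method):

```python
import random

def generate(question: str) -> str:
    # Stand-in for sampling an answer from a language model.
    return f"candidate answer #{random.randint(1, 100)}"

def score(question: str, answer: str) -> float:
    # Stand-in for a verifier or reward model.
    return random.random()

def best_of_n(question: str, n: int = 8) -> str:
    # Larger n = more compute = better odds of a good answer
    # (with a real scorer, unlike the stub above).
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda a: score(question, a))

print(best_of_n("How many primes are below 20?"))
```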

Now we have a second scaling law, the reasoning scaling law, which is not just about generating the next word, but also thinking, reflecting, and planning. These two coexisting scaling laws will require us to drive computing at super-fast speeds. Every time we deliver a new generation or new architecture, we will increase performance by a factor of X, but we will also reduce power by the same factor of X. We will lower costs by the same factor of X. Therefore, improving performance is exactly the same as reducing costs. Improving performance is exactly the same as reducing energy consumption. Thus, as the world continues to absorb and embrace artificial intelligence, our mission is to continuously improve performance as quickly as possible. In the process, we aim to expand the reach of artificial intelligence, enhance its effectiveness, reduce costs, and lower power consumption. This is why we choose a one-year cycle.

However, AI is not just a chip issue. These AI systems are enormous. This is the Blackwell system, named after the GPU, but it is also the name of the entire system. The GPU itself is extraordinary: there are two Blackwell dies, each the largest chip in the world at 104 billion transistors, manufactured by TSMC on an advanced 4-nanometer node. The two dies are joined by a low-power link that transmits 10TB per second; right along that seam in the middle, thousands of interconnections carry 10TB per second between the two chips. The GPU is connected to eight stacks of HBM3E memory that together run at 8TB per second, and the two GPUs connect to a low-power, highly energy-efficient CPU at 1TB per second. Each GPU connects over NVLink at 1.8TB per second. That is a lot of terabytes per second. The reason is that this system cannot work alone: even the most advanced computer in the world cannot run artificial intelligence alone. Sometimes it must work with thousands of other computers, nodes working together as one computer; sometimes they must work separately, responding to different customers and different queries, sometimes individually, sometimes as a whole.

To enable the GPUs to work together over NVLink, we of course have ConnectX-7 network cards connecting each GPU to thousands of other GPUs. But we still need NVLink, which lets us connect the GPUs in the rack behind me. One rack connects over NVLink at 5.8TB per second, 35 times the bandwidth of the highest-bandwidth network in the world, allowing us to connect all these GPUs through the NVLink switch.

There are nine NVLink switches in one rack, and each rack has 72 of these computers. They are connected through this spine, the NVLink spine: cables, 50 pounds of copper, driven directly by incredible SerDes. The switches connect all these computers together, so the result is that 72 computers are connected as one large GPU, a very large GPU. From a software perspective, it is just one giant chip. These racks, with NVLink connecting 72 systems, weigh 3,000 pounds each, which is why I cannot bring one on stage; otherwise I would show you. It is 3,000 pounds and 120 kilowatts.
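
Taking the quoted figures at face value, a quick tally shows why 72 GPUs behave like one large GPU (the aggregate is my arithmetic on the numbers above, not a figure from the talk):

```python
# Back-of-envelope from the quoted specs.
gpus_per_rack = 72
nvlink_tb_per_s = 1.8        # per-GPU NVLink bandwidth quoted above

print(f"{gpus_per_rack * nvlink_tb_per_s:.1f} TB/s aggregate NVLink bandwidth")
# -> 129.6 TB/s moving between GPUs inside a single rack
```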

That, my friends, is the power draw of a great many Nintendo Switches. It is not portable, but it is very powerful. This is the Blackwell system. We designed it so that it can be configured as a SuperPOD or as a huge data center, with tens of thousands of GPUs connected through switches: Quantum InfiniBand switches if you want a dedicated AI factory, or Spectrum-X, NVIDIA's revolutionary Ethernet system, if you want to integrate it into your existing Ethernet environment. We can build AI supercomputers with these. We can integrate them into enterprise data centers, hyperscale servers, or configure them for the edge.

The Blackwell system is not only powerful but also highly adaptable, capable of fitting into every corner of the world's computing infrastructure.

Of course the computers matter, but most importantly, without all the software running on them, these computers simply cannot operate. When you see these machines, all the liquid cooling, all the wires, your brain explodes: how do you program such an incredible computer? This is where NVIDIA's software stack comes in: CUDA, NCCL, Megatron-Core, TensorRT-LLM, Triton, all the software we have built over the years, integrated into the system, making it possible for everyone to deploy AI supercomputers around the world. And most importantly, we have AI software that makes it easy for people to build AI. So what is AI?

We talk about AI in many different ways, but I believe two types of artificial intelligence will be very popular, and I find a two-part model very helpful.

First, digital AI workers. These AI workers can understand, plan, and take action. Sometimes digital AI workers are asked to run marketing campaigns, support customers, develop manufacturing supply chain plans, optimize chips, or help us write software; perhaps they serve as research assistants or laboratory assistants in the drug discovery industry. Perhaps an agent is a mentor to the CEO; maybe all our employees have an AI mentor. These digital AI workers, which we call AI agents, essentially act like digital employees. And just like digital employees, you have to onboard them: you create data to welcome them into your company and teach them about your company. You train them on specific skills, depending on the functions you want them to perform. After training, you evaluate them to make sure they learned what they were supposed to learn. You guardrail them, ensuring they do the tasks they are asked to do and not the ones they are not. And of course you operate them: you deploy them, you give them energy and AI tokens from Blackwell, and they interact with other agents, solving problems as a team.

You will see all kinds of agents, and we have created something that makes it easier for companies in the ecosystem to build AI agents. NVIDIA is not in the services business; we do not create or deliver end products or offer solutions. But we provide the supporting technology that enables the ecosystem to create AI, deliver AI, and continuously improve AI. The AI agent lifecycle platform is called NeMo. NeMo has libraries for every stage I mentioned, from data curation to training to fine-tuning to synthetic data generation to evaluation to guardrails, and these libraries are being integrated into workflows and frameworks around the world.

We are working with AI startups, with service providers like Accenture and Deloitte, and with companies worldwide to bring this to all large companies. We are also collaborating with ISVs like ServiceNow so they can create agents on the ServiceNow platform. Today, you license ServiceNow and your employees interact with the service platform for assistance. In the future, ServiceNow will also offer a large number of AI agents that you can rent, essentially digital employees that help you solve problems. We are working with SAP, Cadence, ANSYS, Snowflake, companies around the world, so that we can all build agents that help improve your company's productivity.

These agents are capable of understanding, reasoning, planning, and taking action. An agent is a collection or system of AI models, not just a single model, and NeMo helps us build them. We have also created pre-trained AI models, which we package in what we call NIMs. NIMs are microservices. In the old era, software was packaged in a box and came with a CD-ROM; today, AI is packaged in a microservice, and the software inside is intelligent. You can talk to the software, converse with it, because it understands what you mean, and you can connect this software with other software, connect this AI with other AIs, and create an AI agent. So that is the first thing.
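
As a sketch of what "talking to the software" looks like in practice: NIM microservices expose an OpenAI-compatible HTTP endpoint, so a deployed NIM can be called roughly like this (the host, port, and model name are placeholders, not details from the talk):

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # placeholder local NIM
    json={
        "model": "meta/llama-3.1-8b-instruct",     # placeholder model id
        "messages": [
            {"role": "user", "content": "Summarize our Q3 supply plan."},
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```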

Let me give you an example of these agents. Agentic AI uses complex reasoning and iterative planning to solve complex multi-step problems, transforming every business. AI agents help marketing campaigns launch faster through instant insights, help optimize supply chain operations to save hundreds of millions of dollars, and shorten software security processes from days to seconds by helping analysts triage vulnerabilities. What makes AI so powerful is its ability to turn data into knowledge and knowledge into action. The digital agent in this example can distill insights for a person from a set of information-dense research papers. It is built using NVIDIA AI Blueprints, reference workflows that include NVIDIA accelerated libraries, SDKs, and NIM microservices to help you quickly build and deploy AI applications. The multimodal PDF data extraction blueprint builds the data ingestion pipeline, while the digital human blueprint provides smooth, human-like interaction. "Hi, I am James, a digital agent that ingests PDF research papers, including complex data such as images, charts, and tables, and generates high-level summaries through an interactive digital human interface."

"Weather forecasting has seen an exciting breakthrough: the new generative model CorrDiff is an important step toward accurately predicting weather patterns. By combining regression models with diffusion models, CorrDiff turns coarse weather data into high-resolution predictions."

James can also answer questions or generate new content based on the paper. NVIDIA AI enables businesses to automate processes, leverage real-time insights, and improve workflow efficiency.

AI agents come in three parts: NeMo, NIMs, and blueprints. These are reference materials, provided in source code form so you can use them as you wish and build your AI agent workforce. None of these agents can complete 100% of anyone's job; no agent can do 100%. However, agents can do 50% of your work, and that is a great achievement. Instead of thinking of AI as replacing 50% of people's jobs, think of AI as doing 50% of the work for 100% of people. Thought of that way, you realize AI will improve your company's productivity and your own. People ask me, will AI take your job? I always say, because it is true: AI will not take your job; the AI used by someone else will take your job. So be sure to adopt AI as soon as possible. The first type, then, is digital AI agents. The second application is physical AI, the same basic technology now embodied in a mechanical system.

Of course, robotics will become one of the most important industries in the world. So far, robotics has been limited, and the reasons are clear. Japan makes 50% of the world's manufacturing robots; Kawasaki, FANUC, Yaskawa, and Mitsubishi are the four leaders that manufacture half of the world's robotic systems. These robots drive productivity in manufacturing but are difficult to scale. The robotics industry has remained basically flat for a long time, primarily because robots are too task-specific and not flexible enough to apply across different scenarios, conditions, and jobs. We need more flexible AI that can adapt and learn on its own.

Notice that with the technology we have described so far, agentic AI, anyone can interact with an agent and it responds. Sometimes the responses are not as good as what you could produce, but many are in fact better. So we can now apply this general AI technology to embodied AI, physical AI, the world of robots.

To achieve robotics, we need to build three computers. The first computer trains the AI, just like all the examples we have given you. The second simulates the AI: you need to give the AI a place to practice and learn, a gym where it can receive synthetic data to learn from. We call it Omniverse, our suite of virtual-world digital twin libraries used to create physical AI robots. In Omniverse the AI is trained, validated, and evaluated, and then you can put the model into physical robots. For that we have processors designed specifically for robotics, which we call Jetson; Thor is a robot processor designed specifically for humanoid robots.

This loop continues indefinitely. Just as there is a NeMo AI agent lifecycle platform, the Omniverse platform lets you create AI. Ultimately, what you want is an AI that sees the world, recognizing video, the surrounding environment, and your needs, and generates corresponding actions. You tell it what you want, and the AI produces joint movements. Just as we can take text and generate video, or take text and generate chemical compounds for drugs, we can take text and generate joint movements. The concept is very similar to generative AI. That is why we believe that with Omniverse, the three computers we are building, and the latest generative AI technology, the era of humanoid robotics has arrived.

Now, why is humanoid robotics so difficult? Obviously, the software for humanoid robots is very challenging, but the benefits are incredible. Only two robotic systems can be deployed easily around the world. The first is the autonomous vehicle, because we have created a world adapted to cars. The second is the humanoid robot. These two robotic systems can be deployed in brownfield sites anywhere in the world, because we built the world for ourselves. The technology is very challenging, but the timing is ripe, and the impact could be enormous.

Last week at the Conference on Robot Learning, we announced a very important new framework called Isaac Lab, a virtual simulation system for reinforcement learning that lets you teach humanoid robots how to be humanoid robots. On top of it, we have created several workflows. The first is GR00T-Mimic.

GR00T-Mimic is a framework for teaching robots by demonstration. You take human demonstrations and then use domain randomization in simulation to generate hundreds of variations of those demonstrations, so that the robot learns to generalize. Otherwise it could only perform the very specific task demonstrated; with GR00T-Mimic, we can generalize its learning.
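
A toy sketch of the domain-randomization idea (the scene fields below are invented for illustration and are not an Isaac Lab API):

```python
import random

# One human demonstration, described by a few scene parameters.
base_demo = {"object_pos": (0.50, 0.20), "lighting": 1.0, "friction": 0.8}

def randomize(demo: dict) -> dict:
    # Perturb the scene so the policy must generalize, not memorize.
    x, y = demo["object_pos"]
    return {
        "object_pos": (x + random.uniform(-0.05, 0.05),
                       y + random.uniform(-0.05, 0.05)),
        "lighting": demo["lighting"] * random.uniform(0.6, 1.4),
        "friction": demo["friction"] * random.uniform(0.8, 1.2),
    }

training_scenes = [randomize(base_demo) for _ in range(100)]
print(training_scenes[0])    # hundreds of variants from one demonstration
```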

The second is GR00T-Gen. Using generative AI in Omniverse, we create large numbers of randomized environments and randomized examples of the actions we want the robot to perform. We thus generate large numbers of tests, evaluation systems, and assessment scenarios that the robot can attempt, improving itself and learning how to be a good robot.

The third is GR00T-Control, a model distillation framework that distills all the learned skills into a unified model, allowing the robot to perform kinematic skills. And robots will not only be autonomous; remember, the factories of the future will be robotic too. These factories will be robotic factories, orchestrating robots that build robotic mechanical systems. Let me show you.

Physical AI is embodied in robots: self-driving cars that navigate the real world safely, robotic arms performing complex industrial tasks, and humanoid robots working alongside humans. Factories will become embodied physical AI, able to monitor and adjust their operations or converse with us. NVIDIA has built three computers that enable developers to create physical AI. Models are first trained on DGX, then fine-tuned and tested in Omniverse using reinforcement learning and physics feedback, and the trained AI runs on NVIDIA Jetson AGX robot computers. NVIDIA Omniverse is a physics-based operating system for physical AI simulation, where robots learn and refine their skills in Isaac Lab.

Robot gyms built on Omniverse use the GR00T workflows: GR00T-Gen generates diverse learning environments and layouts; GR00T-Mimic generates large-scale synthetic motion datasets from small amounts of real-world capture; and GR00T-Control trains neural whole-body controllers. And this is just one robot.

Future factories will coordinate teams of robots and monitor operations through thousands of sensors. For factory digital twins, there is an Omniverse blueprint called Mega. With Mega, the digital twin of the factory is populated with virtual robots whose AI simulates the robots' brains. Robots perform tasks by perceiving the environment, reasoning, planning the next action, and finally acting. The world simulator in Omniverse simulates these actions in the environment, and the robot brains perceive the results through Omniverse's simulated sensors. Based on the sensor simulation, each robot brain decides its next action, and the cycle continues, while Mega precisely tracks the state and location of everything in the factory.
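
The cycle Mega runs for each virtual robot can be sketched in a few lines (every function below is a stand-in for simulated sensors and a robot brain, not the Mega API):

```python
def perceive(world):
    # Stand-in for an Omniverse-simulated sensor reading.
    return {"obstacle_ahead": world["obstacle_m"] < 1.0}

def plan(observation):
    # Stand-in for the robot brain deciding its next action.
    return "turn" if observation["obstacle_ahead"] else "forward"

def act(world, action):
    # Stand-in for the world simulator applying the action.
    world["obstacle_m"] += 0.5 if action == "turn" else -0.2
    return world

world = {"obstacle_m": 1.4}          # distance to nearest obstacle, meters
for step in range(5):
    obs = perceive(world)            # perceive
    action = plan(obs)               # reason and plan
    world = act(world, action)       # act, then the cycle continues
    print(step, action, round(world["obstacle_m"], 2))
```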

Digital twins: this software-in-the-loop testing brings software-defined processes into physical spaces, allowing industrial enterprises to simulate and validate changes in a comprehensive digital twin before deploying them to the physical world, avoiding enormous risk and cost. The era of physical AI has arrived, transforming heavy industry and robotics.

An incredible era. So we have two robotic systems: one is digital, the AI agents you can use in the office to collaborate with your employees; the second is physical AI, robots. These physical AIs will be the products companies build. So companies will use AI to enhance employee productivity, and they will use AI to drive and empower the products they sell. In the future, car manufacturers will have two factories: one that builds cars and one that produces the AI that runs inside the cars.

Well, this is the robot revolution. There is so much activity happening around the world, and I cannot imagine any country better placed to lead the robotic AI revolution than Japan. The reason, as you know, is that this country loves robots. You love robots. You have created some of the best robots in the world. These are the robots we grew up with, the robots we have loved all our lives. I have not even shown some of my favorites, like Mazinger Z. I hope Japan will leverage the latest breakthroughs in artificial intelligence and combine them with your expertise in mechatronics; no country in the world has higher skill in mechatronics than Japan. This is an extraordinary opportunity that you must seize. I hope we can work together to make this dream possible.

NVIDIA's AI work in Japan is going very well, and we have many partners here. We have partners building large language models: Tokyo Institute of Technology, Rakuten, NTT, Fujitsu, NEC, Nagoya University, Kotoba Technologies, and others. In the top right corner, we also have AI cloud partners, including SoftBank, Sakura Internet, GMO Internet Group, Highreso, KDDI, and Rutilea, building AI clouds here to let the ecosystem thrive in Japan.

Many robotics companies are starting to understand the capabilities AI now offers and to seize this opportunity: Yaskawa, Toyota, Kawasaki, Rapyuta Robotics. In medical imaging systems, Canon, Fujifilm, and Olympus are all leveraging AI, because in the future these medical devices will be more autonomous, almost like having a nurse AI inside the device to help nurses and guide diagnoses. And the drug discovery industry is applying AI in so many different ways.

So I am pleased with the progress here, and we hope to accelerate the AI revolution. The industry is changing: as I mentioned earlier, the computer industry has fundamentally shifted from code running on CPUs to machine learning running on GPUs. We have gone from an industry that produces software to one that manufactures artificial intelligence. AI is produced in factories that run 24/7. When you license software, you install it on your computer, and the manufacture and distribution of that software is complete. But intelligence is never complete: you are continuously interacting with AIs, whether AI agents or AI robots.

Intelligence is represented by tokens, and tokens are units of intelligence. They are numbers, but numbers composed in intelligent, language-like ways. Tokens become the intelligence behind the steering wheel of an autonomous vehicle, the intelligence that articulates the motors of a humanoid robot, the intelligence of proteins and chemicals in drug discovery.
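
To make "tokens are units of intelligence" concrete, here is a toy tokenizer mapping text to the integer IDs a model actually consumes (the vocabulary is invented; real tokenizers learn subword vocabularies with on the order of 100,000 entries):

```python
vocab = {"the": 0, "robot": 1, "picks": 2, "up": 3, "cup": 4, "<unk>": 5}

def tokenize(text: str) -> list[int]:
    # Models never see raw text, only streams of token IDs like these.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The robot picks up the cup"))   # [0, 1, 2, 3, 0, 4]
```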

All these tokens are produced in factories, and this infrastructure, these factories, never existed before. This is something entirely new, which is why we see so much development around the world. For the first time, we have a new industry, new factories, producing something entirely new that we call artificial intelligence. These factories will be built by companies, and every company will become an AI manufacturer. No company can afford not to manufacture artificial intelligence; how can any company afford not to produce intelligent products? How can a country afford not to produce intelligence? You do not have to produce chips. You do not have to produce software. But you must produce intelligence. This is crucial. This is your core. This is our core.

So we have the new industry of AI factories, and that is why I call it a new industrial revolution. The last time this happened was 300 years ago, when electricity was harnessed: the generation and distribution of electricity created a new kind of factory, the power plant, and a new industry called energy. Hundreds of years ago there was no energy industry; it emerged during the industrial revolution. Now we have a new industry that never existed before. Artificial intelligence sits at the top of the computer industry, but it is used and created by every industry. You must create your own AI. The pharmaceutical industry creates its own AI; the automotive industry creates its own AI; the robotics industry creates its own AI. Every industry, every company, every country must produce its own AI. This is a new industrial revolution.

I have a very important announcement today: we will collaborate with SoftBank to build AI infrastructure in Japan. Together we will build Japan's largest AI factory, constructed with NVIDIA DGX. When completed, it will deliver 25 exaflops of AI performance; remember, the largest supercomputers in the world only recently reached about 1 exaflop. This is an AI factory producing AI at 25 exaflops. And to distribute AI, SoftBank will integrate NVIDIA Aerial, the engine I mentioned earlier that runs the 5G radio on CUDA. By doing this, we can unify the radio computer, the baseband, and the AI computer running 5G, and transform the telecom network into an AI-RAN. It will carry voice, data, and video, but in the future it will also carry AI, a new type of intelligent information. This will be distributed across SoftBank's 200,000 sites in Japan, serving 55 million customers: the AI factory produces AI, and the distribution network distributes it. We will also establish a new kind of store on top of it, an AI store, so that the AI created by SoftBank and the AIs created by third parties can be offered to those 55 million customers.

We will build these applications on top of NVIDIA AI Enterprise, which I mentioned and showed you earlier. There will also be a new store that gives everyone access to AI. This is a grand development, and the result will be an AI grid spanning Japan.

This AI grid will become part of the infrastructure, and one of its most important parts. Remember, you need factories and roads as infrastructure so that you can manufacture and distribute goods, and energy and communications are infrastructure too. Every time something entirely new is added to infrastructure, new industries and new companies are created, along with new economic opportunities and new prosperity. How could we have had an industrial revolution without roads and factories? How could we have had an IT revolution without energy and communications? Each new layer of infrastructure opens new opportunities.

So it is very exciting for me to pursue this goal in Japan in collaboration with SoftBank and Miyakawa-san's team, who should be in the audience. It is incredible to work with you, and we are very pleased to do so. This is completely revolutionary: it is the first time telecom and communication networks have been transformed into AI networks.

Okay, let me show you what you can do; you can do some amazing things. For example, imagine I am standing under a base station, a radio tower. A car streams its video to the radio tower, and the radio tower has AI: video intelligence, visual intelligence. It can see what the car sees and also understand what the car sees. That AI model might be too heavy to run in the car, but it is not too heavy for the base station. Using the video streamed to the base station, the AI can understand the car and everything happening around it. This is just one example of using AI at the edge to keep people safe, essentially air traffic control for autonomous vehicles. The applications are endless.

We can also use this basic concept to turn an entire factory into AI. Here is a factory with many cameras, all streaming to the base station. Amazingly, with all the cameras feeding AI models, the factory has now become an AI. You can talk to the factory and ask what is happening: has there been an accident? Is anything abnormal happening? Did anyone get injured today? It gives you a daily report. You just ask the factory, because the factory has become an AI. The AI model does not have to run in the factory; it can run in SoftBank's base stations.

Okay, here is another example, but there are countless: you can turn essentially every physical object into AI. Stadiums, roads, factories, warehouses, offices, buildings can all become AIs, and you can talk to them just as you chat with ChatGPT. What is the condition of the aisle? Are there any blockages or spills? You are simply talking to the factory. The factory observes everything, understands what it sees, can reason, can plan actions, or just talk to you. Here it answers: no, there are no obstacles, spills, or hazards in the warehouse aisle; the video shows the aisle looking orderly and clean, with no obstacles or hazards.

Okay, you are talking to the factory. It is incredible. You are talking to the warehouse; you are talking to the car, because all of these have now become intelligent.