Track Hyper | Lenovo's new server will be equipped with NVIDIA Blackwell
ThinkSystem SC777, equipped with Blackwell AI accelerator card
Author: Zhou Yuan / Wall Street News
At the 2024 Tech World, Yang Yuanqing, Chairman and CEO of Lenovo Group, and Jensen Huang, CEO of NVIDIA, jointly announced that the SC777 model in the Lenovo ThinkSystem server series will be equipped with the NVIDIA Blackwell AI accelerator card (GPU).
On March 18th this year, at GTC (GPU Technology Conference) 2024, Lenovo Group and NVIDIA announced a collaboration to launch a new hybrid artificial intelligence solution, helping enterprises and cloud providers obtain the crucial accelerated computing power needed for success in the era of artificial intelligence, turning AI from concept to reality.
At the same time, targeting large-scale, high-efficiency artificial intelligence workloads, Lenovo unveiled an expansion of the ThinkSystem artificial intelligence product portfolio, which at that point included two 8-way NVIDIA GPU systems.
The Lenovo ThinkSystem server series is Lenovo's data center infrastructure product line, consisting of various models tailored for different enterprise applications and services. The series currently includes models in the SC and SR lines.
Among them, the SR line has launched a variety of products, while the SC line currently has only the SC777 model. Its key features include support for large-scale computing clusters, excellent scalability, and flexible configurability, making it suitable for a range of enterprise scenarios.
From high-performance computing in data centers to edge computing scenarios, the flexible architecture and excellent energy efficiency of the Lenovo ThinkSystem SC777 allow it to adapt to a variety of dynamic business needs. Additionally, the security design of this server is also outstanding.
The ThinkSystem SC777 server can rapidly execute complex tasks such as AI training, image processing, and video analysis, and with highly flexible configurations, it can quickly adapt to different workload requirements.
Blackwell is NVIDIA's new-generation AI chip and supercomputing platform, named after the American mathematician David Harold Blackwell. The GPU in this generation packs 208 billion transistors and is manufactured on a custom TSMC 4NP process. Blackwell products use two reticle-limit-sized dies joined into a single unified GPU by a 10 TB/s chip-to-chip interconnect.
The second-generation Transformer engine combines custom Blackwell Tensor Core technology with NVIDIA TensorRT-LLM and NeMo framework innovations to accelerate the inference and training of large language models (LLM) and mixture-of-experts (MoE) models.
To accelerate inference for MoE models, the Blackwell Tensor Cores add new precisions, including new community-defined microscaling formats, which deliver high accuracy while serving as easy replacements for larger precisions. The Blackwell Transformer engine uses a fine-grained technique called micro-tensor scaling to optimize performance and accuracy, enabling 4-bit floating point (FP4) AI. This doubles both the performance and the model sizes that memory can support, while maintaining high accuracy.
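The micro-scaling idea mentioned above can be illustrated with a minimal Python sketch: each small block of tensor values shares its own scale factor, so a very narrow numeric format can still cover a wide dynamic range. Note this is a toy illustration only, not NVIDIA's implementation; real microscaling formats use floating-point code points and shared exponent encodings, whereas the 32-element block size and symmetric 4-bit integer grid here are assumptions for clarity.

```python
import numpy as np

def quantize_microblock(x, block_size=32, bits=4):
    """Toy per-block quantization: low-bit codes plus one scale per block.

    Illustrates fine-grained (micro-tensor) scaling only; the code grid
    and block size are illustrative, not the actual FP4/MX format.
    """
    levels = 2 ** (bits - 1) - 1              # symmetric grid: -7..7 for 4 bits
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size              # pad so length divides evenly
    xp = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(xp).max(axis=1, keepdims=True) / levels
    scales[scales == 0] = 1.0                 # all-zero block: avoid divide-by-zero
    codes = np.clip(np.round(xp / scales), -levels, levels).astype(np.int8)
    return codes, scales

def dequantize_microblock(codes, scales, n):
    """Reconstruct the original-length array from codes and per-block scales."""
    return (codes * scales).reshape(-1)[:n]

x = np.random.randn(1000)
codes, scales = quantize_microblock(x)
xr = dequantize_microblock(codes, scales, len(x))
err = np.abs(x - xr).max()                    # bounded by half a step per block
```

Because each 32-value block carries its own scale, the worst-case rounding error is tied to that block's local magnitude rather than the whole tensor's, which is why fine-grained scaling preserves accuracy at such low bit widths.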
Blackwell is equipped with NVIDIA's confidential computing technology, which uses hardware-based security to protect sensitive data and AI models from unauthorized access. It is also the industry's first GPU with Trusted Execution Environment (TEE) I/O capability, delivering confidential computing in conjunction with TEE-I/O-capable hosts and providing real-time protection over NVIDIA NVLink.
Overall, the Blackwell GPU is NVIDIA's next-generation core platform for accelerated computing and generative artificial intelligence (AI), featuring a new architecture design and six revolutionary accelerated computing technologies.
These technologies will drive breakthroughs in areas such as data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI. Of particular note, its AI inference performance is up to 30 times that of the previous generation, while energy consumption is reduced to as little as 1/25th, marking a significant advance in the AI and computing fields.