Key Highlights from NVIDIA’s Keynote Speech 2025

The Consumer Electronics Show (CES) is an annual event that recognizes and introduces breakthrough technologies. This year it was held at the Las Vegas Convention Center in Nevada, starting on 7th January. The big update from CES 2025 was NVIDIA’s CEO Jensen Huang introducing the next-gen RTX 50 Series gaming GPUs and innovations in the Grace Blackwell AI chip technology. He also shared NVIDIA’s plans for launching autonomous cars and humanoid robots. Exciting stuff, right? Let’s talk about them in more detail.

Next-Generation GPUs: RTX Blackwell Series

Jensen Huang announced the RTX 50 Series, which is based on the new Blackwell architecture. NVIDIA claims these GPUs deliver up to twice the performance of the previous-generation RTX 4090 at significantly lower prices. That means noticeably smoother gaming as well as more headroom for other demanding work, like heavy 3D projects or running generative AI models on your PC. The RTX 50 Series GPUs also support DLSS 4, NVIDIA’s latest AI-driven technology that boosts frame rates and image quality in supported games. At the top of the stack, the flagship card ships with 32GB of GDDR7 VRAM and roughly 1.8 terabytes per second of memory bandwidth.

The RTX 50 Series lineup ranges from the 5070 to the 5090, with variants for desktops and even thin laptops, so there is no need to put up with a thick, bulky machine. Pricing runs from the RTX 5070 at $549 to the flagship RTX 5090 at $1,999, catering to a wide range of performance needs and budgets.

All these new features and improvements make the RTX 50 Series a compelling choice, especially for gamers. It is also an ideal pick for content creators and professionals who handle heavy graphics-related workloads. So if that is you, maybe it is time for an upgrade.

NVIDIA NeMo: The Digital Workforce

Another big announcement was the introduction of NVIDIA NeMo, an end-to-end AI-driven platform for building custom generative AI models; these can be large language models (LLMs), vision language models (VLMs), and even speech AI. The target group for NVIDIA NeMo is enterprises, giving them a way to build AI agents that help with repetitive tasks. These agents can break goals down into small actionable tasks, retrieve relevant data and statistics, and generate high-quality responses to streamline objectives and key results (OKRs) and promote efficiency.
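To make that a little more concrete, here is a minimal sketch of what working with NeMo looks like in practice, using the open-source NeMo toolkit to pull a pretrained speech model. The package install command, checkpoint name, and audio path are assumptions for illustration (the checkpoint name comes from NeMo's documented examples, not from the keynote); enterprise agent pipelines layer retrieval and reasoning on top of building blocks like this.

```python
# Minimal sketch: load a published pretrained NeMo speech model and run it.
# Assumes `pip install nemo_toolkit[asr]` and a local audio file; the checkpoint
# name below is one of NeMo's documented examples, not something from the keynote.
import nemo.collections.asr as nemo_asr

# Download and restore a pretrained CTC speech-recognition model from NGC.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")

# Transcribe a local WAV file (the path is a placeholder).
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])
```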

For instance, an enterprise can create an AI agent that basically acts as a digital employee, assisting with tasks like onboarding and training new hires or evaluating employee performance. These AI agents are capable of reasoning, breaking down missions into tasks, retrieving data, and generating quality responses. They are designed for deployment across multiple environments, including PCs and enterprise systems, which ensures seamless integration into business operations.

AI for Windows: A First-Class AI Platform

NVIDIA has collaborated with Microsoft to bridge the gap between Linux and Windows systems. They will achieve this by integrating AI capabilities through the Windows Subsystem for Linux 2 (WSL2) on Windows PCs. This integration will allow users to run a variety of AI models (like LLMs, VLMs, and speech AI) directly on their Windows machines. Using WSL2, developers can take advantage of NVIDIA’s CUDA technology to accelerate machine learning tasks within a native Linux environment while staying on Windows. The alternative would be a full virtual machine, and if you have ever used one, you know how resource-intensive it is and how much it can slow things down.

This integration between Windows and Linux is fairly simple to configure. Users need to install the NVIDIA CUDA-enabled driver for WSL, which provides GPU acceleration for data science, machine learning, and inference tasks. This setup allows developers to use their existing Linux workflows, such as those involving PyTorch or TensorFlow, within the WSL environment on Windows.
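Once the driver is in place, a quick way to confirm everything is wired up is to run a short check from inside the WSL2 distribution. The sketch below assumes PyTorch was installed there with CUDA support (for example via pip):

```python
# Minimal sketch: confirm the Windows GPU is visible to CUDA from inside WSL2.
# Assumes PyTorch with CUDA support is installed in the WSL2 Linux distribution.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("CUDA device:", torch.cuda.get_device_name(0))
    # Run a tiny matrix multiply on the GPU to confirm end-to-end acceleration.
    x = torch.randn(1024, 1024, device=device)
    y = x @ x
    print("Result tensor lives on:", y.device)
else:
    print("No CUDA device visible - check the NVIDIA driver for WSL installation.")
```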

The goal of this collaboration with Microsoft is to make AI development more accessible and efficient for a broader range of users. This initiative is part of NVIDIA’s broader strategy to expand its presence in the consumer and business PC market, challenging established players and bringing advanced AI capabilities to mainstream computing platforms.

NVIDIA Cosmos: Advancing Physical AI

NVIDIA also announced the launch of Cosmos, which is essentially a platform designed to assist in the field of physical AI by simulating real-world scenarios to enhance a machine’s understanding of different physical environments. Cosmos is basically a suite of AI models capable of generating photorealistic, physics-based synthetic data. These models have been trained on more than 20 million hours of video footage and can create virtual environments that accurately depict real-world dynamics, including human activities like walking or sitting and physical effects like friction and gravity. This capability is really important for training AI models through reinforcement learning and for performing tests in a simulated environment.

Cosmos can integrate with NVIDIA Omniverse to generate very detailed simulations that mirror real-world environments, which allows developers to generate multiple physically plausible future scenarios to help AI models “determine” the most accurate path or solution. Such simulations are invaluable for training autonomous systems to navigate complex physical spaces safely and efficiently on a computer instead of carrying out potentially dangerous experiments in real life. The capability to generate accurate simulations of the real world will ultimately reduce the need for extensive real-world data collection.

NVIDIA also introduced Thor, which is a new-generation robotics computer that can process data from various sensors like cameras, radars, and LIDAR. The architecture of this computer is specially optimized for performance, power, and size, which makes it suitable for deployment in humanoid robots and autonomous vehicles. This advanced computing platform enables robots to perform complex tasks and interact safely and naturally with their surroundings. Another great research initiative that was highlighted in the keynote speech is the Isaac GR00T, which is basically a development platform focused on accelerating humanoid robotics. It provides an open-source framework for robot learning, which facilitates the development of intelligent and adaptable robots through robust, perception-enabled, simulation-trained policies.

Future of AI and Robotics

NVIDIA is at the forefront of advancing artificial intelligence (AI), with CEO Jensen Huang predicting that the robotics industry is poised to become a multitrillion-dollar sector. To realize this vision, NVIDIA is investing heavily in foundational models, simulation frameworks, and scalable data pipelines. These initiatives streamline the creation of intelligent, adaptable robots capable of complex interactions within physical spaces. These strategic investments underscore NVIDIA’s commitment to driving innovation in AI-powered robotics, paving the way for a future where autonomous machines play a pivotal role across various industries.

Project DIGITS: Next-Gen AI Supercomputer

At CES, NVIDIA also unveiled Project DIGITS, a personal AI supercomputer designed to bring high-performance computing to individual AI researchers, data scientists, and students. It is powered by the new GB10 Grace Blackwell Superchip, which delivers up to 1 petaflop of AI performance at FP4 precision, enabling the development and inference of large AI models with up to 200 billion parameters.

The compact form factor of Project DIGITS allows it to fit seamlessly on a desktop, operating efficiently with standard power outlets. Each unit comes equipped with 128GB of unified memory and up to 4TB of NVMe storage, providing ample resources for complex AI tasks. For more demanding applications, two Project DIGITS systems can be interconnected to handle models with up to 405 billion parameters.
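As a rough sanity check on those numbers, the back-of-envelope sketch below assumes model weights are stored at 4-bit (FP4) precision, the same precision NVIDIA's 1-petaflop figure refers to:

```python
# Back-of-envelope check: do the quoted model sizes fit in unified memory?
# Assumes weights are stored at 4-bit (FP4) precision, i.e. 0.5 bytes per parameter.
BYTES_PER_PARAM_FP4 = 0.5

def model_size_gb(num_params: float) -> float:
    """Approximate weight storage in GB for a model with num_params parameters."""
    return num_params * BYTES_PER_PARAM_FP4 / 1e9

print(f"200B params ~ {model_size_gb(200e9):.0f} GB vs 128 GB on one unit")
print(f"405B params ~ {model_size_gb(405e9):.1f} GB vs 256 GB across two linked units")
```

At that precision a 200-billion-parameter model needs roughly 100GB of weight storage, which fits comfortably in 128GB of unified memory, while a 405-billion-parameter model needs about 200GB, which is why two linked units are required.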

The GB10 Superchip integrates an NVIDIA Blackwell GPU with the latest CUDA cores and fifth-generation Tensor Cores, connected via NVLink-C2C to a high-performance NVIDIA Grace CPU featuring 20 power-efficient Arm-based cores. This architecture, developed in collaboration with MediaTek, ensures exceptional power efficiency and performance.

Project DIGITS runs on the Linux-based NVIDIA DGX OS and supports popular AI tools and frameworks such as PyTorch, Python, and Jupyter notebooks. It also provides access to NVIDIA’s comprehensive AI software library, including development kits, orchestration tools, and pre-trained models available through the NVIDIA NGC catalog. This ecosystem facilitates seamless prototyping, fine-tuning, and deployment of AI models, whether locally or scaled to cloud and data center infrastructures.

With a starting price of $3,000, Project DIGITS is set to democratize access to advanced AI computing, empowering a broader community of developers to engage with and contribute to the evolving AI landscape. The system is expected to be available in May 2025.

FAQs

What are the key features of NVIDIA’s RTX 50 Series GPUs announced in 2025?

NVIDIA’s RTX 50 Series GPUs, based on the new Blackwell architecture, are claimed to deliver up to twice the performance of the previous-generation RTX 4090 at significantly lower prices. They support DLSS 4, NVIDIA’s latest AI-driven technology that boosts frame rates and image quality in supported games. The flagship configuration includes 32GB of GDDR7 VRAM and roughly 1.8 terabytes per second of memory bandwidth. The lineup ranges from the RTX 5070, priced at $549, to the flagship RTX 5090 at $1,999, catering to a wide range of performance needs and budgets.

What is NVIDIA NeMo, and how does it benefit businesses?

NVIDIA NeMo is an AI-driven platform that enterprises can use to create digital employees capable of assisting in tasks like onboarding new personnel, training, and evaluating employee performance. These AI agents can reason, break down missions into tasks, retrieve data, and generate quality responses. Designed for deployment across multiple environments, including PCs and enterprise systems, NeMo aims to enhance operational efficiency and support human employees in various business processes.

How is NVIDIA integrating AI capabilities into Windows PCs?

NVIDIA is enhancing Windows PCs by integrating AI capabilities through the Windows Subsystem for Linux 2 (WSL2). This integration allows users to run a variety of AI models—including those for language processing, computer vision, and speech recognition—directly on their Windows machines. By leveraging WSL2, developers can utilize NVIDIA’s CUDA technology to accelerate machine learning tasks within a native Linux environment on Windows, transforming standard PCs into powerful AI platforms and enabling seamless transitions between Windows and Linux applications without the need for resource-intensive virtual machines.