Nvidia Releases Nemotron 3 Super AI Model with 120 Billion Parameters
US-based chip and artificial intelligence company Nvidia has launched its latest AI model, Nemotron 3 Super, this week.
It is the newest member of the Nemotron 3 family, which Nvidia introduced with the Nemotron 3 Nano model in December 2025.
Nvidia positions the model as a direct competitor to OpenAI’s GPT and Google’s Gemini.
According to Nvidia, Nemotron 3 Super is designed to run complex agentic AI systems at scale, with a focus on speed and computational efficiency.
Nvidia claims the AI model features 120 billion parameters, significantly more than Nemotron 3 Nano, which has approximately 30 billion parameters.
Because of its greater capability, Nemotron 3 Super’s use case differs from the “Nano” variant.
While Nemotron 3 Nano targets narrower, more specific AI tasks, Nemotron 3 Super is positioned as a model capable of running complex AI systems with multiple agents simultaneously.
The model uses a hybrid latent mixture-of-experts (MoE) and Mamba-Transformer architecture.
This architecture enables the model to access up to four times more AI “experts” during the inference process without increasing computational costs.
This approach is also claimed to help the model understand long conversations or instructions without requiring excessive memory.
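The efficiency claim rests on a core property of mixture-of-experts designs: each token is routed to only a few of the available experts, so total parameter count can grow while per-token compute stays roughly fixed. The sketch below illustrates generic top-k MoE routing in NumPy; it is a simplified assumption-based illustration, not Nvidia's actual (unpublished) hybrid latent-MoE / Mamba-Transformer implementation, and all names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_moe(x, expert_weights, router, k=2):
    """Route a token vector x to the top-k of n experts.

    Illustrative only: the actual Nemotron 3 Super routing scheme
    is not public at this level of detail.
    """
    logits = router @ x                       # one routing score per expert
    topk = np.argsort(logits)[-k:]            # indices of the k best experts
    gates = np.exp(logits[topk])
    gates /= gates.sum()                      # softmax over the selected experts
    # Only k expert matmuls execute per token, regardless of the
    # total expert count n -- this is why capacity can grow without
    # a proportional increase in inference cost.
    return sum(g * (expert_weights[i] @ x) for g, i in zip(gates, topk))

d, n = 8, 16                                  # toy sizes: 16 experts, 2 active
x = rng.standard_normal(d)                    # one token's hidden state
experts = rng.standard_normal((n, d, d))      # one weight matrix per expert
router = rng.standard_normal((n, d))          # linear router
y = topk_moe(x, experts, router, k=2)
print(y.shape)
```

In this toy setup, doubling `n` from 16 to 32 adds parameters but leaves the per-token work at `k = 2` expert matrix multiplications, which mirrors the "more experts at no extra compute" claim in spirit.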
In several benchmark tests, Nemotron 3 Super reportedly outperforms other popular AI models in some areas.
In testing conducted by artificial intelligence analysis firm Artificial Analysis, Nemotron 3 Super achieved an intelligence score of 36 points.
However, Nvidia’s model still lags behind leading AI models such as Gemini 3.1 Pro and GPT-5.4, each of which scored approximately 57 points.