
Summary Bullets:
• Nvidia is working to ease adoption of new AI RAN applications with training, simulation, and deployment tools that themselves use AI to speed up and scale those processes.
• New offerings include automatic code translators, digital twin simulators, and an “AI-native 6G lab in a box.”

Source: Nvidia
Artificial intelligence (AI) chip giant Nvidia is continuing its drive to usher in a new age of AI-enhanced radio access networks (RANs) – tackling some of the hurdles facing this young ecosystem and working to make it easier for software application developers to innovate in the space.
A key characteristic of the AI RAN brings both benefits and challenges: AI promises to transform the RAN, making it more dynamic, flexible, granular, and intelligent – but also much more complex. And that complexity means, for example, that traditional standards-based methods for prototyping and testing new RAN software applications won’t be sufficient.
In an event targeting application developers last week, Nvidia described how it’s helping to make these changes easier to digest. The company is applying to the RAN a version of the “three-computer” model it already uses for physical systems like robotics, with distinct platforms for training, simulation, and deployment. Training (which includes design) is supported by the company’s Aerial AI RAN platform and its Sionna Research Kit – an open-source research library based on the vendor’s Jetson AGX Orin, a high-performance edge-compute system used in robots and autonomous machines. Simulation is conducted using the Nvidia Aerial Omniverse Digital Twin platform. And deployment involves the vendor’s Aerial RAN Computer (ARC), a virtual RAN baseband platform using Nvidia graphics processing units (GPUs) – the latest version of which, Nvidia ARC Pro, has become central to Nokia’s RAN strategy in the run-up to the 6G era.
Nvidia offers the option to run Aerial tests and Sionna on the same hardware – a product called Nvidia DGX Spark, which can be used with software-defined radios and user equipment to test new applications. Sionna includes link- and system-level simulation functions and a standalone ray tracer for radio propagation modeling. Each DGX Spark system – which Nvidia calls an “AI-native 6G lab in a box” – sells for $6,000 to $8,000, the company says.
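Link-level simulation of the kind Sionna provides boils down to a Monte-Carlo loop: generate bits, map them to symbols, push them through a channel model, and count errors. The NumPy sketch below shows the simplest version of that loop – uncoded QPSK over an AWGN channel – purely to illustrate the concept; it does not use Sionna’s actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_awgn_ber(ebn0_db, n_bits=200_000):
    """Monte-Carlo bit-error rate of uncoded QPSK over AWGN --
    the bare-bones form of a link-level simulation loop."""
    bits = rng.integers(0, 2, n_bits)
    # Gray-mapped QPSK: two bits -> one unit-energy complex symbol
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    ebn0 = 10 ** (ebn0_db / 10)
    # Es = 1 and 2 bits/symbol, so noise variance per dimension is N0/2 = 1/(4*Eb/N0)
    noise_std = np.sqrt(1 / (4 * ebn0))
    noisy = symbols + noise_std * (rng.standard_normal(symbols.size)
                                   + 1j * rng.standard_normal(symbols.size))
    # Hard decisions on I and Q recover the two bits per symbol
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (noisy.real < 0).astype(int)
    bits_hat[1::2] = (noisy.imag < 0).astype(int)
    return np.mean(bits != bits_hat)

print(qpsk_awgn_ber(4.0))  # roughly 1.2e-2, matching theory Q(sqrt(2*Eb/N0))
```

A full Sionna-style simulation layers channel coding, OFDM, MIMO, and ray-traced propagation on top of this same generate–transmit–decide–count skeleton.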
A key challenge the company is addressing is the need for applications running on its GPUs to be written for Nvidia’s proprietary parallel computing platform, Compute Unified Device Architecture (CUDA). So the company is offering a toolchain that leverages GPU computing to automatically convert developers’ algorithm code from popular languages like Python into CUDA code, allowing the converted code to run in real time for evaluation on the latest GPUs. The company’s aim is to enable testable code in just days or weeks. And it is working on evolving the system – for example, with plans to add support for third-party Media Access Control (MAC) protocols and radio units in 2026.
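Part of what makes such translation tractable is that much RAN signal-processing code is element-wise array math, and element-wise NumPy operations map almost mechanically onto CUDA thread grids. The sketch below is a hypothetical illustration of that correspondence – it is not output from Nvidia’s toolchain, and the function is invented for the example.

```python
import numpy as np

def scale_and_offset(x, a, b):
    # In NumPy this is a single vectorized expression over the array...
    return a * x + b

# ...and conceptually each array element becomes one CUDA thread:
#
#   __global__ void scale_and_offset(const float* x, float a, float b,
#                                    float* y, int n) {
#       int i = blockIdx.x * blockDim.x + threadIdx.x;
#       if (i < n) y[i] = a * x[i] + b;
#   }
#
# Automating this mapping -- plus the memory transfers and kernel
# launches around it -- is what an automatic translator must handle.

y = scale_and_offset(np.arange(4, dtype=np.float32), 2.0, 1.0)
print(y)  # [1. 3. 5. 7.]
```

Real RAN kernels (channel estimation, equalization, beamforming) are larger but built from the same kind of per-element and per-subcarrier arithmetic.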
Nvidia is also issuing free, open tutorials to help operators and developers get started using its tools. In its event last week, the company gave the example of testing site-specific channel estimation, which replaces traditional, uniform standards-based methods with channel estimation that is optimized for different scenarios, such as dense urban environments or rural areas. In a test with commercial 5G user equipment, Nvidia demonstrated a performance gain from site-specific channel estimation.
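To make the idea concrete: a generic least-squares estimate treats every cell identically, whereas a site-specific estimator folds in the local channel statistics. The NumPy sketch below compares the two using an LMMSE filter built from a synthetic exponential-correlation covariance – a stand-in for statistics that would in practice be learned per site. It illustrates the principle only, not Nvidia’s method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 32 pilot subcarriers; the channel is drawn from a
# "site-specific" covariance R (exponential correlation model here).
n = 32
rho = 0.9
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = np.linalg.cholesky(R)

snr_db = 0.0
sigma2 = 10 ** (-snr_db / 10)

# LMMSE filter built from the site-specific covariance
W = R @ np.linalg.inv(R + sigma2 * np.eye(n))

trials = 500
ls_err = mmse_err = 0.0
for _ in range(trials):
    # Correlated complex Gaussian channel with covariance R
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h = L @ w
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n))
    y = h + noise        # all-ones pilots assumed, so y = h + noise
    h_ls = y             # generic least-squares estimate
    h_mmse = W @ y       # estimate exploiting local statistics
    ls_err += np.mean(np.abs(h - h_ls) ** 2)
    mmse_err += np.mean(np.abs(h - h_mmse) ** 2)

print(ls_err / trials, mmse_err / trials)  # site-aware error is clearly lower
```

The stronger the local correlation structure (dense urban multipath versus flat rural terrain), the larger the gap – which is the motivation for estimators tuned per site rather than one uniform standards-based method.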
The company also emphasized the power of AI to scale simulations dramatically. Using a cluster of 96 GPUs, Nvidia scaled one simulation to encompass the entire US – more than 80,000 radio sites – while retaining the ability to zoom in on any particular site to view detailed local network performance data. Spinning up the nationwide simulation on that cluster took less than five minutes, the company claims.
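The quoted figures imply a straightforward partitioning of the workload, worth a back-of-the-envelope check: at 80,000-plus sites across 96 GPUs, each GPU simulates well under a thousand sites.

```python
# Back-of-the-envelope check on the nationwide digital-twin numbers
# quoted above: 80,000+ radio sites partitioned across 96 GPUs.
num_sites = 80_000
num_gpus = 96

sites_per_gpu = -(-num_sites // num_gpus)  # ceiling division
print(sites_per_gpu)  # 834 sites per GPU at most (for exactly 80,000)
```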
Achieving Nvidia’s vision for AI RANs will require the company to overcome significant headwinds, including operator concerns about the cost of widely distributed GPUs and the steep learning curve facing operators willing to blaze new trails. But with each incremental step, Nvidia continues to demonstrate its commitment to using its considerable resources to make this vision a reality.
