NVIDIA Support
| Platform | Support | Architecture | OS | SDK versions |
| --- | --- | --- | --- | --- |
| NVIDIA dGPU | Full support | x86_64 | Ubuntu 20+ and Windows | CUDA 11, CUDA 12, CUDA 13 |
| NVIDIA Jetson | Full support | aarch64 | Ubuntu 20+ | JetPack 5, JetPack 6, JetPack 7 |
AI model inference is available on a wide range of NVIDIA GPUs, whether consumer-grade (e.g. GeForce RTX 3060), data-center-grade (e.g. A100), or embedded (the Jetson platform).
To enable inference on these GPUs using the Nx AI Manager, you need to set up the machine by installing NVIDIA driver and CUDA Toolkit versions that match the compute capability of the GPU.
Nx AI Manager supports only GPUs compatible with CUDA Toolkit 11 or higher.
To determine the minimal CUDA version to install on your machine, please refer to this table and this one. Preferably, install the latest version that is compatible with the GPU.
To determine which GPU driver version to install based on your desired CUDA version (or vice versa), please check out this table.
Installation procedure
Please make sure the GPU driver and CUDA Toolkit versions you install are compatible with each other.
x86_64 Windows
1. Go to https://www.nvidia.com/en-us/drivers/ and select your GPU model and OS. Then download the driver installer and run it.
2. Go to https://developer.nvidia.com/cuda-toolkit-archive and choose the CUDA version compatible with your installed GPU driver. (To find the recommended CUDA Toolkit version, run `nvidia-smi` in PowerShell; the version is printed in the top-right corner of the output.) After a successful installation, the CUDA Toolkit should be available at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\`.
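As a quick sanity check after installation, both the driver and the CUDA Toolkit can be verified from PowerShell (a sketch; the exact output depends on your driver and toolkit versions, and `nvcc` is only on PATH after the toolkit install):

```shell
# Print the installed driver version and, in the top-right corner,
# the highest CUDA version that driver supports
nvidia-smi

# Confirm the CUDA Toolkit compiler was installed and is on PATH
nvcc --version
```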
x86_64 Ubuntu
1. Install the GPU drivers by running `sudo ubuntu-drivers autoinstall`, then reboot the machine.
2. Go to https://developer.nvidia.com/cuda-toolkit-archive and install the CUDA version compatible with your installed GPU driver. (To find the recommended CUDA Toolkit version, run `nvidia-smi` in a terminal window; the version is printed in the top-right corner of the output.)
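The steps above can be sketched as a short shell session (a reboot is required between the driver install and verification):

```shell
# Install the recommended NVIDIA driver packages for this machine
sudo ubuntu-drivers autoinstall
sudo reboot

# After the reboot: the top-right corner of the output shows the
# highest CUDA version supported by the installed driver
nvidia-smi

# After installing the matching CUDA Toolkit, verify the compiler
nvcc --version
```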
Troubleshooting
For systems using NVIDIA's JetPack SDK, especially recent installations, the networkoptix-metavms user might not automatically be added to the render group. This group membership is essential for the Network Optix AI Manager plugin to fully utilize NVIDIA GPUs for hardware acceleration. While this process will be automated in a future Nx Server release, for now, you can manually add the user to the render group by following these steps:
1. Check if the 'render' Group Exists
First, verify whether the render group exists on your system:
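One common way to check is `getent`, which queries the system's group database (a sketch; the GID and member name in the sample output are illustrative only):

```shell
# Query the group database for the 'render' group
getent group render
# Example output if the group exists (format: name:x:gid:members):
#   render:x:110:networkoptix-metavms
```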
Expected Output
If the `render` group exists, you will see a line in the format `render:x:<gid>:<members>`. This indicates that the group exists and lists the users currently in the group.
No Output
If there's no output, the `render` group does not exist on your system. In this case, there's no need to continue with the next steps.
2. Add 'networkoptix-metavms' to the 'render' Group
Run the following command to add the user to the `render` group:
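A minimal sketch of the command (run as a user with sudo rights):

```shell
# Append networkoptix-metavms to the render group,
# keeping its existing group memberships
sudo usermod -aG render networkoptix-metavms
```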
Explanation of the command:
- `sudo` runs the command with administrative privileges.
- `usermod` is used to modify user accounts.
- `-aG` appends the user to the specified group(s) without removing them from others.
- `render` is the group you're adding the user to.
- `networkoptix-metavms` is the username for the Network Optix VMS user.
3. Verify the Group Membership
Confirm that the networkoptix-metavms user has been added to the render group:
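A sketch of the verification, using standard user utilities:

```shell
# List all groups the user belongs to; 'render' should appear
groups networkoptix-metavms

# Equivalent alternative
id -nG networkoptix-metavms
```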
Expected Output
The command will list all groups the user is a part of. You should see `render` included in the list.
4. Restart the Network Optix Service
For the changes to take effect, restart the Network Optix media server service:
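Assuming a systemd-based system, this can be sketched as follows. The exact unit name varies with the product customization (the name below is an assumption), so confirm it first:

```shell
# Find the exact mediaserver unit name on this system
systemctl list-units --type=service | grep -i mediaserver

# Restart it; substitute the unit name the previous command printed
sudo systemctl restart networkoptix-metavms-mediaserver
```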
This command restarts the service, allowing it to recognize the updated group permissions.
If the AI Manager doesn't work even when the conditions above are met, please refer to our general troubleshooting section.