# NVIDIA Support

<table><thead><tr><th width="153.22265625">AI Accelerator</th><th>Support Level</th><th>CPU Architecture</th><th>Operating System</th><th>API/driver version</th></tr></thead><tbody><tr><td>NVIDIA dGPU</td><td>Full support</td><td>x86_64</td><td>Ubuntu 20+ and Windows</td><td>CUDA 11, CUDA 12, CUDA 13</td></tr><tr><td>NVIDIA dGPU</td><td>Full support</td><td>aarch64</td><td>Ubuntu 24.04</td><td>CUDA 13</td></tr><tr><td>NVIDIA Jetson</td><td>Full support</td><td>aarch64</td><td>Ubuntu 20+</td><td>JetPack 5, JetPack 6, JetPack 7</td></tr></tbody></table>

AI model inference is available on a wide range of NVIDIA GPUs, whether consumer-grade (e.g. the GeForce RTX 3060), data-center-grade (e.g. the A100 and DGX Spark), or embedded (the Jetson platform).

To enable inference on these GPUs using the Nx AI Manager, you need to set up the machine by installing NVIDIA driver and CUDA toolkit versions that are compatible with the [compute capability](https://developer.nvidia.com/cuda-gpus) of the GPU.

{% hint style="warning" %}
Nx AI Manager supports only GPUs compatible with **CUDA Toolkit 11** or higher.
{% endhint %}

{% hint style="info" %}
To determine the minimum CUDA version to install on your machine, please refer to this [table](https://en.wikipedia.org/wiki/CUDA#GPUs_supported) and this [one](https://docs.nvidia.com/deeplearning/cudnn/backend/v9.14.0/reference/support-matrix.html). Preferably, install the latest version that is compatible with the GPU.

To determine which GPU driver version to install based on your desired CUDA version (or vice versa), please check out this [table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#id6).
{% endhint %}
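Once drivers are installed, `nvidia-smi` already reports the highest CUDA version the driver supports in its header line. As a rough sketch, a small helper (the name `cuda_from_smi_header` is hypothetical, and the header format is assumed from recent driver releases) can extract that value:

```shell
# Hypothetical helper: extract the highest supported CUDA version from
# nvidia-smi's header line, e.g. "... Driver Version: 550.54  CUDA Version: 12.4 ..."
cuda_from_smi_header() {
  sed -n 's/.*CUDA Version: \([0-9][0-9]*\.[0-9]*\).*/\1/p' <<< "$1"
}

# On a machine with drivers installed:
#   cuda_from_smi_header "$(nvidia-smi | head -n 4)"
```

The extracted value is an upper bound: any CUDA toolkit at or below it should pair with the installed driver.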

## Installation procedure

{% hint style="info" %}
**Please make sure you install GPU driver and CUDA toolkit *versions* that are compatible with each other.**
{% endhint %}

### x86\_64 Windows

1. Go to <https://www.nvidia.com/en-us/drivers/>, and select your GPU model and OS. Then, download the driver installer, and run it.
2. Go to <https://developer.nvidia.com/cuda-toolkit-archive> and choose the CUDA version compatible with your installed GPU drivers.\
   (To find out the recommended CUDA toolkit to install, please run `nvidia-smi` on PowerShell. The version will be printed in the top right corner of the output)\
   After a successful installation, the CUDA toolkit should be available at: `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\`
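After both installers finish, the setup can be sanity-checked from a shell (the snippet below is a sketch that assumes a Git Bash or WSL environment on Windows; the `check_cuda_toolkit` helper name is hypothetical):

```shell
# Hypothetical helper: report the installed CUDA toolkit release, or flag
# that nvcc is not on PATH yet.
check_cuda_toolkit() {
  if command -v nvcc >/dev/null 2>&1; then
    # nvcc prints e.g. "Cuda compilation tools, release 12.4, V12.4.131"
    nvcc --version | sed -n 's/.*release \([0-9.]*\),.*/\1/p'
  else
    echo "nvcc not found"
  fi
}

check_cuda_toolkit
```

If `nvcc` is reported missing even after installation, the toolkit's `bin` directory likely still needs to be added to `PATH`.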

### x86\_64 Ubuntu

1. Install the GPU drivers by running `sudo ubuntu-drivers autoinstall`, then reboot the machine.
2. Go to <https://developer.nvidia.com/cuda-toolkit-archive> and install the CUDA version compatible with your installed GPU drivers.\
   (To find out the recommended CUDA toolkit to install, please run `nvidia-smi` on a terminal window. The version will be printed in the top right corner of the output)
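As a quick compatibility check after both steps, the installed toolkit release should not exceed the CUDA version reported by `nvidia-smi`. A minimal sketch (the `toolkit_compatible` helper is hypothetical; it relies on GNU `sort -V` for version ordering):

```shell
# Hypothetical helper: succeeds if the installed toolkit version does not
# exceed the maximum CUDA version the driver supports.
toolkit_compatible() {
  local toolkit="$1" driver_max="$2"
  [ "$(printf '%s\n' "$toolkit" "$driver_max" | sort -V | tail -n 1)" = "$driver_max" ]
}

# Example usage on a configured machine (both values parsed from tool output):
#   toolkit_compatible "$(nvcc --version | sed -n 's/.*release \([0-9.]*\),.*/\1/p')" \
#                      "$(nvidia-smi | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p')"
```

If the check fails, either upgrade the driver or install an older CUDA toolkit.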

### Verifying the runtime is available

After installing the drivers and CUDA toolkit and restarting Nx Server, the NVIDIA runtime appears in the plugin's runtime selection list. Open any camera's settings, go to the **Integrations** tab, click **Nx AI Manager**, and confirm that the NVIDIA runtime is listed.

<figure><img src="/files/3l9eveFst4JGVgdidJF8" alt="Nx AI Manager plugin runtime selection panel listing available runtimes including NVIDIA CUDA"><figcaption><p>The NVIDIA runtime appears in the list once drivers and CUDA are installed and the server has been restarted</p></figcaption></figure>

## Troubleshooting

For systems using NVIDIA's JetPack SDK, especially recent installations, the `networkoptix-metavms` user might not be added to the `render` group automatically. This group membership is required for the Network Optix AI Manager plugin to fully utilize NVIDIA GPUs for hardware acceleration. This process will be automated in a future Nx Server release; for now, you can add the user to the `render` group manually by following these steps:

#### Step 1: Check if the 'render' group exists

First, verify whether the `render` group exists on your system:

```bash
getent group render
```

* **Expected Output**

  If the `render` group exists, you will see output similar to:

  ```
  render:x:104:username
  ```

  This indicates that the group exists and lists the users currently in the group.
* **No Output**

  If there's **no output**, the `render` group does not exist on your system. In this case, there's no need to continue with the next steps.

#### Step 2: Add 'networkoptix-metavms' to the 'render' group

* Run the following command to add the user to the `render` group:

  ```bash
  sudo usermod -aG render networkoptix-metavms
  ```

#### Step 3: Verify the group membership

Confirm that the `networkoptix-metavms` user has been added to the `render` group:

```bash
groups networkoptix-metavms
```

* **Expected Output**

  The command will list all groups the user is a part of. You should see `render` included in the list.

#### Step 4: Restart the Network Optix service

For the changes to take effect, restart the Network Optix media server service:

```bash
sudo systemctl restart networkoptix-mediaserver.service
```

* This command restarts the service, allowing it to recognize the updated group permissions.
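The four steps above can be collected into a single guarded sequence. The sketch below parameterizes the group and user names so the logic is testable; the `ensure_render_membership` function name is hypothetical, and the `sudo` lines should only run on a real JetPack system:

```shell
# Hypothetical wrapper around steps 1-4: check for the group, add the user,
# then restart the media server so the new membership takes effect.
ensure_render_membership() {
  local group="$1" user="$2"
  if ! getent group "$group" >/dev/null; then
    echo "no ${group} group; nothing to do"
    return 0
  fi
  sudo usermod -aG "$group" "$user"
  sudo systemctl restart networkoptix-mediaserver.service
}

# On a JetPack system:
#   ensure_render_membership render networkoptix-metavms
```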

{% hint style="info" %}
If the AI Manager doesn't work even when the conditions above are met, please refer to our [general troubleshooting section](/nx-ai-manager-v6.1.2/support-and-troubleshooting/how-to-get-support.md).
{% endhint %}

