To use the OpenVINO runtime, you can select it during plugin installation on compatible hardware.
If you want to preselect the GPU or NPU runtime, make sure you have the latest Intel drivers installed on your machine.
Intel NPUs are currently not supported on Windows.
Hardware requirements
The list of Intel CPU and GPU models that can be used to perform AI inference is available here. Please check the tables to verify that your setup is compatible.
Also make sure you have the latest libtbb installed:
sudo apt install libtbb12 ocl-icd-libopencl1
Add the user to the video and render groups:
sudo usermod -aG video $USER
sudo usermod -aG render $USER
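Group changes only take effect after logging out and back in (or starting a new login session). You can confirm that the memberships were recorded with:

```shell
# List the groups the user belongs to; "video" and "render" should appear
groups $USER
```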
If your Ubuntu or Debian version tends to overwrite some of the packages installed above, you can prevent APT from upgrading them by placing the relevant packages on hold, for example:
sudo apt-mark hold intel-ocloc intel-opencl-icd libigdgmm12 libze-intel-gpu1
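You can verify which packages are currently on hold, and release a hold later if needed:

```shell
# List all packages currently on hold
apt-mark showhold

# Release a hold, e.g. to allow intel-opencl-icd to upgrade again
sudo apt-mark unhold intel-opencl-icd
```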
For Windows
To make use of the Intel GPU, make sure to install the Intel Graphics Driver as shown on this page.
Configuration
By default, the OpenVINO runtime performs model inference on CPU.
To run a model on a compatible Intel GPU, you need to specify the target device in a JSON file located at /opt/networkoptix-metavms/mediaserver/var/nx_ai_manager/nxai_manager/bin/runtime_args.json:
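As an illustrative sketch only: "GPU" is the standard OpenVINO device name, but the exact key layout of runtime_args.json is defined by the Nx AI Manager, so the "device" field shown here is an assumption. Consult the product documentation for the authoritative schema.

```json
{
  "device": "GPU"
}
```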
Similarly, to use the Intel NPU, the JSON file should contain:
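Again as a sketch under the same assumption about the "device" key, with "NPU" being the standard OpenVINO device name for Intel NPUs:

```json
{
  "device": "NPU"
}
```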
After making the changes, please restart the Nx Server as detailed in this section: Restarting the server.
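On Ubuntu, restarting typically means restarting the media server service. A minimal example, assuming the service name matches the /opt/networkoptix-metavms install path (verify the exact name on your system with `systemctl list-units | grep mediaserver`):

```shell
# Restart the Nx Server so the new runtime_args.json is picked up
sudo systemctl restart networkoptix-metavms-mediaserver
```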
Monitoring
On Ubuntu
You can monitor Intel CPU, GPU, and NPU usage with the appropriate command:
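For example, CPU usage can be watched with standard tools, and Intel GPU utilization with intel_gpu_top from the intel-gpu-tools package. Intel does not ship one standard NPU monitor; consult your NPU driver's documentation for its monitoring interface.

```shell
# CPU usage (standard Linux tools)
top        # or: htop

# Intel GPU usage (requires the intel-gpu-tools package)
sudo apt install intel-gpu-tools
sudo intel_gpu_top
```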
Not all models that run on the CPU are supported by the GPU and NPU. In such cases, the Nx AI Manager will fail to run and display an error message. Please reach out to the support team to ask whether a workaround exists for your situation.