
Supported AI accelerators


Last updated 3 months ago

The Nx AI Manager integrates several runtimes, allowing deployed models to benefit from hardware acceleration.

The following table lists the supported runtimes:

| AI Accelerator | CPU Architecture | API/driver version |
| --- | --- | --- |
| CPU | aarch64, x86_64 | - |
| Intel (OpenVINO) | x86_64 | - |
| Hailo-8 | x86_64 | 4.17.0, 4.18.0, 4.19.0 |
| Hailo-8L | aarch64 | 4.18.0, 4.19.0 |
| Hailo-15 (coming soon) | - | - |
| MemryX | x86_64 | 2.2.37 |
| Nvidia | x86_64 | CUDA 11, CUDA 12 |
| Nvidia | aarch64 | Jetpack 4.6, Jetpack 5.x, Jetpack 6.x |
| Qualcomm (coming soon) | aarch64 | 2.20.x |
| MemryX (coming soon) | aarch64 | 2.2.37 |
| DeepX (coming soon) | aarch64, x86_64 | - |
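The compatibility table above can be expressed as a simple lookup from CPU architecture to the accelerators available on it. The sketch below is illustrative only — the `SUPPORTED` mapping and `accelerators_for` helper are hypothetical names, not part of the plugin; a `-` entry mirrors the table's convention of "no specific version listed":

```python
import platform

# Supported runtimes per CPU architecture, transcribed from the table above.
# Values are the documented API/driver versions; "-" means none is listed.
SUPPORTED = {
    "x86_64": {
        "CPU": "-",
        "Intel (OpenVINO)": "-",
        "Hailo-8": "4.17.0, 4.18.0, 4.19.0",
        "MemryX": "2.2.37",
        "Nvidia": "CUDA 11, CUDA 12",
        "DeepX (coming soon)": "-",
    },
    "aarch64": {
        "CPU": "-",
        "Hailo-8L": "4.18.0, 4.19.0",
        "Nvidia": "Jetpack 4.6, Jetpack 5.x, Jetpack 6.x",
        "Qualcomm (coming soon)": "2.20.x",
        "MemryX (coming soon)": "2.2.37",
        "DeepX (coming soon)": "-",
    },
}

def accelerators_for(arch: str) -> dict:
    """Return the accelerators documented for the given CPU architecture."""
    return SUPPORTED.get(arch, {})

if __name__ == "__main__":
    arch = platform.machine()  # e.g. "x86_64" or "aarch64"
    print(f"Architecture: {arch}")
    for name, versions in accelerators_for(arch).items():
        print(f"  {name}: {versions}")
```

Running this on the target host prints which accelerators the table documents for that machine's architecture; note it only reflects the table, not whether the corresponding driver is actually installed.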
