AI Accelerators Support

Introduction

The Nx AI Manager accelerates AI model inference using various built-in runtimes, each dedicated to a specific platform and AI accelerator (CPU, GPU, NPU, etc.). These runtimes are seamlessly integrated into the AI Manager, so users can adopt new runtimes and benefit from different forms of AI acceleration with little effort.

When a user uploads an AI model to the Nx AI Cloud, the uploaded model file goes through several conversion processes, each of which generates a new model artifact used by one of the supported runtimes.

Example of a Teachable Machine model

Suppose you've trained a Teachable Machine model and exported it according to the guidelines mentioned here. The following model artifacts are then stored to be used by the Nx AI Manager when needed:

  • application/zip; source=original: the original ZIP archive that was uploaded through the interface.

  • application/zip; kind=teachable-machine: the same uploaded file, re-labeled with a more specific MIME type once its nature has been detected.

  • application/x-tensorflow-lite; type=float32: the TFLite file extracted from the Teachable Machine archive.

  • application/x-onnx: the ONNX model produced by converting the TFLite file from the previous stage, then validated and optimized to run on both CPU and Nvidia hardware. Note that no quantization is performed on the model in this step.

  • application/zip; device=mxa: the artifact generated by compiling the ONNX file into an optimized format dedicated to MemryX hardware.

  • application/x-onnx; device=hailo: a custom ONNX model generated specifically for Hailo-8 chips.
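
To make the relationship between artifacts and runtimes concrete, below is a minimal, self-contained Python sketch that restates the artifact list above as data and picks the artifact a given accelerator's runtime would load. The device names and the mapping itself are illustrative assumptions for this example only; they are not part of the Nx AI Manager or Nx AI Cloud API.

```python
# Illustrative sketch only: restates the artifact list above as data and shows
# how a runtime for a given target could pick the artifact it consumes.

# Artifact MIME types produced for the Teachable Machine example above.
ARTIFACTS = [
    "application/zip; source=original",
    "application/zip; kind=teachable-machine",
    "application/x-tensorflow-lite; type=float32",
    "application/x-onnx",
    "application/zip; device=mxa",
    "application/x-onnx; device=hailo",
]

# Hypothetical mapping from target hardware to the artifact its runtime loads.
PREFERRED_ARTIFACT = {
    "cpu": "application/x-onnx",
    "nvidia-gpu": "application/x-onnx",
    "memryx-mxa": "application/zip; device=mxa",
    "hailo-8": "application/x-onnx; device=hailo",
}

def select_artifact(target: str) -> str:
    """Return the artifact MIME type a runtime for `target` would load."""
    wanted = PREFERRED_ARTIFACT.get(target)
    if wanted is None or wanted not in ARTIFACTS:
        raise ValueError(f"No suitable artifact for target '{target}'")
    return wanted

if __name__ == "__main__":
    for device in ("cpu", "nvidia-gpu", "memryx-mxa", "hailo-8"):
        print(device, "->", select_artifact(device))
```

The point of the sketch is simply that each runtime expects one specific artifact format, which is why the cloud generates several artifacts from a single uploaded model.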

Any runtime/toolchain combination adhering to the Open AI Accelerator eXchange (OAAX) standard is compatible with the Nx AI Manager, enabling straightforward substitution of any installed runtime with a new one. If you're an AI chip maker and would like to integrate with the Nx AI Manager, please refer to this documentation repository on GitHub.
