
3.1 Model Settings



A model that is active on a device can have multiple settings. Which settings are available depends on the capabilities of the model and of the server.

When you change one of these settings, the pipeline form will change to indicate that the settings need to be saved manually.

To save the settings, click the "Save pipelines" button. If you do not want to keep the changes, refreshing the page resets the form. Navigating away from the device details page also resets the form without saving the settings.

Model NMS Threshold

The NMS (Non-Maximum Suppression) Threshold sets the cut-off below which the model will not return detections: any detection with a probability score below the current threshold value is discarded.

This setting is model-dependent, so not all models have this option.
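
As a rough illustration only (this is not the plugin's internal code, and the detection format and function name are hypothetical), the sketch below shows the effect described above: detections whose probability score falls below the threshold are dropped from the results.

```python
# Illustrative sketch of a score threshold applied to model detections.
# The detection structure and function name are assumptions for this example.

def apply_threshold(detections, threshold):
    """Keep only detections whose probability score is at or above the threshold."""
    return [d for d in detections if d["score"] >= threshold]


detections = [
    {"label": "person", "score": 0.92, "bbox": [10, 20, 50, 120]},
    {"label": "person", "score": 0.31, "bbox": [12, 22, 48, 118]},
]

# With a threshold of 0.5, only the first detection is returned.
print(apply_threshold(detections, threshold=0.5))
```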

Preprocessor

If the server that the device is connected to has any preprocessors available, they can be selected here.

This setting is server-dependent, so moving a device to another server may change the available options.
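
To make the concept concrete, the sketch below shows what a typical image preprocessor might do before a frame reaches the model (resize to the model's input size and scale pixel values). This is an assumption-based illustration, not one of the preprocessors shipped with a server; see section 7.3 "External Pre-processing" for adding your own.

```python
# Conceptual sketch of an image preprocessor: resize a frame to the model's
# expected input size and scale pixel values to [0, 1]. The function name and
# input format are assumptions for this example.
import numpy as np

def preprocess(frame: np.ndarray, width: int, height: int) -> np.ndarray:
    """Resize an HxWx3 uint8 frame with nearest-neighbour sampling and normalize it."""
    ys = np.linspace(0, frame.shape[0] - 1, height).astype(int)
    xs = np.linspace(0, frame.shape[1] - 1, width).astype(int)
    resized = frame[ys][:, xs]
    return resized.astype(np.float32) / 255.0
```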

Postprocessor

If the server that the device is connected to has any postprocessors available, they can be selected here.

This setting is server-dependent, so moving a device to another server may change the available options.
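
As with preprocessors, the sketch below is only a conceptual illustration of a postprocessing step (turning raw model output into labelled detections); the label map, function name, and data layout are assumptions. See section 7.2 "External Post-processing" for adding your own postprocessors.

```python
# Conceptual sketch of a postprocessor: map raw (class_id, score, bbox) tuples
# produced by a model to labelled detections. The label map and data layout
# are assumptions for this example.

CLASS_NAMES = ["person", "car", "bicycle"]  # hypothetical label map

def postprocess(raw_outputs):
    """Convert raw model output tuples into a list of labelled detections."""
    return [
        {
            "label": CLASS_NAMES[class_id],
            "score": float(score),
            "bbox": [float(v) for v in bbox],
        }
        for class_id, score, bbox in raw_outputs
    ]


# Example: two raw detections from a hypothetical model run.
print(postprocess([(0, 0.92, (10, 20, 50, 120)), (1, 0.77, (200, 40, 80, 60))]))
```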

Image: A device is one model pipeline with a single model.
Image: A model pipeline form with changed settings that have not yet been saved.