Nx AI Manager Documentation

7.3 External Pre-processing

This page describes how to implement an external pre-processor that integrates with the Nx AI Manager.


Last updated 3 months ago

It is sometimes desirable to add custom or proprietary pre-processing to the inference pipeline. To support this, a custom application can be added that receives the input frame sent to the Nx AI Manager and has the opportunity to alter or analyse it.

Examples of how to create these applications are provided. They can be written in any programming language, as long as the device can execute the program, send it messages over a Unix socket, and receive a response.

Through the settings, the Nx AI Manager can be told how to start the application. The Nx AI Manager automatically starts the application on startup and terminates it when the Nx AI Manager itself terminates.

The external pre-processor runs as a completely independent application. The Nx AI Manager places no restrictions on which hardware, APIs, or tools this application uses. As long as the application can receive and respond to messages over a Unix socket, it is compatible.
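Because only this Unix-socket contract matters, the skeleton of such an application can be very small. The sketch below is a minimal, hypothetical Python loop: the socket path, message framing, and placeholder echo logic are assumptions for illustration, not the actual Nx AI Manager protocol (see the provided examples for the real message format).

```python
# Hypothetical skeleton of an external pre-processor. The socket path and
# message framing here are assumptions; consult the Nx AI Manager examples
# for the actual protocol.
import os
import socket


def process_header(header: bytes) -> bytes:
    # Placeholder: a real implementation parses the header, attaches to the
    # shared memory segment it describes, alters the frame, and builds a
    # response header with the (possibly new) dimensions and segment.
    return header


def serve_once(sock_path: str) -> None:
    """Accept one connection, read a header message, and send a response."""
    if os.path.exists(sock_path):
        os.unlink(sock_path)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    conn, _ = server.accept()
    try:
        header = conn.recv(4096)              # header describing the frame
        conn.sendall(process_header(header))  # reply so the Manager continues
    finally:
        conn.close()
        server.close()
        os.unlink(sock_path)
```

A real application would run this accept/respond cycle in a loop for the lifetime of the process, since the Nx AI Manager starts it once and sends one header per frame.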

The external pre-processing step happens before any other pre-processing is applied to the frame. This means the external pre-processor receives the original, full-resolution image exactly as it was sent to the Nx AI Manager.

The external pre-processor receives a header message over the Unix socket that describes the input frame and provides details on how to connect to the shared memory segment where the frame is stored. The external pre-processor can then attach to this shared memory and alter the data in place, or write back a completely new image with new dimensions. The altered data is then used in the rest of the pipeline.

The Nx AI Manager waits until the external pre-processor responds with a header message containing the new (or unchanged) image dimensions and the new (or unchanged) shared memory segment holding the image data. Once this message is received, the Nx AI Manager copies the data from the shared memory segment and uses it for the rest of the inference pipeline.
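As an illustration of the frame-altering step, the Python sketch below attaches to a shared memory segment by name and halves the brightness of a frame in place. The function name, the use of `multiprocessing.shared_memory`, and the assumed RGB `uint8` layout are illustrative assumptions; the real segment naming and frame layout come from the header message defined by the Nx AI Manager protocol.

```python
# Illustrative sketch only: the real header parsing and segment naming
# follow the Nx AI Manager protocol; this shows the shared-memory pattern.
from multiprocessing import shared_memory

import numpy as np


def darken_frame(shm_name: str, width: int, height: int) -> None:
    """Attach to the frame's shared memory and halve its brightness in place."""
    shm = shared_memory.SharedMemory(name=shm_name)  # attach, do not create
    frame = np.ndarray((height, width, 3), dtype=np.uint8, buffer=shm.buf)
    frame //= 2   # in-place edit, visible to the rest of the pipeline
    del frame     # release the view so the segment can be detached
    shm.close()   # detach without destroying the segment
```

After altering the data, the application would reply with a header message giving the (possibly new) dimensions and segment, as described above.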

Figure: External pre-processor data flow