
7.2 External Post-processing

This page describes how to implement an external postprocessor that integrates with the Nx AI Manager.



It is sometimes desirable to add custom or proprietary processing to the inference pipeline. To support this, a custom application can be added that receives information from the Nx AI Manager and returns optionally altered information.

Examples of how to create these applications are provided in both C and Python. However, these applications can be written in any programming language, as long as the device can execute the program, send it messages over a Unix socket, and receive a response.

A high-level overview of the inference pipeline is as follows:

[Image: A high-level overview of the inference pipeline]

The postprocessor can therefore receive the inference results from the model, optionally alter them, and return them. The changes made by the postprocessor are then sent to the Network Optix platform, where they can be used to generate bounding boxes or events.

Through the settings, the Nx AI Manager can be given instructions on how to start the application. The Nx AI Manager automatically starts the applications when necessary, and terminates them once execution is finished.

External Postprocessor

A postprocessor receives the inference results as a MessagePack-encoded buffer. This message is equivalent to what will be sent to the output. The postprocessor can alter this message and return it; the altered message is then sent to the Network Optix platform. The returned message must have the same structure as the input message, otherwise the Network Optix platform might be unable to parse it. Examples are provided that show how this structure can be parsed, altered, and written.

[Image: External Postprocessor data flow]
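To make the exchange concrete, below is a minimal sketch of such a postprocessor in Python. It assumes the application is given a Unix socket path on its command line and that each request and response is a single MessagePack-encoded object; the actual startup convention and message schema are those of the official C and Python examples, and the field added here is purely illustrative.

```python
# Minimal external postprocessor sketch (assumed protocol details; see the
# official C and Python examples for the real startup and message schema).
import os
import socket
import sys

import msgpack  # pip install msgpack


def recv_message(conn, unpacker):
    """Block until one complete MessagePack object has arrived on the socket."""
    while True:
        try:
            return next(unpacker)
        except StopIteration:
            chunk = conn.recv(65536)
            if not chunk:
                raise ConnectionError("socket closed before a full message arrived")
            unpacker.feed(chunk)


def main(socket_path):
    # Assumption: the Nx AI Manager connects to a socket created by this app.
    if os.path.exists(socket_path):
        os.unlink(socket_path)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(socket_path)
    server.listen(1)
    conn, _ = server.accept()

    unpacker = msgpack.Unpacker(raw=False)
    while True:
        try:
            results = recv_message(conn, unpacker)  # same structure as the output
        except ConnectionError:
            break  # the manager closed the connection; exit cleanly
        # Alter the results here while preserving their structure. The key
        # below is hypothetical and only demonstrates an in-place change.
        if isinstance(results, dict):
            results["external_processed"] = True
        conn.sendall(msgpack.packb(results, use_bin_type=True))


if __name__ == "__main__":
    main(sys.argv[1])  # hypothetical: socket path passed as first argument
```

Because the returned message must keep the input's structure, alterations are typically limited to adding, removing, or editing entries rather than reshaping the message.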

External Tensor Postprocessor

A setting is provided with which the user can indicate that a postprocessor should receive access to the input tensor from which the inference results were generated. This can be useful for many applications, such as inspecting the input tensor within the generated bounding boxes, or even creating sub-images.

When this setting is enabled, the Nx AI Manager writes the input tensor to shared memory, where all external postprocessors can access it. It then sends an additional message to the external postprocessor containing the information needed to access this shared memory.

The image header message is sent after the inference results message. The external postprocessor must therefore expect to receive two separate messages before responding with its own message. The image header is also a MessagePack-encoded message.

[Image: Tensor Postprocessor data flow]

The postprocessor can do additional analysis on the tensor data.
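As a sketch of this two-message exchange, the snippet below (reusing recv_message() from the previous example) reads the inference results, then the image header, attaches to the shared memory the header points to, and finally responds. All header field names here (shm_name, width, height, channels), and the use of POSIX shared memory via Python's multiprocessing.shared_memory, are assumptions for illustration; consult the official examples for the real layout.

```python
# Hypothetical two-message handler for a tensor postprocessor; the header
# field names are illustrative, not the documented schema.
from multiprocessing import shared_memory

import msgpack
import numpy as np  # pip install numpy


def handle_request(conn, unpacker):
    # Message 1: the inference results (the message that goes to the output).
    results = recv_message(conn, unpacker)
    # Message 2: the image header describing the tensor in shared memory.
    header = recv_message(conn, unpacker)

    shm = shared_memory.SharedMemory(name=header["shm_name"])
    try:
        # View the raw tensor bytes as an image-shaped array without copying.
        tensor = np.ndarray(
            (header["height"], header["width"], header["channels"]),
            dtype=np.uint8,
            buffer=shm.buf,
        )
        # ... analyze 'tensor' here, e.g. crop the regions inside the
        # returned bounding boxes into sub-images; copy out any data
        # that must outlive this block ...
        del tensor  # drop the view so the segment can be detached
    finally:
        shm.close()  # detach only; the Nx AI Manager owns the segment

    # Respond once, after both messages have been received.
    conn.sendall(msgpack.packb(results, use_bin_type=True))
```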
