# DEEPX Support

<table><thead><tr><th width="153.22265625">AI Accelerator</th><th>Support Level</th><th>CPU Architecture</th><th>Operating System</th><th>API/driver version</th></tr></thead><tbody><tr><td>DEEPX DX-M1</td><td>Experimental</td><td>aarch64, x86_64</td><td>Ubuntu 20+</td><td>DXRT v3.1.0</td></tr><tr><td>DEEPX DX-H1</td><td>Experimental</td><td>aarch64, x86_64</td><td>Ubuntu 20+</td><td>DXRT v3.1.0</td></tr></tbody></table>

### About DEEPX

DEEPX is a leading on-device AI semiconductor company specializing in Neural Processing Units (NPUs), headquartered in South Korea.

![](https://4052997117-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FkP02geOjtLSPAt4JiZUv%2Fuploads%2FNA3LIt1S4NsJVYLJ8Sfu%2Funknown.png?alt=media\&token=64c0848f-8b74-4b49-8679-1ed693d8be6b)

DEEPX has launched the DX-M1 AI Accelerator, delivering 25 TOPS at under 5 W for exceptional performance-per-watt efficiency.

To support seamless deployment and execution of deep learning models, DEEPX provides the DXNN Compiler along with various software components optimized for DEEPX hardware.

For more information, please visit the [official website](https://deepx.ai/).

### Hardware and Software Requirements

The DX-M1 module supports both Linux (Ubuntu, Debian) and Windows operating systems.

#### Supported platforms

* x86\_64 (Intel, AMD)
* ARM64 (aarch64)

#### Minimum system requirements

* RAM: 8GB (16GB or higher recommended)
* Disk Space: 4GB or more
* Slot: M.2 Key-M (PCIe Gen3 x4 recommended)

**Note: PCIe Gen3 x1 is supported, but performance will be limited.**

{% hint style="info" %}
The [OAAX DEEPX runtime library](https://github.com/OAAX-standard/deepx-acceleration/tree/main/runtime-library) for NX AI Manager currently supports only Ubuntu 20.04, 22.04, and 24.04. Windows support will be added soon.
{% endhint %}

### Installation Guide

This guide provides instructions for the physical and software installation of the DX-M1 module.

#### **1. Hardware Installation**

To install the DX-M1 module, insert it into an available M.2 (Key M) slot on the target system. The required power (up to 3.3V / 3A) is delivered through the M.2 interface, so no external power connection is needed.
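
Once the module is seated, you can sanity-check that the host sees it on the PCIe bus before installing any software. This is a minimal sketch: the `deepx` match string is an assumption and the device may enumerate under a different name on your system, so adjust the pattern (or compare `lspci` output before and after installation):

```shell
#!/bin/bash
# Quick check that the DX-M1 is visible on the PCIe bus after seating it in
# the M.2 slot. NOTE: the "deepx" pattern is an assumption; the device may
# enumerate under a different name, so adjust it for your system.
check_dxm1() {
    if lspci 2>/dev/null | grep -qi 'deepx'; then
        echo "DX-M1 detected on the PCIe bus"
    else
        echo "DX-M1 not detected; re-check the M.2 seating and PCIe link"
    fi
}

check_dxm1
```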

#### **2. Software Installation**

After the module is physically installed, you must install the necessary software components from the DEEPX SDK, which is publicly available on GitHub.

* [DX Runtime (DX-RT)](https://github.com/DEEPX-AI/dx_rt)
* [NPU Driver](https://github.com/DEEPX-AI/dx_rt_npu_linux_driver)
* [Firmware (FW)](https://github.com/DEEPX-AI/dx_fw)

The installation must follow a specific order:

1. **Install DX Runtime (DX-RT) and NPU Driver**. The installation order between these two components does not matter.
2. **Install Firmware (FW)**. Once both the DX Runtime and NPU Driver are installed, proceed with the Firmware (FW) update.

{% hint style="info" %}
The Firmware (FW) can only be installed after both the DX Runtime and NPU Driver are installed and the DX-M1 M.2 module is connected to the system.
{% endhint %}

<details>

<summary>Shell script to install DX RT, NPU Driver and the Firmware.</summary>

```bash
#!/bin/bash

# Version mapping: DX RT version -> commit hashes for each repository
# Format: "version:dx_rt_commit:dx_rt_npu_linux_driver_commit:dx_fw_commit"
declare -A VERSION_MAP=(
    ["3.0.0"]="3.0.0:559f6f19665920d166a5aa1f51880fd72ee529f2:9b61de90a03aa9948eacc0709322fbca664a84cf:c2743976565d93767b0787bf9853d48ba3c993d1"
    ["3.1.0"]="3.1.0:458c62f14b1294909abe785d33fe4d39f49464a2:a90cd9616f6ebe1e10feafb1371e4ca11f0c2c48:d3e0004a70d7e6dd71b33e6fce8a849c9ef3369f"
)

# Function to get the latest supported version from VERSION_MAP
get_latest_version() {
    printf '%s\n' "${!VERSION_MAP[@]}" | sort -V | tail -n1
}

# Function to display available versions
show_versions() {
    local latest_version=$(get_latest_version)
    echo "Available DX RT versions:"
    # Sort versions numerically
    printf '%s\n' "${!VERSION_MAP[@]}" | sort -V | while read -r version; do
        if [ "$version" = "$latest_version" ]; then
            echo "  - $version (latest)"
        else
            echo "  - $version"
        fi
    done
}

# Function to get version selection from user
select_version() {
    show_versions
    echo ""
    read -p "Enter the DX RT version to install (default: latest): " selected_version
    
    if [ -z "$selected_version" ]; then
        selected_version="latest"
    fi
    
    # Resolve "latest" to the highest version in VERSION_MAP
    if [ "$selected_version" = "latest" ]; then
        selected_version=$(get_latest_version)
        echo "Resolved 'latest' to version: $selected_version"
    fi
    
    if [ -z "${VERSION_MAP[$selected_version]}" ]; then
        echo "Error: Invalid version '$selected_version'"
        echo ""
        show_versions
        exit 1
    fi
    
    echo "Selected version: $selected_version"
    SELECTED_VERSION=$selected_version
}

# Parse version mapping
parse_version_info() {
    local version=$1
    local version_info="${VERSION_MAP[$version]}"
    IFS=':' read -r -a parts <<< "$version_info"
    DX_RT_COMMIT="${parts[1]}"
    DRIVER_COMMIT="${parts[2]}"
    FW_COMMIT="${parts[3]}"
}

# Main installation script
select_version
parse_version_info "$SELECTED_VERSION"

echo "Creating installation directory"
mkdir -p deepx-installation
cd deepx-installation

HERE=$(pwd)

# Clone repositories
echo "Cloning repositories..."
git clone https://github.com/DEEPX-AI/dx_rt
git clone --recurse-submodules https://github.com/DEEPX-AI/dx_rt_npu_linux_driver
git clone https://github.com/DEEPX-AI/dx_fw

# Checkout specific versions
echo "Checking out version $SELECTED_VERSION..."

echo "  Checking out dx_rt to commit: $DX_RT_COMMIT"
cd "$HERE/dx_rt"
git checkout "$DX_RT_COMMIT" || {
    echo "Error: Failed to checkout commit $DX_RT_COMMIT in dx_rt"
    exit 1
}

echo "  Checking out dx_rt_npu_linux_driver to commit: $DRIVER_COMMIT"
cd "$HERE/dx_rt_npu_linux_driver"
git checkout "$DRIVER_COMMIT" || {
    echo "Error: Failed to checkout commit $DRIVER_COMMIT in dx_rt_npu_linux_driver"
    exit 1
}
git submodule update --init --recursive

echo "  Checking out dx_fw to commit: $FW_COMMIT"
cd "$HERE/dx_fw"
git checkout "$FW_COMMIT" || {
    echo "Error: Failed to checkout commit $FW_COMMIT in dx_fw"
    exit 1
}

# Install DX RT
echo "1. Installing DX RT..."
cd "$HERE/dx_rt"
./install.sh --all
./build.sh

# Install NPU driver
echo "2. Installing NPU driver..."
cd "$HERE/dx_rt_npu_linux_driver/modules"
sudo ./build.sh -c uninstall
./build.sh
sudo ./build.sh -c install

# Install firmware
echo "3. Installing firmware..."
cd "$HERE/dx_fw/m1/latest/mdot2"
dxrt-cli -u fw.bin force

# Go back to start directory
cd "$HERE/.."
sudo rm -rf deepx-installation
echo "Installation completed successfully. Please reboot your machine"
```

</details>

#### **3. Verifying Installation and Compatibility**

After completing the software installation, you can run verification commands to ensure the setup is correct and all components are compatible.

**Checking System Status**

To verify that the module is properly recognized and all components are running, use the `dxrt-cli --status` command:

```bash
dxrt-cli --status
```

A successful setup will produce an output similar to the following, displaying the correct versions for the runtime, driver, and firmware:

```bash
DXRT v3.1.0
=======================================================
 * Device 0: M1, Accelerator type
---------------------   Version   ---------------------
 * RT Driver version   : v1.8.0
 * PCIe Driver version : v1.6.0
-------------------------------------------------------
 * FW version          : v2.4.0
--------------------- Device Info ---------------------
 * Memory : LPDDR5 5600 Mbps, 3.92GiB
 * Board  : M.2, Rev 1.0
 * Chip Offset : 0
 * PCIe   : Gen3 X4 [01:00:00]

NPU 0: voltage 730 mV, clock 1000 MHz, temperature 48'C
NPU 1: voltage 730 mV, clock 1000 MHz, temperature 48'C
NPU 2: voltage 730 mV, clock 1000 MHz, temperature 48'C
=======================================================
```
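
For automation (e.g. checking a fleet of devices), the status output can be parsed with standard tools. A minimal sketch, assuming the `FW version` line format shown in the sample above for DXRT v3.1.0:

```shell
#!/bin/bash
# Sketch: pull the firmware version out of `dxrt-cli --status` output, e.g.
# for a monitoring script. Parsing assumes the "FW version" line format in
# the sample output above; adjust the pattern if your output differs.
parse_fw_version() {
    awk '/FW version/ {print $NF}'
}

# Usage on a live system:
#   dxrt-cli --status | parse_fw_version
```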

**Checking Version Compatibility**

To check the minimum required versions of the driver and firmware for your installed DX-RT version, run the `dxrt-cli -v` command:

```bash
dxrt-cli -v
```

The output will show the compatibility requirements as follows:

```bash
DXRT v3.1.0
Minimum Driver Versions
  Device Driver: v1.8.0
  PCIe Driver: v1.5.1
  Firmware: v2.4.0
Minimum Compiler Versions
  Compiler: v1.18.1
  .dxnn File Format: v6
```
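
When scripting compatibility checks, `sort -V` can compare an installed version against these minimums. A sketch, using version strings taken from the sample output above:

```shell
#!/bin/bash
# Sketch: compare an installed component version against the minimums printed
# by `dxrt-cli -v`, using `sort -V` for semantic-version ordering.
version_ge() {
    # succeeds when version $1 >= version $2
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

if version_ge "v2.4.0" "v2.4.0"; then
    echo "firmware meets the v2.4.0 minimum"
fi
```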

### Supported Models

The DEEPX SDK includes an [Open Model Zoo](https://developer.deepx.ai/wp-content/modelzoo/model_zoo_fin.html) that provides various pre-optimized neural network models for DEEPX NPUs.

All models are available as:

* Pretrained ONNX models
* Corresponding configuration .json files
* Precompiled DXNN binaries

These models cover tasks such as:

* Object Detection
* Face Detection
* Classification
* Other vision AI applications

### Creating a ZIP Archive for Model Conversion

To convert models for use with DEEPX Accelerators in the [Nx AI Cloud](https://admin.sclbl.nxvms.com/), models must be uploaded as a ZIP archive.

For instructions on preparing the archive, refer to this GitHub guide: [Prepare Your Model](https://github.com/OAAX-standard/deepx-acceleration/tree/main/conversion-toolchain#step-2-prepare-your-model)
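
As a minimal sketch of the packaging step, the archive can be created with Python's standard `zipfile` module. The file names below are placeholders standing in for your real ONNX model and its JSON configuration; see the linked guide for the exact required layout:

```shell
#!/bin/bash
# Sketch: create the upload ZIP with Python's stdlib zipfile module (no extra
# tooling needed). File names are placeholders; substitute your real model
# and DEEPX JSON configuration, laid out as the linked guide requires.
touch model.onnx model_config.json   # placeholders standing in for real files

python3 -m zipfile -c model.zip model.onnx model_config.json

# List the archive contents to verify before uploading:
python3 -m zipfile -l model.zip
```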

For guidance on JSON configuration files and supported ONNX operators, refer to the following documents:

* [JSON File Configuration](https://developer.deepx.ai/wp-content/docs/DX-M1/SDK%20User%20Guide/DX-COM_v1.60.1/docs/docs/02_05_JSON_File_Configuration.html)
* [Supported ONNX Operations](https://developer.deepx.ai/wp-content/docs/DX-M1/SDK%20User%20Guide/DX-COM_v1.60.1/docs/docs/03_Building_Models.html)

### Building a post-processor for the Nx AI Manager

Models deployed to DEEPX Accelerators using the Nx AI Manager require [custom external post-processing](https://nx.docs.scailable.net/nx-ai-manager/7.-advanced-configuration/7.1-external-post-processing) to transform the model’s raw outputs to meaningful insights such as bounding boxes.

For instance, for the [Yolov5 (320, final output only)](https://drive.google.com/file/d/1IdEaIHiXvFSwwpEAAlU81XVLk06r_9MX/view?usp=drive_link) trained on the COCO dataset, the following archive contains the source code and the recipe to build it:

[Yolov5 DEEPX post-processor archive](https://drive.google.com/file/d/1ViD5UPsVKWoIc9dCtlPq3qOS7i5kw69J/view?usp=sharing).

To compile and install it, you can run:

```bash
bash compile_install.sh
sudo systemctl restart networkoptix-metavms-mediaserver.service # For changes to take effect.
```

Finally, make sure to assign the compiled post-processor to the DEEPX model pipeline as shown [here](https://nx.docs.scailable.net/nx-ai-manager/configure-the-plugin/model-settings#postprocessor).

#### Important Notes

**DX-RT Build Option**

The provided Post-Processor functions correctly only when DX-RT is built with the `USE_ORT=ON` option. If you build DX-RT with the `USE_ORT=OFF` option, the number and shape of the output tensors sent to the Post-Processor will be different. Therefore, you must develop and use a separate Post-Processor specifically for that environment.

***

**Model Output Configuration**

We do not recommend using models that output per-layer feature maps in addition to the final output tensor required for post-processing. Such models can cause overall performance degradation due to unnecessary memory usage and data copying.

For example, in the case of a YOLOv5 320x320 model, the required final output tensor for post-processing is as follows:

* `output, FLOAT, [1, 6300, 85]`

However, some ONNX models may be configured to output unnecessary feature maps in addition to the final result, as shown below:

* `458, FLOAT, [1, 3, 20, 20, 85]`
* `output, FLOAT, [1, 6300, 85]`
* `397, FLOAT, [1, 3, 40, 40, 85]`
* `519, FLOAT, [1, 3, 10, 10, 85]`

Using a model with such unnecessary outputs can lead to the following problems:

* **Potential for Malfunction**: If the order of the output tensors passed to the Post-Processor changes, it may fail to find the correct tensor and will not function properly.
* **Performance Degradation**: Even when the Post-Processor functions without errors, the presence of these extra outputs still causes performance degradation. The Inference Engine must perform unnecessary memory allocation and copy operations for the unused feature maps, which leads to a decrease in overall **throughput** and an increase in **latency**.

Therefore, before converting an ONNX model to the `dxnn` model format, it is crucial to verify that the model does not output any unnecessary feature maps.

### Monitoring and Troubleshooting Tips

DX-RT includes essential tools like `dxrt-cli` and `dxtop`, designed for device status checks, real-time monitoring, firmware updates, and troubleshooting.

#### dxrt-cli – Command-line tool for device management

It supports operations such as device status checks, firmware updates, monitoring, resets, and diagnostics.

Example usage:

```bash
$ dxrt-cli -h
DXRT v3.1.0
DXRT v3.1.0 CLI
Usage:
  dxrt-cli [OPTION...]

  -s, --status             Get device status
  -i, --info               Get device info
  -m, --monitor arg        Monitoring device status every [arg] seconds
                           (arg > 0)
  -r, --reset [=arg(=0)]   Reset device(0: reset only NPU) (default: 0)
  -d, --device arg         Device ID (if not specified, CLI commands will
                           be sent to all devices.) (default: -1)
  -u, --fwupdate arg       Update firmware with deepx firmware file.
                           sub-option : [force:force update, unreset:device
                           unreset(default:reset)]
  -g, --fwversion arg      Get firmware version with deepx firmware file
  -C, --fwconfig_json arg  Update firmware settings from [JSON]
  -v, --version            Print minimum versions
  -h, --help               Print usage
```

#### dxtop – Real-time monitoring tool (Linux only)

dxtop is a terminal-based utility that provides real-time insights into:

* NPU Memory usage
* Utilization
* Temperature
* Voltage
* Clock frequency

Example usage:

```
[DX-TOP]  (q) Quit   (n) Next Page   (p) Prev Page   (h) Help
Mon Jan 19 16:59:48 2026
DX-RT: v3.1.0     NPU Device driver: v1.7.1     DX-TOP: v1.0.1
-------------------------------------------------------------------------------
Device :0     Variant: M1     PCIe Bus Number: 08:00:00     Firmware:  v2.4.0
  NPU Memory:  [                    ] 0 B / 3.92 GiB (0.00%)
    Core :0   Util:   0.0%   Temp: 40 °C   Voltage: 750 mV   Clock: 1000 MHz
    Core :1   Util:   0.0%   Temp: 40 °C   Voltage: 750 mV   Clock: 1000 MHz
    Core :2   Util:   0.0%   Temp: 40 °C   Voltage: 750 mV   Clock: 1000 MHz
-------------------------------------------------------------------------------

Total Devices: 1
Page: 1 / 1
```

### Technical Support

For technical assistance or questions, please contact us through the following channels:

* DEEPX Developer Portal: <https://developer.deepx.ai/>
* DEEPX Support Email: <tech_support@deepx.ai>
* Nx Customer Support Portal: <https://support.networkoptix.com/>

