NX AI Certification Test
Quickstart
The NX AI Certification Test should run on any Ubuntu installation that is also compatible with the Network Optix Server.
Ensure your device has at least a few gigabytes of free space, a working internet connection, and working Python 3 and pip installations. Then execute the following commands:
Running the `python3 Utilities/install_acceleration_library.py` command will automatically detect the hardware acceleration available on your device. If the script finds more than one option, it will pause execution and ask you to choose. To run without pausing, append the accelerator name to the command, as in the "Nx CPU" example below.
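As a sketch, assuming the test suite has been downloaded and extracted into the current directory, the two invocation styles described above might look like this (only the script path and the "Nx CPU" accelerator name come from this guide; other accelerator names depend on your hardware):

```shell
# Detect available hardware acceleration automatically; the script
# pauses and asks you to choose if it finds more than one option:
python3 Utilities/install_acceleration_library.py

# Or name the accelerator up front so the script never pauses
# ("Nx CPU" is the example accelerator name used in this guide):
python3 Utilities/install_acceleration_library.py "Nx CPU"
```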
The test will run for a couple of hours and will stress the device. Do not power off your device.
Introduction
The Nx AI Certification Test checks that a device is compatible with the Nx AI Manager and stable enough to run it for extended periods of time.
The test attempts to run several common model architectures at different sizes, as well as multiple instances of those models, to check compatibility and establish what the device can be expected to run. It also runs long-duration tests to detect memory issues or performance degradation caused by overheating or other factors.
Finally, the test will gather all results in a folder which can be used to generate a report, either on the device or somewhere else. The report should contain enough information to determine if the device is indeed compatible.
Installing NxAI Manager
A script is provided to install the NX AI Manager runtime locally within the test environment, ensuring that testing does not interfere with any existing installations on the device.
The script can also install different acceleration libraries, selected via command-line arguments. For CPU acceleration:
For NVIDIA CUDA acceleration:
For Hailo acceleration:
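As a sketch, and assuming the installation script accepts the accelerator name as an argument as described in the Quickstart, the three variants might look like this (only "Nx CPU" appears in this guide; the CUDA and Hailo accelerator names below are assumptions):

```shell
# CPU acceleration (accelerator name as used in this guide):
python3 Utilities/install_acceleration_library.py "Nx CPU"

# NVIDIA CUDA acceleration (hypothetical accelerator name):
python3 Utilities/install_acceleration_library.py "NVIDIA CUDA"

# Hailo acceleration (hypothetical accelerator name):
python3 Utilities/install_acceleration_library.py "Hailo"
```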
It is also possible to install multiple runtimes and run the NxAI Certification test on all of them: simply run the installation script once for each runtime you would like to test.
Running test
The NxAI Certification test includes a collection of tests that exercise different aspects of your device to ensure the NxAI Toolkit can run on it. To run all of them as one large test, run:
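The exact entry point ships with the test suite; as a sketch, assuming a top-level runner script (the name below is an assumption, not confirmed by this guide):

```shell
# Run the full certification suite from the test-suite root
# (script name is hypothetical; use the runner provided with the suite):
python3 run_tests.py
```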
This could take many hours to complete.
Uploading Results
An endpoint is available where you can upload and view your device's test results. After the test has completed, run the following command to automatically upload the results to the cloud:
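The upload command ships with the suite; a sketch, assuming a helper script in the same Utilities folder as the installer (the script name is an assumption):

```shell
# Upload the gathered test results to the NxAI Cloud endpoint
# (script name is hypothetical; use the uploader provided with the suite):
python3 Utilities/upload_results.py
```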
Generating Report
A script is provided to generate a report. Generating the report requires additional dependencies, which can be installed with:
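As a sketch, assuming the report dependencies are listed in a pip requirements file shipped with the suite (the file path is an assumption):

```shell
# Install the report-generation dependencies
# (requirements file path is hypothetical; see the suite's documentation):
python3 -m pip install -r Report/requirements.txt
```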
Finally, the report can be generated with:
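The generator script ships with the suite; a sketch with an assumed script name:

```shell
# Generate the Markdown report from the gathered results
# (script name is hypothetical; use the generator provided with the suite):
python3 generate_report.py
```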
This will create a "Report" folder containing a file called 'report.md', which is the generated report in Markdown format.
If the dependencies cannot be installed on the target device, or space is limited, the "Results" folder can be copied to another device and the report generated there instead.
Custom Model Benchmark
The Custom Model Benchmark can only be used to benchmark models which have been uploaded to the NxAI Cloud.
The test allows you to benchmark your own models on different devices, and the Certification Test includes the functionality to add your own models to its benchmark. If you already have the full test suite downloaded, you can skip downloading and extracting the benchmark package. If you are only interested in benchmarking a model, you can instead download the smaller benchmark package:
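As a sketch, assuming the package is distributed as a gzipped tarball (the URL below is a placeholder, not the real download link; use the link provided by Network Optix):

```shell
# Download and extract the standalone benchmark package
# (placeholder URL -- substitute the official download link):
wget https://example.com/nxai-benchmark.tgz -O benchmark.tgz
tar -xzf benchmark.tgz
```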
Install an NxAI Manager with a runtime by following the steps at Installing NxAI Manager.
Next, add as many models as you want to benchmark by running:
And entering the ID of the model you want to test. The ID of your model can be found by navigating to your model in the model cloud. For example:
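The add-model command ships with the suite; a sketch, with an assumed script name and interaction:

```shell
# Register a model for benchmarking (script name is hypothetical);
# when prompted, enter the model ID found in the NxAI model cloud:
python3 add_model.py
```

Repeat this step once per model you want to include in the benchmark.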
After you have successfully added all the models you want to benchmark, you can download the model files from the cloud by running:
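A sketch of the download step, again with an assumed script name:

```shell
# Fetch the model files for all registered models from the NxAI Cloud
# (script name is hypothetical; use the downloader provided with the suite):
python3 download_models.py
```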
Once all your models have been downloaded, start the benchmark by running:
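A sketch of the benchmark invocation, with an assumed script name:

```shell
# Benchmark every downloaded model on each installed runtime
# (script name is hypothetical; use the runner provided with the suite):
python3 run_benchmark.py
```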
The test should now automatically benchmark all the models you've added and give you an overview of how each model performs on this device.
Troubleshooting
If the device is working well but the test is not passing, feel free to contact us for support. To make it easier for us to help, please include log files so that we can see what is going wrong on the device.
Make sure you're in the root folder of the test suite and run the following command to gather all the log files:
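The exact log-gathering command ships with the suite; as a sketch, an equivalent archive could be produced with tar, assuming the log files live in a "Logs" directory (that directory name is an assumption about the suite's layout):

```shell
# Bundle all log files into a single archive for a support request;
# the "Logs" directory name is an assumed layout, not confirmed by this guide.
tar -czf test_logs.tgz Logs/
```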
This will create a file called test_logs.tgz; please include this file with your support request.