ArmNN Execution Provider
Accelerate the performance of ONNX model workloads on Armv8 cores with the ArmNN execution provider. ArmNN is an open-source inference engine maintained by Arm and Linaro.
Build
For build instructions, please see the BUILD page.
Usage
C/C++
To use ArmNN as the execution provider for inferencing, register it as shown below.
#include <onnxruntime_cxx_api.h>

Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
Ort::SessionOptions so;
// Enable the CPU memory arena for the provider's allocator
bool enable_cpu_mem_arena = true;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_ArmNN(so, enable_cpu_mem_arena));
Details of the C API are documented here.
Performance Tuning
For performance tuning, please see the guidance on this page: ONNX Runtime Perf Tuning
When using onnxruntime_perf_test, select ArmNN with the flag -e armnn.
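As a minimal sketch of such a run, the invocation below benchmarks a model on the ArmNN execution provider; the model path and the repetition count passed to -r are placeholders, not values from this document.

```shell
# Benchmark a model on the ArmNN execution provider (-e armnn).
# ./model.onnx and the -r repetition count are placeholder values.
./onnxruntime_perf_test -e armnn -r 100 ./model.onnx
```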