Accelerate TensorFlow model inferencing
ONNX Runtime can accelerate inference for TensorFlow, TFLite, and Keras models.
Get Started
Export model to ONNX
TensorFlow
These examples use the TensorFlow-ONNX converter, which supports TensorFlow 1, 2, Keras, and TFLite model formats.
- TensorFlow: Object detection (EfficientDet)
- TensorFlow: Object detection (SSD MobileNet)
- TensorFlow: Image classification (EfficientNet-Edge)
- TensorFlow: Image classification (EfficientNet-Lite)
- TensorFlow: Natural Language Processing (BERT)
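Each of the tutorials above starts from the same conversion step. A minimal sketch using the tf2onnx command line, assuming a TensorFlow SavedModel directory; the paths and the opset value are placeholders:

```shell
# Convert a TensorFlow SavedModel to ONNX with tf2onnx.
# "saved_model/" and "model.onnx" are placeholder paths.
pip install tf2onnx
python -m tf2onnx.convert --saved-model saved_model/ --output model.onnx --opset 13
```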
TFLite
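tf2onnx can also consume TFLite flatbuffer files directly via its `--tflite` flag. A sketch with placeholder file names:

```shell
# Convert a TFLite model to ONNX; file names are placeholders.
python -m tf2onnx.convert --tflite model.tflite --output model.onnx --opset 13
```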
Keras
Keras models can be converted with either the tensorflow-onnx (tf2onnx) or keras2onnx converter. The tensorflow-onnx converter supports newer opsets and is more actively maintained.
- tf2onnx: Image classification (ResNet-50)
- keras2onnx: Image classification (EfficientNet)
- keras2onnx: Image classification (DenseNet)
- keras2onnx: Natural Language Processing (BERT)
- keras2onnx: Handwritten Digit Recognition (MNIST)