
Accelerate TensorFlow model inferencing

ONNX Runtime can accelerate inferencing for TensorFlow, TFLite, and Keras models.


Get Started
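Once a model has been exported to ONNX (see the next section), it can be loaded and run with the onnxruntime Python package. The sketch below is a minimal example; the model path, input shape, and execution provider are placeholder assumptions, not values from a specific model.

```python
import numpy as np
import onnxruntime as ort

# Load the exported model; CPUExecutionProvider is the portable default choice.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's input name and feed a dummy tensor (shape is a placeholder).
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 4).astype(np.float32)

# Passing None for the output names returns every model output.
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0])
```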

Export model to ONNX

TensorFlow

The examples below use the TensorFlow-ONNX (tf2onnx) converter, which supports TensorFlow 1.x, TensorFlow 2.x, Keras, and TFLite model formats.
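As a concrete illustration, the sketch below converts a tf.function to ONNX with the tf2onnx Python API; the toy function, input signature, opset, and output path are all placeholder assumptions. For a SavedModel on disk, the converter's command-line entry point (python -m tf2onnx.convert --saved-model <dir> --output model.onnx) is the usual route.

```python
import tensorflow as tf
import tf2onnx

# A trivial tf.function standing in for a real TensorFlow model (hypothetical).
@tf.function
def model(x):
    return tf.nn.relu(x)

# Convert the function to an ONNX model. Opset 13 is an assumption;
# choose one supported by your ONNX Runtime version.
spec = (tf.TensorSpec((None, 4), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_function(
    model, input_signature=spec, opset=13, output_path="model.onnx"
)
```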

TFLite
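The converter can also take an existing TFLite flatbuffer as input. A minimal sketch follows; the file names and opset are placeholders, and the from_tflite Python API is assumed to be available in recent tf2onnx releases (the command-line converter exposes the same functionality through a --tflite flag).

```python
import tf2onnx

# Convert an existing TFLite flatbuffer to ONNX.
# "model.tflite", "model.onnx", and opset 13 are placeholder values.
onnx_model, _ = tf2onnx.convert.from_tflite(
    "model.tflite", opset=13, output_path="model.onnx"
)
```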

Keras

Keras models can be converted with either the tensorflow-onnx or the keras-onnx converter. The tensorflow-onnx converter is recommended, as it supports newer opsets and is more actively maintained.
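A minimal sketch of the tensorflow-onnx route is shown below; the toy model, input signature, and opset are placeholder assumptions.

```python
import tensorflow as tf
import tf2onnx

# A small Keras model standing in for a real one (hypothetical).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the in-memory Keras model directly; opset 13 is an assumption.
spec = (tf.TensorSpec((None, 4), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="model.onnx"
)
```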
