Build ONNX Runtime for training

CPU

Build Instructions

To build ONNX Runtime with training support, add --enable_training to the build command.

All other build options are the same for inferencing as they are for training.

Windows

.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --enable_training

The default CMake generator on Windows is Visual Studio 2017. To use the newer Visual Studio 2019 instead, pass --cmake_generator "Visual Studio 16 2019" to .\build.bat.
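
For example, to run the same training build with the Visual Studio 2019 generator:

.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --enable_training --cmake_generator "Visual Studio 16 2019"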

Linux/macOS

./build.sh --config RelWithDebInfo --build_shared_lib --parallel --enable_training

GPU / CUDA

Prerequisites

The default NVIDIA GPU build requires the CUDA runtime libraries to be installed on the system.

The versions of these dependencies should match those used in the Dockerfiles.
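
To check the locally installed versions before building, you can query the toolkit directly. This is a minimal sketch; the cuDNN header path is an assumption and varies by cuDNN version and install method:

nvcc --version                                                  # CUDA toolkit/compiler version
grep -A 2 "CUDNN_MAJOR" /usr/local/cuda/include/cudnn_version.h # cuDNN version (header path is an assumption)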

Build instructions

  1. Check out the code repo with git clone https://github.com/microsoft/onnxruntime

  2. Set the environment variables, adjusting each path to its location on your build machine:
     export CUDA_HOME=<location for CUDA libs> # e.g. /usr/local/cuda
     export CUDNN_HOME=<location for cuDNN libs> # e.g. /usr/local/cuda
     export CUDACXX=<location for NVCC> #e.g. /usr/local/cuda/bin/nvcc
     export PATH=<location for openmpi/bin/>:$PATH
     export LD_LIBRARY_PATH=<location for openmpi/lib/>:$LD_LIBRARY_PATH
     export MPI_CXX_INCLUDE_PATH=<location for openmpi/include/>
    
  3. Create the ONNX Runtime wheel

    • Change to the ONNX Runtime repo base folder: cd onnxruntime
    • Run ./build.sh --enable_training --use_cuda --config RelWithDebInfo --build_wheel

    This produces the .whl file in ./build/Linux/RelWithDebInfo/dist for ONNX Runtime Training.
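
Once built, the wheel can be installed and smoke-tested. A sketch, assuming the default output path; the exact wheel filename depends on the ONNX Runtime version and your Python version:

pip install ./build/Linux/RelWithDebInfo/dist/onnxruntime_training-*.whl  # filename pattern is an assumption
python -c "import onnxruntime; print(onnxruntime.__version__)"            # confirm the package imports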

GPU / ROCm

Prerequisites

The default AMD GPU build requires the ROCm software toolkit to be installed on the system.

The versions of these dependencies should match those used in the Dockerfiles.
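
To confirm the local ROCm installation before building, a minimal sketch assuming ROCm is installed under /opt/rocm:

/opt/rocm/bin/rocminfo | head    # confirm a GPU agent is visible to ROCm
/opt/rocm/bin/hipcc --version    # HIP compiler version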

Build instructions

  1. Check out the code repo with git clone https://github.com/microsoft/onnxruntime

  2. Create the ONNX Runtime wheel

    • Change to the ONNX Runtime repo base folder: cd onnxruntime
    • Run ./build.sh --config RelWithDebInfo --enable_training --build_wheel --use_rocm --rocm_home /opt/rocm --nccl_home /opt/rocm --mpi_home <location for openmpi>

    This produces the .whl file in ./build/Linux/RelWithDebInfo/dist for ONNX Runtime Training.
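
As with the CUDA build, the wheel can then be installed and smoke-tested (the wheel filename pattern below is an assumption):

pip install ./build/Linux/RelWithDebInfo/dist/onnxruntime_training-*.whl
python -c "import onnxruntime; print(onnxruntime.__version__)"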

DNNL and MKLML

Build Instructions

Linux

./build.sh --enable_training --use_dnnl

Windows

.\build.bat --enable_training --use_dnnl

Add --build_wheel to the build command to produce the ONNX Runtime wheel.

On Linux, this produces a .whl file in build/Linux/RelWithDebInfo/dist for ONNX Runtime Training.
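
For example, on Linux (a sketch; the --config value matches the output path above, and the wheel filename pattern is an assumption):

./build.sh --config RelWithDebInfo --enable_training --use_dnnl --build_wheel
pip install ./build/Linux/RelWithDebInfo/dist/onnxruntime_training-*.whl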