Command Line Usage

To know which passes to run, along with their inputs, outputs, and arguments, olive uses run configurations. A run configuration is a JSON file passed to the command-line tool that defines the entire workflow.

If you already have your own configuration file, update the path in the examples below accordingly. If not, an example configuration is provided for you to download and review.

1. Prepare configuration

[ ]:
from pathlib import Path
import requests

# Change this if you have your own configuration file.
config_path = Path("my_path_to_config.json")
[ ]:
# Alternatively, download the example configuration.
example_config_url = "https://eiq.nxp.com/training-materials/_misc/config.json"

response = requests.get(url=example_config_url)

if response.status_code != 200:
    print(f"Failed to download configuration: {response.content}")
else:
    with open(config_path, "wb") as f:
        f.write(response.content)
    print(f"Configuration file downloaded and saved to {config_path}.")

The example JSON file contains the configuration shown below. The first part of the configuration defines the input model for the entire workflow. Notice that the path is set to input_model.onnx. In the next step, we download this model and use it with the eiq-olive package. If you want to use your own model instead, make sure to update the path in the configuration file.

The next section of the configuration file describes the passes applied to the input model.

In this example, the workflow uses:

  • TFLiteConversion pass — converts the input ONNX model into a TFLite model

  • VelaConversion pass — converts the resulting TFLite model for deployment on the i.MX 93 NPU

For more details on specific passes and their available parameters, see the Conversion and Quantization guide.

{
  "input_model": {
    "type": "onnxmodel",
    "config": {
      "model_path": "input_model.onnx"
    }
  },
  "systems": {
    "local_system": {
      "type": "LocalSystem"
    }
  },
  "passes": {
    "0": [
      {
        "type": "TFLiteConversion",
        "config": {}
      }
    ],
    "1": [
      {
        "type": "VelaConversion",
        "config": {}
      }
    ]
  },
  "engine": {
    "log_severity_level": 0,
    "host": "local_system",
    "target": "local_system",
    "evaluate_input_model": false,
    "clean_cache": true,
    "cache_dir": "cache_dir",
    "output_dir": "models_dir"
  }
}
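As a quick sanity check, the configuration can be parsed with Python's `json` module to confirm it is valid JSON and to list the passes in the order they will run. This is a small sketch, not part of the olive workflow itself; the configuration shown above is embedded as a string here so the cell is self-contained, but in the notebook you would read `config_path` instead.

```python
import json

# The example configuration from above, embedded for a self-contained cell.
config_text = """
{
  "input_model": {
    "type": "onnxmodel",
    "config": {
      "model_path": "input_model.onnx"
    }
  },
  "systems": {
    "local_system": {
      "type": "LocalSystem"
    }
  },
  "passes": {
    "0": [
      {
        "type": "TFLiteConversion",
        "config": {}
      }
    ],
    "1": [
      {
        "type": "VelaConversion",
        "config": {}
      }
    ]
  },
  "engine": {
    "log_severity_level": 0,
    "host": "local_system",
    "target": "local_system",
    "evaluate_input_model": false,
    "clean_cache": true,
    "cache_dir": "cache_dir",
    "output_dir": "models_dir"
  }
}
"""

config = json.loads(config_text)

print("Input model:", config["input_model"]["config"]["model_path"])
for step in sorted(config["passes"]):
    for p in config["passes"][step]:
        print(f"Step {step}: {p['type']}")
```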

2. Prepare model

Now we download an example model to use in the conversion workflow. If you prefer to use your own model, simply set the path to its location and remember to update the JSON configuration file accordingly.

[ ]:
model_path = Path("input_model.onnx")
[ ]:
example_model_url = "https://eiq.nxp.com/training-materials/_misc/models/quantized_model.onnx"

response = requests.get(url=example_model_url)

if response.status_code != 200:
    print(f"Failed to download model: {response.content}")
else:
    with open(model_path, "wb") as f:
        f.write(response.content)
    print(f"Model downloaded and saved to {model_path}.")
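Before running the workflow, it can be worth confirming that the download actually produced a file on disk. The helper below is a minimal sketch (not part of olive) that checks the file exists and is non-empty; in the notebook you would call it with `model_path` rather than the throwaway file used here for illustration.

```python
from pathlib import Path


def looks_downloaded(path: Path, min_bytes: int = 1) -> bool:
    """Return True if the file exists and is at least min_bytes long."""
    return path.is_file() and path.stat().st_size >= min_bytes


# Demonstration with a throwaway file; pass model_path in the notebook.
tmp = Path("check_example.bin")
tmp.write_bytes(b"\x08\x01")  # placeholder bytes, not a real model
print(looks_downloaded(tmp))  # True for a non-empty file
tmp.unlink()
```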

3. Run workflow

With both the model and the configuration file ready, we can now run the conversion workflow.

[ ]:
!python -m olive run --run-config $config_path
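The same command can also be launched from a plain Python script rather than a notebook cell. The sketch below builds the equivalent command list; the actual `subprocess.run` call is left commented out because it requires the eiq-olive package to be installed.

```python
import shlex
import sys
from pathlib import Path

config_path = Path("my_path_to_config.json")

# Same invocation as the notebook cell, expressed as an argument list.
cmd = [sys.executable, "-m", "olive", "run", "--run-config", str(config_path)]
print("Would run:", shlex.join(cmd))

# Uncomment to actually run the workflow (requires eiq-olive installed):
# import subprocess
# subprocess.run(cmd, check=True)
```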

If you want to see additional command-line options, run:

[ ]:
!python -m olive run -h
!python -m olive -h