Deployment

Note

See the Chapter 4 - Optimization Pipeline and other video tutorials to learn how to use the GUI.

Deploying a machine learning model to an NXP platform involves converting the model into the required format. Once converted, it is up to the user to load the model onto the device and integrate it into an application.

The standard model format for NXP platforms is quantized TF Lite. Some platforms require a custom variant of this format. The individual conversion steps are described in detail in the Conversion and Quantization guide.
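As a quick check of this format, the sketch below loads a TF Lite model and inspects its input and output tensors; a fully quantized model reports an integer dtype together with quantization parameters. This is only an illustration, and the file name is a placeholder.

```python
import tensorflow as tf

# Load a quantized TF Lite model (file name is a placeholder).
interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
interpreter.allocate_tensors()

# A fully quantized model reports an integer dtype (e.g. int8 or uint8)
# and non-trivial (scale, zero_point) quantization parameters.
for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail["name"], detail["dtype"], detail["quantization"])
```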

Depending on the type of model obtained during training or fine-tuning, converting it to quantized TF Lite may involve multiple steps. This guide will show you how to combine these steps into a single workflow.
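As one possible shape of such a workflow, the following sketch performs post-training full integer quantization with the standard TensorFlow Lite converter. It assumes a trained Keras model with a 224x224x3 input; the model path, input shape, and output file name are placeholders, and the random representative dataset stands in for real calibration samples.

```python
import numpy as np
import tensorflow as tf

# Load the trained Keras model (path is a placeholder).
model = tf.keras.models.load_model("my_model.h5")

# Full integer quantization needs a representative dataset to calibrate
# activation ranges. Replace the random data with real samples from the
# training or validation set; the shape (1, 224, 224, 3) is an assumption.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 kernels so the whole graph is quantized.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

With real calibration data in place of the random generator, this single script covers both the conversion and quantization steps in one pass.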

You will learn: