Conversion & Quantization
Note
See Chapter 4 - Optimization Pipeline and the other video tutorials to learn how to use the GUI.
NXP’s edge computing solutions require machine learning models in quantized TensorFlow Lite (TF Lite) format for optimal performance and deployment. However, most ML engineers develop and train models using popular frameworks such as PyTorch or TensorFlow, which often produce formats not directly compatible with NXP hardware.
eIQ AI Toolkit bridges this gap by offering robust conversion and quantization tools that transform trained models into deployment-ready formats.
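The eIQ AI Toolkit wraps this conversion flow in its own tooling, but the underlying mechanism follows TensorFlow's documented post-training quantization API. The sketch below is a minimal, self-contained illustration, not the Toolkit's own code: the tiny Keras model, the random calibration data, and the `model_int8.tflite` filename are stand-ins; in practice you would load your trained model and draw calibration samples from real training data.

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in practice, load your trained model instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Calibration data: a few batches that should mimic real inputs.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization, as required by integer-only NPUs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()  # returns the model as bytes
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The representative dataset drives calibration: the converter observes the range of activations on those samples to choose the int8 scale and zero-point for each tensor, so the samples should resemble real deployment inputs.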
In particular, platforms such as
i.MX 93
i.MX RT700
MCX N series
require models converted to specialized TF Lite variants. For platform-specific deployment instructions, see the Deployment Guide.
In the following guides, you’ll find all the steps needed to convert models into quantized TFLite format. Specifically, you’ll learn how to:
Convert quantized TFLite to the format supported by i.MX 93 NPU
Convert quantized TFLite to the format supported by Neutron NPU
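Whichever target you convert for, it is worth sanity-checking the converted model on the host with the standard TF Lite interpreter before running a platform-specific conversion step. A minimal sketch, using a tiny in-memory model as a stand-in (in practice, pass `model_path=` pointing at your quantized `.tflite` file):

```python
import numpy as np
import tensorflow as tf

# Stand-in: build and convert a tiny model in memory.
# In practice: tf.lite.Interpreter(model_path="your_model.tflite")
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input of the expected shape and dtype, then run inference.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)  # (1, 2)
```

Inspecting `get_input_details()` / `get_output_details()` also lets you confirm that tensor dtypes are int8 after full-integer quantization, which the NPU-specific converters expect.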
These guides include practical examples demonstrating the eIQ AI Toolkit’s conversion API. For complete API documentation:
Set up the eIQ AI Toolkit using the Installation Guide
Launch the application
Visit http://localhost:8000/docs for interactive API documentation