Conversion & Quantization
=========================

.. toctree::
   :hidden:

   convPytorchOnnx
   quantOnnx
   convOnnxTfLite
   tensorflow
   imx93
   neutron

.. note::

   See the :ref:`video-opt-pipeline` and other video tutorials on how to use the GUI.

NXP’s edge computing solutions require machine learning models in quantized
**TensorFlow Lite (TF Lite)** format for optimal performance and deployment.
However, most ML engineers develop and train models using popular frameworks
such as **PyTorch** or **TensorFlow**, which often produce formats not
directly compatible with NXP hardware.

**eIQ AI Toolkit** bridges this gap by offering robust conversion and
quantization tools that transform trained models into deployment-ready
formats. In particular, platforms such as

* **i.MX 93**
* **i.MX RT700**
* **MCX N series**

need models converted to specialized TF Lite variants. For platform-specific
deployment instructions, see the :doc:`Deployment Guide <../deploy/index>`.

In the following guides, you’ll find all the steps needed to convert models
into quantized TF Lite format. Specifically, you’ll learn how to:

* :doc:`Convert a TensorFlow model to quantized TFLite <./tensorflow>`
* :doc:`Convert a PyTorch model to ONNX <./convPytorchOnnx>`
* :doc:`Quantize an ONNX model <./quantOnnx>`
* :doc:`Convert a quantized ONNX model to quantized TFLite <./convOnnxTfLite>`
* :doc:`Convert quantized TFLite to the format supported by the i.MX 93 NPU <./imx93>`
* :doc:`Convert quantized TFLite to the format supported by the Neutron NPU <./neutron>`

These guides include practical examples demonstrating the **eIQ AI Toolkit**’s
conversion API. For complete API documentation:

1. Set up the **eIQ AI Toolkit** using the :doc:`Installation Guide <../tools/aiToolkit/installRun>`
2. Launch the application
3. Visit http://localhost:8000/docs for interactive API documentation
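As background for the quantization guides above: TF Lite’s 8-bit quantization uses the affine mapping *real_value = (int8_value − zero_point) × scale*. The following is a minimal Python sketch of that arithmetic only, to clarify what “quantizing” a tensor value means; the helper names are illustrative and are not part of the eIQ AI Toolkit API.

```python
def quantize(x, scale, zero_point):
    """Map a float to int8 under the affine scheme
    real = (q - zero_point) * scale, i.e. q = round(x / scale) + zero_point.
    Results outside the int8 range are clamped to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float value from its int8 code."""
    return (q - zero_point) * scale

# Example: values in roughly [-1.0, 1.0] with scale 1/127 and zero point 0.
scale, zero_point = 1.0 / 127, 0
q = quantize(0.5, scale, zero_point)   # 0.5 / (1/127) = 63.5, rounds to 64
x = dequantize(q, scale, zero_point)   # close to 0.5, but not exact
print(q, x)
```

The gap between ``x`` and the original 0.5 is the quantization error; the calibration step in the guides above chooses ``scale`` and ``zero_point`` per tensor to keep that error small over representative input data.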