Deployment to eIQ Neutron NPU¶
eIQ AI Toolkit simplifies deploying machine learning models to NXP devices by providing tools that convert models into formats optimized for NPU-accelerated inference. Most NXP platforms require a quantized TF Lite model as the input for conversion. For platforms such as i.MX 8M Plus, a standard quantized TF Lite model is sufficient.
Devices equipped with a Neutron NPU require an additional conversion step. The output remains a quantized TF Lite model but is adapted specifically for Neutron execution.
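The first stage of this workflow, producing a quantized TF Lite model, can be sketched with the standard TensorFlow Lite converter. This is a minimal illustration, not the eIQ Toolkit's own tooling: the toy Dense model and the random representative dataset are placeholders for a trained network and real calibration samples, and the output filename is arbitrary.

```python
import numpy as np
import tensorflow as tf

# Placeholder model; a real deployment starts from a trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Representative samples calibrate the quantization ranges.
# Here they are random; in practice, use samples from the training data.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Request full-integer quantization, which NPU execution generally requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file can be deployed directly on platforms such as i.MX 8M Plus; for Neutron NPU targets it is the input to the additional conversion step described below.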
To complete the conversion process, refer to these guides:
Deploying machine learning models to devices – Learn the workflow for converting models to quantized TF Lite.
Converting a model for Neutron NPU – Perform the final conversion for Neutron-based platforms.