{ "cells": [ { "cell_type": "markdown", "id": "31cd209d-39a4-4e0b-b34a-ced26db5b7d3", "metadata": {}, "source": [ "# i.MX 8M Plus" ] }, { "cell_type": "markdown", "id": "c06c3b4b-01c8-4b99-9b30-7c8bd7a676cb", "metadata": {}, "source": [ "This guide walks you through profiling a TensorFlow Lite model on the i.MX 8M Plus MPU using the **eIQ AI Toolkit** On-device Profiling feature. It covers the essential steps required to register your hardware and model, and highlights useful endpoints for profiling tasks.\n", "\n", "What You'll Learn:\n", "- How to register the i.MX8MP MPU device in **eIQ AI Toolkit**\n", "- How to upload and register a TF Lite model for profiling\n", "- Key **eIQ AI Toolkit** API endpoints relevant to model profiling\n", "\n", "*Note: This guide is specifically focused on profiling TF Lite models. Other model formats may require different steps or configurations.*\n", "\n", "*Note: This guide was developed and run using Python 3.11.*" ] }, { "cell_type": "markdown", "id": "0401283f", "metadata": {}, "source": [ "This guide requires the **eIQ AI Toolkit** backend to be running.\n", "If you haven't set it up yet, please refer to the following tutorial:\n", "[eIQ AI Toolkit setup & launch](../../tools/aiToolkit/installRun.ipynb)" ] }, { "cell_type": "code", "execution_count": null, "id": "dee64a28", "metadata": {}, "outputs": [], "source": [ "import requests\n", "from pathlib import Path\n", "\n", "# Set your eIQ AI Toolkit URL:\n", "AI_TOOLKIT_BACKEND_URL = \"http://localhost:8000\"" ] }, { "cell_type": "markdown", "id": "bbec1ba0", "metadata": {}, "source": [ "## Device" ] }, { "cell_type": "markdown", "id": "58edee4f", "metadata": {}, "source": [ "Ensure that the target device is properly set up and reachable by the **eIQ AI Toolkit** backend before initiating the profiling process:" ] }, { "cell_type": "code", "execution_count": null, "id": "6adeccfb", "metadata": {}, "outputs": [], "source": [ "device_url = 
\"http://your_device_url\"\n", "device_name = \"your_custom_device_name\"\n", "\n", "response = requests.post(\n", "    url=f\"{AI_TOOLKIT_BACKEND_URL}/devices\",\n", "    json={\n", "        \"url_address\": device_url,\n", "        \"name\": device_name,\n", "    }\n", ")\n", "\n", "print(response.json())" ] }, { "cell_type": "markdown", "id": "1d68e88d", "metadata": {}, "source": [ "Once the device has been successfully added to **eIQ AI Toolkit**, its details can be retrieved using the following endpoint:" ] }, { "cell_type": "code", "execution_count": null, "id": "a042fc4a", "metadata": {}, "outputs": [], "source": [ "response = requests.get(\n", "    url=f\"{AI_TOOLKIT_BACKEND_URL}/devices/info\",\n", "    params={\n", "        \"url_address\": device_url,\n", "    }\n", ")\n", "\n", "print(response.json())" ] }, { "cell_type": "markdown", "id": "fc323842", "metadata": {}, "source": [ "To proceed with profiling, the device info must report `device_available`: `True`.\n", "\n", "The response also indicates whether the *modelrunner* utility is installed on the device, along with its version. 
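Before moving on, you may want to gate on the `device_available` flag programmatically. The next cell is a sketch: the `assert_device_available` helper is hypothetical, and the flat placement of `device_available` in the `/devices/info` response payload is an assumption." ] }, { "cell_type": "code", "execution_count": null, "id": "4a1b2c3d5e6f7081", "metadata": {}, "outputs": [], "source": [ "def assert_device_available(info: dict) -> None:\n", "    # `device_available` is the field documented above; its exact placement\n", "    # in the /devices/info payload is an assumption and may differ.\n", "    if not info.get(\"device_available\", False):\n", "        raise RuntimeError(\"Device is not available for profiling\")\n", "\n", "# Example with the /devices/info response from the previous cell:\n", "# assert_device_available(response.json())" ] }, { "cell_type": "markdown", "id": "5b8e2d4f7a1c9e30", "metadata": {}, "source": [ "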
Even if the utility appears to be installed, it is recommended to invoke the following endpoint to make sure the device has the recommended version of *modelrunner* installed:" ] }, { "cell_type": "code", "execution_count": null, "id": "b5b9eb8d", "metadata": {}, "outputs": [], "source": [ "response = requests.post(\n", "    url=f\"{AI_TOOLKIT_BACKEND_URL}/devices/modelrunner_install\",\n", "    params={\n", "        \"url_address\": device_url,\n", "    }\n", ")\n", "\n", "print(response.json())" ] }, { "cell_type": "markdown", "id": "8baa9c4a", "metadata": {}, "source": [ "## Model" ] }, { "cell_type": "markdown", "id": "79aabe3b", "metadata": {}, "source": [ "If you already have a trained model ready, simply update the path to point to its location.\n", "If you don’t have a trained model yet, set the path to a location where the model should be saved.\n", "(See the following sections for instructions on how to download a sample model.)" ] }, { "cell_type": "code", "execution_count": null, "id": "0460cd26", "metadata": {}, "outputs": [], "source": [ "model_path = Path(\"your_model_path.tflite\")" ] }, { "cell_type": "markdown", "id": "af40bec09b417432", "metadata": {}, "source": [ "Use the following script to download the example model:\n", "\n", "*Note: Skip this step if you already have your own model.*" ] }, { "cell_type": "code", "execution_count": null, "id": "f4cdf90ec83e3bb8", "metadata": {}, "outputs": [], "source": [ "example_model_url = \"https://eiq.nxp.com/training-materials/_misc/models/mobilenet_v3-small_224_1.0_uint8.tflite\"\n", "\n", "response = requests.get(url=example_model_url)\n", "response.raise_for_status()  # Stop early instead of writing an error page to disk\n", "\n", "with open(model_path, \"wb\") as f:\n", "    f.write(response.content)" ] }, { "cell_type": "markdown", "id": "1e553fa8", "metadata": {}, "source": [ "### Upload model to eIQ AI Toolkit" ] }, { "cell_type": "markdown", "id": "0b0b2db3", "metadata": {}, "source": [ "Uploading a model to **eIQ AI Toolkit** consists of two steps:\n", "\n", "1. 
Upload Metadata - This includes information such as the model name, format (e.g., TF Lite), input/output shapes, and other relevant attributes.\n", "\n", "2. Upload Model File - After the metadata is registered, the actual model file (e.g., .tflite) is uploaded to the platform." ] }, { "cell_type": "markdown", "id": "90ec8ca3", "metadata": {}, "source": [ "Submit the metadata:" ] }, { "cell_type": "code", "execution_count": null, "id": "baa7de76", "metadata": {}, "outputs": [], "source": [ "response = requests.post(\n", "    url=f\"{AI_TOOLKIT_BACKEND_URL}/models\",\n", "    params={\n", "        \"model_name\": \"your_custom_model_name\",\n", "    },\n", "    json={\n", "        \"model_type\": \"tflite\"\n", "    }\n", ")\n", "\n", "data = response.json()\n", "print(data)\n", "model_uuid = data[\"data\"][\"model\"][\"uuid\"]  # Assigned model identifier" ] }, { "cell_type": "markdown", "id": "e051255e", "metadata": {}, "source": [ "Upload the model file:" ] }, { "cell_type": "code", "execution_count": null, "id": "6234175a", "metadata": {}, "outputs": [], "source": [ "with open(model_path, \"rb\") as model_file:\n", "    response = requests.post(\n", "        url=f\"{AI_TOOLKIT_BACKEND_URL}/models/{model_uuid}\",  # Model identifier is part of the request URL\n", "        files={\n", "            \"model_file\": model_file,\n", "        }\n", "    )\n", "\n", "print(response.json())" ] }, { "cell_type": "markdown", "id": "62282b36", "metadata": {}, "source": [ "After uploading the model metadata and file, you can verify the model's registration and readiness status using the following endpoint:" ] }, { "cell_type": "code", "execution_count": null, "id": "fcdcdfcdc47c5433", "metadata": {}, "outputs": [], "source": [ "response = requests.get(f\"{AI_TOOLKIT_BACKEND_URL}/models/{model_uuid}\")\n", "data = response.json()\n", "print(f'Model status: {data[\"data\"][\"model\"][\"status\"]}')\n", "print(f'Model status description: {data[\"data\"][\"model\"][\"status_description\"]}')" ] }, { "cell_type": "markdown", "id": "e9428d0c", 
"metadata": {}, "source": [ "The model can be used for profiling once its status is reported as `ready`." ] }, { "cell_type": "markdown", "id": "d9ececfc", "metadata": {}, "source": [ "## Profiling" ] }, { "cell_type": "markdown", "id": "6ba92e80", "metadata": {}, "source": [ "To start online profiling, invoke the endpoint `/profiling/run_on_device`.\n", "\n", "You will need the following parameters:\n", "- Model identifier – the unique ID of the model you uploaded\n", "- Device URL – the address of the target device (e.g., the i.MX 8M Plus MPU)\n", "- Delegate – the hardware to profile on, either `cpu` or `npu` (see the cell below)\n", "- Run name (optional) – a custom name for the profiling session, useful for tracking and organizing results" ] }, { "cell_type": "code", "execution_count": null, "id": "879a2448", "metadata": {}, "outputs": [], "source": [ "profiling_delegate = \"npu\"  # The two available options are \"cpu\" and \"npu\"\n", "profiling_run_name = \"example_online_profiling_tflite_model\"\n", "\n", "print(f\"Model identifier: {model_uuid}\")\n", "print(f\"Device URL: {device_url}\")\n", "print(f\"Delegate: {profiling_delegate}\")\n", "print(f\"Custom profiling run name: {profiling_run_name}\")" ] }, { "cell_type": "markdown", "id": "c7088659", "metadata": {}, "source": [ "Request the profiling:" ] }, { "cell_type": "code", "execution_count": null, "id": "61e4e01f", "metadata": {}, "outputs": [], "source": [ "response = requests.post(\n", "    url=f\"{AI_TOOLKIT_BACKEND_URL}/profiling/run_on_device\",\n", "    json={\n", "        \"model_uuid\": model_uuid,\n", "        \"target_url\": device_url,\n", "        \"delegate\": profiling_delegate,\n", "        \"name\": profiling_run_name,\n", "    }\n", ")\n", "\n", "data = response.json()\n", "print(data)\n", "profiling_uuid = data[\"data\"][\"profiling\"][\"uuid\"]  # Assigned identifier of the requested profiling run" ] }, { "cell_type": "markdown", "id": "86bb8c30", "metadata": {}, "source": [ "After initiating the profiling job, you can monitor its 
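completion programmatically. The next cell is a sketch of such a poll; the `wait_for_profiling` helper is hypothetical, and the 5-second interval and 10-minute timeout are arbitrary choices, not Toolkit defaults." ] }, { "cell_type": "code", "execution_count": null, "id": "7d2c9e4b1f8a3650", "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "def wait_for_profiling(get_status, interval_s=5.0, timeout_s=600.0):\n", "    \"\"\"Poll get_status() until it returns something other than 'in_progress'.\"\"\"\n", "    deadline = time.monotonic() + timeout_s\n", "    while time.monotonic() < deadline:\n", "        status = get_status()\n", "        if status != \"in_progress\":\n", "            return status\n", "        time.sleep(interval_s)\n", "    raise TimeoutError(\"Profiling did not finish within the timeout\")\n", "\n", "# Usage against the backend (assumes the profiling run requested above):\n", "# status = wait_for_profiling(\n", "#     lambda: requests.get(\n", "#         f\"{AI_TOOLKIT_BACKEND_URL}/profiling/{profiling_uuid}\"\n", "#     ).json()[\"data\"][\"profiling\"][\"status\"]\n", "# )" ] }, { "cell_type": "markdown", "id": "9e5f3a1c8d7b2460", "metadata": {}, "source": [ "For a single check, you can inspect the 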
progress using the following API call:" ] }, { "cell_type": "code", "execution_count": null, "id": "6de8c5fd53235ee3", "metadata": {}, "outputs": [], "source": [ "response = requests.get(\n", "    url=f\"{AI_TOOLKIT_BACKEND_URL}/profiling/{profiling_uuid}\"  # Profiling run identifier is part of the request URL\n", ")\n", "\n", "data = response.json()\n", "print(data)\n", "print(f'Profiling status: {data[\"data\"][\"profiling\"][\"status\"]}')\n", "print(f'Profiling status description: {data[\"data\"][\"profiling\"][\"status_description\"]}')" ] }, { "cell_type": "markdown", "id": "79ef35c99d16e3cf", "metadata": {}, "source": [ "Once the profiling status is marked as `success`, you can proceed to analyze the results. If the status is still `in_progress`, re-run the status check (cell above) until the profiling completes." ] }, { "cell_type": "code", "execution_count": null, "id": "101afafdac53eb84", "metadata": {}, "outputs": [], "source": [ "profiling_data = data[\"data\"][\"profiling\"]\n", "print(profiling_data)" ] } ], "metadata": { "kernelspec": { "display_name": "sphinx-env (3.11.11)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 5 }