Deploy neural network inference on edge AI chips and embedded platforms. Covers quantisation (INT8/INT4), model pruning, TensorFlow Lite, ONNX runtime, NPU operator mapping, memory bandwidth optimisation, and latency profiling. Hands-on deployment on ARM Ethos NPU, Raspberry Pi 5, and custom edge AI
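To make the quantisation topic above concrete, here is a minimal sketch of symmetric per-tensor INT8 quantisation, the basic arithmetic that toolchains such as TensorFlow Lite apply during post-training quantisation. The function names and sample weights are illustrative, not from the course materials.

```python
# Symmetric per-tensor INT8 quantisation sketch (illustrative only):
# a single scale maps floats into the signed 8-bit range [-128, 127].

def quantize_int8(values):
    """Map floats to int8 codes using one shared scale (symmetric scheme)."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid zero scale
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

# Example: a toy weight tensor before and after quantisation.
weights = [0.5, -1.27, 0.02, 1.0]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
```

INT4 works the same way with the range [-8, 7], trading accuracy for a further 2x reduction in weight storage and memory bandwidth.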
| Field | Value |
|---|---|
| Expiry period | Lifetime |
| Language | English |
| Last updated | Apr 2026 |
| Level | |
| Total lectures | 0 |
| Total quizzes | 0 |
| Total duration | Hours |
| Total enrolment | 0 |
| Number of reviews | 378 |
| Avg rating | |
| Short description | Deploy neural network inference on edge AI chips and embedded platforms. Covers quantisation (INT8/INT4), model pruning, TensorFlow Lite, ONNX runtime, NPU operator mapping, memory bandwidth optimisation, and latency profiling. Hands-on deployment on ARM Ethos NPU, Raspberry Pi 5, and custom edge AI |
| Outcomes | |
| Requirements | |