Neural Network Inference on Edge Chips
Deploy neural network inference on edge AI chips and embedded platforms. Covers quantisation (INT8/INT4), model pruning, TensorFlow Lite, ONNX Runtime, NPU operator mapping, memory bandwidth optimisation, and latency profiling. Includes hands-on deployment on the Arm Ethos NPU, Raspberry Pi 5, and custom edge AI accelerators.
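As a taste of the latency-profiling material, here is a minimal sketch of percentile-based inference timing. The helper name `profile_latency` and the stub workload are illustrative; in practice the callable would wrap a real model invocation (e.g. a TensorFlow Lite interpreter's `invoke()`):

```python
import time
import statistics

def profile_latency(run_inference, warmup=10, iters=100):
    """Time repeated inference calls and report latency stats in milliseconds."""
    for _ in range(warmup):          # warm caches before measuring
        run_inference()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1e3)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[min(iters - 1, int(iters * 0.99))],
        "mean_ms": statistics.fmean(samples),
    }

# Stand-in workload; replace with a real model inference call.
stats = profile_latency(lambda: sum(i * i for i in range(10_000)))
print(stats)
```

Reporting p50 and p99 rather than a single average matters on edge hardware, where thermal throttling and scheduler jitter produce long-tail latencies that a mean alone hides.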