Model Inference Guide
This section describes how to use the CPU and NPU to accelerate neural network model inference on RevyOS, covering environment setup, model deployment, common issues, and their solutions.
Reference: hhb-tools Yuque Notes
Issue Feedback
If you encounter any problems, please submit an issue for feedback.