Hello! I'm running a trained YOLOv10 model on the MX3 and comparing two setups: running the .pt file produced by YOLO directly, versus running the compiled model on the MX3. Both generate very similar box detections, but the MX3 accelerator reports an extremely low confidence value for every single one. The .pt model has multiple detections above 90% confidence, whereas the MX3 has no detections above 0.01. Are there any glaring reasons why this might happen? I'm using the YOLOv10 extension with no unusual compiler options.
Here are some example images.
Not compiled:
Compiled:
A different (worse) training run, compiled, with no code changes in between:
I'll try training another model; the new one used YOLO data augmentation. Still, it's worrying to see such different results between the non-compiled and compiled models.
I also tried compiling with --exp_auto_dp, which increased confidence very slightly (by 10% at most), but the result is still basically the same.
Never mind, I forgot to convert the input image from BGR to RGB in the accelerator code.
If anyone else has this issue, make sure to apply EXACTLY the same pre-processing that was used during training.
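For anyone searching later, here is a minimal sketch of the fix, assuming an OpenCV capture pipeline and a 640x640 model input (the `preprocess` helper and the resize/normalize steps are placeholders for whatever your training pipeline actually did, e.g. letterboxing):

```python
import cv2
import numpy as np

def preprocess(frame_bgr, size=(640, 640)):
    """Match the training-time pre-processing before sending a frame to the accelerator."""
    # OpenCV captures frames as BGR, but YOLO models are trained on RGB images.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # Resize and scale to [0, 1]; adjust to mirror your training pipeline exactly.
    frame_rgb = cv2.resize(frame_rgb, size)
    return frame_rgb.astype(np.float32) / 255.0
```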
Hi @ScytheEngineering, glad to hear it was resolved! Yeah, OpenCV is quirky like that and captures frames as BGR, so a BGR2RGB swap is needed in pre-processing. We've overlooked that ourselves in an example app in the past, so no worries.
Let us know if you still have any accuracy issues. As you pointed out, --exp_auto_dp can help by making some weights INT16 instead of INT8 (feature maps are always BFloat16).
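To illustrate why wider weight precision can help, here is a toy sketch of uniform symmetric quantization at 8 vs 16 bits; this is purely illustrative and is not the compiler's actual quantization scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=10_000).astype(np.float32)  # stand-in for a conv weight tensor

def quantize(x, bits):
    # Uniform symmetric quantization to the given bit width, then dequantize.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).clip(-qmax, qmax) * scale

for bits in (8, 16):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"INT{bits} mean abs quantization error: {err:.2e}")
```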
Thanks!
This looks like the FIRST Robotics Reefscape competition. Good luck!
Thanks! I'm building a vision system for this next year, and I'm working out all the kinks with simulated data from last year!


