To use your own custom model, first compile it into a `.dfp` file, which is the format used by MemryX.
Compile the Model
If you can export your ONNX model to the host, we recommend compiling it there rather than inside the container.
- Install the MemryX Neural Compiler tools from the Install Tools page on the host, in a Python venv (Python 3.9–3.12).
- Activate the MemryX environment, then run the `mx_nc` command to compile your model. For example:

```shell
mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
```
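The host-side setup can be sketched as follows. This is a minimal sketch: `mx_env` is just an example directory name, and the actual compiler install command comes from the MemryX Install Tools page, so the install and compile steps are shown as comments only.

```shell
# Create and activate an isolated Python venv for the compiler
# ("mx_env" is an arbitrary example name, not a required path).
python3 -m venv mx_env
. mx_env/bin/activate

# Confirm the venv's interpreter is the one in use.
mx_env/bin/python -c 'import sys; print(sys.prefix)'

# Then, per the MemryX Install Tools page (exact package name/install
# command comes from that page):
#   pip install <MemryX Neural Compiler tools>
#   mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp
```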
You can also refer to the MemryX Compiler documentation for more details on compiling your model.
Note: We recommend compiling the model on the host machine, or on a separate machine, rather than inside the Frigate Docker container. Installing the compiler inside Docker may conflict with container packages. We also recommend using a Python virtual environment for the compiler installation.
Package the Compiled Model
- Package your compiled model into a `.zip` file.
- The `.zip` file must contain the compiled `.dfp` file.
- Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
- Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
- Update `labelmap_path` to match your custom model's labels.
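The packaging step above can be done with any zip tool. As one sketch, the command below uses Python's built-in `zipfile` module so no separate `zip` utility is needed; the `touch` line creates placeholder files only so the sketch is self-contained — in practice you would use the real outputs from the compile step.

```shell
# Placeholder files standing in for the real compiler outputs.
touch yolonas.dfp yolonas_post.onnx

# Create the archive Frigate expects, then list it to verify.
python3 -m zipfile -c yolonas.zip yolonas.dfp yolonas_post.onnx
python3 -m zipfile -l yolonas.zip
```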
Example
```yaml
path: /config/yolonas.zip
```
The `.zip` file must contain:

```
yolonas.zip
├── yolonas.dfp
└── yolonas_post.onnx   (optional; only if the model includes a cropped post-processing network)
```
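Putting the two config settings together, a model section might look like the fragment below. This is an illustrative sketch only: the labelmap filename is a hypothetical example, and any keys beyond `path` and `labelmap_path` should be taken from the Frigate configuration reference.

```yaml
# Illustrative fragment; filenames are examples, not required names.
model:
  path: /config/yolonas.zip
  labelmap_path: /config/yolonas_labels.txt
```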