Guide to Compiling Your Custom Frigate Model to .dfp

To use your own custom model, first compile it into a .dfp file, which is the format used by MemryX.

Compile the Model

If you can copy your exported ONNX model to the host, we recommend compiling it there rather than inside the container.

  1. On the host, install the MemryX Neural Compiler tools (see the Install Tools page) inside a Python virtual environment (Python 3.9–3.12).

  2. Activate the MemryX environment, then run the mx_nc command to compile your model. For example:

mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp

You can also refer to the MemryX Compiler documentation for more details on compiling your model.

Note: We recommend compiling the model on the host machine, or on a separate machine, rather than inside the Frigate Docker container, where installing the compiler may conflict with the container's packages. We also recommend installing the compiler in a Python virtual environment.
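As a sketch, the host-side setup from steps 1–2 might look like the following. The venv name is arbitrary, and the compiler's exact package name is intentionally left out (follow the Install Tools page), so those lines are commented:

```shell
# Create and activate an isolated Python environment for the compiler.
python3 -m venv mx-venv
source mx-venv/bin/activate

# Confirm the interpreter is within the supported range (3.9-3.12).
python --version

# Install the MemryX Neural Compiler per the Install Tools page, then compile.
# (Package name omitted here; follow the official instructions.)
# pip install <memryx-compiler-package>
# mx_nc -m yolonas.onnx -c 4 --autocrop -v --dfp_fname yolonas.dfp

deactivate
```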


Package the Compiled Model

  1. Package your compiled model into a .zip file; the archive must contain the compiled .dfp file.
  2. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix _post.onnx and must be included in the .zip as well.
  3. Bind-mount the .zip file into the container and specify its path using model.path in your config.
  4. Update labelmap_path to match your custom model’s labels.

Example

path: /config/yolonas.zip

The .zip file must contain:

yolonas.zip
├── yolonas.dfp
└── yolonas_post.onnx    (optional; only if the model includes a cropped post-processing network)
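Putting it together, a hedged sketch of the relevant Frigate config fragment; the labelmap filename is an assumed example, not a required name:

```yaml
# Sketch only: adapt to your deployment. The labelmap filename below is
# an assumption for illustration; point labelmap_path at your model's
# actual label file.
model:
  path: /config/yolonas.zip
  labelmap_path: /config/yolonas_labels.txt
```

Both files referenced here must be bind-mounted into the container at the paths given, e.g. via a volumes entry in your docker-compose file.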