Hello…
Are there any quad M.2 adapters that support 4 MX3 devices?
Are there any devices that will support 4 MX3 devices with a Raspberry Pi 5?
For my particular application, even something connected with USB3/USB4 would work.
Thanks!
Jamie
Hi Jamie,
These work for systems that support x4/x4/x4/x4 PCIe bifurcation. Nearly all AMD Ryzen/Threadripper/Epyc CPUs support this, while for Intel it varies: some desktop motherboard+CPU combos only do x8/x8, not the needed x4/x4/x4/x4. Intel's workstation and server CPUs all support x4/x4/x4/x4, though.
These have PCIe switches on-board, so the host doesn’t need bifurcation. However they do tend to be much more expensive.
We’ve used some by HighPoint before, namely their SSD7104 and SSD7105.
Unfortunately, for the Raspberry Pi 5 the situation is different – you'll need to find a card with a true PCIe switch (not an NVMe-only controller or USB) like this 2x M.2 hat from SeeedStudio. On paper it should work with MX3, but I haven't tested it myself… it's probably a good idea to try this out, so I've ordered one today and will keep you updated with the result!
The MX3 M.2 is PCIe protocol only, so USB 2/3 adapters won’t work with it. We do have a native USB card with MX3 coming later this year, but our M.2 is PCIe only.
Let me know if you have questions here, and I’ll update this thread with the RPi 2x M.2 HAT result soon.
Thanks,
Tim
Thanks very much, Tim!
I have the card that you suggested might support dual MX3 devices. I haven’t been able to get it to work, so I’m really looking forward to hearing about your results. As I read the error, it doesn’t look like the Raspberry Pi 5 actually sees the MX3 devices. I could be wrong about that, though.
--
jcpole@aipi:~ $ source ~/mx/bin/activate
(mx) jcpole@aipi:~ $ mx_bench --hello
[2026-03-10 19:02:32.653] [error] [Client] No devices in system, please check the server
Traceback (most recent call last):
File "/home/jcpole/mx/bin/mx_bench", line 10, in <module>
sys.exit(main())
^^^^^^
File "memryx/runtime/benchmark.py", line 463, in memryx.runtime.benchmark.main
memryx.errors.MxaError: Failed to connect to mx server
(mx) jcpole@aipi:~ $
--
Please let me know if I can provide any additional information.
Thanks very much and have a great day!
Jamie
So I was able to run with a single MX3 module using the N04 M.2 interface.
Having said that, the dual M.2 interface from Seeed Studios does NOT appear to work at all. I updated the firmware on both MX3 modules, and I get the same error as before. Essentially, it does not seem that either MX3 module is detected when the dual M.2 interface from Seeed Studios is used.
Were you able to get anything out of the dual M.2 interface from Seeed Studios? I’m seriously hoping that I’m just doing something wrong. I have tried doing a complete rebuild of the Raspberry Pi 5, and that still did not work.
Thanks!
Jamie
For the sake of completeness, here is the output from the commands that Suresh asked me to run on the Seeed Studios dual M.2 interface.
** See if kernel nodes are created > ls /dev/memx* : this should list two nodes, memx0 and memx1, if two devices are connected
(mx) jcpole@aipi:~ $ sudo ls /dev/memx*
ls: cannot access '/dev/memx*': No such file or directory
** See if you are able to print out device info > cat /sys/memx0/verinfo
(mx) jcpole@aipi:~ $ cat /sys/memx0/verinfo
cat: /sys/memx0/verinfo: No such file or directory
** Check if mxa manager service is running > sudo systemctl status mxa-manager.service
(mx) jcpole@aipi:~ $ sudo systemctl status mxa-manager.service
● mxa-manager.service - The MemryX MX3 device management daemon.
Loaded: loaded (/usr/lib/systemd/system/mxa-manager.service; enabled; pres>
Active: active (running) since Tue 2026-03-10 19:29:23 EDT; 4min 42s ago
Invocation: 5988446688bf4d0c8f66aff140e9536a
Main PID: 847 (mxa_manager)
Tasks: 4 (limit: 19362)
CPU: 21ms
CGroup: /system.slice/mxa-manager.service
└─847 /usr/bin/mxa_manager
Mar 10 19:29:21 aipi systemd[1]: Starting mxa-manager.service - The MemryX MX3 >
Mar 10 19:29:23 aipi systemd[1]: Started mxa-manager.service - The MemryX MX3 d>
Mar 10 19:29:24 aipi mxa_manager[847]: [thread 847] [warning] [Server] No devic>
** If not try restarting it > sudo systemctl restart mxa-manager.service
(mx) jcpole@aipi:~ $ sudo systemctl restart mxa-manager.service
(mx) jcpole@aipi:~ $ sudo systemctl status mxa-manager.service
● mxa-manager.service - The MemryX MX3 device management daemon.
Loaded: loaded (/usr/lib/systemd/system/mxa-manager.service; enabled; pres>
Active: active (running) since Tue 2026-03-10 19:35:26 EDT; 13s ago
Invocation: 3b1d179e5ea340c09b8095e54c906284
Process: 1976 ExecStartPre=/bin/sleep 2 (code=exited, status=0/SUCCESS)
Main PID: 1979 (mxa_manager)
Tasks: 4 (limit: 19362)
CPU: 24ms
CGroup: /system.slice/mxa-manager.service
└─1979 /usr/bin/mxa_manager
Mar 10 19:35:24 aipi systemd[1]: Starting mxa-manager.service - The MemryX MX3 >
Mar 10 19:35:26 aipi systemd[1]: Started mxa-manager.service - The MemryX MX3 d>
Mar 10 19:35:26 aipi mxa_manager[1979]: [thread 1979] [warning] [Server] No dev>
** Before all that, please check and do share output of > sudo lspci -vv -d 1fe9:0100
(mx) jcpole@aipi:~ $ sudo lspci -vv -d 1fe9:0100
(mx) jcpole@aipi:~ $
And confirm whether you ran the sudo mx_arm_setup command as part of the ARM installation steps: https://developer.memryx.com/get_started/install_runtime.html
Yes, I can confirm that I did run the mx_arm_setup command.
Interesting - when I switch to the N04 hat, it works just fine. I'm able to run the "mx_bench --hello" command, and it returns the expected output.
(mx) jcpole@aipi:~ $ mx_bench --hello
Hello from MXA!
| Device ID | Chip Count | Freq | Volt |
|---|---|---|---|
| 0 | 4 | 500 | 690 |
I’m really hoping that I was doing something wrong with the dual M.2 hat.
Have a great night!
Jamie
Hi Jamie,
The 2x M.2 Gen 3.0 HAT from seeedstudio arrived and I tried out some things with it. The short version: good news is it works, bad news is it may be impractical due to performance & heat.
I set it up in this order:
1. Installed a fresh Raspberry Pi OS (the latest Debian trixie based one).
2. Enabled PCIe Gen 3.0 speed by adding these lines to /boot/firmware/config.txt (this can also be set via sudo raspi-config):
[all]
dtparam=pciex1_gen=3
3. Attached the HAT and put 2x MX3 cards in.
The devices both enumerated and showed up in lspci:
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 30)
0001:01:00.0 PCI bridge: ASMedia Technology Inc. ASM2806 4-Port PCIe x2 Gen3 Packet Switch (rev 01)
0001:02:00.0 PCI bridge: ASMedia Technology Inc. ASM2806 4-Port PCIe x2 Gen3 Packet Switch (rev 01)
0001:02:02.0 PCI bridge: ASMedia Technology Inc. ASM2806 4-Port PCIe x2 Gen3 Packet Switch (rev 01)
0001:02:06.0 PCI bridge: ASMedia Technology Inc. ASM2806 4-Port PCIe x2 Gen3 Packet Switch (rev 01)
0001:02:0e.0 PCI bridge: ASMedia Technology Inc. ASM2806 4-Port PCIe x2 Gen3 Packet Switch (rev 01)
0001:03:00.0 Processing accelerators: MemryX MX3
0001:06:00.0 Processing accelerators: MemryX MX3
0002:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 30)
0002:01:00.0 Ethernet controller: Raspberry Pi Ltd RP1 PCIe 2.0 South Bridge
MX3 driver install, etc., reboot, and then acclBench --hello showed the 2 devices.
So the good news is this particular HAT does seem to work. However I did notice that seeedstudio had an earlier version of this board that used a different PCIe switch chip, the ASM1182E. This 3.0 version instead has an ASM2806 switch chip. The boards otherwise look completely identical. Is there a chance you have the ASM1182E version?
Although I was able to set it up okay on my end, I don't think I'd recommend using this HAT regardless.
The main issue is the slots are too close to each other for heatsinks to fit on them. As such, the chips get extremely hot: sitting on my desk, under full load they spiked up to 100C and started to thermal throttle. A custom solution, like sticking metal heatsinks directly on the chips and/or having a fan blowing on them, could help here but the effectiveness may vary.
The other reason, which is harder to address, is the performance vs. 1 MX3 M.2.
With 1 PCIe 3.0 lane, the theoretical max bandwidth is 1000 MB/s, though in practice it’s a little bit lower.
For some models, such as some 640x640 input size YOLOs, this I/O bandwidth becomes a bottleneck before the inference capacity of the MX3.
You can test if models are I/O limited by using the --set_freq XXX flag to acclBench. Start low, say 300 MHz, and increase it. Eventually the FPS will plateau, because while MX3 FPS increases with frequency, the PCIe bandwidth stays fixed. YOLOv8-nano, for example, caps out around 119 FPS with a single MX3 M.2 on the RPi 5. So when I ran YOLOv8-nano across 2x M.2s, the total was 125 FPS – basically the same as a single card.
I benchmarked a few models, and some do see a slight improvement, such as MobileNetV1 increasing from ~1300 FPS on a single M.2 to ~1900 FPS on 2x M.2. However, the FPS saturates again at ~1900: even after increasing the frequency in acclBench, I can't get it above that.
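To make the bandwidth ceiling concrete, here's a quick back-of-envelope in Python (a rough sketch: the link rate and FPS figures come from this thread, and the input-only frame size assumes a 640x640 RGB uint8 tensor, ignoring outputs and protocol overhead):

```python
# Rough PCIe 3.0 x1 I/O budget for the RPi 5 + MX3 M.2 setup.
# Figures are illustrative, taken from (or derived from) this thread.

LINK_MB_S = 1000                 # theoretical PCIe 3.0 x1 bandwidth, MB/s

# Input side only: one 640x640 RGB uint8 frame.
frame_bytes = 640 * 640 * 3      # ~1.23 MB
input_only_fps = LINK_MB_S * 1_000_000 / frame_bytes
print(f"Input-only ceiling: ~{input_only_fps:.0f} FPS")

# Working backwards from the observed ~119 FPS cap for YOLOv8-nano,
# the implied total transfer (input + output feature maps + overhead):
observed_fps = 119
implied_mb_per_frame = LINK_MB_S / observed_fps
print(f"Implied traffic: ~{implied_mb_per_frame:.1f} MB/frame")
```

The large gap between the input-only ceiling and the observed cap suggests the link carries considerably more than just the raw input frame per inference, which is why a second card on the same x1 link doesn't help for these models.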
For models that aren’t bandwidth limited, the situation is different. YOLOv8-medium, which has far more computation per frame than the nano variants, was getting ~35 FPS on a single MX3 M.2, at the RPi default of 500 MHz.
When trying to run 2x M.2 cards on YOLOv8-medium at 500 MHz, the system crashed – the total HAT power consumption must have exceeded whatever the Pi can supply (I think that’s like ~10W).
However, running the modules slower at 400 MHz, I was able to reach 60 FPS total using the 2x M.2s.
If your model is both not bandwidth limited and not heavy/power consuming, there could be an advantage with 2x M.2. The models I tried (YOLO detect/segment/pose, and classification models) didn’t fall in this category, but there could be models out there for which this is true.
So overall, while it does work, I'm not sure the benefit outweighs the issues (PCIe bandwidth, heat) and the cost vs. using a single MX3 M.2.
The 2x M.2 3.0 HAT could definitely be useful if you want to mix 1 NVMe SSD + 1 MX3, though.
As for a bigger question if I may ask: what is the end application you plan to use the multi-MX3 setup for? If the target application requires high FPS on a single system, perhaps a different form factor like an x86 computer could accommodate a 4x M.2 carrier card? Or another suggestion is to use multiple Raspberry Pis with 1 MX3 each, if your application can distribute work across systems on a network.
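If the multi-Pi route ends up being the way to go, the distribution logic itself can be quite small. Below is a minimal round-robin dispatcher sketch in Python; everything here (Node, dispatch, the .local hostnames) is hypothetical and not part of the MemryX SDK, and the network transport is stubbed out: a real version would ship each job to its Pi over HTTP, ZeroMQ, or a raw socket.

```python
from itertools import cycle

class Node:
    """One Raspberry Pi + MX3 worker (hypothetical wrapper, not SDK code)."""
    def __init__(self, host):
        self.host = host
        self.handled = 0

    def send(self, job):
        # Stub: in practice, send the job over the network to this Pi
        # and wait for the inference result to come back.
        self.handled += 1
        return f"{self.host} processed {job}"

def dispatch(nodes, jobs):
    """Hand jobs to nodes in round-robin order and collect the results."""
    ring = cycle(nodes)
    return [next(ring).send(job) for job in jobs]

nodes = [Node("pi-1.local"), Node("pi-2.local")]
results = dispatch(nodes, [f"frame{i}" for i in range(6)])
print([n.handled for n in nodes])
```

Round-robin is the simplest policy; if jobs vary a lot in cost, a work-stealing queue (each Pi pulls the next job when idle) balances better at the price of slightly more plumbing.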
Interesting - I was not able to get the Raspberry Pi to “see” the devices using this Seeed Studios board.
I checked the board, and this one DOES have the ASM2806 chip.
I did notice that there was no way to use the heat sinks, but I haven’t put these under load yet, so I wasn’t aware that they could hit 100C. That will definitely cause a problem.
If you don’t recommend using this board with two MX3 devices, I’m willing to go with that. Spreading the load across multiple Raspberry Pi devices, each with a single MX3 is possible - I just didn’t want to have to write a protocol to make that happen. It’s “easier” to have two on one device. Switching to a larger PC form factor will not work - the Raspberry Pi is about the largest device I can accommodate.
The application has nothing to do with video, but I can’t really go into detail because of the potential eventual customer.
Thanks very much for working with me on this! I sincerely appreciate the effort that you and Suresh expended! My next step is to get two MX3 devices on two separate Raspberry Pi 5 devices talking. The application will need significant modification, but I was actually initially headed in that direction before I found the MX3. One quick question - is there any intent to get the SDK and tools working on newer versions of Python?
Once the application is finished, I’ll do my best to meet up with you guys to show it to you. I think you’ll be pretty impressed.
Again, thanks very much, and have a great day!
Jamie
Hmmm, odd that ASM2806 wasn’t working. Did the switch itself show up in lspci? (The 0001:01:00.0 PCI bridge: ASMedia Technology Inc. ASM2806 4-Port PCIe x2 Gen3 Packet Switch (rev 01) lines)
Regardless, it's probably still best to go with 1 per Pi.
For the Python versions, yes we definitely plan to update the SDK for 3.13 and 3.14 in the future. We’re limited to 3.9 - 3.12 as of now due to versions of the compiler’s dependencies, such as Tensorflow and ONNX. But in the next-next SDK (version 2.3), we plan to update these and bring 3.13/3.14 support.
In the meantime, we’ve been using either of these to get Python 3.12:
- uv: prefix uv before all your commands.
- A separate Python 3.12 install on your $PATH: no need for a uv prefix, but it does build Python from source so the install takes a while.

Let me know if you have issues with these.
And we’re excited to hear how the project goes!
Great - thanks very much!
I’ll check to see if the switch showed up in lspci.
Suresh steered me down the uv road, so that’s what I’ve been working with. I usually use conda, but uv seems to do pretty much exactly the same thing.
Have a great weekend!
Jamie