Setup: mx_bench --hello fails on Raspberry Pi 5

I get the following response when trying to verify the install on a Raspberry Pi 5 (directory and user info removed, leaving just the mx/ path):

(mx) ~/mx $ mx_bench --hello
Traceback (most recent call last):
  File "mx/bin/mx_bench", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "memryx/runtime/benchmark.py", line 611, in memryx.runtime.benchmark.main
  File "mx/lib/python3.11/site-packages/grpc/_channel.py", line 1181, in __call__
    return _end_unary_response_blocking(state, call, False, None)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "mx/lib/python3.11/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
    raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:10000: Failed to connect to remote host: connect: Connection refused (111)"
    debug_error_string = "UNKNOWN:Error received from peer {grpc_status:14, grpc_message:"failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:10000: Failed to connect to remote host: connect: Connection refused (111)"}"
>

Verified the MX3 is seen:

~/mx $ lspci
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 21)
0001:01:00.0 PCI bridge: ASMedia Technology Inc. ASM1184e 4-Port PCIe x1 Gen2 Packet Switch
0001:02:01.0 PCI bridge: ASMedia Technology Inc. ASM1184e 4-Port PCIe x1 Gen2 Packet Switch
0001:02:03.0 PCI bridge: ASMedia Technology Inc. ASM1184e 4-Port PCIe x1 Gen2 Packet Switch
0001:02:05.0 PCI bridge: ASMedia Technology Inc. ASM1184e 4-Port PCIe x1 Gen2 Packet Switch
0001:02:07.0 PCI bridge: ASMedia Technology Inc. ASM1184e 4-Port PCIe x1 Gen2 Packet Switch
0001:03:00.0 Non-Volatile memory controller: Micron Technology Inc 2550 NVMe SSD (DRAM-less) (rev 01)
0001:06:00.0 Processing accelerators: MemryX MX3
0002:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2712 PCIe Bridge (rev 21)
0002:01:00.0 Ethernet controller: Raspberry Pi Ltd RP1 PCIe 2.0 South Bridge

Hi Joe, it looks like the mxa-manager service might not be running, or there could have been an issue loading the driver.

Can you please try the following and let us know the results:

  1. Check that the driver is loaded with sudo lsmod | grep memx → should print that the module is loaded
  2. Make sure the device initialized too with ls /dev/memx* → checks if the /dev node exists
  3. If the first two pass, try restarting the manager with sudo service mxa-manager restart
  4. If that still doesn’t work, run sudo service mxa-manager status and share the output to help with further debugging (the same sequence is collected into a single block below).
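
Here is the same sequence as a single copy-paste block (commands identical to the steps above):

     # 1. Confirm the kernel module is loaded
     sudo lsmod | grep memx
     # 2. Confirm the device node exists
     ls /dev/memx*
     # 3. Restart the manager service
     sudo service mxa-manager restart
     # 4. If it still fails, capture the service status
     sudo service mxa-manager status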

Thanks!

I can’t respond with the results or email back, as I keep getting rejected:

Sorry, new users can only put 2 links in a post.

Hmmm, sorry about that! I’ve just updated the site settings to be more permissive. Can you please try again?

Hi Joe,

Thanks for the screenshot. It looks like the driver might not have been loaded properly. Just to confirm—did you run the following setup command for ARM systems after the initial installation?

sudo mx_arm_setup

If not, please run the above command and then reboot your system.

After rebooting, try the following checks to verify the driver is properly loaded:

lsmod | grep memx
ls /dev/memx*

If the memx device doesn’t show up, try manually loading the driver with:

sudo modprobe -v memx_cascade_plus_pcie

Then, run the verification commands again:

lsmod | grep memx
ls /dev/memx*

At this point, you should be able to see the memx device node (/dev/memx*).
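
If you’d like to script that check, here’s a minimal sketch that only reuses the commands above (load the driver only when the device node is missing, then re-check):

     # Minimal sketch: modprobe only if /dev/memx* is absent
     if ! ls /dev/memx* >/dev/null 2>&1; then
         sudo modprobe -v memx_cascade_plus_pcie
     fi
     lsmod | grep memx
     ls /dev/memx*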

If the device still doesn’t show up, let’s try reinstalling the MemryX runtime drivers from scratch (a consolidated script follows the steps below):

  1. Uninstall Existing Driver:

     sudo apt purge memx-*
     sudo rm /etc/apt/sources.list.d/memryx.list /etc/apt/trusted.gpg.d/memryx.asc
    
  2. Ensure Kernel Headers Are Installed:

     sudo apt install linux-headers-$(uname -r)
    
  3. Add MemryX Signing Key

     wget -qO- https://developer.memryx.com/deb/memryx.asc | sudo tee /etc/apt/trusted.gpg.d/memryx.asc >/dev/null
    
  4. Add the MemryX Repository

     echo 'deb https://developer.memryx.com/deb stable main' | sudo tee /etc/apt/sources.list.d/memryx.list >/dev/null
    
  5. Update Package List and Install Drivers

     sudo apt update
     sudo apt install memx-drivers memx-accl
    
  6. Run ARM Board Setup (Important for Raspberry Pi)

     sudo mx_arm_setup
    
  7. Reboot the System

     sudo reboot
    

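For convenience, here are the same steps 1-7 as a single block (commands copied verbatim from above; the reboot is last, so run it when you’re ready):

     # Steps 1-7 from above, in order
     sudo apt purge memx-*
     sudo rm /etc/apt/sources.list.d/memryx.list /etc/apt/trusted.gpg.d/memryx.asc
     sudo apt install linux-headers-$(uname -r)
     wget -qO- https://developer.memryx.com/deb/memryx.asc | sudo tee /etc/apt/trusted.gpg.d/memryx.asc >/dev/null
     echo 'deb https://developer.memryx.com/deb stable main' | sudo tee /etc/apt/sources.list.d/memryx.list >/dev/null
     sudo apt update
     sudo apt install memx-drivers memx-accl
     sudo mx_arm_setup
     sudo reboot
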
Once the system is back up, please run:

apt policy memx-drivers

This will confirm the driver version installed.
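
You can also list the installed MemryX packages and their versions with a dpkg pattern match (an alternative view of the same information; package names as used above):

     dpkg -l 'memx-*'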

You can also refer to the full installation steps here for reference: MemryX Driver Installation Guide

Also, please recheck the following to verify everything is set up correctly:

lsmod | grep memx
ls /dev/memx*
sudo service mxa-manager status

Let me know how it goes after the reinstall.

Thank you.

One more thing to note: this is a Pi 5 booting from an HDD on a HAT, with the MemryX board also attached. It seems to work when booting from an SD card, which is fine for development but not for production. The need is to boot from the HDD, without an SD card, for stability. Any thoughts?

Hi Joe,

From the output of ls /dev/memx*, it looks like the driver isn’t loading. Since you mention an HDD HAT, may I ask which HAT you’re using?

There’s a chance the HAT could be the source of the incompatibility if it’s splitting the single PCIe lane of the RPi 5 between the HDD and the MX3.

Could you please share the hat info, plus the output of:

sudo lspci -vv -d 1fe9:0100

This lspci command will check that the MX3 has enough PCIe resources allocated to it by the HAT.
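
It can also help to look at the PCIe topology tree, which shows how the HAT’s switch fans out the Pi’s single lane (standard lspci options, nothing MemryX-specific):

     sudo lspci -tv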

sudo lspci -vv -d 1fe9:0100
0001:06:00.0 Processing accelerators: MemryX MX3
Subsystem: MemryX MX3
Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- SERR- <PERR- INTx-
Interrupt: pin A routed to IRQ 41
Region 0: Memory at 1b80000000 (64-bit, non-prefetchable) [size=16M]
Region 2: Memory at 1b81000000 (64-bit, non-prefetchable) [size=16M]
Region 4: Memory at 1b82000000 (32-bit, non-prefetchable) [size=1M]
Capabilities: [80] Power Management version 3
Flags: PMEClk- DSI- D1+ D2- AuxCurrent=0mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [90] MSI: Enable- Count=1/1 Maskable+ 64bit+
Address: 0000000000000000 Data: 0000
Masking: 00000000 Pending: 00000000
Capabilities: [b0] MSI-X: Enable- Count=512 Masked-
Vector table: BAR=4 offset=000fe000
PBA: BAR=4 offset=000fdf00
Capabilities: [c0] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <1us, L1 <1us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 26W
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 8GT/s, Width x2, ASPM L0s L1, Exit Latency L0s <256ns, L1 <8us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 5GT/s (downgraded), Width x1 (downgraded)
TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range B, TimeoutDis+ NROPrPrP- LTR+
10BitTagComp- 10BitTagReq- OBFF Via message, ExtFmt+ EETLPPrefix-
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS- TPHComp- ExtTPHComp-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: unsupported
Capabilities: [100 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [150 v1] Device Serial Number 60-0e-15-24-00-00-00-9a
Capabilities: [160 v1] Power Budgeting <?>
Capabilities: [1b8 v1] Latency Tolerance Reporting
Max snoop latency: 0ns
Max no snoop latency: 0ns
Capabilities: [300 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [900 v1] L1 PM Substates
L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1+ ASPM_L1.2- ASPM_L1.1+ L1_PM_Substates+
L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
L1SubCtl2:
Kernel modules: memx_cascade_plus_pcie

Thanks for the link and log. Reading through them, it looks like this 4x M.2 adapter lacks MSI/MSI-X support, which is part of the PCIe Gen 3 standard. Unfortunately, the MX3 relies on MSI-X in order to communicate with the host.

The Raspberry Pi 5’s PCIe controller is actually a Gen 3 compliant controller, but it runs at Gen 2 speeds. Therefore it has MSI-X support, which is why the MX3 works when connected directly to the Pi – just the speed is lower.

The 4x M.2 adapter linked above, however, uses a switch chip (the ASM1184e) that is only PCIe Gen 2 compliant, so it doesn’t have the required MSI-X support.

I looked around for a PCIe 3.0 compliant multi-M.2 HAT, and I found one that uses the ASM2806 switch chip and has MSI-X support:

I haven’t tried it myself, but based on the specs it should work with the MX3 M.2 + an NVMe SSD.
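
If you do try a different HAT, one quick sanity check once the MX3 is installed behind it is to confirm the driver was able to enable MSI-X on the card and to see what link speed/width was negotiated (this just filters the same lspci output requested earlier in the thread):

     sudo lspci -vv -d 1fe9:0100 | grep -E 'MSI-X|LnkCap|LnkSta'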