PXIe 1075 not showing up in MAX

Hi,

 

I am fighting an extremely frustrating issue getting MAX to detect my PXIe-1075 chassis paired with a PCIe-8371. The chassis is NOT detected in MAX, but the PCIe card shows up. Also, in Device Manager, the NI SMBus controller is either missing or randomly shows up as a hidden device.

 

Computer Hardware

Motherboard: Supermicro M12SWA-TF

CPU: Ryzen Threadripper Pro 3955WX

OS: Windows 10 Pro x64

 

Software

Win 10 Pro x64

LabVIEW 2019

PXI Platform Services and NI Switch Executive up to date (2022 Q3)

 

I have tried flipping the BIOS compatibility switch on the PCIe card (with the appropriate software), installed and reinstalled PXI Platform Services multiple times, tried the PCIe card in different slots (all slots are x16), and followed the correct power-on/power-off sequence for the computer and the 1075 chassis. I feel that if I can get the NI SMBus controller to show up in Device Manager, this problem will be solved. I am happy to reinstall everything and start from scratch this weekend if someone can give me their magic formula to get this working; I have not started deploying critical software/hardware yet, so I am more than happy to blow everything up this weekend to make this work.
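One quick check before and after any reinstall: a minimal sketch (Python driving Windows PowerShell's standard PnpDevice module; the friendly-name filters are guesses and may need adjusting) that lists whether the NI SMBus controller is enumerated at all, hidden or not.

import subprocess

# A minimal sketch: ask Windows PnP for devices whose friendly name looks like the
# NI SMBus controller or another National Instruments device. Get-PnpDevice without
# -PresentOnly also returns not-present ("hidden") entries, whose Status is usually
# 'Unknown'. The name filters below are guesses -- adjust them to what Device Manager shows.
ps_cmd = (
    "Get-PnpDevice | "
    "Where-Object { $_.FriendlyName -like '*SMBus*' -or $_.FriendlyName -like '*National Instruments*' } | "
    "Select-Object Status, Class, FriendlyName, InstanceId | "
    "Format-Table -AutoSize | Out-String -Width 300"
)

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)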

 

Attached are a few pictures and diagnostic information. Please help!

 

 

 

 

Message 1 of 8

Hi Mr_potato,

 

Based on "Device manager 1.PNG", the problem may not be the drivers, assuming the yellow bang on the "PCI Express Upstream Switch Port" is part of the MXI chain. If you see that again/still, please select "View -> By connection" from the menus at the top; it should expand the tree (in the new view) to that switch port and give it some context. Also, on the device's Properties page, go to the Details tab and select "Hardware Ids".
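If clicking through Device Manager gets tedious, here is a minimal sketch that pulls the same information programmatically: every present device Windows has flagged with a problem (the yellow bang), along with its hardware IDs. It assumes Windows PowerShell's PnpDevice module.

import subprocess

# A minimal sketch: list present devices whose PnP status is 'Error' (the yellow-bang
# devices) and print their hardware IDs via the standard DEVPKEY_Device_HardwareIds property.
ps_cmd = r"""
Get-PnpDevice -PresentOnly | Where-Object { $_.Status -eq 'Error' } | ForEach-Object {
    $hwids = (Get-PnpDeviceProperty -InstanceId $_.InstanceId `
              -KeyName 'DEVPKEY_Device_HardwareIds').Data -join '; '
    [pscustomobject]@{ Name = $_.FriendlyName; InstanceId = $_.InstanceId; HardwareIds = $hwids }
} | Format-List
"""

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)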

 
"aita 1.PNG" isn't consistent with that, though. Given that, and your experience with the NI SMBus controller coming and going, it sounds like an intermittent connection. What are the MXI LEDs showing during all this? Are they green the whole time? Do they flash yellow/amber when you move the cable around?

 

- Robert

 

 

Message 2 of 8

Hello Robert,

 

Thank you for the reply.

I have attached a couple of pictures with the connection tree view. I have not been able to get rid of the yellow (!). I have installed the latest AMD Chipset drivers and all the drivers from Supermicro's website.

https://www.supermicro.com/en/products/motherboard/M12SWA-TF

I have also noticed that I am intermittently losing internet connectivity on the 10GbE port. Maybe these issues are connected?

 

Message 3 of 8

There's a disagreement between Device Manager and AIDA. If the two pictures are from similar boots (i.e., not an intermittent everything-showed-up boot for the AIDA screenshot), then it looks like the hierarchy is discovered (good) but Windows later didn't like something about the PXIe-8370 upstream port (bad).

 

One thing that changes Windows' behavior quite a bit in low-level PCI is described in this article. It's pretty easy/quick to try (and undo):

 

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019KfQSAU&l=en-US
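Whatever the linked article ends up changing, it helps to snapshot the enumerated PCI tree before and after so a simple diff shows whether anything actually moved. A minimal sketch, assuming Windows PowerShell's PnpDevice module (the output filename is arbitrary):

import subprocess
from datetime import datetime

# A minimal sketch: dump the currently enumerated PCI devices to a timestamped text
# file. Run it once before and once after the change, then diff the two files.
ps_cmd = (
    "Get-PnpDevice -PresentOnly | "
    "Where-Object { $_.InstanceId -like 'PCI\\*' } | "
    "Sort-Object InstanceId | "
    "Select-Object Status, Class, FriendlyName, InstanceId | "
    "Format-Table -AutoSize | Out-String -Width 300"
)

snapshot = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True,
).stdout

outfile = f"pci_snapshot_{datetime.now():%Y%m%d_%H%M%S}.txt"
with open(outfile, "w") as f:
    f.write(snapshot)
print(f"Wrote {outfile}")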

 

- Robert

Message 4 of 8

Another thing you could try (albeit much messier and with a slimmer chance of fixing things) is installing Windows 11 with the 22H2 update. Microsoft has been updating their PCI(e) drivers, which is generally a good thing, but there are more fixes in the Win11 branch than in the Win10 branch.

 

- Robert

Message 5 of 8

No luck with a Win11 install. I reverted to a fresh install of Win 10. However, here is something interesting that could hopefully lead to a solution.

 

My particular motherboard has several root complexes, as confirmed by the NI root bus detection utility. They are grouped strangely as 32, 32, 32, and 160 addresses(?); I am not sure if these are bus address counts or actual lane groupings. I went ahead and mapped each electrically x16 PCIe slot to a complex by populating it with a device. My findings:

Bus groupings: #1 [0, 31], #2 [32, 63], #3 [64, 95], #4 [96, 255].

PCIe x16 slots: Slot 1 = Bus 33, Slot 2 = Bus 97, Slot 3 = Bus 67, Slot 4 = (does not exist), Slot 5 = Bus 33, Slot 6 = Bus 1, Slot 7 = Bus 65.
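For what it's worth, the same slot-to-bus mapping can be read out in software for whatever cards are currently populated. A minimal sketch, assuming Windows PowerShell's PnpDevice module and the standard DEVPKEY_Device_BusNumber property:

import subprocess

# A minimal sketch: print the PCI bus number of every present PCI device so you can
# see which root-complex bus range ([0,31], [32,63], [64,95], [96,255] on this board)
# a card in a given slot landed in.
ps_cmd = r"""
Get-PnpDevice -PresentOnly | Where-Object { $_.InstanceId -like 'PCI\*' } | ForEach-Object {
    $bus = (Get-PnpDeviceProperty -InstanceId $_.InstanceId `
            -KeyName 'DEVPKEY_Device_BusNumber').Data
    [pscustomobject]@{ Bus = $bus; Name = $_.FriendlyName; InstanceId = $_.InstanceId }
} | Sort-Object Bus | Format-Table -AutoSize
"""

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)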

 

So, theoretically, if I have the 18-slot chassis (which needs 25 addresses) connected via the PCIe-8370/71 in slot 2 and the graphics card (GTX 1080) in any of the other slots, I should be good to go. But it does not work! This is a fresh install of LabVIEW 2019 with the accompanying PXI platform drivers.

 

Is this now a bust? Should I try my luck again with the MXI Compatibility tool and the DIP switch on the 8371?

 

Message 6 of 8

Just one clarification -- the PXIe-8375 requires 25 bus numbers, but the 8370/71 adds about 10 more, so you need 35. It looks like slot 2 is your only usable slot (the MXI Compatibility tool can't change/correct those allocations). Have you tried the MXI Compatibility tool (and DIP switch) while in slot 2?
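To make that arithmetic concrete, a quick sketch that checks each slot's bus range (using the slot-to-bus mapping posted above) against the roughly 35 contiguous bus numbers the 8370/71 plus chassis need:

# Quick check: which slot's root-complex range has room for ~35 contiguous bus
# numbers (25 for the chassis plus ~10 for the 8370/71 bridges)? Slot and range
# figures are copied from the mapping earlier in the thread.
NEEDED = 35
ranges = [(0, 31), (32, 63), (64, 95), (96, 255)]   # root-complex bus ranges
slots = {1: 33, 2: 97, 3: 67, 5: 33, 6: 1, 7: 65}   # slot -> bus number observed

for slot, bus in sorted(slots.items()):
    lo, hi = next(r for r in ranges if r[0] <= bus <= r[1])
    free = hi - bus + 1                             # bus numbers left in that range
    verdict = "enough" if free >= NEEDED else "too few"
    print(f"Slot {slot}: bus {bus} in [{lo}, {hi}] -> {free} numbers available ({verdict})")

# Only slot 2 (range 96-255) has enough room, which matches the conclusion above.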

 

- Robert

Message 7 of 8

Yes, I am currently in slot 2 with the DIP switch activated. I also went ahead and made sure the stale hidden PCI devices (left over from previously swapping slots) were removed from Device Manager. I've also updated to the latest PXI Platform Services and the latest NI Switch Executive. As a last resort, I am going to ask Supermicro if they can send me a new BIOS that might help. Note that I have disabled EFI OPROM in the BIOS.
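For reference, a minimal sketch for finding those leftover hidden PCI entries from the command line. Get-PnpDevice without -PresentOnly also returns not-present devices, whose Status is usually reported as 'Unknown'; review the list before removing anything:

import subprocess

# A minimal sketch: list PCI device entries Windows still remembers but that are not
# currently present (the hidden leftovers from moving cards between slots).
ps_cmd = (
    "Get-PnpDevice | "
    "Where-Object { $_.InstanceId -like 'PCI\\*' -and $_.Status -eq 'Unknown' } | "
    "Select-Object Status, FriendlyName, InstanceId | "
    "Format-Table -AutoSize | Out-String -Width 300"
)

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)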

Message 8 of 8