NI PXI-8360 with NI PCIe-8362 and Dell PowerEdge R530 Incompatibility

Dear readers of this topic,

 

For several days I have been trying to get the following hardware combination working. We have two custom PXI crates with NI PXI-8360 crate controllers. A Dell PowerEdge R530 server hosts an NI PCIe-8362 host card; the two ports on that card lead over two copper cables to the two PXI crates. The crates also have a backplane connection for clock and trigger distribution between them.

 

Ideally, the two crates will be connected in parallel to the two ports of the NI PCIe-8362, with 17 XIA DGF DAQ digitizer modules in total. The crates are custom Wiener 14-slot CompactPCI/PXI crates. In the PC, each digitizer card is visible as a PLX9054 custom-board PCI-to-PCI bridge.

 

The crates, modules, and cards were tested individually and together on two different working consumer PCs and three different operating systems, with the NI PCIe-8362 set to standard mode. On both consumer PCs (each a quarter of the price of the Dell server) I was able to install and test all 17 digitizer modules without problems, and I have a working setup with real data acquisition from the digitizers. Only the inter-crate clock and trigger distribution did not work for me, but I was not paying attention to it, so maybe it was simply not configured properly. The main conclusion is that on those two computers I could see all 17 PCI-to-PCI bridges to the XIA DGF digitizers and communicate with each module.

 

Unfortunately, on the Dell PowerEdge R530 (2x Xeon E5-2660 v4, 64 GB ECC RAM, 8x 4 TB SAS drives) I was not able to get through the boot of Windows 10 x64 Professional (BSOD every time the crate is on) or of CentOS 7 x64 (kernel panic due to the PCIe bridge, as shown in the photo). My approach was as follows:

 

1) I updated the Dell PowerEdge BIOS to the latest version 2.4.2 (UEFI 2.4, CPLD 1.0.3, System Management Engine 3.1.3.38).

2) I started testing the setup with only one crate, one MXI link on port 1, and one digitizer module installed.

3) The kernel panic or BSOD remained.

4) With the MXI links disconnected but the NI PCIe-8362 card installed, the server boots correctly into both operating systems.

5) The server shows the BSOD or kernel panic with the PCIe-8362 + PXI-8360 setup regardless of whether the crate is empty, holds one module, or is fully populated. I tried a different slot on the PCIe bus (the server has two slots); the situation is the same in all cases.

6) With BIOS compatibility mode enabled on the PCIe-8362, both operating systems boot without problems. However, after installing the NI BIOS Compatibility software package v1.5, Windows 10 goes to a BSOD at every boot, perhaps because the package only supports up to Windows 8.1 (is that true?). I removed the package in safe mode, but the situation stayed the same. Would it be beneficial to reinstall with Windows 7 x64 Pro instead?

 

In any case, it is crucial for us to get this working on CentOS 7 x64 as well. On CentOS I installed NI PXI Platform Services v17, but lspci does not show any NI hardware or any of the XIA DGF digitizer PLX9054 PCI-to-PCI bridges. The NI PXI configuration reports 0 controllers and 0 modules as well. Is there a package for Linux similar to the BIOS compatibility software, or how can I get this working? I have not found a reliable source of information about installing this on Linux in this mode.
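
In case it helps to reproduce the check, this is roughly what I run on CentOS (a minimal sketch; the vendor-ID filters are my assumption, based on PLX using vendor ID 10b5 and National Instruments using 1093):

# list PLX devices (the XIA digitizers appear as PLX9054 PCI-to-PCI bridges)
lspci -nn -d 10b5:
# list National Instruments devices
lspci -nn -d 1093:
# show the whole PCI tree, to see whether anything appears behind the PCIe-8362 at all
lspci -tv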

 

Then I started trying random things suggested on the internet, such as disabling address translation in the BIOS, disabling other BIOS features, and using bcdedit to disable Windows features like PCI Express Native Control. I tested each change, so it took a long time, but every attempt failed, and I reverted everything to the default state.

 

7) I reinstalled a clean Windows 10 x64 Pro with no difference.

 

Do we need to buy a different NI PCIe card for the Dell PowerEdge R530?

 

Please help.

Thank you.

 

 

Leo S.
Message 1 of 11

Leo,

 

Thanks for the details on the debugging steps you've taken.  Can you tell me the part numbers of the PXI-8360s?  It should be within a letter of 191373H-01 or 157298B-01L.

 

The Dell servers are pretty thorough at detecting errors and enabling error detection in system components.  Many other PCs mask hardware errors or only look at a subset.  From the image you sent you can see that PERR# and SERR# (Parity Error and System Error) are both enabled.  Many hardware errors get mapped to SERR#, and it's probably what's causing the BSOD.  Is there a BIOS option for disabling SERR#?
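
For what it's worth, from Linux you can read those enables directly (a sketch, assuming the pciutils tools are installed; the bus:device.function below is a placeholder, substitute the address of the 8362/8360 bridge reported by lspci):

# Command register: bit 6 is Parity Error Response, bit 8 is SERR# Enable
setpci -s 02:00.0 COMMAND.w
# lspci -vv also prints SERR+/SERR- and ParErr+/ParErr- in the Control line
lspci -vv -s 02:00.0 | grep -i 'Control:'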

 

The Linux output doesn't show the actual bridge with the error.  It turns out there are 2 bridges on the PXI-8360, and the debug output is from the wrong one (PCI function 0).  The second bridge (PCI function 2) connects to the chassis, and that's probably where the error is being routed or generated.
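
One way to see both functions from Linux (a sketch; the bus/device numbers are placeholders, taken from whatever lspci reports for the 8360):

lspci -s 03:00        # lists every function on that device, i.e. function 0 and function 2
lspci -vv -s 03:00.2  # the chassis-side bridge, where the error is likely routed or generated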

 

There's no Linux equivalent of the NI BIOS Compatibility software.  Setting the "BIOS Compat Mode" switch in Linux will hide everything behind the PCIe-8362.

 

- Robert

Message 2 of 11

Dear Robert,

 

Thank you for the advice.

I looked inside the crates; the part numbers of both NI PXI-8360 cards are 191373J-01L.

 

The Linux kernel panic screenshot was taken with one crate and one module on the MXI bus. Today I tried different BIOS settings with no success. I can disable the pre-boot driver for the slot (no change). I can disable the slot entirely (then the boot is OK, but the card and link are offline, which is useless). I tried connecting both fully populated crates; the link LED is solid green on both PXI-8360 cards as always, and the kernel panic screen is exactly the same as with one crate and one module, or with an empty crate. I also tried disabling the Lifecycle Controller of the Dell PowerEdge R530 with no change. I did not find any PERR or SERR setting, or anything similar-sounding; I tried a few options that might be it, but without success. Admittedly I am not an expert on such non-consumer BIOSes, so maybe it is hidden under a completely different name. 🙂

 

So, if I understand correctly: when the NI PCIe-8362 BIOS compatibility mode is on, Linux is not a usable platform for DAQ from the crates, yes?

 

And yes, the NI PCIe-8362 contains two PCI-to-PCI bridges; on a different PC they are shown in Device Manager as drivers for the two MXI link ports on the back of the PC.

 

Thank you for your help and any further help will be appreciated.

Leo S.
Message 3 of 11

Leo,

 

I've looked at a similar system (Dell rackmount with BIOS 2.4.2).  I was able to boot with a PCIe-8362 + PXI-8360 connected.  Inside the BIOS I found a setting for whether to boot BIOS or UEFI.  The Dell was set to "BIOS" when everything worked.  I changed to "UEFI" (which takes a different Windows installation) and saw bluescreens when the PXI-8360 was connected.  I haven't figured out why yet.

 

Do you have a requirement to use the UEFI boot?  If not, can you try setting that to BIOS boot instead?  It will probably mean reinstalling the OS (I don't know if there's a way to switch between the two -- not my area of knowledge).

 


So, if I understand correctly: when the NI PCIe-8362 BIOS compatibility mode is on, Linux is not a usable platform for DAQ from the crates, yes?

Yes.

 

- Robert

Message 4 of 11

Dear Robert and all readers,

 

I have tried the legacy BIOS boot setting on the Dell PowerEdge R530. It did not help at all. I reinstalled the OS, but that was not necessary: the problem appears even when booting the OS installation disk (the kernel panic screen is exactly the same, and also the same as with the UEFI boot). With the crates switched off everything runs fine, and I tested Linux for stability, etc. I tried various BIOS settings, but it does not look promising (the test was one MXI link to one empty crate with an NI PXI-8360). To prove that our hardware is OK, I ran the Dell diagnostics, with a green result. Then I connected each tested setup to a second PC. Using the CentOS 7 recovery environment booted from CD on that PC, I am able to boot and to see with lspci all three configurations, tested one by one, which I had intended to try on the Dell:

1st configuration: NI PCIe-8362 with one MXI link to one empty crate using an NI PXI-8360

2nd configuration: NI PCIe-8362 with one MXI link to one crate using an NI PXI-8360 with 4 DAQ modules {our slave crate}

3rd configuration: NI PCIe-8362 with two MXI links to two crates using two NI PXI-8360s with 17 DAQ modules in total {our intended setup with two crates with distributed clocks and triggers}

One of the tested BIOS settings is shown in the photos in the attachment.

 

This is starting to become quite a frustrating issue.

 

Thank you for your suggestions and time. 

Yours Sincerely,

Leo Schlattauer

 

Leo S.
Message 5 of 11

Leo,

 

This issue stems from a change in the PCIe spec at version 1.1.  An additional concept, Role-Based Error Reporting, was introduced that changed the behavior of some types of errors.  The relevant one here is for Unsupported Requests from non-posted requests (like config reads).  In 1.0a, if enabled, these would return an ERR_NONFATAL message.  In 1.1 and later these would be treated as Advisory Non-Fatal and wouldn't return ERR_NONFATAL.

 

The PCIe-PCI bridge on the PXI-8360 is a multifunction 1.0a device.  Functions 0 and 2 are used.  When the OS is scanning for devices it will read all 8 function numbers.  Since functions 1 and 3-7 aren't implemented they're classified as Unsupported Requests and will return ERR_NONFATAL when the Unsupported Request Reporting Enable bit is set (which the Dell BIOS sets).  ERR_NONFATAL is triggering the BSOD.

 

The proper BIOS behavior would be to check the Role-Based Error Reporting bit in the Device Capabilities register before setting Unsupported Request Reporting Enable in the Device Control register.  Because it doesn't, the bridge is going to report reads to unimplemented functions as Unsupported Requests and will send ERR_NONFATAL in response.

 

There are bits in config space that block the message, but there's not a good way to set/clear bits after the BIOS configures them and before the OS scans the PCI bus.  If you know of a way then I can point you to the right bits.
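
Just to illustrate which bits these are, once an OS is up they can be inspected and changed with setpci (a sketch with a placeholder device address; as noted above, this does not help with the boot-time scan, since the bits would need to be changed between the BIOS configuration and the OS enumeration):

# DevCap shows RBE+/RBE- (Role-Based Error Reporting), DevCtl shows UnsuppReq+/-
lspci -vv -s 02:00.0 | grep -E 'DevCap:|DevCtl:'
# Device Control sits at offset 0x08 of the PCI Express capability;
# bit 3 is Unsupported Request Reporting Enable. Clear just that bit:
setpci -s 02:00.0 CAP_EXP+0x08.w=0000:0008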

 

- Robert

Message 6 of 11

Dear Robert, 

 

I am currently in touch with Dell technical support. If you can provide the details you mentioned above, it may help to solve the case. Understandably, Dell's position is that the cards are not Dell certified and that this is not their fault, but I am still trying to use their help to solve the problem if possible.

Leo S.
Message 7 of 11

Leo,

 

I sent you a private message with more technical details.

 

- Robert

Message 8 of 11

Dear Robert and all,

 

Based on a workaround provided by Dell customer service, I tried kernel boot parameters such as pci=nomsi (no success) and pci=noaer (also did not help). Searching around this topic, I found the option pci=nommconf, which did the trick! I added GRUB_CMDLINE_LINUX_DEFAULT="pci=nommconf" to make it permanent, and now I am able to boot the operating system every time. The lspci command shows all 17 inserted DAQ modules in the two crates as PLX PCI-to-PCI bridges. Now I need to test the system. I am not sure whether the extended address space is used in this setup, so further tests will show whether it works. Unfortunately, I did not find whether a similar parameter exists for the Windows bootloader as well...
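
For anyone who needs to reproduce this on CentOS 7, the steps were roughly as follows (a sketch; the grub.cfg path below is for a BIOS-boot install, on UEFI it lives under /boot/efi/EFI/centos/):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="pci=nommconf"

# regenerate the GRUB configuration, then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg

# after reboot, confirm the parameter took effect and the bridges are visible
cat /proc/cmdline
lspci | grep -c -i plx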

Leo S.
Message 9 of 11

Dear all,

 

Two years later, we are still running our spectrometer with an ordinary PC.

 

This Dell PowerEdge R530 recently received a new BIOS update; it did not help. UEFI is required because of the GPT disk array (13 TB in RAID 10), and dual boot is also required (uncommon, yes, but necessary in our case). So I have a working server dual-booting Windows 10 Pro x64 and CentOS 8 with all updates. Is there any way to get through the UEFI boot procedure without the bluescreen caused by the ERR_NONFATAL from the MXI link? Nothing has worked so far. I am ready to buy different MXI link cards, but is there any software or hardware solution (free or paid) that would let me connect the two crates full of XIA PIXIE-16 DGF 100 MS/s, 14-bit rev. F digitizers to Windows, as with the ordinary PC with the old PCIe revision that we are using now? (It is terribly slow.)

 

Thank you 

 

Leo S.
Message 10 of 11