For my application I would like to integrate a SOM into a PCIe device. I believe that a PCIe interface core is available from Xilinx, but is this (or similar) usable on the SOM? Does a reference design for PCIe exist? I'm exploring possibilities at the moment, so I don't yet have the SOM dev kit to try things.
This is something my group thought about when the SOM became available about a year ago. There are numerous exciting possibilities if this could be implemented via the SOM. The main issue is the NI implementation.
We've had to implement work-arounds in several projects based on the NI implementation of the SOM/LabVIEW capabilities available in the Zynq. I think the limitation is that you don't have DMA write capability in the NI/SOM tools, and PCIe would require this as a baseline.
Dear Mr. Jack
The Zynq has a dual-core ARM processor with its own DMA controllers.
So is it possible to use those somehow to solve the PCIe problem?
Yes. Here on the SOM forum, I was given the link to the DMA toolkit that allows reads/writes to the shared memory pool of the SOM.
In my particular case, we need a common memory pool between the FPGA and the ARM (real-time) side. The only option the NI SOM tools offer is a FIFO, which is NOT a shared memory mechanism. A FIFO is a data exchange mechanism, which is very different!
I am stating this to add insight into the 'perception' of what mechanisms are provided by the SOM in NI's view. It's very simplistic to offer a FIFO in lieu of shared memory access.
I did also see that FlexRIO [FPGA] has DMA tools; they are NOT available for the SOM. I'm projecting here, but I wonder why these more extensive FPGA tools are not automatically available for the SOM. My assumption is a price/feature trade-off decision within NI.
PCIe bus access is a very compelling feature for the SOM, and for NI. They could, conceptually, provide 'drivers' for their PCIe DAQ cards, which would open up a whole world of OEM real-time backbone systems.
Further, it would be nice to see NI offer the cRIO bus interface toolkit for the SOM so that cRIO modules could be connected to it. This would open the OEM market to NI even more, as it would give OEMs access to the entire cRIO module eco-system.
Just thinking here...!
The sbRIO-9651 uses the Zynq-7020. This specific Zynq part has no high-speed transceivers, so it cannot support PCIe signaling directly on the sbRIO-9651. The PCIe core from Xilinx only works with Zynq parts that contain high-speed transceivers. To make a PCIe device with the sbRIO-9651, you would need a PCIe-capable device (such as an FPGA with high-speed transceivers) on your carrier board that can interface to the sbRIO.
I hope this helps,
What specific DMA tools from FlexRIO are you referring to? I'm not an expert in FlexRIO by any means, but I'm not aware of any DMA features that FlexRIO has that the SOM does not (aside from P2P FIFOs, which wouldn't be useful here). You might be thinking of onboard DRAM. Some of our FPGA products have one or more DRAM banks connected directly to the FPGA instead of the host. This DRAM is not connected to the system bus, so the host does not have access to it. That is a hardware feature, not a software feature: you need a DRAM chip physically connected to the FPGA. It's also not just FlexRIO; some of our cRIO and R Series products have onboard DRAM as well.
On the other hand, Host Memory Buffer on NI Labs (which I'm guessing is the DMA toolkit you're referencing) is a Labs feature where we created VIs that allow the FPGA application to access the system bus. We also added entry points to our RIO driver C API to allocate memory and send the physical address to the FPGA application, so that the FPGA and host get random access to the same block of memory. Having a shared pool of memory is useful not just for sbRIO customers but for the entire RIO platform, so making an API that makes sense for everybody takes more effort than something just for SOM customers. There are also many trade-offs between different implementations, such as throughput, latency, usability, and FPGA resource use. We are continuing to invest research into this area, but it competes with many other long-term projects on our roadmap. So, hopefully we will ship something soon that satisfies everyone, but until then, all we have available is the implementation in Labs.
The NI SOM does have DMA write (and read) capabilities within LabVIEW FPGA. The limitation I've seen people run into is the higher-level data model attached to NI's DMA implementation: on the NI SOM, each DMA channel is a FIFO that runs in a single direction, either RT to FPGA or FPGA to RT, and must be processed/accessed in order. Some SOM users have wanted to use DMA as a random-access shared memory space between RT and FPGA, or as a large, high-performance data buffer read and written entirely from the FPGA. The Host Memory Buffer on NI Labs that Neil mentioned is the closest thing to a flexible, random-access shared memory implementation on the NI SOM, but it is a 'labs' implementation that is being tested and pioneers a potential future supported implementation.
As Nathan mentioned, the PCIe limitation hinges entirely on an electrical limitation of the Xilinx Zynq-7020 SoC used in the SOM. The Zynq-7020 is not capable of PCIe because it does not integrate Multi-Gigabit Transceivers (MGTs). Other Zynq SKUs (not available through NI, such as the Zynq-7012/15/30/35/45 & 7100) do integrate transceivers and therefore could support PCIe.