Hardware Developers Community - NI sbRIO & SOM


sbRIO-9651 FPGA DDR Read?

I have a FPGA application for Bit Error Ratio Testing of multiple high speed Gigabit video transmitters/receivers.

The transmitted data (one 4K video frame) is algorithmically generated at 120Hz and higher within the FPGA.

This works well, as the FPGA has sufficient resources to support the multiple transmitters/receivers in parallel.

I would like the option to replace the algorithmically generated test data/patterns with user supplied images.

The idea is LabVIEW Real-Time could implement a timed slide show of test images read from a USB stick.

LabVIEW Real-Time would write the video frame into one of two DDR memory regions and then inform the FPGA of the update.

The FPGA would read the last updated DDR memory region whenever it needs to output a video frame.

My issue is this does not seem to be feasible with LabVIEW.

1. There are insufficient block RAMs in the FPGA to hold the test data/pattern from the last FIFO read.

2. It seems LabVIEW only supports FPGA DDR access using FIFO reads/writes.

    I see no function for FPGA read from DDR memory.

3. For FIFOs, LabVIEW Real-Time would have to do continuous DDR FIFO writes.

    The Zynq ARM processor running LabVIEW Real-Time cannot cope with such high data rates.

Any ideas or suggestions?

The Xilinx-suggested solution is to learn more about FPGAs and use their tools (their AXI Video Direct Memory Access IP).

It looks like this IP is available in the LabVIEW FPGA Xilinx function palette.

But I have failed to find any example or guidance on how to use it.

Message 1 of 8

Kevin,

I would agree that the current NI implementation of the Zynq does not allow DRAM access via the Xilinx AXI bus directly from the FPGA. It appears that the NI Zynq implementation reserves the AXI bus DRAM access for the RT side, and the only way to access DRAM from the FPGA is via FIFOs.

I too am disappointed, as we also need to do FPGA processing on large datasets, which would be the poster child for the FPGA-AXI-DRAM access Xilinx incorporated in the Zynq.

I understand the rationale for this: you'd have to manage the DRAM memory yourself between the FPGA and RT, inclusive of the DRAM needs for USB, CAN, and Ethernet buffers, etc.

Our initial testing has found that using FIFOs with the RT side as a 'memory manager' is somewhat burdensome on the RT, but it is the course we are taking... hoping there is enough CPU left on the RT side to perform the required RT tasks.

In fact the Xilinx example for the same Zynq chip does have video processing as an example.

I think this will open up in the future. NI needs to consider 'opening up' the SOM more to the native Xilinx functions, which are very mature, to provide for applications such as ours. I am not sure if there are forces internal to NI limiting the SOM so as not to 'steal' from other NI offerings. They should consider that the 'OEM' market for the SOM is completely separate from the 'retail' product offerings.

From the conversations I've had with the NI guys, they want to hear about our needs and they want to push the envelope.

Regards

Jack Hamilton

Message 2 of 8

Jack and Kevin,

My understanding (from the perspective of an engineer in R&D; I don't speak for NI) is that the problems are more technical (and a matter of prioritization) than business-related, and we are investing in overcoming some of these barriers. It's very challenging to completely unleash the power of something as flexible as the Zynq while providing a consistent experience across all of our products.

I've been doing a lot of research on your problem, because it really bothers me. You are correct that at a low level, DMA shouldn't require processor cycles to read or write memory. I've been researching what the processor is doing during DMA FIFO transfers, and most of it is flow control and memory copying.

Applications that do well with this sort of look-before-you-leap approach are things like monitoring, datalogging, and streaming. Applications that really suffer from this approach are things like control loops (too much jitter, not high enough loop rates) and block storage (it sounds like both of your applications fall into the latter category).

In addition to external clients, there are internal RIO clients that struggle with this because they need high-performance control loops or block storage. They manage to resolve their problems by connecting external DRAM to the FPGA and writing a CLIP to interface with it. SOM customers could potentially do this as well: you could connect a DRAM chip and write a memory interface to it in your CLIP. This is super effective for control applications that push the entire control loop onto the FPGA, but still problematic for control loops that need the processor, and for block storage where the data originates from the processor.

We would love to do better at block storage and control loop applications, though, so some research was done on something called the Host Memory Buffer. The Host Memory Buffer was designed primarily to solve the control loop application, increasing achievable loop rates and reducing jitter. We released a prototype on NI Labs:

https://decibel.ni.com/content/docs/DOC-41463

This is a pretty powerful API: the FPGA can read and write to any physical address, some of which may cause your system to hang and some of which might brick your device. But it enables much higher performance, so use it with care.

It was posted relatively recently and as always with new features, if you like it, then communicate your feedback to marketing so we can continue to improve it.

Message 3 of 8

Hi Kevin,

I'm in a similar situation to yours. I'm porting an application from LabVIEW on a PC to the myRIO platform, and my intention is to perform as much processing as possible within the FPGA while minimizing the usage of the ARM processors.

I have to work with a considerable number of samples, so in my case it is absolutely necessary to use the DDR memory banks to perform these calculations. If I don't find a proper way, I guess I will end up using the RT side to perform the calculations, or as a memory manager as Jack describes, but first I need to be completely sure that it is not possible to access the memory directly from the FPGA...

Have you made any advance on this topic?

Thank you in advance,

Gonzalo.

Message 4 of 8

Gonzalo,

Thanks for your post. Let's please keep the pressure on this topic. We're jumping through a lot of hoops FIFO'ing data between the RT and FPGA as a workaround.

Regards

Jack Hamilton

Message 5 of 8

I once checked the Zynq datasheet; it says the DDR memory can be configured for access by the ARM alone, by the FPGA alone, or by both, but it seems this needs to be configured at the hardware level, if my memory serves me right.

Message 6 of 8

The FPGA is capable of writing to DRAM, and we already configured the hardware to allow this; DMA FIFOs and the Host Memory Buffer interface rely on it. To write to memory you need FPGA logic capable of sending data down to the DRAM, as well as software support to keep the OS out of your way. If you just write to random addresses, I believe you'll get a kernel panic. You'd also have to handle memory incoherency between the processor's cache and the DRAM.

DMA FIFOs handle all of this for you by providing a simple interface on the FPGA side and the RIO driver on the processor side. It is true that there are applications (random-access buffers, low-jitter control loops) that don't fit this paradigm very well. The Host Memory Buffer interface was the result of a research project specifically targeting low-jitter control loops that require the RT. It has both the driver support and the FPGA logic necessary to allocate a buffer in DRAM and give the FPGA unrestricted access to read and write it. The HMB interface was released on NI Labs. Given that it was research and hasn't (yet) been productized, it has some significant limitations (it can only be used through the C API).

We are continually researching better ways to present good data transfer paradigms and I expect that in time this research will result in unlocking more of the capabilities of the zynq chip and SOM. Customer feedback is helping to increase the level of investment in this type of research.

TL;DR: The FPGA can access DDR memory, but presenting a clean, safe, and usable transfer paradigm is nontrivial. Currently all we have is DMA FIFOs and an NI Labs release of HMB.

Message 7 of 8

nturley,

Thanks for the response. I'd like to point out that a FIFO is a data "transfer" mechanism, not a memory-pool access system.

There is very little one can substitute with a FIFO when you need to store and access a data table from within the FPGA.

I agree, users would have to deal with memory addressing and the potential of overwriting memory. But this is part of the challenge of working at this low level.

I can conceptualize a routine that would allocate a pool of memory, with checks and protection against overwriting *outside* that memory block. This would prevent a catastrophic memory overwrite, but would allow the user to read and write memory within their block allocation (and mess that up, if they are sloppy).

Regards

Jack Hamilton

Message 8 of 8