
DMA FIFO Target to Host FPGA, C API, timeout 50400 error


Hi,

 

Issue:
    When I try to capture large data sets from my FPGA, I get a 50400 (timeout) error. My current design works for a small number of elements (<~500k, which is about 4 ms of data at 120 MS/s), but I need to capture up to 10 seconds of data. I have used this kind of capture configuration at 2 MS/s before and it works great.

 

My setup:
    Currently I have a 32-bit counter, running at 120 MS/s, as my data source, which feeds my DMA FIFO. The user sets how many elements to capture (numScansCap) and then triggers the acquisition with "SoftTrig". Once SoftTrig goes high, I one-shot it and latch that pulse high until the correct number of elements have been captured. The latched signal is ANDed with the Data Valid signal and then fed into Input Valid on the DMA FIFO (see screenshot-1).

On my host I have written code that writes numScansCap, reads it back, stops the DMA FIFO, and configures the FIFO with that number (see screenshot-2). Then, when the SoftTrig function gets called, it will: start the FIFO, set SoftTrig high, call AcquireFifoReadElements for numScansCap elements, and then stop the FIFO (see screenshot-3). A rough sketch of that sequence is below.
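
For concreteness, here is roughly what that sequence looks like against the FPGA Interface C API. This is a minimal sketch, not my exact code: the resource IDs are hypothetical placeholders for the constants in the NiFpga_<YourVI>.h header generated for the bitfile, and the 10-second timeout is an assumption.

```c
#include "NiFpga.h"

/* Hypothetical resource IDs; the real values come from the
 * NiFpga_<YourVI>.h header generated for your bitfile. */
static const uint32_t CTRL_NUMSCANSCAP = 0;
static const uint32_t CTRL_SOFTTRIG    = 1;
static const uint32_t FIFO_DATA        = 0;

static NiFpga_Status capture(NiFpga_Session session, uint32_t numScansCap)
{
    NiFpga_Status status = NiFpga_Status_Success;
    uint32_t readBack = 0;

    /* Write numScansCap and read it back to confirm. */
    NiFpga_MergeStatus(&status,
        NiFpga_WriteU32(session, CTRL_NUMSCANSCAP, numScansCap));
    NiFpga_MergeStatus(&status,
        NiFpga_ReadU32(session, CTRL_NUMSCANSCAP, &readBack));

    /* Stop the FIFO and size the host buffer from that number. */
    NiFpga_MergeStatus(&status, NiFpga_StopFifo(session, FIFO_DATA));
    NiFpga_MergeStatus(&status,
        NiFpga_ConfigureFifo(session, FIFO_DATA, (size_t)readBack * 5));

    /* SoftTrig path: start the FIFO, raise the trigger, acquire the
     * requested elements straight out of the host buffer, release
     * them, and stop the FIFO. */
    NiFpga_MergeStatus(&status, NiFpga_StartFifo(session, FIFO_DATA));
    NiFpga_MergeStatus(&status,
        NiFpga_WriteBool(session, CTRL_SOFTTRIG, NiFpga_True));

    uint32_t* elements = NULL;
    size_t acquired = 0, remaining = 0;
    NiFpga_MergeStatus(&status,
        NiFpga_AcquireFifoReadElementsU32(session, FIFO_DATA, &elements,
            readBack, 10000 /* ms, assumed */, &acquired, &remaining));
    /* ... copy or process `elements` here ... */
    NiFpga_MergeStatus(&status,
        NiFpga_ReleaseFifoElements(session, FIFO_DATA, acquired));

    NiFpga_MergeStatus(&status, NiFpga_StopFifo(session, FIFO_DATA));
    return status;
}
```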

 

What I’ve tried:
    I had the FPGA FIFO size at the default 1,024 elements, but after consistently getting timeout errors I suspected the FPGA FIFO might be overflowing. I have since set it to 262,144 elements; however, I'm still getting timeout errors.

    In C, I've tried using one ReadFIFO call to find how many elements were available and a second call to capture those elements. The upside was that I never got a timeout error; the downside was that I missed elements. For example, I would set it to capture 6.5M elements but would only receive 5.5M. (That pattern is sketched after this list.)

    I set up "number of elements to write" to latch high when it equals zero, as an overflow indicator, but I'm unsure whether that condition really means an overflow. I see the flag go high even after small successful captures.

    I've read the NI LabVIEW High-Performance FPGA Developer's Guide, but I have no other ideas on what I am missing.
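
For reference, the two-ReadFIFO pattern mentioned above looks roughly like this. A minimal sketch: FIFO_DATA is the same hypothetical ID as in the sequence sketch, and the 1000 ms drain timeout is an assumption.

```c
/* Poll-then-drain: a zero-element read with a zero timeout only reports
 * how many elements are waiting; the second call reads exactly that many
 * into `dest`, advancing `*written`. */
static NiFpga_Status readAvailable(NiFpga_Session session,
                                   uint32_t* dest, size_t* written)
{
    uint32_t dummy;
    size_t available = 0;
    NiFpga_Status status = NiFpga_ReadFifoU32(session, FIFO_DATA,
        &dummy, 0, 0, &available);

    if (NiFpga_IsNotError(status) && available > 0)
    {
        size_t remaining = 0;
        NiFpga_MergeStatus(&status,
            NiFpga_ReadFifoU32(session, FIFO_DATA, dest + *written,
                available, 1000 /* ms, assumed */, &remaining));
        *written += available;
    }
    return status;
}
```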

 

Thanks for the help,

Alan

 

Hardware:
FPGA/ADC: PXIe-7975
Chassis: PXIe-1062Q
Controller: PXIe-8381

I believe my chassis is the limiting factor in data throughput (~1 GB/s), but I should still be able to transfer 480 MB/s (120 MS/s × 4 bytes per element) with this chassis.


I notice you are configuring the host buffer in C for 4,096 kS. That sounds too small. Try increasing it to something much bigger, like 120 MS, and grabbing 30M samples at a time.
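
Roughly like this sketch: it's a fragment that assumes an already-open `session`, reuses the placeholder FIFO_DATA ID from your sketch, and the 10 s timeout is an assumption.

```c
#include <stdlib.h>

enum { CHUNK = 30000000, CHUNKS = 4 };        /* 4 x 30M = 120M samples */

/* Deep host buffer, then fixed 30M-element chunk reads. */
uint32_t* data = malloc((size_t)CHUNK * CHUNKS * sizeof *data); /* 480 MB */
NiFpga_Status status = NiFpga_ConfigureFifo(session, FIFO_DATA,
    (size_t)CHUNK * CHUNKS);                  /* 120M-element host buffer */

size_t remaining = 0;
for (int i = 0; NiFpga_IsNotError(status) && i < CHUNKS; ++i)
{
    NiFpga_MergeStatus(&status,
        NiFpga_ReadFifoU32(session, FIFO_DATA, data + (size_t)i * CHUNK,
            CHUNK, 10000 /* ms, assumed */, &remaining));
}
```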

 

Another way to check for overflow on the FPGA is to watch "ready for input" and "input valid": if "ready for input" was false on the previous iteration and "input valid" is true, an element arrived while the FIFO had no room, which indicates an overflow.


Hi nanocyte,

 

I was trying to capture a shot of 500k samples, and I read that "NI recommends that you increase this buffer to a size multiple of 4,096 elements if you run into overflow or underflow errors", so I hard-coded 4.096M, but I still got the same timeout error. Normally I configure the buffer to numScansCap*5.

 

I didn't show it here, but I have looked at "ready for input" and it does go low during a long shot. I inverted and latched "ready for input", and for a shot I would loop: reset my latch, use ReadFIFO to read the number of elements available, use ReadFIFO to read those elements out, and then read my latched signal. Sure enough, after a few transfers I would get an overflow. I had set my FPGA FIFO to a depth of 131,072 and saw ~40-60k elements available to read on each iteration of the loop, so I was not sure why I was getting an overflow. This is a problem because I can't lose any data during the shot. (The per-iteration check is sketched below.)
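
The per-iteration check was roughly this fragment. The latch control/indicator IDs are hypothetical stand-ins for my actual front-panel items, readAvailable() is the poll-then-drain helper from my earlier sketch, and `status`, `dest`, and `written` are assumed declared in the surrounding loop.

```c
/* Hypothetical front-panel items for the latched overflow check. */
static const uint32_t CTRL_RESETLATCH   = 2;
static const uint32_t IND_OVERFLOWLATCH = 3;

/* One loop iteration: clear the latch, drain what is available, then
 * check whether "ready for input" went low during the transfer. */
NiFpga_MergeStatus(&status,
    NiFpga_WriteBool(session, CTRL_RESETLATCH, NiFpga_True));
NiFpga_MergeStatus(&status, readAvailable(session, dest, &written));

NiFpga_Bool overflowed = NiFpga_False;
NiFpga_MergeStatus(&status,
    NiFpga_ReadBool(session, IND_OVERFLOWLATCH, &overflowed));
if (overflowed)
    fprintf(stderr, "FPGA FIFO overflowed during this transfer\n");
```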

Solution (accepted by topic author dfinkenthal)

I think your best way forward is to find the limits of what's possible. Do the experiment where you set the host buffer to 120M elements and numSamples to 120M, and make sure at least that works losslessly without tripping the overflow flag I described. If that doesn't work, shrink numSamples to 131k; that should definitely work, and if it doesn't, it will be a good starting point for an AE to debug. If the 131k case works but the 120M case doesn't, try putting 120M samples into the buffer more slowly and do zero-byte reads to poll how fast the host buffer fills (sketched below). That will give you a good idea of what your limits are and, if you need to, you can look into what's causing them.
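
The zero-byte-read polling could look something like this sketch. It assumes an open `session` and the placeholder FIFO_DATA ID from the earlier sketches; the 10 ms pacing via POSIX nanosleep and the loop count are assumptions.

```c
#include <stdio.h>
#include <time.h>

/* Poll the host buffer backlog every ~10 ms while the FPGA streams in.
 * A zero-element, zero-timeout read reports the element count without
 * removing anything from the buffer. */
uint32_t dummy;
size_t backlog = 0;
for (int i = 0; i < 1000; ++i)
{
    NiFpga_ReadFifoU32(session, FIFO_DATA, &dummy, 0, 0, &backlog);
    printf("poll %4d: %zu elements buffered\n", i, backlog);

    struct timespec pause = { 0, 10 * 1000 * 1000 };   /* 10 ms */
    nanosleep(&pause, NULL);
}
```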
