DMA FIFO Target to Host, C API, -52000 error on large transfers

Solved!

Hi

 

Issue:
On my host system I have written code that stops the DMA FIFO, configures the FIFO with a number of samples (see Screenshot1), and then attempts to start the FIFO. When I try to capture large sets of data (over 1 GB) from my FPGA I get a -52000 error. I am assuming this is because there isn't enough contiguous memory to allocate. I've followed Why Does My DMA FIFO on My Host Fail to Allocate Memory with Error -52000 or -50352 on a 64-bit Oper... and tried various settings for the size of non-paged memory, but it had no effect. I've also tried looping the transfer by acquiring smaller chunks of data (~500 MB per iteration) and using fwrite to save them to a file, but this was too slow.
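Roughly, the call sequence on the host looks like the sketch below (just a sketch: the FIFO constant comes from my generated NiFpga header, and I'm using NiFpga_ConfigureFifo2 here as a stand-in for what Screenshot1 shows):

```c
#include <stdint.h>
#include <stddef.h>
#include "NiFpga.h"

/* fifo is the target-to-host FIFO constant from the generated NiFpga_*.h header. */
NiFpga_Status configureAndStart(NiFpga_Session session, uint32_t fifo,
                                size_t requestedDepthElements)
{
    NiFpga_Status status = NiFpga_Status_Success;
    size_t actualDepth = 0;

    /* Stop the FIFO before changing the host buffer depth. */
    NiFpga_MergeStatus(&status, NiFpga_StopFifo(session, fifo));

    /* Request a host-side depth large enough for the whole capture (in elements). */
    NiFpga_MergeStatus(&status, NiFpga_ConfigureFifo2(session, fifo,
                                                      requestedDepthElements,
                                                      &actualDepth));

    /* The -52000 error comes back from this call once the depth gets large. */
    NiFpga_MergeStatus(&status, NiFpga_StartFifo(session, fifo));

    return status;
}
```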

 

Any suggestions on what to try if my host cannot allocate enough memory for the DMA FIFO transfer?

 

Thanks for the help,

Alan

 

My System:

Windows 7 x64

16GB RAM

2TB HDD

Qt5 with MinGWx32
FPGA/ADC: PXIe-7975 w NI 5753
Controller: PXIe-8381

Message 1 of 7
Solution
Accepted by topic author dfinkenthal

You mention you are using MinGWx32. Are you compiling your program as 32-bit?

 

FIFOs for that device require contiguous virtual memory, but not contiguous physical memory. Available contiguous virtual memory in a 32-bit process is generally low because it is subject to a lot of fragmentation. If you compile and run your program as a 64-bit program, you should be able to allocate larger FIFOs (likely up to just under 4 GB).

 

Another thing to note: for best performance I would recommend reading from the FIFO in sizes smaller than its configured depth. Reading the entire FIFO all at once forces you to wait until the FIFO is full before you can start reading data. It also means that data has stopped transferring while you are accessing the FIFO, because it's full. If you read in smaller segments (say 128 MB each), then once you are done with a segment it is freed up to be overwritten by the FPGA. With a little benchmarking you can find a size that lets you pull data out of the FIFO without the FIFO ever stalling the transfer.
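As a rough, untested illustration of what I mean (the 128 MB segment size, FIFO handle, and output file are placeholders, not from your code), a segmented read loop could look something like this:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "NiFpga.h"

/* Stream totalElements I16 samples out of the FIFO in fixed-size segments. */
static NiFpga_Status streamToFile(NiFpga_Session session, uint32_t fifo,
                                  size_t totalElements, FILE* out)
{
    NiFpga_Status status = NiFpga_Status_Success;
    const size_t segment = 64u * 1024u * 1024u;   /* 64 M I16 elements = 128 MB */
    size_t remainingInFifo = 0;
    int16_t* buffer = (int16_t*)malloc(segment * sizeof(int16_t));
    if (buffer == NULL)
        return NiFpga_Status_MemoryFull;

    for (size_t done = 0; done < totalElements && NiFpga_IsNotError(status); )
    {
        size_t toRead = totalElements - done;
        if (toRead > segment)
            toRead = segment;

        /* Blocks until toRead elements are available; once copied out, that
         * part of the host buffer is free for the FPGA to overwrite. */
        NiFpga_MergeStatus(&status, NiFpga_ReadFifoI16(session, fifo, buffer,
                                                       toRead,
                                                       NiFpga_InfiniteTimeout,
                                                       &remainingInFifo));
        if (NiFpga_IsNotError(status))
        {
            fwrite(buffer, sizeof(int16_t), toRead, out);
            done += toRead;
        }
    }

    free(buffer);
    return status;
}
```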

 

 

Message 2 of 7

I do not use the C API much, but I know the LabVIEW FPGA API has a FIFO.Acquire Read Region (Invoke Method). Does the C API have this as well? It may help with memory management.


Certified LabVIEW Architect, Certified Professional Instructor
ALE Consultants

Introduction to LabVIEW FPGA for RF, Radar, and Electronic Warfare Applications
Message 3 of 7

The C API has a similar concept in AcquireFifoElements. Instead of being region-based, though, it is element-based (so no out-of-order release). It can be used to avoid an intermediate copy: you acquire elements and then fwrite those elements directly.
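As a rough, untested sketch (the element count and file handle are placeholders), logging one chunk that way could look like this:

```c
#include <stdint.h>
#include <stdio.h>
#include "NiFpga.h"

/* Acquire up to elementsRequested I16 samples in place, write them to disk,
 * then release them so the FPGA can reuse that part of the host buffer. */
static NiFpga_Status logChunk(NiFpga_Session session, uint32_t fifo,
                              size_t elementsRequested, FILE* out)
{
    NiFpga_Status status = NiFpga_Status_Success;
    int16_t* elements = NULL;
    size_t elementsAcquired = 0;
    size_t elementsRemaining = 0;

    /* Returns a pointer directly into the host-side DMA buffer; no copy yet. */
    NiFpga_MergeStatus(&status, NiFpga_AcquireFifoReadElementsI16(
        session, fifo, &elements, elementsRequested,
        NiFpga_InfiniteTimeout, &elementsAcquired, &elementsRemaining));

    if (NiFpga_IsNotError(status))
    {
        /* The only copy is the one fwrite itself makes. */
        fwrite(elements, sizeof(int16_t), elementsAcquired, out);

        /* Elements are released in the order they were acquired. */
        NiFpga_MergeStatus(&status, NiFpga_ReleaseFifoElements(
            session, fifo, elementsAcquired));
    }

    return status;
}
```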

 

If ashkinez's application is logging to disk though, I'm not sure they would see much improvement by using AcquireFifoElements.  Logging to disk is slow and they mention they are using an HDD.  I would recommend getting an SSD and logging to that.

Message 4 of 7

Terry:

I am using "NiFpga_AcquireFifoReadElementsI16", which, as Michael mentions, is similar to "Acquire Read Region". I use it to get a pointer to the data and then use fwrite to log it. I get the error when calling NiFpga_StartFifo if I have configured the FIFO for more than ~500 MS (>1000 MB).

 

Michael:

I'm going to try both of your suggestions; I'll report back soon.

 

Also, I am using an SSD. I wrote HDD out of habit, my mistake.

Message 5 of 7

Hi,

I'm not so used to the C API, but to me your chunk looks pretty big. Did you try polling data every 100 ms (so that each read stays well under 500 MB) and pushing the acquired data into a queue? The output of this queue would feed another thread that takes care of writing the data to the file. The queue should be big enough to absorb the slow execution of fwrite.

 

This way you're always moving data out of the DMA FIFO's host memory, and you need to reserve less memory for this process. Another thing to keep in mind: if you use a first read of 0 elements to determine how many elements to read in the next DMA read call, it can cost a lot of CPU (DMA polling is CPU-demanding). So it is better to read the same number of elements every N ms if you can.
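A rough sketch of the loop I have in mind; enqueueForWriter is just a placeholder for whatever thread-safe queue feeds your file-writing thread (it is not part of the NiFpga API), and elementsPerCycle would be roughly 100 ms of samples at your acquisition rate:

```c
#include <stdint.h>
#include <stdlib.h>
#include "NiFpga.h"

/* Placeholder: copies (or takes ownership of) the chunk and hands it to the
 * file-writing thread. Not part of the NiFpga API. */
extern void enqueueForWriter(const int16_t* data, size_t elements);

static NiFpga_Status acquisitionLoop(NiFpga_Session session, uint32_t fifo,
                                     size_t totalElements, size_t elementsPerCycle)
{
    NiFpga_Status status = NiFpga_Status_Success;
    size_t remaining = 0;
    int16_t* chunk = (int16_t*)malloc(elementsPerCycle * sizeof(int16_t));
    if (chunk == NULL)
        return NiFpga_Status_MemoryFull;

    for (size_t done = 0; done < totalElements && NiFpga_IsNotError(status); )
    {
        size_t toRead = totalElements - done;
        if (toRead > elementsPerCycle)
            toRead = elementsPerCycle;

        /* Ask for the same number of elements every cycle and let the driver
         * block until they arrive, instead of doing a 0-element read just to
         * query how much is waiting (which burns CPU on DMA polling). */
        NiFpga_MergeStatus(&status, NiFpga_ReadFifoI16(session, fifo, chunk,
                                                       toRead,
                                                       NiFpga_InfiniteTimeout,
                                                       &remaining));
        if (NiFpga_IsNotError(status))
        {
            enqueueForWriter(chunk, toRead);   /* the writer thread does the fwrite */
            done += toRead;
        }
    }

    free(chunk);
    return status;
}
```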

CLA, CTA, LV Champion
This post is made under CC BY 4.0 DEED licensing
Message 6 of 7

You can get really high throughput with the TDMS Async Write; we used it on a project and the limits were as high as the hardware specs indicated. Again, the link below is for LabVIEW, but I believe there is a C API for TDMS.

 

http://zone.ni.com/reference/en-XX/help/371361N-01/glang/tdms_advanced_write/


Certified LabVIEW Architect, Certified Professional Instructor
ALE Consultants

Introduction to LabVIEW FPGA for RF, Radar, and Electronic Warfare Applications
Message 7 of 7