From Friday, April 19th (11:00 PM CDT) through Saturday, April 20th (2:00 PM CDT), 2024, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
04-30-2014 03:44 AM - edited 04-30-2014 04:05 AM
Hi,
I have a 7961R FPGA module and a 5761R digitizer (sampling at 250 MS/s in Multi Sample CLIP mode) working in a PXIe-1078 chassis.
I'm using an external trigger signal connected to the TRIG input of the 5761R, and I need to acquire 40 us of data from its four analog input channels after each trigger. For the first hardware tests, I modified the "5761R Getting Started Multi Sample CLIP" project to adapt it to my needs.
I have two versions "working" right now: one acquiring from only one input channel and another acquiring from two (using two FIFOs to simplify things).
Everything works fine, but there is a limit to the number of points I can acquire before I get an underflow error (-50400). This limit is approx. 12000 samples (48 us) for the one-channel version and approx. 6100 samples (24.4 us) for the two-channel one. Obviously, I need to acquire more channels for longer, so I have a problem here.
As I don't have much experience with FPGAs, I don't know how to solve this error. My attempts so far consisted of changing the FIFO sizes on the FPGA side, and I've also tried reading the samples in smaller chunks instead of all at once, but without luck.
I hope someone can lend me a hand with this. Thanks in advance!
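For reference, those limits line up with the 250 MS/s sample clock. A quick sanity check in plain Python (the numbers are the ones quoted above):

```python
SAMPLE_RATE = 250e6  # 5761R sample clock, samples/s per channel

def samples_to_us(n_samples):
    """Convert a sample count at 250 MS/s to microseconds."""
    return n_samples / SAMPLE_RATE * 1e6

print(samples_to_us(12000))  # one-channel limit  -> 48.0 us
print(samples_to_us(6100))   # two-channel limit  -> 24.4 us
```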
Jordi Bach
05-18-2014 08:37 PM
Hi Jordi.Bach,
I couldn't open your code, as I have an older version of LabVIEW, so this reply is based on my experience streaming data with the NI 5761R.
From what you describe, the data transfer from the FPGA to the host does not happen as fast as you need unless it is configured properly on the host side, and that is the cause of the underflow. The example code that ships with the software is only good as a test case with a small number of data points transferred to the host; it is not stable for general use.
To solve the issue, based on my experience, you need to invoke the 'DMA FIFO.Start' method node in your code. On the host side, design a state machine inside the while loop or timed loop (depending on the type of host, Windows or RT), with the FIFO configuration in one state and the acquisition in another. Remember to make the host-side FIFO buffer very large so that the data does not overflow. I can give you an exact solution in my next reply if you can send the files saved for a previous version; I am using LabVIEW 2012.
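The host-side pattern described here (configure and start first, then acquire in a loop, and stop only at the end) can be sketched roughly as follows. This is an illustrative sketch only: the real host VI is graphical LabVIEW, so the `fifo` methods below (`configure`/`start`/`read`/`stop`) are hypothetical stand-ins for the FPGA Interface invoke nodes named in this thread.

```python
# Hypothetical stand-in for the LabVIEW host-side DMA FIFO state machine.
HOST_BUFFER_DEPTH = 10_000_000  # a very deep host-side buffer, in samples

def host_loop(fifo, samples_per_burst, stop_requested, process):
    """Configure-then-acquire state machine for host-side DMA FIFO reads."""
    state = "CONFIGURE"
    while state != "DONE":
        if state == "CONFIGURE":
            fifo.configure(depth=HOST_BUFFER_DEPTH)  # size the buffer first
            fifo.start()                             # the 'DMA FIFO.Start' node
            state = "ACQUIRE"
        elif state == "ACQUIRE":
            process(fifo.read(samples_per_burst))    # one burst per iteration
            if stop_requested():
                state = "STOP"
        elif state == "STOP":
            fifo.stop()   # stop the DMA transfer only after all reads are done
            state = "DONE"
```

The point of the separate states is that configuration and start happen exactly once, before any read, and the stop happens exactly once, after the last read.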
Please also provide the details of your FlexRIO controller and PXIe chassis; the transfer rate is bottlenecked by the controller's transfer rate and by the backplane of the PXIe chassis.
Also, remember that the trigger from the TRIG input of the NI 5761R reaches the NI 7961R much faster than the analog data does. You need to figure out the delay between the two paths and correct for it in your FPGA code if you haven't already.
I hope you have already solved the issue through some other source by now.
Regards,
Badri
05-19-2014 04:44 AM
Hi Badri,
First of all thanks for your help.
I'm still struggling with this issue, so I'm attaching the project for one-channel acquisition, converted to the 2012 version.
Concerning the details of the system: I'm working with a PXIe-1078 chassis with one PCIe link (250 MB/s). I already know that I'm demanding a lot more than that during the acquisition window (40 us at 2000 MB/s = 250 MS/s/ch * 2 bytes/S * 4 ch), but the repetition rate of the input pulses (trigger) is quite low (20 Hz), so the effective rate is well within the PCIe capabilities (1.6 MB/s equivalent). Knowing that, the embedded memory should play its part by keeping the data stored until it is transferred: the 7961R has approx. 4700 kbits, and each "burst" of data theoretically takes 640 kbits.
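That budget can be checked with a few lines of arithmetic (plain Python; all figures are the ones quoted in this thread):

```python
SAMPLE_RATE = 250e6      # samples/s per channel
BYTES_PER_SAMPLE = 2
CHANNELS = 4
BURST_US = 40            # acquisition window per trigger
TRIGGER_HZ = 20          # trigger repetition rate

burst_rate = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS   # bytes/s while acquiring
burst_bytes = burst_rate * BURST_US * 1e-6               # bytes per trigger
avg_rate = burst_bytes * TRIGGER_HZ                      # sustained bytes/s

print(burst_rate / 1e6)       # 2000.0 MB/s: far above the 250 MB/s PCIe link
print(burst_bytes * 8 / 1e3)  # 640.0 kbit per burst: fits in ~4700 kbit on-chip
print(avg_rate / 1e6)         # 1.6 MB/s sustained: well under the link capacity
```

So the burst must be buffered on the FPGA, but the sustained rate is tiny compared to the link.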
The trigger and analog input delay is already solved; thanks for pointing it out.
Best regards,
Jordi
05-19-2014 03:20 PM
I need the details of the controller you use so that I can understand why your code behaves this way.
I am still not able to open it in LabVIEW 2012 f3. I will install LabVIEW 2013 on a different system and check it; I need a couple of days before I can give you a possible fix.
05-20-2014 10:01 AM
Sorry, I forgot to mention that: the controller is a PXIe-8135.
Thanks for all your efforts!
05-21-2014 05:36 PM
I have made some modifications so that your system should, in theory, work.
05-21-2014 05:43 PM
You might get an overflow error. I expect it to happen because of the buffer size on the FPGA side, since the transfer rate from the FPGA to the controller is constrained to 250 MB/s. You might have to increase the buffer size to 8000 or 16000 samples to avoid the overflow.
05-21-2014 07:14 PM
I modified the code again to ensure you will not have any trouble. I think you will not see any issue when you compile and run it.
05-26-2014 07:52 AM
Hi Badri,
Thanks for the VI. I've tried it and unfortunately it's not working. The problem is in the FIFO read method on the host side when using 0 samples at the input. I know this is usually used to retrieve the number of samples remaining to read, and to use that as an event/state machine control, but I don't know why it is not working in this particular case (I've faced the same problem before: the elements remaining always stays at 0 no matter what).
However, I've been rearranging this VI into the way I suppose it should work and, surprisingly, the program can now read around 40 kS from one channel. Beyond that value I get contamination between successive trigger acquisitions (the overflow indicator stays green and the number of elements remaining to read starts to increase). I've also been experimenting with the FIFO size (on both the host and the FPGA side), but I don't see a clear effect on the maximum acquisition time.
I'm attaching the project in case you want to take a look, and now I'm trying to acquire from two channels to see what happens with the acquisition time.
Thank you so much for your help!
05-26-2014 07:40 PM
The biggest constraint in your system is the data transfer rate of the PXIe backplane. You will hit an overflow at some point if you stream data continuously instead of in bursts. I redesigned the FPGA code so that only 16000 samples are read per trigger, after which it goes back to the 'Wait for trigger' state and waits for the next trigger, unless the stop button is pressed, in which case the FPGA goes to the idle state.
If you could successfully transfer 40 kS without overflow while your intention is to transfer 16 kS (32 kB) of data per channel per burst, I think you should be able to overcome the issues with small tweaks. Send the read data to a software FIFO instead of a graph: LabVIEW uses a lot of memory while graphing, which slows down the read cycle on the host and causes the FPGA to overflow. Moreover, I suggest reading each 64 us of data together, i.e., 16000 samples at once. Right now you are not sure whether there are enough samples to read, which can lead to underflow; I hope it doesn't. I would also suggest not stopping the DMA FIFO transfer in the middle of the while loop; that can cause data to overflow too. But if that is the only way for your VI to work, then you need a state machine in the host VI to ensure you stop the DMA transfer only after you have read all the data points.
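The "fixed-size chunks into a software FIFO" idea can be sketched like this (a Python stand-in for the graphical host VI; `dma_read` is a hypothetical placeholder for the FPGA Interface read method):

```python
from queue import Queue  # the software FIFO decoupling reads from display

CHUNK = 16000  # one full 64 us burst per read (16000 samples at 250 MS/s)

def read_loop(dma_read, software_fifo, n_bursts):
    """Pull fixed-size bursts from the DMA FIFO into a software FIFO.

    Graphing happens in a separate consumer loop that drains the queue,
    so this read loop never stalls and the FPGA-side FIFO does not fill
    up. Always requesting a full, known-size burst also avoids polling
    for 'elements remaining', which was returning 0 in this thread.
    """
    for _ in range(n_bursts):
        software_fifo.put(dma_read(CHUNK))
```

The consumer (graphing) side simply calls `software_fifo.get()` at whatever pace the UI can sustain.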
You can design the other three channels in the same way once the first channel starts working.
Hope the code works.