LabVIEW

High-Speed Acquisition, Processing, Then Output: Latency on a PC

I am trying to specify hardware to purchase. I have a project where the objective is to acquire two analog channels at 10 MS/s (12-bit or 16-bit data transfers), e.g. a PCI-5105. I need to process one waveform to modify it, write the other to file for low-duty-cycle analysis, then output the modified acquired waveform simultaneously on one 12-bit kHz-rate analog output and 9 digital outputs (various multifunction DAQ devices meet this spec). These inputs and outputs must run uninterrupted for many minutes. The digital outputs must be accurate to 0.1 us, that is, true to the incoming waveform characteristics (10 MS/s). I have created a bandwidth budget for all the hardware involved: incoming is 20 MS/s (2 channels at 10 MS/s), PCI bus bandwidth is 133 MB/s divided by the number of devices transmitting, and the write to hard drive is about 30 MB/s (15 MS/s). The processor is a multi-GHz PC running Windows. I have drafted a LabVIEW code methodology using producer/consumer queuing, Win32 file writing, and 4 while loops with various applied execution thread priorities. I expect the output would need to be delayed with a buffer to allow for a variable delay due to processing. I would like to estimate the "latency" between the input and the output, that is, the shortest sustained period of time between the input stream and the output stream.
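A rough sketch of this bandwidth budget, in Python. The figures are the ones stated in the post (plus an assumed 2 bytes per sample and an illustrative 300 ms stall figure), not measurements:

```python
# Back-of-envelope bandwidth and latency budget for the proposed system.
# All figures are taken from the post or are labeled assumptions.

SAMPLE_RATE = 10e6          # samples/s per analog input channel
CHANNELS = 2
BYTES_PER_SAMPLE = 2        # assumption: 12/16-bit samples stored as 2 bytes

acq_rate = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE   # bytes/s into the PC
pci_bw = 133e6              # theoretical PCI bandwidth, bytes/s
disk_bw = 30e6              # sustained hard-drive write, bytes/s

print(f"Acquisition: {acq_rate/1e6:.0f} MB/s of {pci_bw/1e6:.0f} MB/s PCI")
print(f"Logging (1 ch): {SAMPLE_RATE*BYTES_PER_SAMPLE/1e6:.0f} MB/s "
      f"of {disk_bw/1e6:.0f} MB/s disk")

# Minimum output latency is set by buffering: to ride out a processing
# stall of `stall_s` seconds, the output buffer must hold that much data,
# so the output stream lags the input by at least that long.
stall_s = 0.300             # illustrative OS scheduling hiccup
buffer_samples = int(SAMPLE_RATE * stall_s)
print(f"A {stall_s*1e3:.0f} ms stall needs a {buffer_samples:,}-sample buffer")
```

The bandwidths fit on paper; the open question is how large a stall the output buffer must absorb, since that sets the floor on sustained latency.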

 

The incoming waveform is a square-wave-like pulse stream. The conversion is to find the pulse width and amplitude of each pulse, adjust the timing slightly with fixed parameters, and then algorithmically configure 1 analog output and 9 digital outputs in a modified way that has been synchronized to the incoming pulses.

 

The question is: should I try to do this on a PC, or should I go with a PXI-based RT system? Even for a PXI RT system, the question of what the latency will be needs to be answered before purchase. Supposing even the RT system is inadequate?

 

The desired latency is hoped to be in the millisecond range.

 

Experienced judgement would be appreciated.

 

Thanks, John

Message 1 of 9

The processor is a multi-GHz PC running Windows.

Sorry, but you won't get it done with Windows.

Windows is not capable of dedicating the kind of time you need to your task. If somebody clicks on a different window, or some other event like that happens, you just lost 300 ms that you won't get back.

 

Message Edited by CoastalMaineBird on 03-20-2009 04:18 PM
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com
Message 2 of 9

If latency is your most important spec, then FPGA is where it's at.

 

Our FPGA boards have 40 MHz clocks, and most code can be done in a couple of clock cycles, depending on what you are planning to do with it. The only problem here is that the fastest the AI lines on these boards run is 750 kHz. I have successfully advised customers to provide their own high-speed ADCs from another source, clock them with a digital line at 10 MHz, and use 12 bits x 2 channels of the 96 digital lines to read the signal from the ADCs.
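The external-ADC scheme above can be sketched as follows: on each 10 MHz clock edge, the FPGA latches the DIO port and reassembles each channel's parallel bits into an integer sample. This Python function stands in for that FPGA logic; the bit-to-line mapping is a made-up example, not a real pinout:

```python
# Sketch: two external 12-bit ADCs read over 24 parallel DIO lines.
# Assumption: channel A drives lines 0-11, channel B drives lines 12-23.

def decode_samples(dio_port: int) -> tuple[int, int]:
    """Split a 24-bit DIO snapshot into two 12-bit ADC samples."""
    ch_a = dio_port & 0xFFF           # lines 0-11
    ch_b = (dio_port >> 12) & 0xFFF   # lines 12-23
    return ch_a, ch_b

# Example: channel A reads 0x5A5, channel B reads 0x0FF on the same edge
port = (0x0FF << 12) | 0x5A5
assert decode_samples(port) == (0x5A5, 0x0FF)
```

On the FPGA this is just wiring, one line per bit, which is why the parallel interface keeps every DIO line at the 10 MHz sample clock rather than a much faster serial clock.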

 

Processing will be extremely fast on these boards, but it will take a couple of hours to get used to FPGA programming with LabVIEW.

 

The PCI-7831 has 96 DIO lines at 40 MHz and 8 x 16-bit analog outputs at 1 MHz. You can use DMA over the PCI bus to transfer the data to your computer to log at your requested speed.

 

Downsides: as I mentioned, you would need to provide adequate ADCs, devote some time to learning FPGA (not terribly difficult if you are proficient with LV), and of course buy the FPGA module software.

 

Another option is using a high-speed card with a RTSI bus on it to pass data to the R-Series card, but that would increase cost significantly; others may have an opinion on how to implement that system. A PXI chassis has this built into the backplane and is probably easier to implement. Of course, all of this depends on your price/ease-of-use decisions, but I would definitely recommend FPGA to reduce latency if that is your main concern.

 

Let me know if you have any questions on this and I'd be happy to help guide you.

Rob K
Measurements Mechanical Engineer (C-Series, USB X-Series)
National Instruments
CompactRIO Developers Guide
CompactRIO Out of the Box Video
Message 3 of 9

Rob,

 

I think I follow. The solution sounds very attractive. I have not seen the idea of bypassing the on-board ADC and porting the data in from elsewhere via the digital lines. I have 14 years of experience with LV but have not had direct experience with FPGAs. Taking the time to learn is not a concern, and the total cost sounds quite good.

 

However, I do not follow what kind of ADC I might look for. I suppose the PCI-5105 has no way to port the data out via digital lines? Does NI have solutions there?

 

Can the module you reference, the PCI-7831R, do simultaneous input/output streaming? I will be reading up on LV FPGA and the device you mentioned.

 

The "latency" is very important, and it is a dedicated application, so this sounds like a compelling solution. So I do want to know more.

Thank you

Message 4 of 9

Rob,

 

Sorry, I did not follow the RTSI bus idea using the high-speed digitizer at first; I do now, though I have not done that before. I still want to understand the methodology of providing the ADCs.

Message 5 of 9

Manliff,

 

Many vendors can provide an ADC evaluation board. Such a board automatically provides a set clock rate, or allows an external clock, to control the conversions.

 

One particular board I have has a parallel interface built in. This board runs at only ~15 kS/s at 18 bits, but I know there are much faster boards out there. The parallel interface gives each bit a dedicated digital line. With an R-Series card, I can poll these digital lines simultaneously at 15 kHz and read the conversions. The same method would be used for faster ADCs.

 

One thing to keep in mind: many ADCs have a serial output interface, so if you are trying to use an ADC at 10 MHz and 16 bits, the bits are actually coming out at over 160 MHz on one DIO line, which is faster than the R-Series can sample. That is why it is good to look for an evaluation board that can output a parallel signal for you. Since the board has the serial conversion and clock rates built in, it is easy to just poll all the digital lines at once instead of a single line at a very fast rate.
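The serial-interface arithmetic above is simple but worth making explicit: the serial clock must run at least at (sample rate x bits per sample), ignoring framing overhead, which a quick calculation shows already exceeds the 40 MHz R-Series DIO rate:

```python
# Why a serial-interface ADC is a problem here: the minimum serial clock
# is sample_rate * bits_per_sample (real parts add framing overhead on top).

def serial_clock_hz(sample_rate_hz: float, bits: int) -> float:
    """Minimum SCLK for a serial ADC interface, ignoring framing gaps."""
    return sample_rate_hz * bits

sclk = serial_clock_hz(10e6, 16)
print(f"SCLK >= {sclk/1e6:.0f} MHz on a single DIO line")  # 160 MHz
assert sclk > 40e6   # far beyond the R-Series 40 MHz DIO sampling rate
```

With a parallel interface, each of the 16 lines instead toggles at only the 10 MHz conversion rate, well within what the card can poll.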

 

TI also has many ADCs; here are some 16-bit ADCs with built-in parallel interfaces. I didn't see an evaluation board, so you may have to build some signal conditioning yourself, however. I may order one to evaluate in order to try it myself.

Rob K
Message 6 of 9

Hi Manliff,

 

I agree with Rob's idea to combine external ADCs with the FPGA to process and output your data. Our digitizers are not able to export data over the RTSI lines (RTSI is used for control signals only).

 

I should also mention that the 133 MB/s bandwidth on the PCI bus is a theoretical limit; in practice you can expect around 60-80 MB/s of sustained throughput depending on your system. That is still plenty for your application; I just thought I'd point it out in case you need to expand the system.

 

-John 

John Passiak
Message 7 of 9

I ran this idea by my colleague, and he was skeptical that a 40 MHz processor could crunch the signal fast enough. The process is reasonably simple:

Channel A 

Incoming analog pulse sampled at 10 MS/s: find the start, stop, and max value (to 12 bits). With that data, configure one analog DC-level output (also 12 bit) and 9 digital settings (switches); all the digitals must be adjusted using an algorithm of preset numbers so that all have appropriate synchronization. The output is accurate to 0.1 us. The incoming stream is about 15 minutes long and must not be interrupted, so that the output is a perfect translation of the incoming signal.

Channel B (incoming analog)

Record at 10 MS/s (12 bit) to a file on disk.
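The Channel A measurement above can be sketched as a per-sample state machine. This is Python standing in for the LabVIEW FPGA logic; the threshold value and the (start, stop, max) record format are illustrative assumptions. The point is that each incoming sample needs only a comparison and a couple of register updates, which is the kind of work that pipelines to one sample per clock tick:

```python
# Sketch of per-sample pulse measurement: detect rising edge, track the
# peak, and emit (start_index, stop_index, max_value) on the falling edge.

THRESHOLD = 2048   # assumed trigger level (midpoint of a 12-bit range)

def measure_pulses(samples):
    """Yield (start_index, stop_index, max_value) for each pulse."""
    in_pulse = False
    start = peak = 0
    for i, s in enumerate(samples):
        if not in_pulse and s >= THRESHOLD:
            in_pulse, start, peak = True, i, s    # rising edge
        elif in_pulse:
            peak = max(peak, s)                   # track amplitude
            if s < THRESHOLD:
                in_pulse = False
                yield (start, i, peak)            # falling edge

pulses = list(measure_pulses([0, 0, 3000, 3500, 3200, 0, 0, 2500, 0]))
assert pulses == [(2, 5, 3500), (7, 8, 2500)]
```

On an FPGA, the edge detect, peak tracking, and output configuration would be separate pipeline stages running concurrently, each finishing its small step within one tick.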

Can a 40 MHz clock do those steps? Am I correct in guessing that all those steps must be done in 4 clock ticks (or is it 3 ticks) so that a backlog does not occur?

The process can be buffered; in fact, it has to be buffered to match the longest pulse width.

I am also thinking that some sort of careful triggering may help reduce the data to process.

But once a "pulse" has been received, can this be done fast enough?

 

I do see that higher-level FPGA boards have faster clocks, albeit only on the PXI platform. Nevertheless, it would be good to know how an estimate might be done.

 

Does the FPGA module's simulated target tell how many ticks are needed for a program?

Thanks,

Message 8 of 9

John,

 

You are incorrect that all the processing must be done in 4 clock ticks. Even though the FPGA runs at 40 MHz, we are able to pipeline processes so that it is doing multiple things at once, with pure parallelism. Buffering would be done with FIFOs on the board itself, but you are limited in space for how much data you can store in a buffer, especially at those rates.

 

Unfortunately, the process is not simple. You are sampling at high speeds for extended periods of time while logging all of the data. I know a lot more about FPGA; maybe some others can chime in with thoughts on datalogging. But assuming a 16-bit datatype in LV (there is no 12-bit datatype, unless you want to pack booleans into U8s), you are looking at 20 MB/s of logging for 15 minutes, which is 18 GB of data. Disks theoretically should be able to stream at this rate, assuming no interruptions.
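The logging arithmetic above checks out; a quick Python verification (one 10 MS/s channel stored as 2-byte samples for 15 minutes):

```python
# Data-volume check for the logging channel described in the thread.

rate_bytes = 10e6 * 2                 # 10 MS/s at 2 bytes/sample = 20 MB/s
total_bytes = rate_bytes * 15 * 60    # 15-minute uninterrupted run
print(f"{rate_bytes/1e6:.0f} MB/s for 15 min = {total_bytes/1e9:.0f} GB")
```

At 18 GB per run, any gap in sustained disk throughput has to be absorbed by host memory buffers, which is another argument for keeping the latency-critical path on the FPGA and treating logging as a separate, more forgiving stream.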

 

The biggest thing here is that you wanted as little latency as possible. The high-speed PCI and PXI cards we have can handle those inputs just fine, and can probably log the data, but the processing is still being done on a computer, which is subject to delays.

 

We should be able to find out how many clock cycles the processing needs, but the size we can have for a buffer is limited, depending on everything we have to do. Also, you should definitely have knowledge of ADCs before attempting to set one up to run at 10 MHz; while it is possible, at those speeds you have to start thinking about noise, circuit considerations, etc.

Rob K
Message 9 of 9