

While loop in microseconds

Solved!

Dear all,

 

I am still quite new to LabVIEW and I need your help with a timing question:

 

Here is the hardware I have:

_NI PXIe-1073 Chassis

_NI PXI-6259 DAQ

_NI PXI-6722 DAQ

_NI 5751 FPGA with the module 2147 (Analogue input) and 2148 (Digital I/O)

 

The software I am using is LabVIEW 2012 (64-bit).

 

Now here is my question:

 

For now, I am only using the NI PXIe-6259 DAQ board, which has analogue inputs and analogue outputs.

What I need to do is record an analogue signal, perform a couple of mathematical operations on it, and send it to an analogue output. I am doing this in a LabVIEW while loop (my VI is attached).

However, this whole cycle of acquiring and outputting the signal needs to be performed in 1 microsecond.

For now, with the NI PXIe-6259 DAQ board, when I try to run this kind of task I can only write 10 samples/ms (this is what I see when I use probes in LabVIEW to check how many samples are written).

 

Do you see an obvious solution to my problem with the hardware I have?

Otherwise, what kind of hardware should I purchase in order to meet my requirements?

 

Many thanks,

Best Regards

 

 

0 Kudos
Message 1 of 7
(4,348 Views)
Solution
Accepted by topic author rihns

For the microsecond turnaround, I really think you will want to go with an FPGA; something in the PXI R Series boards should work for you. I would not trust Windows at all to be able to turn that data around in a microsecond, never mind consistently.


Message 2 of 7
(4,311 Views)

@crossrulz wrote:

For the microsecond turnaround, I really think you will want to go with an FPGA; something in the PXI R Series boards should work for you. I would not trust Windows at all to be able to turn that data around in a microsecond, never mind consistently.


I agree with crossrulz and also want to add that I would not even trust an RTOS to turn data around this quickly. This type of application is very well suited for FPGA, and I think you would see good results if you were to go down that path.

Matt J | National Instruments | CLA
0 Kudos
Message 3 of 7
(4,241 Views)

Depending on the mathematical operations you need to perform, another possible option might be to avoid digitizing the signal at all and instead use an appropriate op-amp based circuit to do the math on the analog voltage. That approach has its own set of pitfalls and limitations of course.

Message 4 of 7
(4,232 Views)

The VI you posted has nothing to indicate that it attempts to approach 1 microsecond. The AI and AO sample rates are 10,000 S/s, which means 100 microseconds between samples. You then read an array of waveforms (2 waveforms) with as many elements as are available. Since you do not use the timing data in the waveforms, an array of DBL from the AI Read would probably be better.
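
(In other words, the sample period is 1 / 10,000 S/s = 100 µs; a 1 µs turnaround would correspond to a loop rate of 1,000,000 samples per second.)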

 

Then you do some math on the waveforms, followed by an inner loop which processes only one data element from each waveform (probably the first element) and repeats that as many times as there are elements. The inner loop should be a For Loop auto-indexing on the Y array of the waveform.
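
In text form the difference looks something like this (an illustrative C sketch, not LabVIEW code; the array contents are made up): iteration i should work on element i, not on element 0 every time.

/* Illustrative only: the text equivalent of a For Loop auto-indexing on the
 * Y array -- each iteration processes y[i], not y[0] repeated n times. */
#include <stdio.h>

int main(void)
{
    double y[4] = { 0.1, 0.2, 0.3, 0.4 };   /* stand-in for the waveform Y array */
    int n = sizeof y / sizeof y[0];

    for (int i = 0; i < n; i++)
        printf("processing y[%d] = %f\n", i, y[i]);

    return 0;
}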

 

You have many calculations which never change during a run repeated inside the loops (Map Setup.vi, -1.3/255). Move them outside the loop to speed things up.
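
As a rough text-language illustration of the same point (a C sketch, not your VI; the numbers are placeholders), the constant gets computed once before the loop instead of on every iteration:

/* Illustrative only: hoist loop-invariant work out of the loop.
 * The constant -1.3/255.0 stands in for the fixed expressions in the VI. */
#include <stdio.h>

int main(void)
{
    double samples[5] = { 1.0, 2.0, 3.0, 4.0, 5.0 };

    const double scale = -1.3 / 255.0;   /* computed once, outside the loop */

    for (int i = 0; i < 5; i++)
        samples[i] *= scale;             /* the loop only does the varying work */

    for (int i = 0; i < 5; i++)
        printf("%f\n", samples[i]);
    return 0;
}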

 

All control terminals except the Stop button should probably be outside the loop. The indicators, especially the graphs, should be in a parallel loop. Build XY Graph should be elsewhere also. The OS will not update the display thousands of times per second, and the user could not perceive the changes if it did.
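
There is no direct text equivalent of parallel LabVIEW loops, but the idea, sketched here in C with POSIX threads (all names invented for illustration), is that the fast loop only hands off the latest value, while a separate slow loop updates the display at a human rate:

/* Illustrative only: a fast acquisition loop and a slow display loop running
 * in parallel, sharing only the latest value (a stand-in for a LabVIEW
 * producer/consumer pattern). Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static double latest_sample = 0.0;   /* most recent value for the display */
static volatile int running = 1;

/* Fast loop: acquire + math, never waits on the display. */
static void *acquire_loop(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000 && running; i++) {
        double sample = i * 0.001;           /* stand-in for AI Read + math */
        pthread_mutex_lock(&lock);
        latest_sample = sample;              /* hand off to the display loop */
        pthread_mutex_unlock(&lock);
    }
    running = 0;
    return NULL;
}

/* Slow loop: update the "indicator" roughly 30 times per second. */
static void *display_loop(void *arg)
{
    (void)arg;
    while (running) {
        pthread_mutex_lock(&lock);
        double shown = latest_sample;
        pthread_mutex_unlock(&lock);
        printf("display: %f\n", shown);
        usleep(33000);
    }
    return NULL;
}

int main(void)
{
    pthread_t acq, disp;
    pthread_create(&acq, NULL, acquire_loop, NULL);
    pthread_create(&disp, NULL, display_loop, NULL);
    pthread_join(acq, NULL);
    pthread_join(disp, NULL);
    return 0;
}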

 

I do not have the image VIs. It appears that you are pulling one pixel out of a static image for each pair of points read from the AI line. Would it be faster to convert the image to an array of DBL outside the loop and just use Index Array?
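
The lookup-table idea in rough C terms (an illustrative sketch; the image size and index values are made up): convert the image to a flat array of doubles once, before the loop, and then each iteration is just an array index.

/* Illustrative only: build the pixel lookup table once, outside the loop. */
#include <stdio.h>

#define IMG_W 256
#define IMG_H 256

int main(void)
{
    static unsigned char image[IMG_H][IMG_W]; /* stand-in for the static image */
    static double lut[IMG_H * IMG_W];         /* precomputed pixel values */

    /* One-time conversion, done before the acquisition loop. */
    for (int y = 0; y < IMG_H; y++)
        for (int x = 0; x < IMG_W; x++)
            lut[y * IMG_W + x] = (double)image[y][x];

    /* Inside the fast loop this becomes a single Index Array equivalent. */
    int x = 10, y = 20;                       /* would come from the AI samples */
    printf("pixel value: %f\n", lut[y * IMG_W + x]);
    return 0;
}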

 

If you set the AI Read to read only one sample per channel, then you do not need the inner loop and the AO Write would be 1 Channel, 1 Sample.
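
For comparison, the same single-point read/modify/write pattern in the NI-DAQmx C API looks roughly like this (a minimal, software-timed sketch; the Dev1/ai0 and Dev1/ao0 channel names, voltage range, loop count, and scaling are placeholders, and error checking is omitted):

/* Illustrative only: 1 channel, 1 sample in; some math; 1 channel, 1 sample out. */
#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle ai = 0, ao = 0;
    float64 sample = 0.0;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);

    DAQmxStartTask(ai);
    DAQmxStartTask(ao);

    for (int i = 0; i < 10000; i++) {
        DAQmxReadAnalogScalarF64(ai, 10.0, &sample, NULL);    /* 1 sample in  */
        sample *= -1.3 / 255.0;                               /* the math     */
        DAQmxWriteAnalogScalarF64(ao, 0, 10.0, sample, NULL); /* 1 sample out */
    }

    DAQmxStopTask(ai);
    DAQmxClearTask(ai);
    DAQmxStopTask(ao);
    DAQmxClearTask(ao);
    return 0;
}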

 

A clean loop like that could probably run at 10 kHz on an RTOS and maybe, with occasional jitter, on Windows.

 

Lynn

Message 5 of 7
(4,201 Views)

Thank you all for those replies, and Johnsold for the very helpful insights and code analysis!

 

Johnsold, after seeing my code, do you think this would be doable on an FPGA at the microsecond level?

 

Many thanks,

Best,

Renaud

 

0 Kudos
Message 6 of 7
(4,119 Views)

Renaud,

 

I do not have any experience with FPGA, but based on Jacobson's response earlier, FPGA seems like a good possibility.

 

Lynn

0 Kudos
Message 7 of 7
(4,078 Views)