LabVIEW


Need to speed up while loop

[Attached image: b3anb0_0-1730998872103.png (block diagram screenshot)]

I have a while loop here that I need to speed up to get more frequent readings, but no matter how I change the timer parameter, the waveform readout still updates at the same speed. Does anyone have any tips?

Message 1 of 11

One Big Loop is not a proper architecture for high speed data acquisition.

 

You need to use a proper program architecture like a Producer/Consumer for high speed DAQ.
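To make that concrete, here is a rough, non-LabVIEW sketch of the Producer/Consumer idea in Python (two threads linked by a queue; acquire_once() and the log-file name are placeholders, not your actual instrument code):

```python
import queue
import struct
import threading
import time

data_q = queue.Queue()            # the "wire" between the two loops
stop = threading.Event()

def acquire_once():
    """Placeholder for the real instrument read (VISA query, DAQ read, ...)."""
    time.sleep(0.01)              # pretend the instrument takes ~10 ms to answer
    return [0.0] * 1000

def producer():
    # Fast loop: only talks to the instrument, never touches the disk.
    while not stop.is_set():
        data_q.put(acquire_once())

def consumer():
    # Slow loop: scaling, formatting and file I/O all happen here.
    with open("log.bin", "ab") as f:
        while not stop.is_set() or not data_q.empty():
            try:
                block = data_q.get(timeout=0.5)
            except queue.Empty:
                continue
            f.write(struct.pack(f"{len(block)}d", *block))

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
time.sleep(1.0)                   # acquire for a second, then shut down cleanly
stop.set()
p.join(); c.join()
```

The acquisition loop never waits on the disk; if the consumer falls behind, data piles up in the queue instead of slowing the readings down.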

========================
=== Engineer Ambiguously ===
========================
Message 2 of 11
  1. Read the waveform in binary format.
  2. Make sure verbose response is off.
  3. Save the data to a TDMS file. Save the raw data instead of scaling it, and add the scaling information to the channel so it's scaled properly when used. (See the sketch below for points 1 and 3.)
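Not LabVIEW, but roughly, points 1 and 3 look like this in a Python sketch using PyVISA and npTDMS. The resource address, the CURV? query, the int16 datatype and the gain/offset values are all assumptions; your instrument's programming manual and waveform preamble are the real source for those, and LabVIEW's own TDMS API likewise lets you attach scaling properties to a channel:

```python
import numpy as np
import pyvisa
from nptdms import TdmsWriter, ChannelObject

rm = pyvisa.ResourceManager()
scope = rm.open_resource("USB0::0x0000::0x0000::INSTR")   # hypothetical address

# 1. Binary transfer: 16-bit integers instead of an ASCII spreadsheet string.
raw = np.array(scope.query_binary_values("CURV?", datatype="h", is_big_endian=True),
               dtype=np.int16)

# 3. Store the *raw* ADC codes and attach the scaling as channel properties,
#    instead of scaling every sample and formatting it as text.
gain, offset = 0.01, 0.0   # example values; normally taken from the waveform preamble
with TdmsWriter("scope_log.tdms") as writer:
    channel = ChannelObject("scope", "ch1", raw,
                            properties={"gain": gain, "offset": offset})
    writer.write_segment([channel])
```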
Message 3 of 11

The timeout is unnecessary.  Get rid of that.

 

Your data collection should be its own loop.  Initialize your VISA resource once and then just maintain it.  Also, your VISA reads and writes probably each have their own delays.  Are you accounting for that?

 

Once you have data, send that to a separate loop where it is dumped to file/disk.

 

Someone else mentioned Producer/Consumer.  I recommend reading up on that.

Message 4 of 11

While I typically don't like to analyze pictures of diagrams instead of the actual VI, especially if the code is messy (hidden wires, backwards wires, overlapping items, no error handling, terminals without labels, etc.), I took a quick glance, and there is a lot of code smell.

 

As others have already said, such a monolithic loop is a very poor architecture if speed is important. As a first step, we should determine where the bottlenecks actually are. It could well be that your VISA back-and-forth communication is the slowest part. For example, we don't even know how fast the instrument responds to each query. Do you really need to set up the instrument every single time before getting another reading?

 

You basically get a spreadsheet string that you parse into a 2D array, apply some scaling, then convert back into a spreadsheet string to be appended to a text file. Some of the glaring inefficiencies will probably be fixed by the compiler (e.g. dividing an array by 25 and then multiplying the new array by a scalar control value is the same as multiplying the array once by the scalar "control value/25"). No, this is not the cause of your slowness, it's just clean programming! And yes, scanning/formatting operations are both lossy and expensive; writing to a binary file would be orders of magnitude faster.
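To illustrate the scaling remark with a quick numpy sketch (array contents and control value are made up):

```python
import numpy as np

data = np.arange(1e6)      # stand-in for the parsed readings
control = 3.3              # stand-in for the scalar control value

# Two passes over the array:
scaled_slow = (data / 25.0) * control

# One pass: fold the two constants into a single scale factor first.
k = control / 25.0
scaled_fast = data * k

assert np.allclose(scaled_slow, scaled_fast)
```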

 

If the slowdown is in the analysis/saving (which I doubt!), you should separate your data gathering and analysis into two independent loops (as has been suggested) linked by a queue. Of course, if the non-acquisition part is truly much slower than the acquisition, you will eventually run into memory problems no matter what.

 

I assume you placed that 0 ms wait intentionally, because it is not the same as "no wait". A 0 ms wait allows a task switch, so other parts of the code get a chance at a slice of the pie. I doubt it makes a difference here, though. In your case, the loop time is the time needed by the slowest independent code segment (you only have two independent segments: (1) the wait, and (2) the rest of the loop code).

 

Can you explain how you deduce speed from the "waveform readout"? There is no waveform datatype anywhere, and the data contains no timing information (any chart/graph would assume a generic dt=1, no matter how fast the loop spins or how much data you get with each iteration).
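For illustration, a tiny numpy sketch of that last point (the dt value is made up): the raw samples carry no timing, so any real time axis has to be supplied explicitly.

```python
import numpy as np

samples = np.random.rand(1000)          # raw readings: no timing information attached

t_default = np.arange(samples.size)     # what a chart assumes: dt = 1, i.e. sample index

dt = 4e-6                               # hypothetical sample interval from the instrument
t_real = np.arange(samples.size) * dt   # the time axis you would have to build yourself
```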

 

 

Message 5 of 11

First, try to assess how much time it takes to read the data from the VISA functions. Disable the file-saving part for a moment and try to optimize just the instrument-communication portion.
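As a text-language sketch of that measurement (PyVISA here, with a made-up resource address and query), the instrument-communication part alone could be timed like this:

```python
import time
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::7::INSTR")     # hypothetical address

N = 50
t0 = time.perf_counter()
for _ in range(N):
    reply = inst.query("MEAS:VOLT:DC?")        # example query -- use your own command
t1 = time.perf_counter()
print(f"average time per reading: {(t1 - t0) / N * 1000:.1f} ms")
```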

 

Check which commands you really need to send at each iteration. Right now your code is sending and receiving a series of commands and results. Verify exactly which commands you actually need to send in order to read the data.
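Conceptually the split looks like this (Python/PyVISA sketch; the setup and data commands are instrument-specific examples, not necessarily yours):

```python
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("GPIB0::7::INSTR")     # hypothetical address

# One-time setup: send these once, NOT on every iteration.
for cmd in ("DATA:ENC RIB", "DATA:WIDTH 2"):    # example, instrument-specific commands
    scope.write(cmd)

while True:                                     # replace with a real stop condition
    # Per-iteration: only the command that actually returns data.
    raw = scope.query_binary_values("CURV?", datatype="h", is_big_endian=True)
    # ... hand `raw` off to the logging loop from here ...
```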

As someone else already suggested, investigate a different architecture for implementing your code. Study the Producer/Consumer pattern and master how it works. It will pay off in the next projects you have to do.

Message 6 of 11

You've received lots of good advice already, but here's my 2 cents.

 

It looks like you are reading data from a scope.  Which make and model?  

 

The slowest part is probably the data transfer from scope to computer.  As someone said, use binary.  Also, transfer the minimum number of points needed to represent your data accurately.  Don't use 1 million if 100 is all that's really needed.  That will speed things up considerably!
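As a sketch of that last point, transferring only part of the record could look like this with PyVISA (DATA:START/DATA:STOP are Tektronix-style example commands; check your scope's manual for the equivalent):

```python
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("USB0::0x0000::0x0000::INSTR")   # hypothetical address

# Transfer only the part of the record you actually need.
scope.write("DATA:START 1")
scope.write("DATA:STOP 1000")      # 1000 points instead of the full record
raw = scope.query_binary_values("CURV?", datatype="h", is_big_endian=True)
print(len(raw))                    # should now be about 1000 samples
```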

 

Craig

Message 7 of 11

I'm not really familiar with interfacing oscilloscopes, so my question may not be relevant.

How many bytes do you actually receive? 999999?

If it's less, the full VISA timeout will elapse before the VISA Read terminates. Default VISA timeouts are typically long.
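In PyVISA terms, as a rough analogy with a made-up address, the timeout and termination settings look like this; LabVIEW exposes the same VISA attributes through property nodes:

```python
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("GPIB0::7::INSTR")   # hypothetical address

inst.timeout = 2000                # ms; the default is often several seconds
inst.read_termination = "\n"       # ASCII reads return at the termination character
                                   # instead of waiting for the full requested byte count
reply = inst.query("*IDN?")        # example query
```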

Paolo
-------------------
LV 7.1, 2011, 2017, 2019, 2021
Message 8 of 11

Do the configuration before the loop, do only the data read in the loop, and offload the write to a secondary loop through a queue.

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 9 of 11

@pincpanter wrote:

I'm not really familiar with interfacing oscilloscopes, so my question may not be relevant.

How many bytes do you actually receive? 999999?

If it's less, the full VISA timeout will elapse before the VISA Read terminates. Default VISA timeouts are typically long.


That's not correct; that 999999 is just a memory buffer allocation size for the read, not the number of bytes to read.  It does not affect the VISA timeout.

Message 10 of 11