
timed sequence inside for loop not finishing as expected

Hello All

Using LabVIEW 2012 SP1 (fully updated as of today) and a USB-6009, I want to create a timed sequence that iterates the following steps within a for loop:

 

1) Create a multichannel analog voltage output task, write the output voltages, then clear the task to release the device.

2) Create a multichannel analog voltage input task, then spend the rest of the frame waiting some delay time for the controlled system to reach steady state (if the elapsed time already exceeds the delay time, do not wait).

3) Read the multichannel voltages, then clear the task to release the device.

4) Wait until the remaining specified iteration rest time has passed before starting the next iteration (again, if the elapsed time already exceeds the rest time, do not wait).

 
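In pseudo-code terms, the loop body looks like this (sketched here in Python with NI's nidaqmx package, since I can't paste a block diagram; the channel names, voltages, and timing values are placeholders, not the ones in the attached VI):

import time
import nidaqmx

AO_CHANNELS = "Dev1/ao0:1"   # placeholder multichannel AO names
AI_CHANNELS = "Dev1/ai0:1"   # placeholder multichannel AI names
SETTLE_DELAY = 0.5           # settle time in seconds (placeholder)
REST_TIME = 1.0              # iteration rest time in seconds (placeholder)

for i in range(10):
    t0 = time.perf_counter()

    # 1) AO task: write the output voltages, then clear the task
    #    so the device is released for the next task.
    with nidaqmx.Task() as ao:
        ao.ao_channels.add_ao_voltage_chan(AO_CHANNELS)
        ao.write([1.0, 2.0])  # placeholder voltages, one per channel

    # 2) AI task created up front, then wait out whatever settle
    #    time remains (skip the wait if it has already elapsed).
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan(AI_CHANNELS)
    remaining = SETTLE_DELAY - (time.perf_counter() - t0)
    if remaining > 0:
        time.sleep(remaining)

    # 3) Read the voltages, then clear the task to release the device.
    readings = ai.read()
    ai.close()

    # 4) Rest out the remainder of the iteration period (skip if late).
    remaining = REST_TIME - (time.perf_counter() - t0)
    if remaining > 0:
        time.sleep(remaining)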

I've implemented the above in the attached program, along with several data indicators to monitor frame execution times. I am new to LabVIEW and this is my first time using timed sequences, so if I'm doing something boneheaded please do tell me (I suspect I should be using a timed while loop with frames instead of a for loop with a timed sequence inside it). I can't for the life of me figure out the following:

 

A) Why the loop only ever performs one iteration, yet ends with no errors. Based on the time and error indicators it appears to be working correctly, but I'm not seeing any data in the waveform graph. This is also the first time I've used a waveform graph, though, so maybe I just have it misconfigured. The voltage output step is definitely working, in that the box's output voltage is changing.

B) The global end time for that iteration is reported to be extremely long (about 20,000 seconds) when only about 2 seconds have actually passed. Perhaps this is a bug.

 

C) Why the iteration time is set to 1 second but the program takes about 2 seconds to finish. Perhaps this is related to A.

 

Any help at all would be greatly appreciated. I wouldn't be surprised if I've made some fundamental error with the task handling.

Message 1 of 3

A) It stops after one iteration because you inverted the status. What you really should do is wire the error cluster directly to the termination terminal; then the loop will stop when there's an error. You currently have it set up to stop when there is no error.
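In text form, the inverted condition looks like this (a generic sketch, not your actual VI):

# The error cluster's status boolean is True once an error has occurred.
error_status = False

for i in range(10):
    # ... per-iteration DAQ work that may set error_status ...

    # Buggy (inverted): stop when there is NO error, so a healthy
    # run exits after a single iteration.
    # if not error_status:
    #     break

    # Correct: stop only once an error HAS occurred, which is what
    # wiring the error cluster straight to the terminal gives you.
    if error_status:
        break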

 

I don't see why you are using the Timed Sequence. You have delays in there, which completely defeats the point of using a TIMED Sequence. It looks like you could just use data flow to perform all of your sequencing.


Message 2 of 3

Thanks for the reply! Yes, the boolean inversion of the status definitely fixes the for loop. I looked at that twice thinking it was ok. Guess that's what I get for coding while tired. 

The reason for using the timed sequence is that this model is only a small component of a much larger program. That program will require these tasks to run after a hardware trigger. The tasks will also need microsecond resolution, since they must stay in sync with other processes, so we'll have to buy a more expensive DAQ that can accept a hardware clock. And from what I've been reading, microsecond resolution seems to require Real-Time DAQmx VIs running on a real-time operating system (or at least that was true at one time; is it still?). At the moment this is just a proof-of-concept prototype.
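For the eventual triggered version, I'm picturing something along these lines (sketched in Python with nidaqmx; the device, channel, and PFI terminal names are placeholders, and the USB-6009 itself can't do this):

import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

with nidaqmx.Task() as ai:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")  # placeholder channels
    # Hardware sample clock: per-sample timing happens on the device,
    # not in software, so it isn't limited by OS millisecond scheduling.
    ai.timing.cfg_samp_clk_timing(
        rate=100000.0,                      # 100 kS/s, 10 microseconds/sample
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=1000,
    )
    # Start acquisition on a rising edge at a PFI line (placeholder terminal).
    ai.triggers.start_trigger.cfg_dig_edge_start_trig(
        "/Dev1/PFI0", trigger_edge=Edge.RISING
    )
    data = ai.read(number_of_samples_per_channel=1000)

With the sampling hardware-clocked like that, the software loop only has to keep the buffer serviced, which I gather is why a real-time OS mainly matters when the software itself must react with microsecond determinism.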

I still don't understand item B from my original post, though. The contextual help states that the global end time gives the total time elapsed for the previous iteration of the timed sequence, in nanoseconds. However, by watching how quickly the iteration index i changes, I can tell that each iteration now takes ~1.5 seconds, yet the global end time is reported as 141584317321314 ns, i.e. 141584.317321314 s. That is obviously incorrect. Am I doing something wrong, or is this just a bug?

As for item C, I assume the extra ~0.5 s is time spent on the operations at the end of the for loop. But those are very simple operations that should take a few ns, or at most a few µs, not 0.5 s. I'm wondering whether something I'm doing is making it particularly inefficient. For instance, should I be using a multiframe timed while loop instead of a timed sequence inside a for loop? I'm about to compare the two myself; I'm just wondering what the best practice is.
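To see where the extra time actually goes, I'll timestamp each stage, something like this (a Python sketch of the bookkeeping; the stages are placeholders for the real DAQ calls):

import time

t0 = time.perf_counter()
stamps = {}

# ... step 1: write outputs ...
stamps["after write"] = time.perf_counter() - t0

# ... step 2: settle wait ...
stamps["after settle"] = time.perf_counter() - t0

# ... step 3: read inputs ...
stamps["after read"] = time.perf_counter() - t0

# ... step 4: iteration rest ...
stamps["after rest"] = time.perf_counter() - t0

print(stamps)  # whichever gap jumps by ~0.5 s is the culprit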

Message 3 of 3