
Stop Retriggerable Analog Output Properly


I am writing LabVIEW code to control an experiment I am building.  The general analog output scheme is shown in “AO Waveform.jpg”.  In summary, a 10 Hz TTL pulse triggers my analog output (retriggerable).  After each trigger, a one-period sine wave is generated, and then the voltage returns to the minimum and waits there until the next trigger.  I made an example VI (“Properly Terminate AO.vi”, with “Properly Terminate AO.jpg” for anyone who cannot open LabVIEW).  I am currently having trouble stopping the AO task.  I need to make sure the program always stops in a way that leaves the AO voltage at the minimum (i.e. not in the middle of a sine wave, but only after a sine wave has completed and the task is waiting for the next trigger).
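
For reference, here is the basic setup in rough text form (a sketch using the Python nidaqmx API in place of the LabVIEW DAQmx VIs; the device name, PFI line, sample rate, and waveform length are placeholders rather than my actual values, and the attached VI is the real example):

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge

FS = 100_000      # AO sample rate (placeholder)
N = 2_000         # samples per waveform -> 20 ms at 100 kS/s (placeholder)
# A placeholder one-period waveform that starts and ends at the minimum
wave = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(N) / N))

with nidaqmx.Task() as ao:
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
    ao.timing.cfg_samp_clk_timing(FS, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=N)
    # External 10 Hz TTL (from the DG535) on PFI0; re-arm after every waveform
    ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                      trigger_edge=Edge.RISING)
    ao.triggers.start_trigger.retriggerable = True
    ao.write(wave, auto_start=False)
    ao.start()
    input("Generating one waveform per trigger; press Enter to stop...")
    ao.stop()     # the problem: this stop can land in the middle of a waveform
```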

From looking at previous forum discussions, it seems that the most reasonable methods of accomplishing this are either:

1) Have the code check to see if the current waveform has completed and is waiting for the next trigger, and only stop if this is true

or

2) Stop at any point, then have the code check whether it was in the middle of writing out a sine waveform; if it was, finish writing out the rest of that waveform and then stop for good.

 

The most relevant previous post I was able to find was: https://forums.ni.com/t5/LabVIEW/stop-analog-continuous-waveform-generation-propperly/td-p/3054310/p....  In the discussion at the end, they seemed to have answered this exact question.  However, it was just text back and forth without any code, and I did not understand the conversation well enough to turn it into code of my own.  So I was hoping to get help either turning the idea they discussed into usable code, or getting another idea that works just as well.

 

Technical hardware/software details (what I am using):

- LabVIEW 2017

- A PCIe-6374 Multifunction I/O Device

- A Digital Delay Generator (Stanford Research Systems DG535) to create the 10 Hz TTL pulses

 

Thanks,

Gregory

 

Message 1 of 7
Solution (accepted by GregoryP)

I didn't immediately think of a good way to do Option 1.  I can now imagine a scenario with a dummy task that borrows the AO sample clock signal and also uses DAQmx Events to signal the app every N samples (where N is the number of samples AO generates for each trigger).
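
In rough text form (using the Python nidaqmx API as a stand-in for the LabVIEW DAQmx VIs, with placeholder channel and terminal names), the dummy-task idea would look something like this; I haven't tested it:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

N = 2_000                 # samples AO generates per trigger (placeholder)
stop_requested = False    # set True by the GUI stop button

ai = nidaqmx.Task()       # dummy task: its data is thrown away
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
# Borrow the AO sample clock, so this task only acquires while AO is generating
ai.timing.cfg_samp_clk_timing(100_000, source="/Dev1/ao/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)

def every_n(task_handle, event_type, num_samples, cb_data):
    # Fires once per completed AO waveform -- the only safe place to stop AO
    ai.read(number_of_samples_per_channel=num_samples)   # drain the dummy buffer
    if stop_requested:
        ao.stop()         # 'ao' is the retriggerable AO task from the question
    return 0

ai.register_every_n_samples_acquired_into_buffer_event(N, every_n)
ai.start()                # start before (or together with) the AO task
# ...later, after the stop: ai.stop(); ai.close()
```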

 

Meanwhile I tackled a version of Option 2.  But instead of the more difficult job of tracing out the remainder of the sine wave, I gave myself the easier job of simply ramping linearly to the min value.  (Finishing off the sine wave should be doable *IF* one can count on the result of querying a stopped task for the total # of samples generated.  I don't know if you can.  The last time I explored this it wasn't working reliably, though that was admittedly around 15 years ago.)

 

After the AO task stops, I use a little trick that lets me make a simple AI task that actually *measures* the AO voltage without any physical wiring.  Then I make a ramp signal that goes from that measured voltage to the min value.  I generate that ramp, stop AO, then remeasure with the AI task to confirm.
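
The same idea in rough text form (Python nidaqmx standing in for the attached LV 2016 code; the internal "_ao0_vs_aognd" channel is the no-wiring readback trick, but check in MAX that your board exposes it, and the channel names, rate, and ramp length are placeholders):

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

# (assumes the original AO task has already been stopped and cleared,
#  wherever in the sine wave it happened to be)

with nidaqmx.Task() as ai:     # measure the AO pin without any physical wiring
    ai.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
    v_now = ai.read()

ramp = np.linspace(v_now, 0.0, 2_000)      # gentle linear ramp down to the minimum
with nidaqmx.Task() as ramp_task:
    ramp_task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
    ramp_task.timing.cfg_samp_clk_timing(100_000, sample_mode=AcquisitionType.FINITE,
                                         samps_per_chan=ramp.size)
    ramp_task.write(ramp, auto_start=True)
    ramp_task.wait_until_done()

with nidaqmx.Task() as ai:     # re-measure to confirm we ended at the minimum
    ai.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
    print("AO is now at", ai.read(), "V")
```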

 

There are several additional comments in the attached LV 2016 code.

 

 

-Kevin P

Message 2 of 7

Hi Kevin, 

 

Thanks for your help.  For my specific purposes, it is important that the AO waveform be a smooth, continuous function to minimize jolting forces on the instrumentation being controlled by this AO voltage, so an Option 1 solution is preferable to me.  From your idea, I made the attached code (in case it might be helpful to anyone reading this in the future).  I used a counter task to make sure the AO task was only stopped after the AO waveform was complete. 
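
For anyone reading later, here is a rough text sketch of the same idea (Python nidaqmx in place of the LabVIEW VIs; channel names and durations are placeholders, and the attached VI is the actual implementation):

```python
import nidaqmx
from nidaqmx.constants import Edge

AO_DURATION = 0.020    # one full AO waveform (placeholder)

# A single-pulse counter task armed on the same 10 Hz TTL as the AO task.
# Its pulse lasts as long as one AO waveform, so "counter done" means
# "the waveform started by that trigger has finished".
ctr = nidaqmx.Task()
ctr.co_channels.add_co_pulse_chan_time("Dev1/ctr0",
                                       low_time=AO_DURATION,
                                       high_time=AO_DURATION)
ctr.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                   trigger_edge=Edge.RISING)

# ... GUI loop runs here until the stop button is pressed ...

ctr.start()                          # arm and wait for the next trigger
ctr.wait_until_done(timeout=1.0)     # returns once that trigger's waveform is over
ao.stop()                            # 'ao' is the retriggerable AO task; it is now idle
ctr.close()
ao.close()
```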

 

Thanks again, 

Gregory

 

Message 3 of 7

 

What you posted looks pretty good; just be aware of the following things, which may or may not be an issue for you:

 

1. After the GUI stop button is pressed, the AO task will respond to at least 1 more trigger.  Due to the 10 msec software polling rate, it'll sometimes respond to 2 more.

 

2. There's some timing uncertainty for how quickly the counter task will *recognize* that it's done, and how quickly after that the AO task will be stopped.   It's possible that the AO task might have started responding to yet another trigger during this uncertain time interval. 

    It appears that you intend AO to generate for 20 msec out of every 100 msec, leaving 80 msec of idle time to absorb your timing uncertainties.  Odds are that once in a while, Windows will exceed that.

    I'd also recommend doing some careful timing tests with that counter task.  Right now you're defining both the low time and the high time to equal the full AO duration.  However, when generating a single pulse, the low time is irrelevant; the (unwired) initial delay input is used instead.  (Note that the written text and the diagram are a little contradictory.  From experience, the text is correct.)  A small sketch of what I mean follows below.
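
To illustrate the single-pulse parameter behavior (Python nidaqmx as a stand-in for the CO Pulse Time VI; the counter name and times are placeholders, and this snippet fires the pulse immediately rather than off a trigger):

```python
import nidaqmx

with nidaqmx.Task() as ctr:
    ctr.co_channels.add_co_pulse_chan_time(
        "Dev1/ctr0",
        initial_delay=0.0,   # for a single pulse, THIS sets the delay before the pulse
        low_time=0.001,      # effectively ignored when only one pulse is generated
        high_time=0.020)     # the part that has to span the full AO waveform
    ctr.start()
    ctr.wait_until_done(timeout=5.0)
```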

 

Bottom line: your code will respond to 1 or 2 more triggers after clicking "stop", but it will *usually* (perhaps almost always) stop AO generation during the idle time between triggers.

 

If *usually* isn't going to cut it, answer back and let's see if we can't come up with an approach that'll be more reliable.

 

 

-Kevin P

Message 4 of 7

Hi Kevin, 

 

Thanks for your response. 

 

1. Yes, I am aware of this.  Thank you for pointing it out.  But for my application, it is fine for the AO task to respond to a few more triggers before actually stopping. 

 

2. I did not realize that Windows could take tens of msec or more to react and stop the AO task.  My final application could generate waveforms as long as 50 – 60 msec, and thus have only 40 – 50 msec of idle time between triggers. 

 

I am not familiar with how to perform timing tests, so recommendations would be appreciated. 

 

I did not fully understand the CO Pulse Time VI.  I just adapted an NI example, and when I tested it, the AO waveform stopped only after the entire waveform had completed.  But again, recommendations on how I should do it, or how to do it better, would be appreciated. 

 

And finally, if you have any thoughts or ideas on a more reliable approach to ensuring the AO task only stops during idle time, I would love to get your input on that. 

 

Thanks, 

Gregory

Message 5 of 7

I started to put a little something together with DAQmx Events, but had some misgivings that there'd still be exposure to Windows' (occasional lack of) responsiveness, so I decided to rethink it.

 

I haven't got it all worked out, but where I'm headed is to use a counter task as a retriggerable finite pulse train that acts as the sample clock for the AO task.  Then there will need to be some further logic to make sure that AO generation only stops at the end of a waveform.  I have some ideas, but I've got to start piecing it together.
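
In rough text form, the clocking piece of that plan would look something like this (Python nidaqmx standing in for the LabVIEW DAQmx VIs; device, terminal, and rate values are placeholders):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

FS, N = 100_000, 2_000      # sample clock rate and samples per waveform (placeholders)

# Retriggerable finite pulse train: N clock ticks for every external trigger
co = nidaqmx.Task()
co.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=FS, duty_cycle=0.5)
co.timing.cfg_implicit_timing(sample_mode=AcquisitionType.FINITE, samps_per_chan=N)
co.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
co.triggers.start_trigger.retriggerable = True

# Continuous AO task clocked by that pulse train
ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
ao.timing.cfg_samp_clk_timing(FS, source="/Dev1/Ctr0InternalOutput",
                              sample_mode=AcquisitionType.CONTINUOUS)
```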

 

 

-Kevin P

Message 6 of 7

Time to admit defeat, at least temporarily.

 

I only had a brief time to try out my code on real hardware and I couldn't get the behavior I wanted.  I remain puzzled why not.   I tried a bunch of little tweaks and things, but I'll post a version of my original plan and try to explain the thinking -- perhaps there's a branching-off point that'll lead to success?

 

Here's the intended idea (a rough text sketch of the same plan follows the list):

1. Configure a CO task as a retriggerable finite pulse train.

2. Configure an AO task to use that pulse train as its sample clock.  However, configure the AO task as a Continuous Sampling task.  The why comes later.

3. Configure the AO to "Do Not Allow Regeneration".

4. Configure the AO task buffer sizes (both the task buffer and the onboard buffer) to equal 1 "chunk size" worth of AO samples.

5. Configure a DAQmx Event to be fired after every "chunk size" worth of samples has been transferred from the (task) buffer.

6. Write 1 "chunk size" worth of AO samples to the AO task.

7. Start the AO task first, then the CO task.

8. Enter an event-handling loop.

9. In the code for the DAQmx Event, which should fire whenever the task buffer has transferred its chunk down to the board's onboard buffer, write another chunk's worth of samples to the task.

10. If the user clicks the stop button, set a flag.  When set, the DAQmx Event will no longer write any new data to the task.  Coupled with the "Do Not Allow Regeneration" property, one can expect a buffer underflow error.

11. To actually *see* that error, we need to interact with the task.  So I made a 50 msec timeout case that simply queries the total # of samples generated.  I expected it to give me an error when I reached the buffer underflow condition.

   But the error shouldn't occur until all the samples I've written to the AO task have managed to be generated.  As long as each "chunk size" brings the AO output gently back to 0, you should be good.
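
For anyone who wants to poke at it in text form, here is roughly what steps 3 through 11 amount to, continuing from the clocking sketch in my previous post (Python nidaqmx standing in for the attached LV 2016 code; this shows the *intended* behavior, not something I've gotten working, and names and sizes are placeholders):

```python
import time
import numpy as np
import nidaqmx
from nidaqmx.constants import RegenerationMode
from nidaqmx.errors import DaqError

# 'ao', 'co', and N come from the clocking sketch in the previous post
chunk = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(N) / N))   # ends gently at the minimum
stop_requested = False

ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION   # step 3
ao.out_stream.output_buf_size = N                                     # step 4 (task buffer)
# ao.out_stream.output_onbrd_buf_size = N     # step 4 (onboard buffer), if the board allows it

def on_chunk_transferred(task_handle, event_type, num_samples, cb_data):
    # Steps 5, 9, 10: refill the buffer with one more chunk unless a stop was requested
    if not stop_requested:
        ao.write(chunk, auto_start=False)
    return 0

ao.register_every_n_samples_transferred_from_buffer_event(N, on_chunk_transferred)

ao.write(chunk, auto_start=False)     # step 6: prime the buffer with one chunk
ao.start()                            # step 7: AO first...
co.start()                            # ...then the counter that clocks it

t_stop = time.monotonic() + 5.0       # stand-in for the GUI: request a stop after 5 s
try:
    while True:                       # step 8: the "event-handling loop"
        time.sleep(0.050)
        if time.monotonic() > t_stop:
            stop_requested = True     # step 10: the event callback stops writing new data
        _ = ao.out_stream.total_samp_per_chan_generated   # step 11: expect underflow here
except DaqError:
    pass       # every sample that was written has been generated; AO ended at the minimum
finally:
    for t in (ao, co):
        try:
            t.stop()
        except DaqError:
            pass
        t.close()
```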

 

But it didn't seem to work.  As I recall, it seemed as though the DAQmx Event never fired at all.  And the query for # samples generated never returned an error.  That doesn't make a lot of sense to me.  I went back and doubled the task buffer size as another experiment, but that didn't help either.

 

Code is below, not sure if/when I'll take another crack at it.

 

 

-Kevin P

Message 7 of 7