
Synchronizing Digital Output, Analog Output, and Analog Input using NI-DAQmx


Hello

 

I am programming a pulse sequence that contains some digital pulses and analog voltages, and during the pulse sequence I need to read an analog voltage. All of these channels need to be synchronized. I am using NI-DAQmx.

 

Please see the attached VI.

I have to use LabVIEW 2013.

 

  • In the attached VI, the length of the pulse sequence is 1227 µs.
  • I am using a 1 µs clock to sync, so the input array for each channel is 1227 elements long (a sketch of what such pattern arrays look like follows this list).
  • At various instances in this pulse sequence I need to collect data (an analog voltage), and I am using analog input for that.
  • After every pulse sequence, the input data for both the analog and digital channels changes (not happening in the VI right now), so I need to put everything inside a For loop.
  • In this example I am changing the input data 10,000 times, so I am running the For loop 10,000 times.
  • I am generating my own clock using a counter.
  • Essentially, all the channels (DO, AO, and AI) should start at the same time, run for 1227 µs, stop at the same time, and this should repeat 10,000 times.
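To make the array layout concrete, here is a rough sketch (not from the original post) of what such per-channel pattern arrays could look like, written in Python with made-up pulse positions; the actual attachment is a LabVIEW VI:

```python
import numpy as np

N = 1227                     # pulse-sequence length: 1227 samples at 1 µs each

do_pattern = np.zeros(N, dtype=bool)
do_pattern[100:125] = True   # hypothetical 25 µs digital pulse starting at t = 100 µs

ao_pattern = np.zeros(N)
ao_pattern[300:400] = 2.5    # hypothetical 100 µs analog level of 2.5 V
```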

The issue:

  • All the channels are firing and the AI is getting data.
  • The problem is that none of them are synchronized (obviously, because I am not doing anything to synchronize them; I don't know how to do it).

I see a bunch of posts regarding synchronization, but none of them address my issue specifically.

 

Message 1 of 11

There are a LOT of threads about sync and many of them ARE relevant and useful.  But not nearly all, and I can understand how your search might have led you to the wrong places.

 

Right now I can only give you some brief bullet points.

 

1. First, creating a clock with a counter and having all the other tasks use it as a sample clock makes a fine basic approach.  Good work there.

 

2. But you must *also* be careful about LabVIEW dataflow, and make sure that AO, DO, and AI are all started *before* the counter task.
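In text form, points 1 and 2 together look roughly like this. This is only a sketch in the Python nidaqmx API (the thread itself is about LabVIEW), and the device and terminal names such as "Dev1" and "/Dev1/Ctr0InternalOutput" are placeholders:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

N = 1227

# Counter task: a free-running 1 MHz pulse train used as the shared sample clock.
clk = nidaqmx.Task()
clk.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=1e6, duty_cycle=0.5)
clk.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

# AI task (AO and DO would be configured the same way), clocked by the counter.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
ai.timing.cfg_samp_clk_timing(1e6, source="/Dev1/Ctr0InternalOutput",
                              sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=N)

# Order matters: start the clock consumers first and the clock last,
# so no task misses the first clock edge.
ai.start()
clk.start()
```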

 

3. With that in mind, there's a conceptual problem with having AO and DO finite while CO and AI are continuous.  

    My guess is that you're making a mistake here.   Can you explain more about the intention? 

 

4. If you know all your AO and DO patterns in advance, there's no need to keep starting and stopping those tasks in a loop.

 

5. Do more searching here.  I'd especially suggest you look for the username "John_P1".  Some years back he was very active and posted a TON of really great DAQmx-related examples.

 

 

-Kevin P

Message 2 of 11

Thanks for the reply, Kevin.

 

About point 3:

I don't have any constraint against making them all finite; I just can't get them to pulse and read if I make CO and AI finite.

 

Point 4:

I know the AO and DO patterns in advance; I could definitely append them one after another for all 10,000 patterns and run it without a loop. But in the future I might have to do something else between the patterns (unrelated to AO and DO), so I am putting it in the loop.

 

The MAIN problem in my program is that I am not doing anything to sync these channels except putting them in the loop, and that is not enough.

 

I am attaching a sample pulse sequence here. In reality I will have a few more channels and a slightly more complicated pattern, and I need to scan one of the parameters, for example the amplitude of the AO or the distance between the pulses in the DO, so I am putting it in the For loop.

 

 

In this sample I have an AO and a DO pattern and am collecting data using AI throughout. The output and input channels need to be perfectly synchronized at every instant.

 

Hope this explains my issue better.

 

Thanks and regards,


Message 3 of 11
Solution
Accepted by Sannar

I had a few minutes over lunch to do some fairly minimal cleanup on your posted VI. I don't have LV 2013 installed, so I hope the back-save comes through cleanly.

 

The data you feed to DO and AO looks to be constant arrays of 0 values -- I assume you'll change that part as needed.

 

Otherwise, most of the rest of the changes were small tweaks to task configs and sequencing their starts and stops to be sure they'd all be sync'ed to the master clock you create with your CO task.
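For readers who can't open the attachment, here is a rough text equivalent of that structure, again sketched in the Python nidaqmx API with placeholder channel names and zero-filled patterns standing in for the real data:

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

N, FS = 1227, 1e6
CLK_SRC = "/Dev1/Ctr0InternalOutput"     # placeholder terminal name

# Master clock: continuous counter pulse train.
clk = nidaqmx.Task()
clk.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=FS, duty_cycle=0.5)
clk.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

# AO, DO, and AI are all finite and all slaved to the counter's output.
ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
ao.timing.cfg_samp_clk_timing(FS, source=CLK_SRC,
                              sample_mode=AcquisitionType.FINITE, samps_per_chan=N)

do = nidaqmx.Task()
do.do_channels.add_do_chan("Dev1/port0/line0")
do.timing.cfg_samp_clk_timing(FS, source=CLK_SRC,
                              sample_mode=AcquisitionType.FINITE, samps_per_chan=N)

ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
ai.timing.cfg_samp_clk_timing(FS, source=CLK_SRC,
                              sample_mode=AcquisitionType.FINITE, samps_per_chan=N)

for i in range(10000):
    ao.write(np.zeros(N), auto_start=False)   # swap in the real pattern here
    do.write([False] * N, auto_start=False)
    ao.start(); do.start(); ai.start()        # arm all clock consumers first...
    clk.start()                               # ...then release the master clock
    data = ai.read(number_of_samples_per_channel=N)
    ao.wait_until_done(); do.wait_until_done()
    clk.stop(); ao.stop(); do.stop(); ai.stop()
```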

 

 

-Kevin P

Message 4 of 11

Hey Kevin,

 

Thanks so much, it worked like a charm right away. I had been breaking my head over this for a month.

 

The AO and DO data is not all zeros; it's two 25 µs pulses 100 µs apart (for testing; pic attached) and they are perfectly synced, as is the AI channel.

 

One question: is there a reason you left all four DAQmx Timing calls outside the For loop? I put them inside the loop and it still worked fine, and this way I can also change the sample size for each cycle of the For loop.

 

And the way the syncing works is: for every cycle of the loop, it waits to collect the errors from all the channels at the CO's Start Task before executing it, is that right? If so, it looks like the syncing is a by-product of collecting all the errors (which in itself is not that important). I would have assumed the syncing should get preference.

 

In any case, it works perfectly for me. Thanks once again.

 

Sankar

 

 

Message 5 of 11

If you might change the # of samples from iteration to iteration of the For loop, you are right to put the calls to DAQmx Timing *inside* the loop.  It will be *necessary* in that situation.  I thought you were delivering the same pattern & # samples every iteration, making it safe (and slightly more efficient) to put them outside.
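Continuing the hypothetical Python sketch from earlier in the thread (the `patterns` sequence and the `ao`/`do`/`ai`/`clk` tasks and constants are assumptions carried over from it), the variable-sample-count case would move the timing calls inside the loop:

```python
# samps_per_chan changes, so timing must be reconfigured every iteration.
for n, ao_pat, do_pat in patterns:            # hypothetical per-iteration data
    for t in (ao, do, ai):
        t.timing.cfg_samp_clk_timing(FS, source=CLK_SRC,
                                     sample_mode=AcquisitionType.FINITE,
                                     samps_per_chan=n)
    ao.write(ao_pat, auto_start=False)
    do.write(do_pat, auto_start=False)
    ao.start(); do.start(); ai.start()
    clk.start()
    data = ai.read(number_of_samples_per_channel=n)
    clk.stop(); ao.stop(); do.stop(); ai.stop()
```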

 

As to syncing and errors: I had dual intentions there.

- merging errors before starting the counter uses LabVIEW's *dataflow* as the sequencing mechanism.  It's usually considered more LabVIEW-like to rely on dataflow when feasible rather than resort to things like Sequence structures.

- in apps involving close sync of multiple tasks, an error in any one of those tasks is usually going to mean that the rest of the tasks won't be useful without the one that errored out.  I often consolidate errors from multiple tasks for this additional reason.

 

One last tidbit: in most systems I've dealt with that sync'ed output and input, I configured the outputs to generate on the *leading* edge of the sample clock while the inputs capture on the *trailing* edge.  This gives the system a little bit of time to respond to the new stimulus.  You don't *have* to do this, but I've usually found it to be preferable.

 

 

-Kevin P

Message 6 of 11

Thanks much for posting this. It is greatly helpful.

Can you please advise what would be the best way to delay AI with respect to AO in this particular example?

 

Message 7 of 11

Sure.  There's a presently-unwired 'active edge' input terminal available in your calls to DAQmx Timing.   Because the CO pulse train you're using as a sample clock is specified to idle low, that means each pulse will produce a rising edge first and a falling edge second.   So you can just specify 'Rising' for the AO task (and probably also the DO task) and 'Falling' for the AI task.
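In the hypothetical Python sketch used earlier in this thread (same assumed task objects and constants), that corresponds to the `active_edge` argument of `cfg_samp_clk_timing`:

```python
from nidaqmx.constants import AcquisitionType, Edge

# Outputs update on the rising edge of each clock pulse...
for t in (ao, do):
    t.timing.cfg_samp_clk_timing(FS, source=CLK_SRC, active_edge=Edge.RISING,
                                 sample_mode=AcquisitionType.FINITE,
                                 samps_per_chan=N)
# ...while AI captures on the falling edge, after the outputs have settled.
ai.timing.cfg_samp_clk_timing(FS, source=CLK_SRC, active_edge=Edge.FALLING,
                              sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=N)
```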

 

 

-Kevin P

Message 8 of 11

Thank you very much for sharing your wisdom. Is there a way to change the amount of time delay between the AO and AI?

Message 9 of 11

Absolutely.   That's the beauty of using a counter pulse train that you control as the sample clock.  You'd typically define a frequency and duty cycle, where the duty cycle ranges between 0 and 1 (but not exactly 0 or 1).  A larger duty cycle means a larger pulse width and more delay from rising (AO) to falling (AI) edge.  IIRC, you defined your pulse train in terms of high and low time so delay is defined simply by the high time, and it's a simple calc to choose a low time that gives you the right frequency.
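As a worked example of that calc (made-up numbers, keeping the thread's 1 MHz clock):

```python
period = 1e-6                   # 1 MHz sample clock -> 1 µs period
delay = 0.7e-6                  # desired rising-edge (AO) to falling-edge (AI) delay
high_time = delay               # the delay IS the pulse's high time
low_time = period - high_time   # choose low time to keep the same frequency
# e.g. clk.co_channels.add_co_pulse_chan_time("Dev1/ctr0",
#                                             high_time=high_time, low_time=low_time)
```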

 

With only a single AI channel, you can feel pretty free to use a duty cycle as large as 0.9 or more.  If you had multiple AI channels in your task on a multiplexing device, you'd need to be more careful about the AI multiplexing operations "spilling across" into the next AO sample.  The default behavior of the multiplexer is to spread the AI conversions widely across the entire sample period.  You might find yourself needing to purposely speed that up by setting a higher rate for the "convert clock" that gets your AI sampling done before the next AO sample can exert its influence.
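In the hypothetical Python sketch, speeding up the convert clock would look something like this (the `ai_conv_rate` timing property is my assumption of the right knob, mapping to the DAQmx AI convert-clock rate attribute):

```python
# Finish all AI conversions early in each sample period, well before
# the next AO update can influence the measurement.
ai.timing.ai_conv_rate = 500e3   # conversions per second across the channel scan list
```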

 

 

-Kevin P

Message 10 of 11