LabVIEW


Generate AO with NI DAQ 9260 in a lock-in amplifier, ERROR 200288

Solved!
Go to solution

Hello,

 

I am trying to implement a lock-in amplifier with a 9174, a 9260 and a 9215. The system is very simple: the 9215 samples the DUT and reference signals, then the sampled signals are multiplied and summed over a specified integration time. My program is a simplified version of this one. Since the DUT signal and the reference signal are in phase, I did not have to implement an I/Q signal path. There are two loops: one is dedicated to signal acquisition and the other performs the lock-in function and the AO; data is shared between them through a queue. This was implemented and tested with a USB-6211 and everything was fine (maybe there were issues I did not notice). One day the 6211 stopped working, so I purchased these CompactDAQ modules. Here comes the problem.
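For readers unfamiliar with the lock-in part: the multiply-and-sum described above can be sketched in a few lines of Python (this is a toy model of the math, not the VI; all signal names and values here are made up for illustration):

```python
import math

def lock_in(dut, ref):
    """Single-phase lock-in: multiply the DUT samples by the reference
    and average over the integration window. With DUT and reference in
    phase, the result is proportional to the DUT amplitude, so no
    quadrature (Q) path is needed."""
    return sum(d * r for d, r in zip(dut, ref)) / len(dut)

# Simulated 1 kHz signals sampled at 100 kHz over a 10 ms window
fs, f = 100_000, 1_000
n = fs // 100                       # 10 ms integration = 10 full cycles
ref = [math.sin(2 * math.pi * f * i / fs) for i in range(n)]
dut = [0.5 * s for s in ref]        # in-phase DUT signal, amplitude 0.5

x = lock_in(dut, ref)
# mean(0.5 * sin^2) over whole cycles = 0.5 * 0.5 = 0.25,
# so the recovered amplitude is 2 * x = 0.5
```

For a unit-amplitude sine reference, the lock-in output is half the in-phase signal amplitude, which is why the code doubles `x` to recover the amplitude.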

 

In the previous program for the USB-6211, I used a single channel and a single sample for the AO, so I didn't have to add a sample clock. This does not work for the 9260 (correct me if I am wrong). I believe the 9260 uses delta-sigma DACs inside (otherwise it would be impossible to achieve 24-bit resolution), so I have to specify a sample clock for the AO (the solution comes from here). What happens now is that the AO only updates during the first iteration and never updates again, and error 200288 comes up. I can feel the AI and AO loops are fighting with each other, but I just don't know how to fix it.

 

I am new to both LabVIEW and DAQ hardware, but this forum has helped a lot. I really appreciate your help.

Message 1 of 16

I think the problem you're having is described here: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P9KaSAK&l=en-GB

 

In particular, I think you need to call DAQmx Stop after each cycle.

I'm a little suspicious about the location of your Start call too - you haven't wired autostart but I think by default it might be true (although perhaps this is device dependent, I got different results in a simulated case).

 

Here's a quick example "matching" your code:

Example_VI.png

 

If I leave "autostart" unwired instead of setting it to false, then I get an error for calling Start on a running task. So I expect you want to call Stop once per Start.

 

Here's a modified version that works:

Example_VI.png

Note that you are allowed to call Stop on a Stopped task repeatedly (at least on my cRIO, I was testing this last week).
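The Start/Stop pairing can be illustrated with a toy state machine (this is a rough Python sketch of the behavior described above, not the real DAQmx API; the class and error strings are invented for illustration):

```python
class ToyTask:
    """Toy stand-in for a DAQmx task: Start on a running task errors,
    Stop is idempotent, and a write on a stopped task without autostart
    fails (mimicking error -200288)."""
    def __init__(self):
        self.running = False

    def start(self):
        if self.running:
            raise RuntimeError("Start called on a running task")
        self.running = True

    def stop(self):
        # Stopping an already-stopped task is allowed, as noted above
        self.running = False

    def write(self, value, autostart=False):
        if autostart and not self.running:
            self.start()
        if not self.running:
            raise RuntimeError("-200288: write on a task that is not running")

task = ToyTask()
for _ in range(3):        # one Start/Stop pair per loop iteration: no error
    task.start()
    task.write(1.23)
    task.stop()
```

In this model, calling `start()` twice in a row raises, while `stop()` twice in a row is harmless, matching the observed behavior.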

 

These tests are done with a simulated NI PXIe-6358.


Message 2 of 16

Thank you. 

By calling Stop after each iteration, I am able to get rid of the 200288 error. I've changed the location of Start as well. However, the AO only updates after I press the stop button. It seems the lock-in loop is blocked by the acquisition loop. Here is my updated version.

Message 3 of 16

Can you execute your program with some additional debugging indicators and report their values/behaviour?

 

It'd be good to know:

  1. If "i" in the bottom loop is changing
  2. If the "Wait Until Done" is executing quickly or slowly - you can monitor this by probing, e.g., the error wire on either side of the function; the probe window should show the times at which data appears, and you can get the duration from the difference
  3. If the "i" in the top loop is changing
  4. If the AO value that is updated (after you press stop, as you currently described it) is the value you expect, or a wrong value

I expect it won't make a huge difference, but you might consider doing the calculations in the top loop rather than the bottom, and changing the datatype of the Queue to be a scalar double (then using the Build Array in the bottom loop to get 2 samples, as you're doing now). So moving the Index Array, Multiply and Add Array Elements into the top loop.
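The "do the math in the producer, queue only scalars" suggestion looks like this in a rough Python sketch (a toy model of the two-loop structure, not LabVIEW; the thread/queue names are invented for illustration):

```python
import queue
import threading

q = queue.Queue()   # unbounded, like a LabVIEW queue with no size limit

def acquisition_loop(n_iters):
    """Producer: do the multiply-and-sum here and enqueue one scalar
    per iteration, instead of shipping whole waveforms downstream."""
    for _ in range(n_iters):
        dut = [0.5, 0.5]      # stand-ins for the sampled waveforms
        ref = [1.0, 1.0]
        error_signal = sum(d * r for d, r in zip(dut, ref))
        q.put(error_signal)   # a scalar double, not an array
    q.put(None)               # sentinel: producer finished

def output_loop(results):
    """Consumer: just dequeue scalars and drive the AO with them."""
    while (x := q.get()) is not None:
        results.append(x)     # real code would write x to the AO here

out = []
producer = threading.Thread(target=acquisition_loop, args=(5,))
producer.start()
output_loop(out)
producer.join()
```

The consumer then only has to dequeue and write, which keeps the AO loop as short as possible.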

I'm a bit suspicious of the Multiply on the Waveforms - maybe you need to Unbundle the Y array first?


Message 4 of 16

I took a look and decided to tidy up the diagram to reduce the wire crossings and bends.  That makes it easier to follow.

 

The first functional change I made was to "commit" the AO task before the lower loop because this will reduce the overhead incurred during subsequent stop/restart cycles.  The other thing I did was to let the lower loop terminate AO task errors, which were previously being ignored.
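Why "commit" helps can be sketched with a toy model of the DAQmx task state machine (this is a simplification for illustration, not the real API: Stop returns the task to the last state you explicitly put it in, so an up-front Commit makes stop/start cycles much cheaper):

```python
class ToyTask:
    """Toy model of the DAQmx state machine, counting how often the
    expensive verify/reserve/commit work is redone."""
    def __init__(self):
        self.explicitly_committed = False
        self.config_work = 0

    def commit(self):
        self.explicitly_committed = True

    def start(self):
        if not self.explicitly_committed:
            self.config_work += 1   # implicit commit, repeated every cycle

    def stop(self):
        pass                        # falls back only as far as the committed state

# Without an explicit Commit: configuration work on all 100 cycles
a = ToyTask()
for _ in range(100):
    a.start(); a.stop()

# With an up-front Commit: no configuration work inside the loop
b = ToyTask()
b.commit()
for _ in range(100):
    b.start(); b.stop()
```

In this model `a` redoes the configuration 100 times while `b` does none of it inside the loop, which is the overhead reduction the commit is after.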

 

Non-functional changes included wiring error wires directly into boolean functions b/c it's no longer necessary to explicitly unbundle the 'status' field.  I also added wires to the iteration terminals 'i' to give you something to probe during debug.  And that's all.  Everything else was just housekeeping work.

 

Start with the attached code and follow all the excellent debugging advice from cbutcher.

 

I see only 3 places code can get "stuck" for an extended time.  The DAQmx Read call in the upper loop could get stuck waiting for samples until the timeout expires (default value when unwired = 10 seconds).  Similarly the DAQmx Wait Until Done call in the lower loop also has a default 10 second timeout.  And the Dequeue function in the lower loop has a default *infinite* timeout.  It will never end until something arrives in the queue OR there's a queue error (such as that caused by releasing the queue ref after the upper loop terminates, perhaps after clicking the Stop button).

   (There's also an Enqueue in the upper loop with a default infinite timeout, but from experience, Enqueue's never hang except in the case of fixed-size queues that are full.  You did not make a fixed-size queue though.)
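The dequeue-timeout behavior described above can be demonstrated with Python's stdlib queue (a rough analogy to the LabVIEW Dequeue, with invented producer names):

```python
import queue
import threading
import time

q = queue.Queue()

# A dequeue with an infinite timeout blocks until something arrives...
def late_producer():
    time.sleep(0.2)
    q.put("sample")

threading.Thread(target=late_producer).start()
t0 = time.monotonic()
item = q.get()                      # blocks ~0.2 s, then returns
waited = time.monotonic() - t0

# ...whereas a finite timeout turns a stuck dequeue into a catchable error
# (the LabVIEW equivalent is the Dequeue's timeout terminal).
timed_out = False
try:
    q.get(timeout=0.05)
except queue.Empty:
    timed_out = True
```

In LabVIEW the analogous "wake up a blocked Dequeue" trick is releasing the queue reference, which forces the Dequeue to return with an error, as described above.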

 

The fact that the upper loop is even responsive to your GUI Stop button tells me that the upper loop is probably iterating.  Given that, I see no reason for the lower loop to get stuck at the Dequeue.  That only leaves 2 options I can see.  Either an ignored error along the AO task chain, or 10-second timeouts at DAQmx Wait Until Done. 

   Neither option exactly corresponds to seeing AO update once and only once right after clicking Stop.  So work through cbutcher's debug suggestions and let us know what you see.

 

 

-Kevin P

Message 5 of 16

Thank you for the tip. I actually found something unusual.

1. "i" in the bottom loop changes, but not as fast as in the top loop. For example, if I set the integration time to 0.01 s, which translates to an update rate of 100 cycles per second, the top loop updates as expected but the lower loop's count is much smaller. The second and third rows in the screenshot refer to the bottom and top loop, respectively.

jinch_0-1617086250817.png

2. What I get from the probe window is shown above. It seems I can't find the details of data appearing?

3. Top loop changes as expected, 100 iters per sec for 0.01s integration time setup.

4. The AO value that is updated when I click the stop button is correct; it matches the number on the front panel.

 

Message 6 of 16

Ok, so assuming that your top loop is running at the intended rate, that screenshot was taken after around 4s?

Your bottom loop is therefore taking ~0.1s per iteration, which is much too fast to be timing out, but too slow to be useful maybe?

 

Have you tried with Kevin_P's modified VI? His "commit" node might help you here (I have no idea how much of a difference this will make, but generally anything he says about DAQmx is gold...)

 

I'd still suggest moving the signal processing into the top loop - at the least it will reduce the memory space required by the queue, which might be significant if the bottom loop is lagging behind (because your unbounded queue will keep growing).
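The unbounded-queue concern is easy to quantify, and a common workaround is a size-1 "most recent value" queue where the producer discards the stale element before enqueueing (a hedged Python sketch; the rates below are just the figures from this thread, and the lossy-queue pattern is one option, not necessarily the right one for a control loop):

```python
import queue

# Producer at 100 iters/s, consumer at ~10 iters/s: with an unbounded
# queue the backlog grows by ~90 elements per second of run time.
producer_rate, consumer_rate, seconds = 100, 10, 4
backlog = (producer_rate - consumer_rate) * seconds   # items queued after 4 s

# A bounded queue that drops the oldest sample keeps memory flat and
# lets the consumer always see recent data (a lossy single-element queue):
q = queue.Queue(maxsize=1)
for sample in range(5):
    if q.full():
        q.get_nowait()        # discard the stale sample
    q.put_nowait(sample)      # enqueue the fresh one
latest = q.get_nowait()       # the consumer only ever sees the newest value
```

The trade-off is that intermediate samples are lost, which matters if the consumer is supposed to integrate every sample rather than track the latest one.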

 

I'd also be tempted to remove the "Get Queue Status" in the bottom loop - I don't think it's doing anything for you (the Dequeue will already get the error when the top loop stops, and the chance of the queue being removed between the two calls is presumably small (although also indeterminate)).

 

What is the desired behaviour if the bottom loop is running slowly?

Would slowing down the top to match the bottom be useful? I don't fully understand the process you're carrying out, and the dependency between the two loops (physical, not LabVIEW).

Is the Analog Output an input to the DUT? If so, you might need to signal the acquisition loop when a given target lock-in value is achieved?


Message 7 of 16

Now that we're down into the 100 msec per iteration realm, my memory's jogged.  I was in a thread before where I couldn't replicate another user's long turnaround time when stopping and restarting a task in a loop.  It turned out that I was using a PCIe X-series device for my tests while the other user had a USB-connected device (I *think* cDAQ, but not sure).

 

I can't seem to find that thread now, but I did find this one that also referred back to it.  The 80-some-odd msec timing number rings a bell.  My speculation then (and still now, lacking info to the contrary) was that this might be an inherent limitation for many (perhaps most or all) USB-connected devices.  I know that in the original thread, the difference in stop/restart overhead between my desktop board and the USB-connected device was pretty dramatic.

 

In the big picture here, it looks like you'd probably be better off with a very simple SAR-style AO module that you run in software-timed, unbuffered, on-demand mode.  You'd get some timing jitter from software timing, but you'd remove 2 sources of latency -- the stop/start overhead and the Delta-Sigma filter delay.
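To get a feel for the jitter of software timing, here is a rough Python sketch that runs a software-timed update loop with the hardware write stubbed out (the function name and target rate are invented for illustration; absolute jitter numbers will depend on the OS and load):

```python
import statistics
import time

def software_timed_updates(period_s, n):
    """Run n software-timed 'AO updates' against absolute deadlines and
    return the per-iteration timing errors in seconds. The actual
    hardware write is stubbed out."""
    errors = []
    next_deadline = time.monotonic() + period_s
    for _ in range(n):
        # a real on-demand AO write, e.g. a single-sample DAQmx Write,
        # would go here
        time.sleep(max(0.0, next_deadline - time.monotonic()))
        errors.append(time.monotonic() - next_deadline)
        next_deadline += period_s   # absolute deadlines avoid drift
    return errors

errs = software_timed_updates(0.01, 50)   # aim for 100 updates/s
jitter = statistics.pstdev(errs)          # spread of the timing errors
```

Scheduling against absolute deadlines (rather than sleeping a fixed amount each iteration) keeps the average rate on target even when individual iterations wake up late.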

 

 

-Kevin P

Message 8 of 16

Hi Kevin,

I think you are right about the latency. I tried to reduce the speed of the top loop by increasing the integration time, and the lower loop starts to be able to keep up with the top loop, e.g., at 2 iters per sec. Later I sampled the AO in the acquisition loop and found something interesting.

jinch_1-1617172727121.png

The AO is supposed to stay constant during one cycle and increase or decrease by a small amount at the next cycle (because the AO is the integrator output). Now I am observing a small dead zone of duration delta_t, and this delta_t grows as the loop becomes faster when I reduce the integration time. Eventually, when the loop is going too fast, e.g., 100 iters per sec, the "duty cycle becomes 0", so I am only able to read zero at the output. I wonder if this is directly related to the latency.
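A back-of-envelope model is consistent with this observation, assuming the stop/reconfigure/start overhead is a roughly fixed dead time per cycle (the ~100 ms figure is taken from the loop-rate measurements earlier in this thread; the function below is just an illustration, not a measurement):

```python
# Assumed fixed per-iteration overhead (stop/restart latency), in seconds
dead_time = 0.1

def duty_cycle(integration_time):
    """Fraction of each cycle during which the AO holds a valid value,
    if a fixed dead_time is spent restarting the task every cycle."""
    period = max(integration_time, dead_time)
    return max(0.0, (period - dead_time) / period)

slow = duty_cycle(0.5)    # 2 iters/s: most of the cycle has valid output
fast = duty_cycle(0.01)   # 100 iters/s: the dead time swallows the whole cycle
```

In this model the valid-output fraction shrinks as the integration time approaches the fixed overhead, and collapses to zero once the requested cycle is shorter than the dead time, matching the "duty cycle becomes 0" behavior described above.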

 

Finally, switching to a non-delta-sigma DAC is an option if there is nothing better. But if I would like to stick with the delta-sigma AO for higher resolution, is it possible to optimize the code to reduce the latency? Or would a cRIO controller work better?

 

Thanks!

 

Message 9 of 16

1. Yes, the screenshot was taken around 4s.

2. The 0.1s delay is probably limited by the device itself, according to Kevin.

3. I have tried Kevin's modified VI and compared it against moving the signal processing to the top loop; it turns out there wasn't much of a difference. All of them "work" when the loop is slow enough (2 iters per sec), but I just noticed there is a dead zone period. The plots compare the AO output within one cycle.

jinch_1-1617174314908.png

Moreover, by moving the calculation to the top loop, it seems the dead zone period is even longer.

4. Get queue status is removed by Kevin.

5. Slowing down the top loop seems to work better, but the dead zone is quite annoying, because the LabVIEW system works in a feedback loop: the output of the DUT and the reference are sampled and multiplied (lock-in) to generate an error signal that controls the DUT, so that the output of the DUT follows the reference. Even though I can make the loop slower, the dead zone may not be acceptable as it creates a sudden jump.

 

So now the question is whether we can get rid of this dead zone and increase the speed if possible. If not, I will probably try to get a non-delta-sigma DAC module.

 

Message 10 of 16