Multifunction DAQ


Issue with running simple AI and DO tasks concurrently on PXIe-6366 (Errors -89137 and -200744)

Solved!

Hi! I'm trying to make a readily expandable QMH VI for running various experiments around the lab. The base function is just controlling some AO and DO lines while reading in AI, and I want it to be able to run on any DAQ cards as long as the user selects the channels within the VI. So far I have a heavily modified version of the "Continuous Measurement and Logging" template, and once this is rock-solid I'll add in some other consumer loops for serial instruments and other file-saving formats.

 

I've run into a problem when using a PXIe-6366 card for both AI and DO operations: I get error -89137 because both tasks use the same resource (all AI, AO, and DO channels are grouped into three big tasks generated from a config file). When I implement the solution from "Error -89137 When Using Multiple NI-DAQmx Tasks or Terminal Routes - National Instruments" by setting the ref clock to the onboard clock, I get a new error, -200744. If I try a different source such as the 6366/100MHzTimebase, I get the original error. Currently I'm using a PXIe-4300 and the PXIe-6366 for AI, the PXIe-6366 for DO, and a PXIe-6736 for AO, and I've duplicated the error using simulated PXIe cards. I'm able to get around it by serially starting and stopping the AI tasks (seen in the VI by going to the settings menu and clicking the "Per-Channel AI Tasks?" button, which affects a conditional box in "Acquisition.lvlib:Configure Hardware.vi"), but this significantly slows down the VI even with a few channels, and my ultimate use will have several times as many channels (you can see how poorly it replicates the sine sweep on the simulated channels).

 

How can I go about fixing this issue? Also, is the QMH approach I currently have in the VI the appropriate method for this use case? It's replacing an older VI that was much more basic and had race condition issues, but I'm inexperienced in LabVIEW and open to other approaches.

 

Thanks!

Message 1 of 8

I've done a *lot* with DAQmx and have almost never needed to explicitly configure the Ref Clock.   I'll bet you won't need to either.   I can't really examine your code well because I can't install DAQmx for my LV 2020 Community Edition without losing DAQmx support for my ongoing work-related LV 2016 projects.

 

1.  My PXI experience is limited, but my understanding is that the vast majority of PXIe cards will *automatically* phase-lock to the Ref Clock on the PXI chassis backplane.

 

2. Many apps that sync across multiple devices don't *need* to depend on Ref Clock syncing anyway.   I've gotten by rather well by simply routing sample clocks and timebases. 

    I use triggering *far* less than what's typically recommended in articles on sync.  Especially when it comes to multi-device apps, triggering is often a "fool's gold" version of sync because beginners think that's *all* they need to do.  In reality, triggering only sets the starting moments in sync.  Everything after that depends on clocks and timebases.   In a PXIe system, those should *also* remain synced by the backplane Ref Clock.  In desktops, devices will drift relative to one another even if their start times are sync'ed with a trigger.  
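
    If it helps to see the distinction in text form, here's a rough sketch using NI's Python nidaqmx API rather than the LabVIEW DAQmx VIs (device names are placeholders, not from this thread): sharing only a start trigger aligns the first sample, while sharing the sample clock keeps every subsequent sample aligned too.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    # Two devices, each free-running on its own onboard timebase ("Dev1"/"Dev2" are placeholders).
    a = nidaqmx.Task()
    a.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    a.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)

    b = nidaqmx.Task()
    b.ai_channels.add_ai_voltage_chan("Dev2/ai0")
    b.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)

    # Trigger-only "sync": Dev2 starts on Dev1's start trigger, but afterwards the
    # two onboard clocks free-run and slowly drift apart.
    b.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")

    b.start()   # armed, waiting on the trigger
    a.start()   # starting Dev1 releases both

    # To stay locked sample-for-sample, replace the trigger with a shared sample
    # clock, e.g. source="/Dev1/ai/SampleClock" in Dev2's cfg_samp_clk_timing call.
    for t in (a, b):
        t.stop()
        t.close()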

 

3. You say you're inexperienced in LabVIEW, but your code sure doesn't agree.  Overall, it looks really good -- clean and well-structured.  Nice work!

 

4. I'm not familiar with the PXIe-4300.  Does it have a DAQmx API or does it have a special scope-style driver?   If DAQmx, I'm much more confident that you can achieve the needed timing sync without dealing with explicit Ref Clock config.

 

5.  So for starters, try getting rid of any code that tries to configure the Ref Clock.  And then do some testing that will let you know whether the devices are already automatically sync'ed.  (Sorry, I can't inspect this because I have no hw drivers installed for LV 2020).   If not, come on back and I bet I can help you figure out a non-Ref-Clock way to accomplish the sync you need.

 

 

-Kevin P

 

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 8

Thanks for the reply! For your points:

 

1. I originally wasn't configuring the ref clock, and I only put that in when the NI error description page suggested it. I removed it all from the project, and still get the original error. 

 

2. I'm using this as a simple monitoring and control system, basically replacing an old-school bank of multimeters, knobs, and switches. Because of that, I've not used triggers at all other than in a monitoring VI for postprocessing data, and all the tasks are either set to auto-start or just run with the start task function. I'm not trying to get fully synchronized high-speed data and control, just replacing the old-school method with an update rate of at least ~10 Hz. Putting all the AI channels in individual tasks and starting then stopping them serially fixes the issue, but it really slows everything down to the point where it's not very usable with 30+ input channels.

 

3. Thanks! My lab has been trying to push towards proper LabVIEW methods, whereas in the past we just put everything in one giant loop with no consideration for flow or speed. The beginning of this quarantine gave me plenty of time to learn some of this stuff.

 

4. The PXIe-4300 is just an 8-channel AI PXIe card; it's compatible with the DAQmx VIs.

 

I made a simpler VI (attached) that I've been able to reproduce the error on, and have found that the problem only occurs when:

 - There are multiple AI channels on different cards, and

 - One of the cards providing AI is also providing DO.

 

When I select AI channels from a simulated PXIe-4300 and PXIe-6366 but put the DO lines on a separate simulated card (PXIe-6341 or PXIe-6535), everything's fine. If I have all the AI and DO lines on a single card, everything's fine. But if I pull AI from any combination of cards and one of those cards is also providing DO, I get the -89137 error. Changing the reference clock doesn't seem to help; I either get the original error or new errors depending on which source is selected, but no resolution.

 

Because I don't have any tight sync requirements, is there a simpler way to go about this that won't run into the error while running these tasks concurrently? And is there a way for me to save the VI to be compatible with your 2016 HW drivers? I've saved and attached the simpler VI for LabVIEW 2016, but I don't know if that includes hardware driver compatibility. Also, I really appreciate the help; I've been scratching my head on this for a while.

Message 3 of 8
Solution
Accepted by Stephen.Samples

I think the DO task observations are a red herring.  I doubt they have any real bearing on your errors, at least for the case of the stripped-down example you posted with the Ref Clock config getting skipped.

 

I did a search to find a longer article about error -89137 -- perhaps that's what pointed you toward explicit Ref Clock config in the first place?   I still say no, don't do it.  It's apparently pretty finicky, so let's just stick with basic fundamental grass roots stuff.

    The article goes on to talk about the concept of "channel expansion", a feature you're using whether you realize it or not.  It sounds like this might *also* be a possible cause for your Ref Clock related errors, even in the absence of attempts to do explicit config.

    Further, if you haven't power cycled or gone into MAX to reset your DAQ devices lately, it's conceivable that prior attempts to explicitly set up Ref Clock routing are preventing subsequent attempts to do it again -- either explicitly with property nodes or implicitly due to your use of channel expansion.
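
    For concreteness, "channel expansion" just means putting channels from more than one device into a single task; behind the scenes DAQmx has to route a shared clock between the cards (in PXIe, typically via the backplane Ref Clock), and that hidden routing is where conflicts like -89137 tend to come from. Here's a rough sketch of what that looks like in NI's Python nidaqmx API -- the slot names are placeholders, not taken from your system:

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    # One AI task spanning two cards -- this is channel expansion.
    task = nidaqmx.Task()
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:3")   # e.g. a PXIe-6366
    task.ai_channels.add_ai_voltage_chan("PXI1Slot3/ai0:7")   # e.g. a PXIe-4300
    task.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)

    # DAQmx reserves the inter-device clock routing when the task is reserved/started.
    # If another task on one of those cards has already claimed the same route,
    # this is where a -89137 style conflict can show up.
    task.start()
    data = task.read(number_of_samples_per_channel=100)
    task.stop()
    task.close()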

 

So, go into MAX and do a reset on all your DAQ devices.  It's quick.

 

Then try your example program again *without* doing any explicit Ref Clock config.  Run it with debug highlighting on so you can see if/when/where errors are generated.   I'm not real optimistic that the device reset was all you needed, but it's worth a quick try.

 

My next suggestion is to avoid the use of channel expansion and set things up a little more old school.  This is the kind of approach I've used successfully for a very long time, before automatic "channel expansion" came around.  It's a bit more manual and cumbersome but it works.

 

Have a separate AI task for each individual DAQ device.  One's gonna be the timing "master"; it shouldn't really matter which one.  The master task will be queried for its internal "Sample Clock" terminal, and this will be shared with all the other devices that need to be sync'ed.   Finally, start the tasks for all the other devices first and the master task last.

 

Here's an illustration I posted a while back which shows how to share a sample clock this way from an AI to a DI task.  You can do it the same way regardless of task type AI, AO, DI, DO.

 

Do it this way, don't bother with Ref Clock stuff, and your sampling will be sync'ed across devices.  Promise.
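
And if a text rendering helps alongside that picture, here's roughly the same recipe sketched with NI's Python nidaqmx API instead of the LabVIEW VIs (slot names are placeholders): one per-device AI task owns the sample clock, the other device's task borrows it, and the master is started last.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    RATE = 1000.0

    # "Master" AI task on one device -- it owns the sample clock.
    master = nidaqmx.Task()
    master.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:3")
    master.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)

    # "Slave" AI task on the second device, clocked by the master's internal
    # ai/SampleClock terminal instead of its own timebase.
    slave = nidaqmx.Task()
    slave.ai_channels.add_ai_voltage_chan("PXI1Slot3/ai0:7")
    slave.timing.cfg_samp_clk_timing(
        RATE,
        source="/PXI1Slot2/ai/SampleClock",
        sample_mode=AcquisitionType.CONTINUOUS,
    )

    # Start the slave(s) first so they're armed and waiting, then the master last.
    slave.start()
    master.start()

    m = master.read(number_of_samples_per_channel=100)
    s = slave.read(number_of_samples_per_channel=100)

    for t in (master, slave):
        t.stop()
        t.close()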

 

 

-Kevin P

Message 4 of 8

Thank you! That seems to have done the trick. I modified the little test VI to separate out the channels by device and make device-specific tasks that all reference the first task's clock terminal, and it seems to be working great with concurrent AI, AO, and DO on multiple different devices for each. I integrated it into the larger project, and the original error is gone. I hadn't heard about channel expansion before, but I'll definitely use this per-device method in the future. I tried sharing the clock terminal from the AI tasks to the AO and DO tasks and got an error, but with only the AI tasks explicitly put on the same clock there aren't any errors, and that's great for my purposes.

 

I appreciate the help!

 

For reference if anyone else ever comes across this problem, I've attached the VI I made to test this approach. It works for me for simulated and real cards. 

Message 5 of 8

A few thoughts:

 

I wouldn't *always* avoid channel expansion; it usually works fine and is much simpler than the manual method I suggested this time.   You might even want to try it out again for just the AI tasks alone, since that's all you're sync'ing together now anyway.

 

Both the AO and DO are software-timed tasks that are not sync'ed to one another or to the AI tasks.  Since it looks like you have pre-defined notions of what to generate with AO and DO, I suspect you *want* sync but weren't able to figure out how to make it work.

 

If so, please modify the code to show your attempt to share the sample clock with the AO and DO tasks, then use the File menu to "Save for Previous Version" back to LV 2016 or so.  You'll be prompted to put it in a different folder, so be sure that's the one you seek out and post here.   Also, describe exactly what error(s) you see and exactly where you first see them show up, based on your debugging.

 

They *can* all be made to sync to a common clock if you need them to.  If you save back to a version I can edit, I can help.

 

 

-Kevin P

Message 6 of 8

I previously had an issue where I couldn't run some M Series DAQ cards along with my other I/O cards because they don't support multidevice tasks, so this method is great for avoiding that. But you're right, if I'm making something quick on a system that supports it, the channel expansion method is certainly simpler.

 

I've put the sample clock configuration from your link here in the DO and AO tasks. AO seems to work fine, but DO gives me some issues.

 

When I try setting the timing and starting the DO tasks, I get a -200462 error after the start command because the output buffer is empty. I tried writing all-false values before setting up the timing and starting the tasks (currently in a diagram disable structure in the VI), but I still get the same error. If I get rid of the "start task" VI in the DO setup, the code works momentarily until I get a -200288 error. Not configuring the DO tasks' timing and relying on software timing for those seems to work, but as you said it's not fully synced, and in the future when I start doing more precise measurements there may be some issues.

 

Is there some way I'm incorrectly configuring these DO tasks? Other discussions online seem to suggest issues with triggering, although I'm not configuring that here.

Message 7 of 8

There are several details you're going to need to think about carefully and then narrow down the troubleshooting.

 

1. For DO, even when you write to the buffer before starting the task, you're only writing 1 sample.  The first write sets the buffer size, and for you that's only 1 sample, which may not even be supported for a buffered output task.  (I know that in other corners of DAQmx, there are several places where the minimum # allowed is 2.)

 

2. Also for DO, you aren't specifying Continuous or Finite sampling in your call to DAQmx Timing.  The default will be Finite with a default # samples of 1000.

 

3. Finite tasks typically don't regenerate, so the DO task will probably end immediately after starting.  It may also never run due to a possible 1-sample buffer size error.  Either way, the task will no longer be running when you get downstream and try to write again, leading to the -200288 error.

   The downstream write is also only writing 1 sample/channel at a time, which is usually a mistake with buffered tasks.

 

4. You never write data to the AO task before starting.  I would expect you to see the -200462 (or similar) error there.

 

5. The AO also never specifies Finite vs Continuous sampling.

 

6. The AO task has the same downstream problem of only writing 1 sample/channel at a time.

 

General advice:

    It's time to take a couple steps back and get more familiar with output tasks that have sample clocks and buffers.  Take a couple shipping examples for AO (both finite and continuous), copy them to a new location, and start tinkering.  It's gonna make a *lot* more sense to learn how to deal with a single output task all by itself than to try to learn it in the middle of a fairly complicated integration project.
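
    As one concrete illustration of the points above, here's a rough sketch (again in NI's Python nidaqmx API, with placeholder names) of a continuous, hardware-timed DO task that borrows the AI sample clock, declares Continuous mode explicitly, and preloads a full buffer before starting -- the same pattern the shipping examples follow; the AO case is analogous with add_ao_voltage_chan and a full waveform array.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    RATE = 1000.0
    N = 1000   # samples per channel to preload; the first write sizes the buffer (point 1)

    do = nidaqmx.Task()
    do.do_channels.add_do_chan("PXI1Slot2/port0/line0:3")

    # Point 2: say Continuous explicitly instead of taking the Finite/1000 default.
    do.timing.cfg_samp_clk_timing(
        RATE,
        source="/PXI1Slot2/ai/SampleClock",   # borrow the AI task's clock, as earlier
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=N,
    )

    # Points 1 and 3: preload a full pattern (one port value per sample, lines 0-3
    # packed into an integer) *before* starting, so the buffer is never empty
    # (-200462) and regeneration can keep looping it even if nothing new is
    # written downstream (-200288).
    pattern = [i % 16 for i in range(N)]
    do.write(pattern, auto_start=False)
    do.start()

    # Note: the AI task that generates /PXI1Slot2/ai/SampleClock must be running
    # for the DO samples to actually clock out.  Later writes should also be full
    # blocks of N samples, not single samples.
    do.stop()
    do.close()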

 

 

-Kevin P

Message 8 of 8