Can't get 3 PXIe tasks to start simultaneously

First I will say that I am not running this test VI on the hardware; I'm simulating the modules in MAX. Maybe the test VI will give different results on the actual hardware? The hardware is as follows:

 

PXIe-1084 Chassis
Slot 1: PXIe-8381
Slots 2-16: PXIe-4300s
Slot 17: PXIe-4302
Slot 18: PXIe-4353

 

The task is to acquire data from the three different module types simultaneously at their max sample rates.

 

What I am seeing is a 200 ms spread between the time the first task acquires data and when the last task acquires data. The two slave tasks are within 50 ms of each other, but I would expect all three to be synchronized.

 

Does the technique I am using to synchronize the tasks look correct?

Is the distribution of the trigger correct?

 

I don't have a designated "master" module like I remember doing with SCXI. I have a master task composed of the 4300 modules; is this a problem?

 

Any comments on the code, synchronization, or trigger distribution would be appreciated and helpful.

 

Thanks

  

 

Message 1 of 3

First and foremost, simulated devices can be handy, especially for confirming syntax and capabilities, but they are NOT in any way suitable for drawing conclusions about sync and triggering.  They were never meant to be.  Tasks simply behave as though all clock and trigger signals are present and available from the instant you start them.

 

Beyond that, a few comments on the code, but first some background.

 

I've been in several sync-related threads where people ran into problems following an approach that looks just like yours.  I'm pretty sure they're all starting from a common example.  Almost inevitably, the solution is to rip out most of the sync-related code that's based on Ref Clocks, Sync Pulses, and Master-Slave designations.  Sometimes the triggering as well.

 

I've been dealing with sync for NI DAQ devices for a very long time and have almost never needed to do any explicit config of the Ref Clock or Sync Pulses. Please review this thread I was in recently that has some commonality with yours (PXI system including a 4300). I'll bet you don't need ANY of the code dealing with Ref Clocks or Sync Pulses. Rip it out. There's a pretty good chance you should keep the stuff related to the start trigger, considering that it appears you have different sample rates among your devices. The minimal structure looks like the sketch below.
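 

Since LabVIEW diagrams don't paste into a forum post as text, here's a rough sketch of that minimal approach using the nidaqmx Python API instead. The slot names, channel ranges, and rates below are placeholders, not your actual config; the point is the structure: each task runs off its own sample clock, the slaves arm first on the master task's internal AI Start Trigger, and the master starts last.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# "Master" task on one 4300. Channels and rates are placeholders.
master = nidaqmx.Task()
master.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:7")
master.timing.cfg_samp_clk_timing(250000.0, sample_mode=AcquisitionType.CONTINUOUS)

# "Slave" tasks at their own (different) rates, triggered off the master
# task's internal AI Start Trigger terminal. DAQmx routes the trigger
# across the chassis backplane automatically; no Ref Clock or Sync Pulse
# config anywhere.
slave4302 = nidaqmx.Task()
slave4302.ai_channels.add_ai_voltage_chan("PXI1Slot17/ai0:31")
slave4302.timing.cfg_samp_clk_timing(5000.0, sample_mode=AcquisitionType.CONTINUOUS)
slave4302.triggers.start_trigger.cfg_dig_edge_start_trig("/PXI1Slot2/ai/StartTrigger")

slave4353 = nidaqmx.Task()  # thermocouple module; voltage chan just for the sketch
slave4353.ai_channels.add_ai_voltage_chan("PXI1Slot18/ai0:31")
slave4353.timing.cfg_samp_clk_timing(90.0, sample_mode=AcquisitionType.CONTINUOUS)
slave4353.triggers.start_trigger.cfg_dig_edge_start_trig("/PXI1Slot2/ai/StartTrigger")

slave4302.start()   # slaves arm first and sit waiting for the trigger
slave4353.start()
master.start()      # starting the master releases all three tasks together
```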

 

The other thing to be careful about is the Read loop.  With 3 tasks running at different rates, you'll want a solid plan for making sure your reads are keeping up with sampling as well as staying synced up with one another. 

Your present approach is to request fixed-size blocks of data from each task on each loop iteration. Your default values for the different tasks' '# samples to read' each represent the same nominal amount of time, 1 second. That's the right idea, but let me be the voice of experience and say that if I were setting this up, each task would get its own Read loop, and I'd enqueue the data to a common consumer loop that dequeues it and manages it thereafter, as sketched below.
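 

In text form, the producer/consumer shape looks roughly like this. Same disclaimer: a hedged sketch that reuses the placeholder task objects from the sketch above, with block sizes that are just nominal 1-second counts.

```python
import queue
import threading

data_q = queue.Queue()       # stand-in for the LabVIEW queue
running = threading.Event()
running.set()

def reader(task, n_samps, label):
    """Producer: one Read loop per task, each requesting a fixed block
    sized to ~1 second at that task's rate, then enqueueing it."""
    while running.is_set():
        block = task.read(number_of_samples_per_channel=n_samps)
        data_q.put((label, block))

# One producer thread per task; the (task, samples-per-block) pairs
# are placeholders matching the earlier placeholder rates.
for task, n, label in [(master, 250000, "4300"),
                       (slave4302, 5000, "4302"),
                       (slave4353, 90, "4353")]:
    threading.Thread(target=reader, args=(task, n, label), daemon=True).start()

# Consumer: a single loop that dequeues blocks from all three producers
# and manages them (log, display, write to disk) in one place.
while running.is_set():
    label, block = data_q.get()
    print(label, len(block))    # placeholder for real handling
```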

 

Your attempt to use Build Array to combine these 3 different-sized data blocks is a *BAD* idea. Don't do it. The simplest alternative would be to have your tasks read data as a 1D array of waveforms. These *could* be combined (in a consumer loop) using Build Array. But I probably wouldn't do that either.

Instead I'd make a typedef'ed cluster, with one named element for each of the tasks' data blocks (see the sketch below). It could be either 3 distinct 2D arrays of DBL or 3 distinct 1D arrays of waveforms. This approach provides more logical access to the data from the distinct tasks.
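 

The closest text-language analog of that typedef'ed cluster is a small record type with one named field per task (hypothetical field names, continuing the Python sketches):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AcqBlock:
    """One named element per task's data block, like a typedef'ed LabVIEW
    cluster: downstream code accesses each module family's data by name
    instead of guessing at positions inside one combined array."""
    pxie4300: np.ndarray   # channels x samples, 2D array of DBL
    pxie4302: np.ndarray
    pxie4353: np.ndarray
```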

 

A different read-syncing approach generates variable-length data blocks but can prove a little more robust in some edge cases. This method keeps all the reads in a single common loop.

You would time the loop using one of the Wait functions. You'd need some kind of dataflow sequencing to make sure the reads can't happen until the waiting is done. Then wire the special value -1 into all the calls to DAQmx Read, meaning, "give me all the buffered samples available right now". You'll get approximately the expected number of samples, given your wait time, but there'll be some variation due to reliance on the software timing of the Wait functions. A sketch follows.
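 

Sketched the same way (loop count and wait time are placeholders; READ_ALL_AVAILABLE is nidaqmx's named constant for that special -1):

```python
import time
from nidaqmx.constants import READ_ALL_AVAILABLE

tasks = {"4300": master, "4302": slave4302, "4353": slave4353}

for _ in range(60):      # e.g. one minute of nominal 1 s blocks
    time.sleep(1.0)      # the Wait; in LabVIEW you'd use dataflow
                         # sequencing so reads can't start until it's done
    for label, task in tasks.items():
        # "Give me all the buffered samples available right now."
        block = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        # len(block[0]) is ~rate * 1 s, with jitter from software timing
```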

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 3

Thanks for the insight; I appreciate the information. I will try some of your suggestions on the hardware this week, let you know the results, and probably ask some additional questions.

 

Thanks!!!

Message 3 of 3