I am attempting to acquire data from an assortment of PXI-4472B, PXI-4496, PXI-4495, and PXI-4462 cards using dynamically launched reentrant VIs. Each VI is responsible for acquiring data from a task configured for a single card, and I am reading (sample rate / 4) samples per read for each task. If I launch 4 VIs acquiring data from any combination of 4 cards (1 per VI), the execution time of each VI is 0.25 s, as one would expect. But if I bump the number of VIs up to 5, the execution time of the reentrant VIs becomes very erratic. Execution time becomes more erratic as I further increase the number of VIs/cards I acquire from, to the point of causing buffer overrun errors on some of the tasks. If I close enough VIs so that only 4 remain running, timing becomes butter smooth again at 0.25 s.
I am nowhere close to the bus bandwidth limitation. This is directly coupled to simultaneously acquiring data from 5 or more cards, regardless of card type or sample rate; I can run the cards at very low rates and still see the same behavior. Incidentally, if I wrap the DAQmx Read in a non-reentrant VI, the timing remains smooth even when acquiring from more than 4 cards. Does anyone know what's going on here?
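Since LabVIEW diagrams can't be pasted as text, here is a minimal host-side analogue in Python of how I'm measuring per-iteration timing: one thread per "reader" standing in for each reentrant VI, with a sleep as a placeholder for the blocking DAQmx Read. This is only a sketch of the measurement harness (the worker/period names are mine, not DAQmx calls); it doesn't reproduce the hardware behavior, but it shows what "erratic execution time" means in my posts — the per-iteration standard deviation growing with worker count.

```python
import threading
import time
import statistics

def reader(worker_id, n_iters, period, results):
    """Stand-in for one reentrant reader VI: each 'read' should
    take about `period` seconds (sample_rate/4 samples' worth)."""
    durations = []
    for _ in range(n_iters):
        t0 = time.perf_counter()
        time.sleep(period)  # placeholder for the blocking DAQmx Read
        durations.append(time.perf_counter() - t0)
    results[worker_id] = durations

def run(n_workers, n_iters=10, period=0.05):
    """Launch n_workers parallel readers and report (mean, stdev)
    of each worker's per-iteration execution time."""
    results = {}
    threads = [
        threading.Thread(target=reader, args=(i, n_iters, period, results))
        for i in range(n_workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {i: (statistics.mean(d), statistics.pstdev(d))
            for i, d in results.items()}
```

Comparing `run(4)` against `run(5)` on the real system is where the jitter shows up; in pure Python the numbers will differ, but the harness shape is the same.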
Could you post a simple example code. Also, have you tried running multiple read tasks within one while loop in a single VI?
Here is simple code that demonstrates the behavior. Just make copies of it, point each copy to a separate task, and run. I'll give your idea a shot and report back...
I tried running 5 DAQmx Reads within a single VI (in parallel) and the timing is much more stable. The problem is that this doesn't lend itself to dynamically building and launching acquisition tasks from a config file, which is why we have the acquisition tasks in reentrant VIs.
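For reference, the single-VI arrangement I tested is structurally like the sketch below: all reads are issued in parallel inside one loop iteration, and the loop does not advance until every read has returned. This is a Python analogue under my own naming (`read_task`, `acquire_iteration` are hypothetical, and the sleep stands in for a DAQmx Read), not actual DAQmx code.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def read_task(task_id, period=0.02):
    """Placeholder for one blocking DAQmx Read on one card's task."""
    time.sleep(period)
    return task_id

def acquire_iteration(task_ids):
    """One while-loop iteration of the single-VI design: issue all
    reads in parallel, then wait for every one to finish before the
    next iteration begins."""
    with ThreadPoolExecutor(max_workers=len(task_ids)) as pool:
        return list(pool.map(read_task, task_ids))
```

The key structural difference from the reentrant-VI design is the implicit join at the end of each iteration: no task's next read can start until all tasks have finished the current one.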
I still don't understand the nuts and bolts of why the execution of DAQmx Read becomes jittery when 5 or more reads are running in separate VIs. An application engineer thought this might be tied to the number of cores on the machine, since it is a quad-core, but after disabling first 1 core and then 3 cores, the behavior still persisted at 5 or more running VIs.
Is there a standard coding practice when acquiring data from high card count PXI devices?
This issue is definitely timing related. When all of the task configuration and read/write VIs execute from within one top-level VI, LabVIEW knows that all the tasks need to be configured and committed before the read/write functions execute. When each task's configuration and read/write are performed inside a separate VI, you can end up in a situation where one VI has already started reading while the tasks in the other VIs haven't yet been started or even configured, resulting in poor timing of your data acquisition. You need to enforce the order in which your VIs execute, either through a sequence structure or by including a wait function in each VI to free up the CPU and allow it to execute the other VIs in parallel, to improve timing.
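The ordering idea above — no VI starts reading until every task has finished configuring and committing — can be sketched with a barrier. This is a Python analogue (a `threading.Barrier` playing the role of a LabVIEW rendezvous or sequence boundary; the per-task sleep is a placeholder for configure/commit work), not the literal fix:

```python
import threading
import time

N_TASKS = 5
config_done = threading.Barrier(N_TASKS)  # rendezvous after configuration
log = []
log_lock = threading.Lock()

def task_worker(task_id):
    # 1) Configure and commit this card's task (placeholder work,
    #    deliberately uneven so tasks finish configuring at different times).
    time.sleep(0.005 * (task_id + 1))
    with log_lock:
        log.append(("configured", task_id))
    # 2) Block until *every* task has been configured before any read starts.
    config_done.wait()
    with log_lock:
        log.append(("reading", task_id))

threads = [threading.Thread(target=task_worker, args=(i,))
           for i in range(N_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier guarantees that every "configured" entry lands in the log before any "reading" entry, which is the ordering the sequence-structure suggestion is after.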