
Multi-device synchronization of multiple tasks using two X Series PCIe-6363 cards


Hi everyone,

 

I am using the nidaqmx-python module to control two X Series PCIe-6363 cards ('Dev1' and 'Dev3').

An RTSI cable connects the two PCIe cards and is registered in NI MAX, with both cards added to it.

 

 

In the past, we could use a single card, since we needed:

  • 1 Counter Output channel, used as a common clock source
  • 2 Counter Input channels, acquiring photon counts
  • 4 Analog Output channels, continuously generating waveforms
  • 3 Analog Input channels, continuously acquiring data, sharing the same clock

All of those signals were synchronized.

 

 

Recently, we needed an additional Analog Output channel that has to be synchronized with the other 4 AO channels. Hence the second PCIe-6363 board and the RTSI cable connecting the two.

 

I could easily modify the AO task so that it contains 4 channels from the first card and 1 channel from the second card. By creating a task in NI MAX with all of those channels, I could verify that they run synchronously. However, when the other tasks are also running, I now get the following error:

 

nidaqmx.errors.DaqError: Specified route cannot be satisfied, because it requires resources that are currently in use by another route.


Property: DAQmx_RefClk_Src
Source Device: Dev1
Source Terminal: None

 

Required Resources in Use by
Task Name: AO
Source Device: Dev1
Source Terminal: 10MHzRefClock
Destination Device: Dev1
Destination Terminal: RefClockInternal

 

Task Name: AI

 

Status Code: -89137
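For reference, the multi-device AO task itself is built roughly like this (a simplified sketch rather than my actual script; the rate is a placeholder):

import nidaqmx
from nidaqmx.constants import AcquisitionType

# 4 channels on the first card plus 1 channel on the second card, in one task.
AOtask = nidaqmx.Task("AO")
AOtask.ao_channels.add_ao_voltage_chan("Dev1/ao0:3")
AOtask.ao_channels.add_ao_voltage_chan("Dev3/ao0")
AOtask.timing.cfg_samp_clk_timing(
    rate=100e3,  # placeholder rate
    sample_mode=AcquisitionType.CONTINUOUS,
)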

 

 

The error persists even though I have tried exporting the AO task's sample clock to the RTSI bus with:

AOtask.export_signals.export_signal(
    signal_id=nidaqmx.constants.Signal.SAMPLE_CLOCK,
    output_terminal='/Dev3/RTSI0',
)
And then using that terminal as the AI task's sample clock source:
task.timing.cfg_samp_clk_timing(
    rate=self.samplefreq_actual,
    source='/Dev3/RTSI0',
    samps_per_chan=self.samples,
    active_edge=nidaqmx.constants.Edge.FALLING,
)
 
I would greatly appreciate some feedback, ideas, or links to dedicated documentation about this problem.
 
Thanks for taking the time to read this.
Message 1 of 5

I guess a code example would help, so I have attached one.

 

It works without any problem when the AO task contains physical channels from the first card: 

"Dev1/ao0:3".
 
However, the error appears whenever I try to add channels from multiple cards, for instance:
"Dev1/ao0:3,Dev3/ao0"
 
If I use the AO task only, I can use physical channels from both cards without any problem.
Message 2 of 5
Solution
Accepted by topic author jss13

I don't know the Python syntax, but was able to follow along with the script to a decent degree.

 

It looks to me like you're pretty close.  You're already configuring your AI and AO tasks to use a counter output (that you can control) as their sample clock.  And you're already starting the AI and AO tasks before starting the counter task.  All those things are important steps.
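I don't write Python myself, so take the exact syntax below as an assumption rather than gospel, but the pattern I mean is roughly this: configure the clocked tasks to use the counter's output as their sample clock, start them first, and start the counter last.

import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100e3  # example rate

# Counter that generates the shared sample clock -- started last.
clk = nidaqmx.Task("CLK")
clk.co_channels.add_co_pulse_chan_freq("Dev1/ctr3", freq=RATE)
clk.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

# A clocked task (AI shown here) slaved to the counter's internal output.
ai = nidaqmx.Task("AI")
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:2")
ai.timing.cfg_samp_clk_timing(
    rate=RATE,
    source="/Dev1/Ctr3InternalOutput",
    sample_mode=AcquisitionType.CONTINUOUS,
)

ai.start()   # armed, waiting for sample clock edges
clk.start()  # clock starts last, so no edges are missed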

 

Not all task types and not all DAQ devices support multi-device tasks.  I'm not sure about the 6363, though I'd tend to expect such support for AO.  However, for DAQmx to *accomplish* such things, it needs to be able to route and share important timing signals from one board to the other.  And to do *that*, you would need your PCI/PCIe devices to have their RTSI timing buses connected together internally with an RTSI cable *AND* you'd need to configure MAX to inform it of that connection.  Only then will DAQmx be able to figure out and manage the necessary routing for you.

 

Without the RTSI cable and config, the only option would be to route '/Dev1/Ctr3InternalOutput' to some '/Dev1/PFI#' terminal, physically wire that to some '/Dev2/PFI#' terminal, and then configure the AO task on Dev2 to use that wired-up '/Dev2/PFI#' terminal as its sample clock.  Doable, but more of a pain.
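Again, treat the syntax as a rough sketch (terminal names and the rate here are just examples), but that fallback would look something like:

import nidaqmx
from nidaqmx.constants import AcquisitionType

# Route the counter's output to a PFI terminal on Dev1 ...
nidaqmx.system.System.local().connect_terms(
    source_terminal="/Dev1/Ctr3InternalOutput",
    destination_terminal="/Dev1/PFI0",
)

# ... physically wire Dev1/PFI0 to Dev2/PFI0, then clock Dev2's AO from it.
ao_dev2 = nidaqmx.Task("AO_Dev2")
ao_dev2.ao_channels.add_ao_voltage_chan("Dev2/ao0")
ao_dev2.timing.cfg_samp_clk_timing(
    rate=100e3,  # must match the counter's pulse frequency
    source="/Dev2/PFI0",
    sample_mode=AcquisitionType.CONTINUOUS,
)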

 

 

-Kevin P

Message 3 of 5

Thanks for that useful answer; it was a great help in getting this to work.

 

My cards were already connected with the RTSI cable, which was also properly configured in MAX for the two PCIe-6363 cards.

 

The issue might originate from the number of tasks available for those cards.
This NI resource states that, for the PCIe-6363:

  • Hardware-timed tasks: 1 per device
  • Software-timed tasks: 1 task per AO channel

 

In my previous sample code, I got a reserved-resource error whenever my AO task contained physical channels from both cards, and no issue when it contained physical channels from a single card only (I guess every task here is hardware-timed).

 

I could solve the problem by creating another AO task containing the physical AO channels from the second card, and timing/triggering it properly (roughly as in the sketch below). I have attached the updated sample code for further reference, in case it helps others who run into a similar problem.
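In outline (the rate, buffer size, and waveform here are placeholders, not the values from my attached code):

import math
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100e3      # placeholder sample rate
N_SAMPS = 10000   # placeholder buffer size per channel
wave = [math.sin(2 * math.pi * k / N_SAMPS) for k in range(N_SAMPS)]

# Counter on Dev1 generates the shared sample clock for everything.
clk = nidaqmx.Task("CLK")
clk.co_channels.add_co_pulse_chan_freq("Dev1/ctr3", freq=RATE)
clk.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

# One AO task per device, both clocked by the counter's internal output.
# With the RTSI cable registered in MAX, DAQmx routes the clock to Dev3 itself.
ao_dev1 = nidaqmx.Task("AO_Dev1")
ao_dev1.ao_channels.add_ao_voltage_chan("Dev1/ao0:3")
ao_dev1.timing.cfg_samp_clk_timing(
    rate=RATE,
    source="/Dev1/Ctr3InternalOutput",
    sample_mode=AcquisitionType.CONTINUOUS,
    samps_per_chan=N_SAMPS,
)

ao_dev3 = nidaqmx.Task("AO_Dev3")
ao_dev3.ao_channels.add_ao_voltage_chan("Dev3/ao0")
ao_dev3.timing.cfg_samp_clk_timing(
    rate=RATE,
    source="/Dev1/Ctr3InternalOutput",
    sample_mode=AcquisitionType.CONTINUOUS,
    samps_per_chan=N_SAMPS,
)

# Pre-load the waveforms, start both AO tasks, then start the clock last.
ao_dev1.write([wave] * 4, auto_start=False)
ao_dev3.write(wave, auto_start=False)
ao_dev1.start()
ao_dev3.start()
clk.start()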

 

Finally, when using the nidaqmx.system.system.System().connect_terms() method, it does not look like any physical wiring between terminals is necessary (thanks to the registered RTSI cable, I guess?):
nidaqmx.system.system.System().connect_terms(
    source_terminal="/Dev1/Ctr3InternalOutput",
    destination_terminal="/Dev1/PFI7",
)
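For completeness, a route made this way stays reserved until it is explicitly released (or the device is reset); the matching call is disconnect_terms():

nidaqmx.system.system.System().disconnect_terms(
    source_terminal="/Dev1/Ctr3InternalOutput",
    destination_terminal="/Dev1/PFI7",
)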

 

Message 4 of 5

According to this help page, your devices *should* support multi-device tasks for AO (a.k.a. "channel expansion").

 

So although you have a workaround available by separating into 1 AO task per device, let me suggest you also try the multi-device task again *after* removing all attempts to explicitly configure "sync_type" or anything related to reference clocks.
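Guessing at the Python (and at what your attached script actually sets), the idea is to keep just the task/channel creation plus the shared sample clock, and drop anything like the commented-out lines here:

import nidaqmx

ao_task = nidaqmx.Task("AO")
ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0:3,Dev3/ao0")

# Lines like these are what I'd remove (or never write in the first place),
# letting DAQmx work out the reference-clock routing on its own:
# ao_task.timing.ref_clk_src = "/Dev1/10MHzRefClock"
# ao_task.timing.ref_clk_rate = 10e6
# ao_task.triggers.sync_type = nidaqmx.constants.SyncType.MASTER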

 

A lot of NI's sync literature demonstrates such explicit config, but I've personally dealt with multi-task and multi-device sync numerous times and virtually never do any such configuration.  I'll usually just share a sample clock or timebase, often generating it myself with a counter.  I only occasionally find it useful to also share a trigger -- the clock sharing alone is usually sufficient when one chooses to sequence the task starts strategically.

 

Give it a try, see if multi-device AO works out when you leave out any explicit config of sync_type or ref_clk.

 

 

-Kevin P

Message 5 of 5