Multifunction DAQ


Linux: programmatically perform multi-device synchronization in PCIe setup through RTSI - StartTrigger did not work like Labview examples


Hello,

 

I could make it work in other ways, but I would like to understand what I am missing when trying to reproduce https://forums.ni.com/t5/Example-Code/DAQmx-Multidevice-Synchronization-with-PCI-Synchronization-Usi... with Ubuntu 22.04 and nidaqmx-python

 

Basically, I have:

- one device is the master: I configure its 10 MHz reference clock to be exported on RTSI0 and its start trigger to go out on RTSI1

- one device is the slave: it gets its Ref Clk source from RTSI0 and is configured to start acquisition on a digital edge of RTSI1

 

But there is a chicken-and-egg / race condition:

  • if I start the master then the slave, the slave locks correctly on the RTSI0 10 MHz clock but does not see the master's start trigger (right, the trigger pulse was sent before the slave started)
  • if I start the slave then the master, the slave errors out about not being able to lock the PLL (right, the master had not yet exported its clock)

 

My final solution is in-between:

  • master exports 10MHz clock on RTSI. Slave will sync on it
  • Slave exports its startTrigger on RTSI. Master will wait on its digital edge
  • I start master, master exports clock and waits for trigger. I start slave, slave is able to lock PLL based on exported clock and exports the startTrigger pulse. Master gets it and starts acquiring synchronously
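In nidaqmx-python, that in-between sequence could be sketched roughly as follows (an untested sketch, not a definitive implementation: Dev1/Dev2, ai0, the RTSI line numbers and the 1 kHz rate are all assumed placeholders):

```python
"""Sketch: master exports the 10 MHz ref clock, slave exports the start
trigger, master arms and waits on it (the in-between sequence above)."""

def rtsi(dev: str, line: int) -> str:
    """Build a fully-qualified RTSI terminal name, e.g. '/Dev1/RTSI0'."""
    return f"/{dev}/RTSI{line}"

def configure_and_run(master_dev: str = "Dev1", slave_dev: str = "Dev2") -> None:
    # nidaqmx is imported lazily so the sketch can be read without the NI driver.
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Signal

    with nidaqmx.Task() as master, nidaqmx.Task() as slave:
        master.ai_channels.add_ai_voltage_chan(f"{master_dev}/ai0")
        slave.ai_channels.add_ai_voltage_chan(f"{slave_dev}/ai0")
        master.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        slave.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)

        # Master owns the clock: lock to its onboard oscillator, export on RTSI0.
        master.timing.ref_clk_src = "OnboardClock"
        master.export_signals.export_signal(Signal.TEN_MHZ_REF_CLOCK, rtsi(master_dev, 0))
        slave.timing.ref_clk_src = rtsi(slave_dev, 0)
        slave.timing.ref_clk_rate = 10e6

        # Slave owns the trigger: it fires on start(); master arms and waits.
        slave.export_signals.export_signal(Signal.START_TRIGGER, rtsi(slave_dev, 1))
        master.triggers.start_trigger.cfg_dig_edge_start_trig(rtsi(master_dev, 1))

        master.start()  # exports the clock, arms, waits on RTSI1
        slave.start()   # locks its PLL on RTSI0, then fires the trigger

if __name__ == "__main__":
    configure_and_run()  # requires real PCIe devices joined by an RTSI cable
```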

 

In the LabVIEW link above, the master seems to own everything (I also found MATLAB examples aligned with this). Is LabVIEW doing something sequential to make it work with a 100% master and a 100% slave?

 

Message 1 of 12

In the LabVIEW example, a stacked sequence structure ensures the slave task is started before the master task.

[Screenshot: stacked sequence structure starting the slave task before the master task]

 

The master should own everything, exporting both the reference clock and the start trigger signal.

The 10 MHz reference clock is continuous. The slave task will connect to it when started. The master task does not have any trigger configured, so it will send out the start trigger right away when started. You should start the slave task first so that it is ready to receive the start trigger signal from the master task.

 

-------------------------------------------------------
Control Lead | Intelline Inc
Message 2 of 12

This was my expectation. However, if the 10 MHz is continuous, exporting it on RTSI (for the slave) did not seem to take effect until the master was started, hence the issue

 

Basically I did:

  • task_master.timing.ref_clk_src = "OnboardClock"
  • xxx.export_signal(10MHz, "RTSI0")

I also configure the slave, but if the master is not started, the PLL lock fails at start. To be honest, I did not check RTSI0 with an oscilloscope, but since the PLL lock fails on the slave, I assumed the 10 MHz was not yet exported

 

I also do some cfg_samp_clk(...) for the sampling timebase, but that did not seem to help (well, to be honest again, I introduced it a bit later, so I am not 100% sure I retested the theoretical sequence after using it)

 

At least I can summarize that calling export_signal(...) to RTSI0 did not immediately export the signal; there may be other APIs to call as well. But now that things are working through my master - slave - master sequence, I can retest with everything triggered from the master.

 

If someone has an example in Python or even in C, that would help

Message 3 of 12

Can you describe more background about your app and the need for this kind of "next level" sync, all the way down to the phase of a 10 MHz clock?

 

I ask because the vast majority of apps I've encountered in the wild wouldn't get any definitive benefit from these more elaborate schemes compared to much simpler methods for "good enough" sync.  Occasionally an app really *does* call for single-digit-nanosecond sync, but in my experience they're pretty uncommon.

 

Dunno your specific devices, measurement types, or sample rates, but I would tend to consider things in roughly the following priority order:

 

1. Shared sample clock alone, export from master to RTSI, import from RTSI to slave, start slave first.  This is for apps where both devices can work with the same sample rate.
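In nidaqmx-python terms, option 1 might be sketched like this (an untested sketch: Dev1/Dev2, ai0 and RTSI0 are assumed names, and both tasks run at the same rate):

```python
"""Sketch of option 1: slave samples directly off the master's exported
sample clock, so no trigger is needed -- just start the slave first."""

def shared_clock_source(slave_dev: str, line: int = 0) -> str:
    """Terminal the slave reads its sample clock from (assumed RTSI line)."""
    return f"/{slave_dev}/RTSI{line}"

def run_shared_sample_clock(master_dev: str = "Dev1", slave_dev: str = "Dev2",
                            rate: float = 1000.0) -> None:
    import nidaqmx  # lazy import: the sketch is readable without the NI driver
    from nidaqmx.constants import AcquisitionType, Signal

    with nidaqmx.Task() as master, nidaqmx.Task() as slave:
        master.ai_channels.add_ai_voltage_chan(f"{master_dev}/ai0")
        slave.ai_channels.add_ai_voltage_chan(f"{slave_dev}/ai0")

        # Master generates the sample clock and exports it on RTSI0.
        master.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)
        master.export_signals.export_signal(Signal.SAMPLE_CLOCK, f"/{master_dev}/RTSI0")

        # Slave clocks every conversion directly off the imported line.
        slave.timing.cfg_samp_clk_timing(rate, source=shared_clock_source(slave_dev),
                                         sample_mode=AcquisitionType.CONTINUOUS)

        slave.start()   # must be armed before the first master clock edge
        master.start()
```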

 

2. Shared sample clock *timebase* and start trigger.  Both tasks derive sample clocks from the same timebase, both are started before issuing the trigger.
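Option 2 could be sketched as follows (again an untested assumption-laden sketch: whether your particular boards can export their 20 MHz timebase over RTSI7 is device-dependent, and the device/terminal names are placeholders):

```python
"""Sketch of option 2: both devices derive their sample clocks from the
same 20 MHz timebase, and share a start trigger on RTSI1."""

def timebase_divisor(timebase_hz: float, rate_hz: float) -> int:
    """Sample clocks are derived by integer division of the timebase;
    check that the requested rate divides it evenly."""
    div = timebase_hz / rate_hz
    if div != int(div):
        raise ValueError("rate must divide the timebase evenly")
    return int(div)

def run_shared_timebase(master_dev: str = "Dev1", slave_dev: str = "Dev2",
                        rate: float = 1000.0) -> None:
    import nidaqmx  # lazy import: the sketch is readable without the NI driver
    from nidaqmx.constants import AcquisitionType, Signal

    timebase_divisor(20e6, rate)  # sanity-check the rate first

    with nidaqmx.Task() as master, nidaqmx.Task() as slave:
        master.ai_channels.add_ai_voltage_chan(f"{master_dev}/ai0")
        slave.ai_channels.add_ai_voltage_chan(f"{slave_dev}/ai0")
        master.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)
        slave.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)

        # Share the 20 MHz timebase over RTSI so both sample clocks tick in step.
        master.export_signals.export_signal(Signal.TWENTY_MHZ_TIMEBASE_CLOCK,
                                            f"/{master_dev}/RTSI7")
        slave.timing.samp_clk_timebase_src = f"/{slave_dev}/RTSI7"
        slave.timing.samp_clk_timebase_rate = 20e6

        # Shared start trigger: master fires it on start(), slave waits for it.
        master.export_signals.export_signal(Signal.START_TRIGGER, f"/{master_dev}/RTSI1")
        slave.triggers.start_trigger.cfg_dig_edge_start_trig(f"/{slave_dev}/RTSI1")

        slave.start()   # armed first, so it cannot miss the trigger
        master.start()  # starts and emits the trigger edge
```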

 

3. If necessary, get into the muck of syncing both devices' internal timebases to a common Ref Clock, perhaps also coordinating with a Sync Pulse.  And then also a shared start trigger -- start both tasks before issuing the trigger.

 

In all my years, I think I've only had 1 app that really called for method 3.  Though granted, I've mostly dealt with low to mid speed capture, generally not exceeding the low 100s of kHz.  I can see how the nanoseconds would matter more with sample rates above 1 MHz.

 

 

-Kevin P

Message 4 of 12

The export of the 10MHz clock is not done via DAQmx API but is configured in NI MAX.

Real-Time System Integration (RTSI) and Configuration Explained

-------------------------------------------------------
Control Lead | Intelline Inc
Message 5 of 12

On Windows... but not on Linux, according to https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Y2XSAU&l=fr-FR

On Linux you declare the RTSI cable in ni-hwcfg-utility and then you do the rest programmatically

 

In fact, on Linux you do most of it on your own; I log in TDMS format and use npTDMS to read the output, for example. But you have LabVIEW

Message 6 of 12

Hello,

 

We are measuring voltage and current across various shunts, plus some other voltages, using two cards (voltage on one card, current on the other, which must stay in sync). We went through thinking similar to yours. We have a potentially annoying case where the two cards are not identical, so we would like to control the convert clock accurately and synchronously (e.g. card 2 running at a known ratio to card 1, say 500 kHz vs 1 MHz) rather than letting each card evenly space its samples. Sharing the 10 MHz clock and the start trigger gave us full control and sync:

  1. we need to sync the 2 cards, so we thought we definitely needed a common start trigger
  2. the doc says "AI Sample Clock Timebase is not available as an output on the I/O connector", and we thought there would still be a race condition between getting this timebase and waiting for the shared start trigger (again, LabVIEW does not seem to care, so something is not understood on our side 😉 )
  3. what is the clean, easy way to start the tasks and only then issue the trigger? Currently task.start() implicitly fires the trigger. One of my solutions was to use a "simple pulse" counter output as a delayed trigger. I thought the cfg_time_start_trig() API might be better, but you need an absolute date, so you have to build in quite some margin. I suspect that proper control of the trigger is what does the trick in LabVIEW
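The "simple pulse counter output as a delayed trigger" idea from point 3 could be sketched like this (an untested sketch: ctr0 on the master device, a free RTSI2 line and the 5 ms delay are all assumptions):

```python
"""Sketch: both AI tasks arm on the same RTSI edge; a counter-output task
then emits one delayed pulse that releases them together."""

def single_pulse_params(delay_s: float, width_s: float) -> dict:
    """Parameters for one TTL pulse: idle low, rising edge delay_s after
    the counter task starts, high for width_s."""
    assert delay_s > 0 and width_s > 0
    return {"initial_delay": delay_s, "low_time": width_s, "high_time": width_s}

def run_counter_triggered(master_dev: str = "Dev1", slave_dev: str = "Dev2") -> None:
    import nidaqmx  # lazy import: the sketch is readable without the NI driver
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as master, nidaqmx.Task() as slave, nidaqmx.Task() as pulse:
        master.ai_channels.add_ai_voltage_chan(f"{master_dev}/ai0")
        slave.ai_channels.add_ai_voltage_chan(f"{slave_dev}/ai0")
        master.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        slave.timing.cfg_samp_clk_timing(1000.0, sample_mode=AcquisitionType.CONTINUOUS)

        # One pulse, 5 ms after pulse.start(), routed onto RTSI2.
        ch = pulse.co_channels.add_co_pulse_chan_time(
            f"{master_dev}/ctr0", **single_pulse_params(0.005, 0.001))
        ch.co_pulse_term = f"/{master_dev}/RTSI2"

        # Both acquisition tasks arm on the same RTSI edge.
        master.triggers.start_trigger.cfg_dig_edge_start_trig(f"/{master_dev}/RTSI2")
        slave.triggers.start_trigger.cfg_dig_edge_start_trig(f"/{slave_dev}/RTSI2")

        master.start()
        slave.start()
        pulse.start()  # the delayed edge releases both tasks together
```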

 

At least it confirms the various possibilities offered by the cards. We can live with our current solution but being more elegant would also probably give us more insight on the cards.

Message 7 of 12

My apologies. I forgot that you are using Linux. NI MAX is just an executable built on the NI System Configuration API. It is possible to access all features in NI MAX using the NI System Configuration API, although it might take some effort to figure that out.

 

Based on the example you shared initially, you can still configure the slave task to use the reference clock from the master task. Python does not have a good example for multi-device synchronization but there is a good C shipping example which does that properly. Please refer to case 2 in the attached example.

 

I think there is a cfg_time_start_trig() API that would be better

Time triggers are exclusively for TSN-synchronized DAQ devices like the cDAQ-918x and cRIO-904x/5x. No PCI(e) DAQ device supports time triggers.

-------------------------------------------------------
Control Lead | Intelline Inc
Message 8 of 12

Back from vacation...

 

This looks like a good example to compare against nidaqmx-python + Linux, for people who know this setup:

- I have similar Python calls to DAQmxCreateTask() and DAQmxCfgSampClkTiming() for master and slave => I would say this is an easy translation

 

- my issue would be translating this part, because it probably relies heavily on NI MAX, as there is no mention of RTSI in it:

DAQmxSetRefClkSrc(masterTaskHandle, "OnboardClock");
DAQmxGetRefClkSrc(masterTaskHandle, str1, 256);
DAQmxSetRefClkSrc(slaveTaskHandle, str1); -> it probably uses the NI MAX config to create a RTSI path and export/import

I am currently doing something very explicit:

task_master.timing.ref_clk_src="OnboardClock"

task_master.export_signals.export_signal(nidaqmx.constants.Signal.TEN_MHZ_REF_CLOCK, "/Dev1/RTSI0")

task_slave.timing.ref_clk_src="/Dev2/RTSI0"

and I have also added xxx.triggers.sync_type  = yyy for MASTER and SLAVE

 

In ni-hwcfg-utility, the RTSI cable is created and all lines are set to auto routing (I can't find the doc that explains auto vs manual here)

What is the right way to play with RTSI in Linux nidaqmx-python? This is probably the real topic I should have opened 😉

 

- I also have a translation for the code below; it does not raise an error, and when included I see that the receiver waits for the emitter, so this looked good to me, but again this is playing with RTSI:

GetTerminalNameWithDevPrefix(masterTaskHandle,"ai/StartTrigger",trigName) -> task_master.export_signals.export_signal(nidaqmx.constants.Signal.START_TRIGGER, "/Dev1/RTSI1")
DAQmxCfgDigEdgeStartTrig(slaveTaskHandle,trigName,DAQmx_Val_Rising) -> task_slave.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev2/RTSI1")

 

 

BTW, I found a 3rd solution (other than the single-pulse counter or slave-side triggering): I create an additional analog output task, declare its ref clk as OnboardClock, then start the AO task, start the slave (which is configured to start on the RTSI1 trigger), then start the master (which exports the trigger on RTSI1)

  • no slave PLL lock fail message
  • slave starts acquiring when master starts

Makes sense to me, because it is like splitting the master into 2 tasks that I can start independently. But well, I am only doing work-arounds 😉


 

 

Message 9 of 12
Solution
Accepted by topic author ft_06

@ft_06 wrote:

DAQmxSetRefClkSrc(masterTaskHandle, "OnboardClock");
DAQmxGetRefClkSrc(masterTaskHandle, str1, 256);
DAQmxSetRefClkSrc(slaveTaskHandle, str1); -> it probably uses NI MAX config to create a RTSI path and exports/imports

 

 


It wasn't using any NI SysCfg API here. It merely passes the string value retrieved from the master task to the slave task.

Here is the code tested using two simulated PCIe devices.

"""Example of AI multitask operation."""
import pprint

import nidaqmx
from nidaqmx.constants import AcquisitionType

pp = pprint.PrettyPrinter(indent=4)


with nidaqmx.Task() as master_task, nidaqmx.Task() as slave_task:
    master_task.ai_channels.add_ai_voltage_chan("/Dev1/ai0")
    slave_task.ai_channels.add_ai_voltage_chan("/Dev2/ai0")

    master_task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.FINITE, samps_per_chan=10)
    slave_task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.FINITE, samps_per_chan=10)
    slave_task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")

    master_task.timing.ref_clk_src="OnboardClock"
    slave_task.timing.ref_clk_src=master_task.timing.ref_clk_src
    slave_task.timing.ref_clk_rate=master_task.timing.ref_clk_rate

    print("Reading 10 finite samples per device, in 2 chunks of 5:")
    slave_task.start()
    master_task.start()

    for _ in range(2):
        master_data = master_task.read(number_of_samples_per_channel=5)
        slave_data = slave_task.read(number_of_samples_per_channel=5)

        print("Master Task Data: ")
        pp.pprint(master_data)
        print("Slave Task Data: ")
        pp.pprint(slave_data)

 

 

-------------------------------------------------------
Control Lead | Intelline Inc
Message 10 of 12