08-12-2024 04:46 PM
Hello,
I could make it work in other ways, but I would like to understand what I am missing when trying to reproduce https://forums.ni.com/t5/Example-Code/DAQmx-Multidevice-Synchronization-with-PCI-Synchronization-Usi... with Ubuntu 22.04 and nidaqmx-python.
Basically, I have:
- one device is the master. I configure its 10 MHz PLL reference clock to be exported on RTSI0 and its start trigger to go out on RTSI1
- one device is the slave. It gets its Ref Clk source from RTSI0 and is configured to start acquisition on a digital edge of RTSI1
But there is a chicken-and-egg / race condition: the slave cannot lock its PLL until the master is started (the 10 MHz does not seem to be present on RTSI0 before that), yet if the master is started first, its start trigger has already fired before the slave is armed.
My final solution is in-between: a master - slave - master start sequence (described further down).
In the LabVIEW link above, the master seems to own everything (I also found MATLAB examples aligned with this). Is LabVIEW doing something sequential to make it work with a 100% master and a 100% slave?
08-12-2024 06:56 PM
In the LabVIEW example, a stacked sequence structure ensures the slave task is started before the master task.
The master should own everything, exporting both the reference clock and the start trigger signal.
The 10 MHz reference clock is continuous; the slave task will connect to it when started. The master task does not have any trigger configured, so it will send out the start trigger right away when started. You should start the slave task first so that it is ready to receive the start trigger signal from the master task.
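A minimal nidaqmx-python sketch of that ordering (untested; "Dev1"/"Dev2" and the RTSI terminals are placeholders, and both tasks are assumed to be otherwise configured):

import nidaqmx
from nidaqmx.constants import Signal

# Master owns the clock and the trigger and exports both on the RTSI bus.
master.timing.ref_clk_src = "OnboardClock"
master.export_signals.export_signal(Signal.TEN_MHZ_REF_CLOCK, "/Dev1/RTSI0")
master.export_signals.export_signal(Signal.START_TRIGGER, "/Dev1/RTSI1")

# Slave locks its PLL to the imported clock and arms on the imported trigger.
slave.timing.ref_clk_src = "/Dev2/RTSI0"
slave.timing.ref_clk_rate = 10e6
slave.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev2/RTSI1")

slave.start()   # armed first, waiting on RTSI1
master.start()  # untriggered, so it fires the start trigger immediately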
08-13-2024 05:01 AM
This was my expectation too. However, even though the 10 MHz clock is continuous, exporting it on RTSI (for the slave) did not seem to take effect until the master was started, hence the issue.
Basically I did: configure the master (exporting the 10 MHz on RTSI0), configure the slave, then start the slave first as advised. But if the master is not started, PLL lock fails on the slave at start. To be honest, I did not check RTSI0 with an oscilloscope, but since PLL lock fails on the slave, I assumed the 10 MHz was not exported yet.
I also do some cfg_samp_clk(...) calls for the sampling timebase, but that did not seem to help (well, to be honest again, I introduced it a bit later, so I am not 100% sure I retested the theoretical sequence after using it).
At least I can summarize that calling export_signal(...) to RTSI0 did not immediately export the signal; there may be other APIs to call as well. But now that things are working through my master - slave - master sequence, I can retest with everything triggered from the master.
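Maybe I am missing a call that reserves the routes before the master actually starts; if task.control() with TASK_COMMIT does what I hope (pure assumption on my side, untested), something like this could break the chicken-and-egg:

from nidaqmx.constants import TaskMode

# Committing should reserve the master's resources and routes (including
# the exported 10 MHz clock) without starting the acquisition itself.
task_master.control(TaskMode.TASK_COMMIT)
task_slave.start()   # the PLL could then lock on RTSI0
task_master.start()  # fires the start trigger on RTSI1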
If someone has an example in Python or even in C, that would help.
08-13-2024 09:22 AM
Can you describe more background about your app and the need for this kind of "next level" sync, all the way down to the phase of a 10 MHz clock?
I ask because the vast majority of apps I've encountered in the wild wouldn't get any definitive benefit from these more elaborate schemes compared to much simpler methods for "good enough" sync. Occasionally an app really *does* call for single-digit-nanosecond sync, but in my experience they're pretty uncommon.
Dunno your specific devices, measurement types, or sample rates, but I would tend to consider things in roughly the following priority order:
1. Shared sample clock alone: export from master to RTSI, import from RTSI to slave, start the slave first (see the sketch after this list). This is for apps where both devices can work with the same sample rate.
2. Shared sample clock *timebase* and start trigger. Both tasks derive their sample clocks from the same timebase, and both are started before issuing the trigger.
3. If necessary, get into the muck of syncing both devices' internal timebases to a common Ref Clock, perhaps also coordinating with a Sync Pulse. And then also a shared start trigger -- start both tasks before issuing the trigger.
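For method 1, a minimal nidaqmx-python sketch (untested; Dev1/Dev2 and the rates are placeholders):

import nidaqmx
from nidaqmx.constants import AcquisitionType, Signal

with nidaqmx.Task() as master, nidaqmx.Task() as slave:
    master.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    slave.ai_channels.add_ai_voltage_chan("Dev2/ai0")

    # Master generates the sample clock and exports it on RTSI0.
    master.timing.cfg_samp_clk_timing(
        100000, sample_mode=AcquisitionType.FINITE, samps_per_chan=1000)
    master.export_signals.export_signal(Signal.SAMPLE_CLOCK, "/Dev1/RTSI0")

    # Slave samples on the imported clock -- same rate on both devices.
    slave.timing.cfg_samp_clk_timing(
        100000, source="/Dev2/RTSI0",
        sample_mode=AcquisitionType.FINITE, samps_per_chan=1000)

    slave.start()   # armed first, waiting for clock edges
    master.start()  # the master's clock now drives both acquisitions
    master_data = master.read(number_of_samples_per_channel=1000)
    slave_data = slave.read(number_of_samples_per_channel=1000)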
In all my years, I think I've only had 1 app that really called for method 3. Though granted, I've mostly dealt with low to mid speed capture, not generally exceeding low 100's of kHz. I can see how the nanoseconds would matter more with sample rates above 1 MHz.
-Kevin P
08-13-2024 06:43 PM
The export of the 10 MHz clock is not done via the DAQmx API but is configured in NI MAX.
Real-Time System Integration (RTSI) and Configuration Explained
08-14-2024 05:44 PM
On Windows... but not on Linux, according to https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Y2XSAU&l=fr-FR
On Linux you declare the RTSI cable in ni-hwcfg-utility and then you do the rest programmatically.
In fact, on Linux you do most of it on your own; for example, I log in TDMS format and use npTDMS to read the output. But you have LabVIEW.
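Roughly like this, for example (the file name is a placeholder):

from nidaqmx.constants import LoggingMode
from nptdms import TdmsFile

# While acquiring: stream samples straight to a TDMS file.
task.in_stream.configure_logging("acq.tdms", LoggingMode.LOG_AND_READ)

# Afterwards: read the file back with npTDMS.
data = TdmsFile.read("acq.tdms")
for group in data.groups():
    for channel in group.channels():
        print(channel.name, channel[:10])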
08-14-2024 06:31 PM
Hello,
We are measuring voltage and current across various shunts, plus some other voltages, through two cards (voltage on one card, current on the other, so they must be in sync). We have gone through similar thinking to yours. We have a potentially annoying case where the two cards are not identical, so we would like to control the convert clock accurately and synchronously (e.g. card 2 running at a known ratio to card 1, say 500 kHz vs 1 MHz) rather than letting each card evenly space the samples. Sharing the 10 MHz clock and the start trigger gave us full control and sync.
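(Roughly what I mean by controlling the convert clock, if the timing properties allow it; the rates and the 1:2 ratio are just illustrative:)

# Pin the inter-channel convert clock ourselves instead of letting each
# card space the conversions evenly across the sample interval.
task_card1.timing.ai_conv_rate = 1_000_000  # card 1 converts at 1 MHz
task_card2.timing.ai_conv_rate = 500_000    # card 2 at a known 1:2 ratio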
At least it confirms the various possibilities offered by the cards. We can live with our current solution, but a more elegant one would probably also give us more insight into the cards.
08-14-2024 08:41 PM
My apologies. I forgot that you are using Linux. NI MAX is just an executable built on the NI System Configuration API. It is possible to access all features in NI MAX using the NI System Configuration API, although it might take some effort to figure that out.
Based on the example you shared initially, you can still configure the slave task to use the reference clock from the master task. Python does not have a good example for multi-device synchronization but there is a good C shipping example which does that properly. Please refer to case 2 in the attached example.
I think there is a cfg_time_start_trig() API that would be better
The Time Trigger is explicitly for TSN-synchronized DAQ devices like the cDAQ-918x and cRIO-904x/5x. No PCI(e) DAQ device supports the Time Trigger.
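(For completeness, on TSN-capable hardware it would look roughly like this; the exact call is from my memory of the nidaqmx-python API, and the task is assumed to be configured already:)

from datetime import datetime, timedelta

# Schedule the acquisition to start at an absolute time, 2 s from now.
when = datetime.now() + timedelta(seconds=2)
task.triggers.start_trigger.cfg_time_start_trig(when)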
08-19-2024 10:38 AM
Back from vacation...
This looks like a good example to compare with nidaqmx-python + Linux, for people knowledgeable about my setup:
- I have similar Python calls to DAQmxCreateTask() and DAQmxCfgSampClkTiming() for master and slave => I would say this is an easy translation
- my issue would be to translate the part below, because it probably relies heavily on NI MAX, as there is no mention of RTSI in it:
DAQmxSetRefClkSrc(masterTaskHandle, "OnboardClock");
DAQmxGetRefClkSrc(masterTaskHandle, str1, 256);
DAQmxSetRefClkSrc(slaveTaskHandle, str1); -> it probably uses the NI MAX config to create a RTSI path and exports/imports
I am currently doing something very explicit:
task_master.timing.ref_clk_src = "OnboardClock"
task_master.export_signals.export_signal(nidaqmx.constants.Signal.TEN_MHZ_REF_CLOCK, "/Dev1/RTSI0")
task_slave.timing.ref_clk_src = "/Dev2/RTSI0"
and I have also added xxx.triggers.sync_type = yyy for MASTER and SLAVE
In ni-hwcfg-utility, the RTSI cable is created and all lines are set to auto routing (I can't find the doc that explains auto vs manual here).
What is the right way to play with RTSI in Linux nidaqmx-python? This is probably the real topic I should have opened 😉
- I also have a translation for the part below; it does not raise an error, and when included I see that the receiver waits for the emitter, so this looked good to me, but again this is playing with RTSI:
GetTerminalNameWithDevPrefix(masterTaskHandle,"ai/StartTrigger",trigName) -> task_master.export_signals.export_signal(nidaqmx.constants.Signal.START_TRIGGER, "/Dev1/RTSI1")
DAQmxCfgDigEdgeStartTrig(slaveTaskHandle,trigName,DAQmx_Val_Rising) -> task_slave.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev2/RTSI1")
BTW, I found a 3rd solution (other than a single-pulse counter or using slave triggering): I create an additional analog output task, declare its ref clk as OnboardClock, then start the AO task, start the slave (which is configured to start on the RTSI1 trigger), and finally start the master (which exports the trigger on RTSI1).
Makes sense to me, because it is like splitting the master's work into 2 tasks that I can start independently. But well, I am only doing workarounds 😉
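In code, my sequence is roughly this (simplified; the master/slave tasks are configured as in the snippets above, and Dev1/ao0 is a placeholder):

# Dummy AO task on the master device, tied to its onboard clock, so the
# 10 MHz export on RTSI0 is live before the slave tries to PLL-lock.
ao_task = nidaqmx.Task()
ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
ao_task.timing.ref_clk_src = "OnboardClock"

ao_task.start()      # 1) the master device's clock export becomes live
task_slave.start()   # 2) slave locks on RTSI0, arms on the RTSI1 trigger
task_master.start()  # 3) master starts and fires the trigger on RTSI1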
08-19-2024 07:44 PM
@ft_06 wrote:
DAQmxSetRefClkSrc(masterTaskHandle, "OnboardClock");
DAQmxGetRefClkSrc(masterTaskHandle, str1, 256);
DAQmxSetRefClkSrc(slaveTaskHandle, str1); -> it probably uses the NI MAX config to create a RTSI path and exports/imports
It wasn't using any NI SysCfg API here. It merely passes the string value retrieved from the master task to the slave task.
Here is the code tested using two simulated PCIe devices.
"""Example of AI multitask operation."""
import pprint
import nidaqmx
from nidaqmx.constants import AcquisitionType, TaskMode, LineGrouping, Signal
pp = pprint.PrettyPrinter(indent=4)
with nidaqmx.Task() as master_task, nidaqmx.Task() as slave_task:
master_task.ai_channels.add_ai_voltage_chan("/Dev1/ai0")
slave_task.ai_channels.add_ai_voltage_chan("/Dev2/ai0")
master_task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.FINITE, samps_per_chan=10)
slave_task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.FINITE, samps_per_chan=10)
slave_task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")
master_task.timing.ref_clk_src="OnboardClock"
slave_task.timing.ref_clk_src=master_task.timing.ref_clk_src
slave_task.timing.ref_clk_rate=master_task.timing.ref_clk_rate
print("2 Channels 1 Sample Read Loop 10: ")
slave_task.start()
master_task.start()
for _ in range(2):
master_data = master_task.read(number_of_samples_per_channel=5)
slave_data = slave_task.read(number_of_samples_per_channel=5)
print("Master Task Data: ")
pp.pprint(master_data)
print("Slave Task Data: ")
pp.pprint(slave_data)