I've been trying to use the Python nidaqmx package to create an acquisition program that runs an analog input task and an analog output task simultaneously on a PCIe-6259 board.
I'm following the general scheme of create task -> set sample clock -> run task -> wait for completion -> clear task.
The code is simplified as follows:
from nidaqmx.task import Task
from nidaqmx import constants
import numpy as np
samp_rate = 10000  # sample rate in Hz

# configure the AI task
ai_task = Task()
ai_task.ai_channels.add_ai_voltage_chan(physical_channel='Dev1/ai1')
ai_task.timing.cfg_samp_clk_timing(rate=samp_rate, samps_per_chan=1000, sample_mode=constants.AcquisitionType.FINITE)

# configure the AO task
ao_task = Task()
ao_task.ao_channels.add_ao_voltage_chan(physical_channel='Dev1/ao0')
ao_task.timing.cfg_samp_clk_timing(rate=samp_rate, samps_per_chan=1000, sample_mode=constants.AcquisitionType.FINITE)
ao_task.write(np.zeros(1000))  # just write 0 V to the output

# run the tasks
ao_task.start()
ai_task.start()
data = ai_task.read(number_of_samples_per_channel=1000)
So, to synchronize the two tasks, I tried adding one more option, (source = 'Dev1/ai/SampleClock'), to the cfg_samp_clk_timing call of both tasks, which was inspired by the LabVIEW NI-DAQmx tutorial. However, it returns an error saying the hardware doesn't support this kind of routing.
So I checked the actual sources used by the default tasks. They are Dev1/ai/SampleClockTimebase and Dev1/ao/SampleClockTimebase. But it still won't work when I supply either of those as the clock source for both tasks.
I am wondering whether there is a way to achieve this synchronization, and what the problem is with what I've done.
Under LabVIEW, I'd configure *only* the AO task to use "Dev1/ai/SampleClock". The AI task creates its own sample clock from an internal timebase. You're then telling the AO task to borrow this derived sample clock signal.
I would then start AO *before* AI. That way it's ready to use the 1st sample clock that gets generated after starting the AI task.
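In nidaqmx Python terms, that strategy might look roughly like the sketch below. The device name, channel names, and sample rate are assumptions, and the DAQmx calls are untested here (they need real hardware), so the hardware part is kept inside a function:

```python
RATE = 10_000   # shared sample rate in Hz (assumed)
NPNTS = 1000    # finite acquisition length, as in the original post

def make_output(npnts):
    """0 V on every AO sample, as in the original post (plain list, no numpy)."""
    return [0.0] * npnts

def run_shared_clock(device="Dev1"):
    """AI derives its own sample clock; AO borrows it and is started first
    so it is armed before the first clock edge appears. Untested sketch."""
    import nidaqmx  # requires the NI-DAQmx driver
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
        ai_task.ai_channels.add_ai_voltage_chan(f"{device}/ai1")
        ao_task.ao_channels.add_ao_voltage_chan(f"{device}/ao0")

        # AI keeps its default sample clock, derived from an internal timebase.
        ai_task.timing.cfg_samp_clk_timing(
            rate=RATE, sample_mode=AcquisitionType.FINITE, samps_per_chan=NPNTS)
        # AO borrows AI's *derived* sample clock (note the leading '/').
        ao_task.timing.cfg_samp_clk_timing(
            rate=RATE, source=f"/{device}/ai/SampleClock",
            sample_mode=AcquisitionType.FINITE, samps_per_chan=NPNTS)

        ao_task.write(make_output(NPNTS), auto_start=False)
        ao_task.start()  # armed, waiting on AI's clock
        ai_task.start()  # generates the shared clock
        data = ai_task.read(number_of_samples_per_channel=NPNTS)
        ao_task.wait_until_done()
        return data
```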
Thank you Kelvin,
I've tried to apply this strategy in Python. However, it isn't making them work simultaneously.
I tried using the (source = '/Dev1/ai/SampleClock') option in the AO task. It seems that the AO task and AI task just run sequentially. And when I check the clock sources, the AO task's source did become ai/SampleClock, but the AI task's source is ai/SampleClockTimebase.
I also tried SampleClockTimebase for the AO task and it returns an error: "Specified route cannot be satisfied, because the hardware does not support it."
Therefore I am not sure whether setting the source in cfg_samp_clk_timing in Python is the exact counterpart to setting the sample clock in LabVIEW.
The terminology can get a little confusing. A "SampleClockTimebase" is subtly different from a "SampleClock": the SampleClockTimebase is the raw timing reference that gets *divided down* in order to produce the SampleClock. The SampleClock signal is used directly by the circuitry to control sample timing.
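As a concrete illustration of that divide-down relationship (the numbers are assumptions; the 20 MHz figure matches the "20MHzTimebase" mentioned later in this thread):

```python
TIMEBASE_HZ = 20_000_000   # raw timing reference, e.g. a 20 MHz timebase (assumed)
SAMPLE_RATE_HZ = 10_000    # desired sample clock rate (assumed)

# DAQmx divides the timebase down by an integer to produce the sample clock:
divisor = TIMEBASE_HZ // SAMPLE_RATE_HZ
print(divisor)  # 2000 timebase ticks per sample clock edge
```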
One other subtlety, just in case it isn't a typo: in msg #3 you designated '/Dev1/ai/SampleClock' with an explicit leading '/' character while in msg #1 you didn't have that leading character. It's going to matter to get that right.
I can't say with certainty which way is right because under LabVIEW, these signal references are available as a drop-down choice including the correct syntax. I *think* that internal signal names (such as you're setting up) do *not* have the leading '/' while external terminals do have it. But I'm not 100% sure about that.
If the python API to the DAQ hardware gives you the option to designate a sample clock, I tend to suspect you'll be able to make these tasks run simultaneously off a shared sample clock. I don't know the specifics of *how*, but it wouldn't make sense to build and implement an API function that couldn't accomplish the purpose it was made for.
Sometimes half the battle with this stuff is knowing what stuff's worth pursuing harder and what stuff isn't. I think this one will be worth pursuing.
If the SampleClockTimebase works differently from the SampleClock signal, why is it possible to configure either one as the source in Python? I understand that in LabVIEW there is a drop-down menu to select which clock you want to use, but you really can't choose /Dev1/ai/SampleClockTimebase from there. The only similar things I can see in the LabVIEW drop-down menu are "20MHzTimebase" and "80MHzTimebase".
I think there needs to be a leading "/" in the sample clock source. I tried both. With the "/", it works as expected; without it, the program reports an error: "Make sure the terminal name is valid for the specified device. Refer to Measurement & Automation Explorer for valid terminal names." I also checked MAX, and in the Device Routes tab of my device I find that all routes have that leading "/".
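Since the leading "/" turned out to matter, a tiny helper can guard against the typo (the device and signal names below are just the examples from this thread):

```python
def full_terminal(device, signal):
    """Build a fully qualified DAQmx terminal name with the leading '/',
    e.g. full_terminal('Dev1', 'ai/SampleClock') -> '/Dev1/ai/SampleClock'."""
    return "/" + device.strip("/") + "/" + signal.strip("/")
```

If the driver is installed, you can also cross-check candidate names against `nidaqmx.system.Device('Dev1').terminals`, which mirrors what MAX shows.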
Actually, one major problem with this Python API is the lack of examples corresponding to LabVIEW code. If there were some, it would greatly lower the learning barrier for someone moving from LabVIEW to Python.
I'm now at a LV terminal and I agree, the leading '/' is necessary to reference these clock and timebase signals.
Since you have access to LV, here's another subtle (and frankly, annoying) thing: by default, the LV drop-down filters out a lot of possible choices. To see them all, you first need to right-click the drop-down, choose "I/O Name Filtering...", and then check the box for "Include Advanced Terminals". Now you'll be able to see those Timebase signals.
(If you find this filtering annoying like I do, give a Kudo to this idea on the idea exchange).
Try setting up the app in LV and try with both "/dev1/ai/SampleClockTimebase" and with "/dev1/ai/SampleClock". I expect the former to throw an error and the latter to work, but let me know if it actually works differently. Hopefully once you successfully set up a synced pair of AI / AO tasks in LV, you'll have something to compare to while trying to work out the python syntax.
Thanks Kelvin. Actually, I managed to make them work simultaneously. As you said, I just made ai_task use 'ao/SampleClock' and started the AI task first. After I rewrote the Python code exactly the way LabVIEW does it, it worked out. I am still looking at the original code; there must be something wrong that I haven't noticed.
Thanks for your help!
I am also having trouble simultaneously writing and reading analog data using nidaqmx in Python. From what I understand based on this thread, something like the code below worked for you, is that right?
import nidaqmx
import numpy as np
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    rate = 10**4  # sample rate in Hz
    duration = 1  # acquisition duration in seconds
    npnts = int(rate * duration)
    ai_task.ai_channels.add_ai_voltage_chan('Dev2/ai0')
    ao_task.ao_channels.add_ao_voltage_chan('Dev2/ao0')
    ao_task.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE, samps_per_chan=npnts)
    ai_task.timing.cfg_samp_clk_timing(rate, source='ao/SampleClock', samps_per_chan=npnts)
    ao_task.write(np.linspace(-1, 1, npnts), auto_start=False)
    ao_task.start()
    ai_task.start()
    ai_task.wait_until_done()
    ao_task.wait_until_done()
    data = ai_task.read(number_of_samples_per_channel=npnts)
Since you set ai_task to use the AO clock, I believe you need to start ai_task first so that it waits on ao_task's clock. If you start ao_task first, ai_task is likely to be delayed, in my opinion. Another thing to mention: the problem I actually encountered was that I configured the AO task and AI task in a different file and used that module in the main program. That somehow created a problem. After I put the code in one single file, it worked out for me. Hope that helps.
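Putting the two points from this thread together (start the clock-borrowing task first, and use a fully qualified clock source with the leading '/'), the snippet above might be restructured roughly as follows. The device and channel names are assumptions, and the DAQmx calls are untested without hardware, so they are kept inside a function:

```python
def make_ramp(npnts):
    """Build the -1 V .. +1 V ramp from the snippet above without numpy."""
    step = 2.0 / (npnts - 1)
    return [-1.0 + i * step for i in range(npnts)]

def run_ai_on_ao_clock(device="Dev2", rate=10_000, npnts=10_000):
    """AI borrows AO's sample clock, so AI starts first (it arms and waits),
    then AO starts and actually generates the shared clock. Untested sketch."""
    import nidaqmx  # requires the NI-DAQmx driver
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
        ai_task.ai_channels.add_ai_voltage_chan(f"{device}/ai0")
        ao_task.ao_channels.add_ao_voltage_chan(f"{device}/ao0")
        ao_task.timing.cfg_samp_clk_timing(
            rate, sample_mode=AcquisitionType.FINITE, samps_per_chan=npnts)
        # Fully qualified clock source with the leading '/':
        ai_task.timing.cfg_samp_clk_timing(
            rate, source=f"/{device}/ao/SampleClock",
            sample_mode=AcquisitionType.FINITE, samps_per_chan=npnts)
        ao_task.write(make_ramp(npnts), auto_start=False)
        ai_task.start()  # armed, waiting on AO's clock
        ao_task.start()  # produces the shared clock
        data = ai_task.read(number_of_samples_per_channel=npnts)
        ao_task.wait_until_done()
        return data
```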