08-12-2024 01:30 AM
Hi,
I am using the nidaqmx library in Python 3.10 with a USB-6363 (maximum 2 MS/s), but the best effective rate I can achieve is only 500 Hz (I configured a sample rate of 10,000 S/s and read 25 samples at a time). My code only contains a simple PID controller. Could you help me find out if anything is wrong?
import nidaqmx
import numpy as np
from nidaqmx.constants import AcquisitionType

sample_rate = 10000
sample_per_channel = 25

# initialize PID & device
pid = PID()  # simple PID controller class, defined elsewhere
task_AI = nidaqmx.Task()
task_AI.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
task_AI.timing.cfg_samp_clk_timing(sample_rate, sample_mode=AcquisitionType.CONTINUOUS)
task_AO = nidaqmx.Task()
task_AO.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-3, max_val=3)
# task_AO.timing.cfg_samp_clk_timing(sample_rate, sample_mode=AcquisitionType.CONTINUOUS)
# There is also a problem here. When I try to configure timing for AO using the
# line above, I get: nidaqmx.errors.DaqError: Non-buffered hardware-timed
# operations are not supported for this device and Channel Type. Set the Buffer
# Size to greater than 0, do not configure Sample Clock timing, or set Sample
# Timing Type to On Demand.
print("Running task. Press Ctrl+C to stop.")

def update():
    global cnt, sum_deflV, ZVolt, last_output
    avail = task_AI.in_stream.avail_samp_per_chan
    raw_data = task_AI.read(sample_per_channel)
    # print(f"available data: {avail}, data read: {np.shape(raw_data)}", end="\r")
    pid.setDeltaTime(1 / sample_rate * sample_per_channel)
    cnt += 1
    if len(raw_data[0]) != 0:
        task_AO.write(ZVolt)
        data = np.array(raw_data).mean(axis=1)  # per-channel mean of the chunk
        data_DeflV = data[0]
        data_ZVolt = data[1]
        setpoint = 0.5
        pid.error = setpoint - data_DeflV
        pid.update()
        output = pid.output
        last_output = output
        ZVolt += output
        print(f"write: {ZVolt:+.6f} V, ZVolt: {data_ZVolt:.6f} V", end="\r")
        sum_deflV += data_DeflV
    return
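From the error text, I understand the buffered alternative would look roughly like this (an untested sketch; the rate and buffer size are placeholders):

# Untested sketch of the "Buffer Size greater than 0" remedy: configure
# continuous AO timing, then pre-fill the output buffer before starting.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

task_AO = nidaqmx.Task()
task_AO.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-3, max_val=3)
task_AO.timing.cfg_samp_clk_timing(
    10000, sample_mode=AcquisitionType.CONTINUOUS, samps_per_chan=1000)
task_AO.write(np.zeros(1000))  # buffer must be non-empty before start()
task_AO.start()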
Thank you so much!
08-12-2024 07:17 PM
Using a USB device with Python is just about the worst combination for control.
Firstly, USB communication latency is roughly 100 times higher than PCI/PXI.
Reference: Instrument Bus Performance – Making Sense of Competing Bus Technologies for Instrument Control
Secondly, Python is an interpreted language and can run up to 100 times slower than C. References: C++ VS Python benchmarks; Which programming language or compiler is faster.
For PID control, you should use a PCI(e)/PXI(e) DAQ device from LabVIEW/C/C#, preferably with NI-DAQmx Hardware-Timed Single Point mode.
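For reference, hardware-timed single-point timing is also exposed in the nidaqmx Python API. A minimal sketch (the device name, rate, and gain are placeholders; this mode is generally not supported on USB devices, which is part of why PCIe/PXIe is recommended):

# Sketch of NI-DAQmx hardware-timed single-point mode from Python.
# Each read blocks until the next sample clock edge, so the control
# loop is paced by hardware rather than by software timers.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-3, max_val=3)
    ai.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ao.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT)
    ai.start()
    ao.start()
    z = 0.0
    while True:
        defl = ai.read()          # one sample per clock tick
        z += 0.01 * (0.5 - defl)  # placeholder proportional update
        ao.write(z)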
08-12-2024 07:23 PM
And if you need a very fast PID loop, you need to implement it on an FPGA, for example with a cRIO.
08-15-2024 07:11 AM
Thank you so much!
And is there any way to convert a USB-6363 to a PXI/PCI connector?
08-15-2024 07:13 AM
Thank you so much!
But it's not possible to use Python on cDAQ devices, and I need to use machine learning through Python for signal control. Is there any way to deal with this problem?
08-15-2024 07:39 AM
And is there any way to convert a USB-6363 to a PXI/PCI connector?
That would require a change to the PCB, so it is not possible. You would need to get a separate module.
But it's not possible to use Python on cDAQ devices, and I need to use machine learning through Python for signal control. Is there any way to deal with this problem?
I believe you are referring to a cRIO, as mentioned by santo_13. All cDAQ chassis (except the cDAQ-913x) lack their own controllers and suffer from the same high-latency issue. You can install and run Python on a cRIO: just select "Other programming languages" when installing software in NI MAX.
08-16-2024 03:51 AM
Thank you so much! Now I plan to buy a new PCIe device (maybe a PCIe-6321 with 2 AO). I have tested the USB rates: 10 kS/s for Python reads and 500 S/s for Python writes. How fast would you expect a PCIe-6321 to run from Python? Would it exceed 5 kS/s? Thank you again for your kind help!
08-16-2024 08:26 AM
e1010511@u.nus.edu wrote:
Thank you so much! Now I plan to buy a new PCIe device (maybe a PCIe-6321 with 2 AO). I have tested the USB rates: 10 kS/s for Python reads and 500 S/s for Python writes. How fast would you expect a PCIe-6321 to run from Python? Would it exceed 5 kS/s? Thank you again for your kind help!
The secret to achieving the highest hardware performance is to write/read in large chunks; this lets the hardware generate or acquire one chunk of data while your application produces or processes the next chunk.
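As an illustration (a sketch only, not tested on a 6321; the device name, rate, and chunk size are placeholders), chunked streaming with nidaqmx looks roughly like this:

# Sketch: stream AI in large chunks so the hardware keeps acquiring
# in the background while Python processes the previous chunk.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

chunk = 10000  # large chunk: fewer Python/driver round trips per second

with nidaqmx.Task() as ai:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
    ai.timing.cfg_samp_clk_timing(
        100000, sample_mode=AcquisitionType.CONTINUOUS, samps_per_chan=10 * chunk)
    ai.start()
    for _ in range(100):
        # One call moves 'chunk' samples per channel; the driver keeps
        # filling its buffer between calls.
        block = np.array(ai.read(number_of_samples_per_channel=chunk))
        print(block.mean(axis=1), end="\r")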