Multifunction DAQ


nidaqmx python read and write tasks "Non-buffered hardware-timed operations are not supported for this device and Channel Type."

I recently started using the NI USB-6363 and I am having trouble both with the LabVIEW interface and with nidaqmx.

 

My goal is to acquire voltages, apply transformations to them (I want a PID, but I have problems even just subtracting two signals), and then output the result.

 

With the LabVIEW-based DAQExpress interface I often see large delays and frequency problems on the output.

 

I then decided to go with Python; here is my code:

 

import time
import numpy as np
import nidaqmx

# Example values; my actual settings are not shown in this snippet
s_freq = 1000
sampleMode = nidaqmx.constants.AcquisitionType.CONTINUOUS
num_samples = 100

start = time.time()
tot = 0
i = 1
S = [0]
while tot < 300:
    tot = time.time() - start
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
        values = task.read()        # one sample per channel: [ai0, ai1]
        plus = values[1]            # the two photodiodes
        minus = values[0]
    s = plus - minus
    S.append(s)
    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        task.timing.cfg_samp_clk_timing(s_freq, sample_mode=sampleMode,
                                        samps_per_chan=num_samples)
        if i == 0 or i == 1:
            task.write(S[i], auto_start=True)
        else:
            task.write(np.mean([S[i], S[i-1]]), auto_start=True)
    i += 1

 

This works if I do not attempt to change the sample rate (that is, if I remove the timing line), except that the output is too discretized for me, too step-like.

However, when I try to change the write sample rate, I get:

 

Non-buffered hardware-timed operations are not supported for this device and Channel Type.
Set the Buffer Size to greater than 0, do not configure Sample Clock timing, or set Sample Timing Type to On Demand.
Task Name: _unnamedTask<13339>

Status Code: -201025

 

How can I control the output sample rate, and how can I smooth the signal?

Message 1 of 3

Well, you have a dilemma.  As the error message says: when you configure for hardware sample timing, you also need a buffer.  What it didn't mention is that with buffering you also get latency (and generally a variable amount of it).  That's not gonna do your PID control any good.
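
Just so we're talking about the same thing, here's roughly what a buffered, hardware-timed AO task looks like in nidaqmx-python. This is an untested sketch; "Dev1", the 1 kHz rate, and the ramp data are placeholders, not values from your post.

import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

data = np.linspace(0.0, 5.0, 1000)            # 1000-sample buffer, so buffer size > 0

with nidaqmx.Task() as ao_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao_task.timing.cfg_samp_clk_timing(1000.0,
                                       sample_mode=AcquisitionType.FINITE,
                                       samps_per_chan=len(data))
    ao_task.write(data, auto_start=True)      # the whole buffer is queued up front...
    ao_task.wait_until_done()                 # ...which is exactly where the latency comes from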

 

So really, you're kinda stuck with on-demand (software timed) sampling, with no buffer or the corresponding latency but also subject to the timing whims of Windows.  So less consistent timing and lower possible max update rate.  Also not necessarily great for PID, but almost always less bad than the buffered alternative.
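
In nidaqmx-python terms, "on-demand" just means you skip cfg_samp_clk_timing entirely and write single values as you compute them. Rough sketch, with "Dev1" again a placeholder:

import nidaqmx

with nidaqmx.Task() as ao_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao_task.write(1.23, auto_start=True)      # one immediate update; Windows decides exactly when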

 

Also, FWIW, USB-based devices are not as well suited for real time control in general b/c the USB bus has its own issues with latency and timing variability.

 

Here's how I'd try to control timing:  I'd let DAQmx handle it via a buffered AI task, let's just say a sample rate of 1000 Hz.  In my control loop I'd:

- read 100 AI samples

- process as needed (filter, average, exponentially weight, etc.)

- apply control algorithm

- do an immediate, on-demand AO update

- return to top of loop

 

DAQmx will wait for the 100 samples to accumulate before returning them.  As long as you get through the rest of the loop in < 100 msec, you'll wait for the next set of 100 too.  From iteration to iteration, the DAQmx driver (which runs "closer to the metal" and thus has more timing control than normal Windows apps) will keep you paced to start your processing on a 100 msec cadence, giving you a pretty decent 10 Hz loop rate.
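
If I were to sketch that loop in nidaqmx-python, it would look something like the following. Untested; the device name, rates, and the do-nothing "controller" are placeholders, and your real PID math goes where output is computed.

import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 1000.0      # AI sample rate (Hz)
CHUNK = 100        # samples read per loop pass, giving roughly a 10 Hz loop

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")
    ai_task.timing.cfg_samp_clk_timing(RATE,
                                       sample_mode=AcquisitionType.CONTINUOUS,
                                       samps_per_chan=10 * CHUNK)    # generous AI buffer
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")              # no sample clock: on-demand AO

    ai_task.start()
    while True:                                    # stop condition left to the application
        # read() blocks until CHUNK samples per channel have accumulated,
        # so the DAQmx driver paces the loop at CHUNK / RATE seconds per iteration
        data = np.array(ai_task.read(number_of_samples_per_channel=CHUNK))
        error = float(np.mean(data[1] - data[0]))  # e.g. photodiode difference, averaged
        output = error                             # placeholder: apply the control algorithm here
        ao_task.write(output, auto_start=True)     # immediate, software-timed AO update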

 

With a USB device, you'll be limited in how fast a loop rate you can sustain.  It's very likely > 10 Hz but also very likely < 1000 Hz.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 3

I believe I am having a similar issue and would like help. I am working on an instrument that moves a platform, stops to gather charge data, then moves to the next point. What we are noticing, though, is that it seems to blow right through the "self.task.read(numberOfSamples)" call (we want to slow it down). We have tried changing SAMPLERATE from 10 to 10000 pts/sec without any change, and increasing the number of samples just seems to overload the RAM on our computers. Do you have any suggestions?

 

# Create and start task
self.task = nidaqmx.Task()
self.task.ai_channels.add_ai_voltage_chan(physicalChannel)
self.task.timing.cfg_samp_clk_timing(SAMPLERATE,
                                     sample_mode=nidaqmx.constants.AcquisitionType.CONTINUOUS,
                                     samps_per_chan=numberOfSamples)
vals_float = []
vals = self.task.read(numberOfSamples)    # returns numberOfSamples points
for x in vals:
    if probe_units == "kV":
        vals_float.append(float(x))
    else:
        vals_float.append(float(x) * 1000)    # rescale when the probe is not reporting kV
print(f"vals len: {len(vals_float)}")
self.task.stop()
self.task.close()

 

Message 3 of 3