Multifunction DAQ


DaqWriteError about buffer size zero when writing only one sample

I'm writing an application using python-nidaqmx which has to output a waveform continuously while acquiring AI data on a PCIe-6361. The waveform is user-defined. It typically works fine... except when the waveform is "too simple" and contains only a single sample. In that case, writing to the AO task fails with this error:

  File "/home/piel/.local/lib/python3.8/site-packages/nidaqmx/", line 398, in write_int16
    return self._interpreter.write_binary_i16(
  File "/home/piel/.local/lib/python3.8/site-packages/nidaqmx/", line 5586, in write_binary_i16
    self.check_for_error(error_code, samps_per_chan_written=samps_per_chan_written.value)
  File "/home/piel/.local/lib/python3.8/site-packages/nidaqmx/", line 6027, in check_for_error
    raise DaqWriteError(extended_error_info, error_code, samps_per_chan_written)
nidaqmx.errors.DaqWriteError: Write cannot be performed when the task is not started, the sample timing type is something other than On Demand, and the output buffer size is zero.
Call DAQmx Start before DAQmx Write, set auto start to true on DAQmx Write, modify the sample timing type, or change the output buffer size.
Task Name: _unnamedTask<0>


Changing to auto_start=True does work, but I can't simply do that because I need to synchronize the AO task with the AI task. Can anyone explain this error in more detail: why isn't it possible to have a delayed-start task with a buffer of 1? I'm thinking of working around it with a special case which would double the sample rate and duplicate the buffer. Is there any simpler workaround?


Here is an example script showing the issue:


#!/usr/bin/env python3
# Demonstrate error when write task has buffer == 1, and auto_start is False

import time

import nidaqmx
import numpy
from nidaqmx.constants import AcquisitionType
from nidaqmx.stream_writers import AnalogUnscaledWriter

with nidaqmx.Task() as task:
    # Device/channel name is an example; adjust to your hardware
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    # The error only occurs when the sample timing type is not On Demand
    task.timing.cfg_samp_clk_timing(1000, sample_mode=AcquisitionType.CONTINUOUS)

    data_len = 1
    out_data = numpy.ones((data_len,))  # Using a length of 2 or more works fine
    task.write(out_data, auto_start=False)  # With auto_start=True, it works fine

    # out_data = numpy.ones((1, data_len), dtype=numpy.int16)
    # writer = AnalogUnscaledWriter(task.out_stream)
    # writer.auto_start = True  # With auto_start=True, it works fine
    # writer.write_int16(out_data)

    # Run the task for 2 seconds, and stop
    task.start()
    time.sleep(2)
    task.stop()

    print("Write task completed")






Message 1 of 6

Please post your code for the AI task as well.

Then I might be able to advise how you can do the synchronization correctly.

If you want to output for only 2 seconds, you should use finite sample timing instead.

Message 2 of 6

Thanks for responding. The attached code was the simplest version I could distill from our application to investigate the issue. Our application is quite complex and has many aspects unrelated to nidaqmx, so I'm not posting it here as-is. However, I can show a more complete example of how we (plan to) do synchronized, multi-channel AI/AO.

If you run the attached example (gzipped because the board doesn't allow me to attach text files), you'll see it fails with the same error. Increasing "data_len" to anything >= 2 makes it work fine.

Any suggestion on how to handle this elegantly?


If you have suggestions on how to do synchronized acquisition in a better way, they are also welcome!




Message 3 of 6
Accepted by topic author pieleric

Basically, DAQmx just wants you to have a buffer length >= 2 when you do *continuous* generation.  Just fill an array with your 1 unique value and write that to the task.


More generally, a continuous output task will want to regenerate the buffer repeatedly.  What happens in the background is that DAQmx keeps delivering chunks of data from the buffer down to the device as long as the task keeps running.  With a very small buffer of 2 or 3 samples, this has to happen as frequent deliveries of very small chunks (maybe just 1 sample at a time).  This puts an unnecessary burden on your CPU.


I'd recommend that you write a buffer that contains multiple repetitions of your waveform, probably 0.1-0.5 seconds worth.  Then DAQmx can deliver larger chunks less frequently, putting less burden on your CPU.
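In python-nidaqmx terms, building such a buffer might look like the following sketch (the sample rate, waveform value, and 0.1 s target are assumed values for illustration):

```python
import numpy

sample_rate = 1000.0           # Hz, assumed for illustration
waveform = numpy.array([2.5])  # the user-defined 1-sample waveform
target_duration = 0.1          # seconds of data to keep buffered

# Repeat the waveform until the buffer covers the target duration,
# and in any case at least 2 samples (the minimum DAQmx accepts for
# a buffered continuous task)
min_samples = max(2, int(numpy.ceil(target_duration * sample_rate)))
reps = -(-min_samples // len(waveform))  # ceiling division
buffer = numpy.tile(waveform, reps)

# buffer can now be written to the AO task with auto_start=False
```

Because a continuous task regenerates its buffer in a loop, repeating the waveform an integer number of times produces exactly the same output signal as the original single-period buffer.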



-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 4 of 6

Thanks a lot Kevin for your answer.


Good to know there is a minimum of 2 samples in the buffer. The error message stated "the output buffer size is zero", but it would probably be more accurate to say "the output buffer size is less than 2". I'll look into reporting a bug about this.


The idea of duplicating the buffer is very obvious... at least once you've stated it! As it's a continuous task, I can duplicate the buffer as many times as I want and it will behave exactly the same. Thanks for the tip 🙂


For context, these values are picked by the user, and this is mostly about not failing completely when the user types something odd. In practice, such a small buffer could be useful for our application, but only with a very slow sampling rate (e.g. < 10 Hz), so I'm not concerned about the driver having difficulty replenishing the buffer.



Message 5 of 6

As a follow-up, here are some general thoughts on sync.  Sorry, I only work in LabVIEW and can't really help with the details of Python syntax.


Usually when someone's looking to sync AO and AI, they have some kind of stimulus-response system set up.  My preferred method for sync involves these 2 things:

1. Share a sample clock.

2. Start the task responsible for generating the clock *last*, after all other tasks are started and waiting for it.
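For other python-nidaqmx readers, those two steps might look roughly like the sketch below. This is untested without hardware; the device/channel names, rate, and waveform are examples, and it uses the AO task's own sample clock rather than a counter (the counter variant is discussed next):

```python
import nidaqmx
import numpy
from nidaqmx.constants import AcquisitionType

RATE = 1000.0  # Hz, example rate

with nidaqmx.Task() as ao_task, nidaqmx.Task() as ai_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")

    # 1. Share a sample clock: AO generates it, AI slaves to it
    ao_task.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.CONTINUOUS)
    ai_task.timing.cfg_samp_clk_timing(
        RATE, source="/Dev1/ao/SampleClock",
        sample_mode=AcquisitionType.CONTINUOUS)

    ao_task.write(numpy.tile([0.0, 5.0], 50), auto_start=False)

    # 2. Start the clock-consuming task first,
    #    the clock-generating task last
    ai_task.start()
    ao_task.start()

    data = ai_task.read(number_of_samples_per_channel=100)
    ao_task.stop()
    ai_task.stop()
```

Since the AI task is armed and waiting on "/Dev1/ao/SampleClock" before the AO task starts, both tasks see the very first clock edge and stay sample-aligned for the whole run.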


Additionally, it's often helpful to generate the sample clock with a counter task and configure both AO and AI to use that pulse train output as their sample clock.  A couple more notes:

1. I would generally config AO to output on the *leading* edge (typically a rising edge) of the pulse train, and AI to capture on the *trailing* edge (typically a falling edge).

2. I can then use my knowledge of the system to set the pulse train to have the most advantageous duty cycle.  With a single AI channel to capture, I'd often set the duty cycle in the 90-95% range to give the system maximum time to respond and settle after the AO update, and then do my AI capture just barely before the next AO update.  When you have multiple AI channels on a multiplexing device (like yours), there are some further considerations to make sure *all* channels get captured before the system reacts to the next AO update.
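The headroom in point 2 can be estimated with a little arithmetic: with AI on the falling edge, the time left before the next rising edge (the next AO update) is (1 - duty cycle) x period, and a multiplexed scan of all AI channels must fit inside it. A sketch (the helper name and the 1 MS/s aggregate convert rate are assumptions for illustration, not queried device properties):

```python
def max_duty_cycle(sample_rate_hz, n_ai_channels, convert_rate_hz,
                   settle_margin_s=0.0):
    """Largest pulse-train duty cycle that still leaves enough time after
    the falling edge (AI capture) for a multiplexed scan of all AI
    channels to finish before the next rising edge (AO update)."""
    scan_time = n_ai_channels / convert_rate_hz  # one convert per channel
    period = 1.0 / sample_rate_hz
    return 1.0 - (scan_time + settle_margin_s) / period

# Example: 100 Hz pulse train, 4 AI channels, 1 MS/s aggregate convert clock
print(max_duty_cycle(100.0, 4, 1e6))
```

In this example the scan takes only 4 us out of a 10 ms period, so nearly any duty cycle works; at high sample rates or with many channels the bound becomes much tighter.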


-Kevin P

Message 6 of 6