08-31-2023 09:37 PM
I am using my PC to log the data; my code is attached below. However, I always encounter the error shown in the figure.
I know it is somehow related to the sample rate and the number of data points to collect, but I have increased my sampling rate to the maximum of the device (51.2k) and reduced the data points to as low as 10, and it still gives me the same error. Also, every time I try to end the program with a keyboard interrupt, it keeps running for a while (about 10 seconds) and then stops, showing the error.
Any idea what may be causing this?
########################## the codes start in here ##############################
import nidaqmx
from nidaqmx.constants import AcquisitionType, ExcitationSource
import numpy as np
import time
from datetime import datetime
# from matplotlib import pyplot as plt
# Instantiate variables
sample_rate = 51200
samples_to_acq = 1000
wait_time = samples_to_acq/sample_rate
channel_name = 'cDAQ9185-20FB3D9Mod1/ai0:2'
# trigger commented out
# trig_name = '/cDAQ9185-20FB3D9Mod1/PFI0'
cont_mode = AcquisitionType.CONTINUOUS
units_g = nidaqmx.constants.AccelUnits.G
all_data = []
all_timestamps = []
i_flag = 0
with nidaqmx.Task() as task:
    # Create accelerometer channel and configure sample clock and trigger specs
    task.ai_channels.add_ai_accel_chan(channel_name, min_val=-100.0, max_val=100.0, units=units_g,
                                       sensitivity=50.0,
                                       sensitivity_units=nidaqmx.constants.AccelSensitivityUnits.MILLIVOLTS_PER_G,
                                       current_excit_source=ExcitationSource.INTERNAL, current_excit_val=0.004)
    task.timing.cfg_samp_clk_timing(sample_rate, sample_mode=cont_mode, samps_per_chan=samples_to_acq)
    # trigger commented out
    # task.triggers.start_trigger.cfg_dig_edge_start_trig(trigger_source=trig_name)

    while True:
        try:
            # Read data from the sensor and record a timestamp
            ydata = task.read(number_of_samples_per_channel=samples_to_acq)
            current_timestamp = datetime.now()
            all_data.append(ydata)
            all_timestamps.append(current_timestamp)

            i_flag += 1
            # Save the accumulated data to .npy files every 10 reads
            if i_flag % 10 == 0:
                np.save('accel_data.npy', np.array(all_data))
                np.save('timestamps.npy', np.array(all_timestamps))
                # print(f"Data saved at time {current_timestamp}")
        except KeyboardInterrupt:
            print("Data acquisition stopped.")
########################## the codes end in here ##############################
08-31-2023 10:46 PM
That's a *very* common error, one we often refer to as the "buffer overflow" error, and the suggestions in the error text are not the greatest.
I don't really know Python, but try this: set samples_to_acq = sample_rate.
Reason: You'll be asking to retrieve 1 second worth of samples per loop iteration. Your read loop will want to operate at a 1 Hz pace. So you won't fall behind the device & driver unless your loop code takes more than 1 second to execute. Even though you're appending arrays and saving to file inside your loop, a full second ought to let you keep up. If not, it should at least let you run longer before the error occurs, indicating that you're moving in the right direction.
As posted, you're trying to retrieve about 1/50 of a second worth of samples per loop iteration, so your read loop wants to operate at roughly 50 Hz. Apparently, the combo of growing arrays and (probably especially) file writes takes longer than that, causing you to fall behind further and further until you're behind by the entire buffer size and DAQmx throws that error.
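In terms of your posted code, the change would look roughly like this (just a sketch, untested, reusing your channel name and variable names with default channel settings):

import nidaqmx
from nidaqmx.constants import AcquisitionType

sample_rate = 51200
samples_to_acq = sample_rate        # 1 second worth of samples per read
channel_name = 'cDAQ9185-20FB3D9Mod1/ai0:2'

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_accel_chan(channel_name)
    task.timing.cfg_samp_clk_timing(sample_rate,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=samples_to_acq)
    while True:
        try:
            # Each read now returns 1 second of data, so the appending and the
            # periodic file saves have up to a full second to finish before
            # the buffer starts to back up.
            ydata = task.read(number_of_samples_per_channel=samples_to_acq)
        except KeyboardInterrupt:
            break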
-Kevin P
09-01-2023 08:59 AM
Thanks for your insights. So if I understand correctly, the problem is that saving the data takes too long, so the buffer overflows while the machine is still writing the data to file.
To solve this, I guess I can save the sensor readings into separate data files and empty the reading list, or use a different file format to save the data, correct?
What file format should I use to save very large data sets? I know LabVIEW uses TDMS; is it a good option for Python as well?
09-01-2023 09:12 AM
To further explain why you should set samples_to_acq = sample_rate, see Understanding and Avoiding NI-DAQmx Overwrite and Overflow Errors and Specifying Number of Samples When Continuously Acquiring with NI-DAQmx for LabVIEW.
In continuous mode, the samples_to_acq you pass to cfg_samp_clk_timing (as samps_per_chan) only configures the buffer size, whereas the one you pass to task.read (as number_of_samples_per_channel) specifies how many samples each read call retrieves.
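In code, the two settings look roughly like this (a sketch, not your exact program):

import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_accel_chan('cDAQ9185-20FB3D9Mod1/ai0:2')

    # In CONTINUOUS mode, samps_per_chan only sets the size of the PC-side
    # buffer (DAQmx may round it up); it does not decide how much data each
    # read returns.
    task.timing.cfg_samp_clk_timing(51200,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=51200)

    # number_of_samples_per_channel is what determines how many samples each
    # task.read call pulls out of that buffer.
    data = task.read(number_of_samples_per_channel=51200)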
09-01-2023 09:24 AM
@Liwenhu wrote:
Thanks for your insights. So if I understand correctly, the problem is that saving the data takes too long, so the buffer overflows while the machine is still writing the data to file.
To solve this, I guess I can save the sensor readings into separate data files and empty the reading list, or use a different file format to save the data, correct?
What file format should I use to save very large data sets? I know LabVIEW uses TDMS; is it a good option for Python as well?
The NI-DAQmx driver supports TDMS logging natively. You can see my repo (python-ni-examples/nidaqmx_examples) for examples of logging to and reading from TDMS.
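A minimal sketch of what that looks like with the nidaqmx Python API (the file name is just a placeholder; the channel is taken from your post):

import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

sample_rate = 51200

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_accel_chan('cDAQ9185-20FB3D9Mod1/ai0:2')
    task.timing.cfg_samp_clk_timing(sample_rate,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=sample_rate)
    # The driver streams every acquired sample straight into the .tdms file,
    # so your Python loop no longer has to keep up with file writes itself.
    task.in_stream.configure_logging('accel_data.tdms',
                                     logging_mode=LoggingMode.LOG_AND_READ,
                                     operation=LoggingOperation.CREATE_OR_REPLACE)
    while True:
        try:
            # With LOG_AND_READ, each read both returns the data and logs it
            # to the TDMS file.
            task.read(number_of_samples_per_channel=sample_rate)
        except KeyboardInterrupt:
            break

The resulting .tdms file can then be read back in Python, for example with the third-party npTDMS package.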