12-18-2024 09:05 AM
I always explicitly set the buffer size in LabVIEW. I have found through trial and error that if I set the file write size incorrectly, it resets the buffer size I set earlier. So, sometimes an error downstream can change things. In addition, I do not think you are using the Log Only mode; in that mode the buffer size has to be a multiple of the disk sector size. 104448 is a possible value, but the other values are not multiples of 512.
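As a quick sanity check on that claim, a minimal sketch (512 bytes is a common sector size, but verify it for your actual drive):

```python
SECTOR_SIZE = 512  # typical disk sector size; not guaranteed for every drive

def is_sector_aligned(n, sector=SECTOR_SIZE):
    """Return True if a buffer size is an exact multiple of the sector size."""
    return n % sector == 0

print(is_sector_aligned(104448))  # True: 104448 == 204 * 512
print(is_sector_aligned(104450))  # False: not a multiple of 512
```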
It's working, which is good, but if you want a general application for any device you may need to look deeper into some of the inconsistencies.
12-18-2024 09:32 AM
My bad, I failed to register that the TDMS logging feature is *also* being used. I see it back there in the thread; I just didn't absorb it previously.
So definitely heed the comments from mcduff, who's answered more questions about TDMS logging around here than anyone else I'm aware of. It sounds very likely that the subsequent TDMS logging config overrides the prior buffer sizing. Since you're in the "just trying to understand" mode, you could experiment to see how buffer auto-sizing behaves when you account for disk sector size or else when you remove TDMS logging altogether.
-Kevin P
12-18-2024 10:01 AM
With more background now, I understand... why I did not understand that I had the answer in msg#11:
"Can you try a direct call to the buffer config function after setting up sample timing and before starting the task?". In C, this is a config function. In Python, it is a property you write to. Since I was looking for a function, I did not find the property until a few messages later (and after really searching for things like buffer_size, _size, input_, ...)
In fact, I had understood "buffer config function" to mean cfg_samp_clk_timing(...) (so I moved all the parameters back into that function call instead of separate calls... and it did not change anything 😉)
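For reference, a sketch of the ordering msg#11 suggested, using the nidaqmx Python API ("Dev1/ai0", the rate, and the buffer value are placeholders; this needs real or simulated NI hardware, so treat it as a config sketch rather than a runnable test):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder device/channel

    # 1) Configure sample timing first...
    task.timing.cfg_samp_clk_timing(400_000, sample_mode=AcquisitionType.CONTINUOUS)

    # 2) ...then override the input buffer via the property (the Python
    #    counterpart of the C buffer-config call), before starting the task.
    task.in_stream.input_buf_size = 392 * 512  # samples per channel

    task.start()
```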
12-18-2024 10:10 AM - edited 12-18-2024 10:16 AM
I was not using Log Only mode when I retested the default behaviour, as it required a few more program changes (I have no callback where I was dumping the info, so I am hacking in a timer somewhere to dump it).
I really used Log Only mode this time and it did not change anything. But it did allow a few corrections: the value is 14336, not 14436. Surprise, it is a multiple of 512 😉
And to be more accurate, it is not exactly rate/2, it is roughly rate/2. I rechecked at a 400 kHz sampling rate: it gives 200704 = 392 * 512 (well, why not 200704 - 512?). For 750 kHz, it is 376832.
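The observed defaults can be checked with plain arithmetic on the values reported above (512 bytes assumed as the sector size; the exact rounding rule DAQmx applies is not documented here, so this only verifies the multiples and the rough rate/2 relationship):

```python
SECTOR = 512  # assumed disk sector size

# (sample rate in Sa/s, observed default buffer size in samples)
observed_defaults = [(400_000, 200_704), (750_000, 376_832)]

for rate, buf in observed_defaults:
    sectors = buf // SECTOR
    ratio = buf / (rate / 2)  # how close the default is to rate/2
    print(f"{rate} Hz: {buf} = {sectors} * {SECTOR}, {ratio:.3f} x rate/2")
```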
12-18-2024 11:14 AM
I can say from experience that it is easy to go down the rabbit hole of buffer size, samples to download, etc. Here are some things that I have found; they are corner cases and really only apply to high streaming rates.
Below are some screenshots of a simulated instrument running at a high rate using the above two points. It has been running for the last 20 minutes without issue in Log and Read mode, but not saving the data.
The values I had were:
- Sample Rate: 10 MSa/s/ch, 8 channels
- N Samples to Read: 125952 (~12 ms of data!)
- Buffer Size: 80609280 (~8 s of data)
- File Size: 20152320 (~2 s of data)
- File Size is 1/4 of Buffer Size
- Buffer Size and File Size are both multiples of N Samples to Read and the disk sector size
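Those relationships can be verified directly (plain arithmetic on the numbers listed above; 512 bytes assumed as the sector size):

```python
RATE = 10_000_000   # samples/s per channel
N_READ = 125_952    # samples per read
BUF = 80_609_280    # input buffer, samples per channel
FILE = 20_152_320   # file size, samples per channel
SECTOR = 512        # assumed disk sector size

assert BUF % N_READ == 0 and FILE % N_READ == 0  # multiples of N Samples to Read
assert BUF % SECTOR == 0 and FILE % SECTOR == 0  # sector-aligned
assert BUF == 4 * FILE                           # File Size is 1/4 of Buffer Size

# Durations per channel: roughly 0.0126 s, 8.06 s and 2.02 s
print(N_READ / RATE, BUF / RATE, FILE / RATE)
```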
You can see that DAQmx can be efficient, as the CPU load is low.
12-20-2024 05:47 PM
Two points from me:
- N Samples to Read is what you use in the read loop in message 16? Knowing the other sizes are more in the range of seconds except this one, do you have other needs than ensuring stability and emptying the buffer on time? Again, I have been quite successful with a 1 s callback trigger and a 20 s buffer size, but my acquisition rates are far below yours.
- File write size and file size are the same thing in your comments? I found logging_file_write_size in Python; it is true that it would give some control over what the framework is doing. It could be good for me to take time to play a bit with this; it was the only thing I was not setting in Log Only mode.
Given your high acquisition rates, the CPU load is very good (well, you are not saving the data either). Or you have a 20 GHz water-cooled CPU...
12-20-2024 06:48 PM
@ft_06 wrote:
2 points for me:
- N Samples to Read is what you use in the read loop in message 16?
Yes
@ft_06 wrote:
- file write size and file size are same thing in your comments ?
No. The number of points per channel in the file is 20152320; this is the File Size. The File Write Size is the number of points for each File Write operation; it is a multiple of the disk sector size and divides evenly into the File Size. In this case it is much less than 100 ms of data. When I used ~100 ms of data, the CPU load was higher and it would sometimes give that error. This way is more bulletproof.
@ft_06 wrote:
It could be good for me to take time to play a bit with this; it was the only thing I was not setting in Log Only mode
The example I showed was "Log and Read" mode, not "Log Only" mode. The File Write Size is only valid in Log Only mode. So as a hack, I temporarily set the task to Log Only to see the allowed File Write Size values, then switch back to Log and Read mode and set the File Write Size back to its default value; otherwise you get an error.
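A sketch of that hack with the nidaqmx Python API (the channel name, rate, and file path are placeholders, and the allowed write sizes depend on your hardware, so this is illustrative only and needs real or simulated NI hardware to run):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder channel
    task.timing.cfg_samp_clk_timing(10_000_000,
                                    sample_mode=AcquisitionType.CONTINUOUS)

    # Temporarily switch to Log Only to see what write size DAQmx accepts
    task.in_stream.configure_logging("data.tdms", LoggingMode.LOG)
    default_write_size = task.in_stream.logging_file_write_size
    print("write size in Log Only mode:", default_write_size)

    # Switch back to Log and Read, restoring the default write size
    task.in_stream.configure_logging("data.tdms", LoggingMode.LOG_AND_READ)
    task.in_stream.logging_file_write_size = default_write_size
```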
@ft_06 wrote:
Again, I have been quite successful with a 1 s callback trigger and a 20 s buffer size, but my acquisition rates are far below yours.
If it works for you then don't change it. I need to provide a general solution for any NI digitizer for my colleagues. I have to ensure that it can run without problems 24/7 for up to weeks at a time. That is where all these optimizations come in. If I didn't optimize it could still run, but depending on the computer, you would eventually get that buffer error for high sample rates.
@ft_06 wrote:
given your high acquisition rates, the CPU load is very good (well you are not saving the data also). Or you have 20Ghz water-cooled CPU...
Actually, when you use the built-in logging feature of DAQmx, it barely moves the CPU meter. It does DMA transfers directly to the disk; that is why the DAQmx manual states "Log Only" mode is the most efficient: the data goes straight to disk, with no need to read it back for a UI update.
12-22-2024 05:51 PM
Thanks.
I have all the inputs if I need to tune it more; hopefully we won't run at as high an acquisition rate, or for as long, as you need 😉
And I have kept a Log Only option in our tool.
Have a good holiday