I only had time to skim, but at a glance the consumer-side averaging code looks like it's the same in both cases, so I can't immediately point to any specific problem.
I'd encourage you to spend time tidying wires, making subVIs, etc. so you can more easily see and compare the two. Again, I have every reason to trust that LabVIEW's math and averaging functions work properly, so the problems you've observed must lie elsewhere in your code. Exactly what and where will probably become easier to troubleshoot after some significant tidying up. My recommendations on the biggest things to focus on: keep wires horizontal without bends where possible, avoid unnecessary wire crossings (especially between wires of the same type), and use subVIs (for example, each entire consumer loop could be a subVI).
I spent some time rewiring and reorganizing and did some testing. This time I also tried some other PCs connected to similar A/D converters (PCI-6221), and there the noise was independent of the averaging mechanism. The reason I tried other computers was that I had added some DAQmx property nodes and these did not work properly on my usual test system (daqmx.rc was missing). In the end, I deleted these property nodes and tried again on the usual computer: the strange behaviour still exists there. I then uninstalled LabVIEW and reinstalled a runtime engine plus the DAQmx 9.3 driver. Unfortunately, this did not solve the problem either; I am experiencing the same problems as in the beginning.
To sum up, the problems I described now persist only on this specific computer and are not present on the other ones. The main difference between the computers is that the one I usually used for testing is rather old and slow, but I have no idea how that could influence the problem described.
However, I still have two questions related to the data acquisition:
1. I checked the size of the buffer for the continuous task via the property SampQuant.SampPerChan and only got 1000 for a sampling rate of 10 kHz. According to this information http://digital.ni.com/public.nsf/allkb/E1E67695E76BA75B86256DB1004E9B07, it should be 100,000 by default (I did not change the buffer at all). Should I increase it?
2. I changed the code a little and now convert the time stamp to a double. For a sampling rate of 20 kHz, the reported actual sampling rate is also 20 kHz. Nevertheless, the time spacing between the data points is not exactly what I expect from the sampling rate: e.g., instead of 0.05 s it is 0.0500001907348633 s, and for the third data point it is already 0.150000095367432 s. How can this happen? I just take the t0 of the first waveform in the waveform array and convert it to a double. If the data is averaged further, I add time according to the actual sampling rate, but this behaviour is also observed when I just save the time stamps.
Thank you for your support!
Not sure how to help with the difference in behavior on different systems. It continues to seem highly unlikely that the averaging calculations are contributing to the problem. At some point, I'm sure you'll find that the problem is either upstream or downstream of the simple math needed to calculate averages.
1. The property SampQuant.SampPerChan doesn't seem to report back the actual buffer size. There is a DAQmx buffer property node that can be used to set or query the actual exact buffer size. BTW, the link you shared was ambiguous about the default buffer size for an exactly 10 kHz sample rate -- it could be 10k samples or it could be 100k samples. A quick test at this end where I queried the DAQmx buffer property node showed that DAQmx chose 10k samples. When I requested 10001 Hz sampling, DAQmx chose 100k samples.
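For what it's worth, the default-buffer logic described in that KB article can be sketched roughly as follows (Python just for illustration; the breakpoints are my reading of the linked table, and exactly 10 kHz is the ambiguous boundary case, so the boundary handling below simply matches what I observed DAQmx actually choose):

```python
def default_buffer_size(rate_hz):
    """Rough sketch of the DAQmx default input buffer size (samples per
    channel) per the linked KB table. Breakpoints are my reading of that
    article; the <=10 kHz boundary matches observed DAQmx behaviour
    (10 kS at exactly 10 kHz, 100 kS at 10001 Hz)."""
    if rate_hz <= 100:
        return 1_000
    elif rate_hz <= 10_000:
        return 10_000
    elif rate_hz <= 1_000_000:
        return 100_000
    else:
        return 1_000_000

print(default_buffer_size(10_000))  # 10000 -- what DAQmx chose here
print(default_buffer_size(10_001))  # 100000 -- what DAQmx chose at 10001 Hz
```

Either way, querying the DAQmx buffer property node is the authoritative check, not this table.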
2. At a certain point, those t0 timestamps need to be understood as a convenient fiction. They are good faith estimates that are often approximately true and often close enough to actually true that they're useful. But they are not, by their very nature, the last word on timing.
Odds are, you're simply seeing the limits of the precision available in the datatype. With a floating-point double, you've got times on the order of 10^9 and you're seeing discrepancies on the order of 10^(-7). That's right in line with the precision limits of a floating-point double.
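You can check the scale of the effect directly. A quick sketch (in Python rather than LabVIEW, purely to illustrate the floating-point behaviour, with a made-up epoch-scale timestamp): near a time value on the order of 10^9 seconds, the gap between adjacent representable doubles is a few hundred nanoseconds, which reproduces exactly the kind of deviation you reported.

```python
import math

# A timestamp converted to double is seconds since an epoch, so "now"
# is a number on the order of 1e9. Near that magnitude, one ULP (the
# gap between adjacent representable doubles) is about half a microsecond.
t0 = 3.7e9                 # illustrative epoch-scale timestamp
print(math.ulp(t0))        # 2**-21, about 4.77e-07 s

# Adding a 0.05 s sample interval therefore gets rounded to the
# nearest representable double:
dt = 0.05
print((t0 + dt) - t0)      # 0.05000019073486328, not exactly 0.05
```

Note that 0.05000019073486328 matches the 0.0500001907348633 s spacing you observed, to within display rounding.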
It'll be better to calculate your own relative time based on querying the task for the actual sample rate and keeping track of your sample number. Then the ~15-16 decimal digits of precision will extend down into the sub-nanosecond realm.
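A minimal sketch of that approach (again Python for illustration, not the DAQmx API; the rate and sample-count values are assumptions, and in LabVIEW you'd query the task's actual sample clock rate and keep a running count of samples read):

```python
import math

# Relative time from sample count and the task's actual sample rate.
# Because relative times stay small, the double keeps full precision:
# near t = 150 s, one ULP is ~2.8e-14 s, well below a nanosecond.
actual_rate = 20_000.0        # assumed: queried from the task
samples_read = 3_000_000      # assumed: running count of samples consumed

t_rel = samples_read / actual_rate
print(t_rel)                  # 150.0, exact
print(math.ulp(t_rel))        # ~2.8e-14 s resolution at this magnitude
```

Add the original t0 back in only at display time, if you need absolute timestamps at all.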