Given a circular-buffer acquisition technique, what is the best way to calculate a time for each sample? I am using the NI-DAQ functions for C++: ... SCAN_Start ... DAQ_DB_HalfReady ... DAQ_DB_Transfer ... (the dot-dot-dots being other code). I see two options. First, I can take a time sample just before SCAN_Start, then calculate the time of each sample from the sample rate and scan rate as each half buffer comes in. The problem is that I don't know how much skew there will be over the (up to) two days the acquisition runs, nor how long there is between SCAN_Start and the first sample. Second, I can take a time sample just before each DAQ_DB_HalfReady and, when it reports ready, assume this is the time of the last sample in the half buffer, then calculate backwards the time of every other sample using the sample rate and scan rate. The problems here are the complicated method of calculating the times 'backwards' for a buffer of 10 channels each with 5000 samples, and getting the time of that last sample accurately.
I realise Windows 2000 isn't real-time, but as close to real-time as Windows gets (about 10 ms), I'd still like a time for each sample. Is there a better method? Thanks,
Iain, One thing you could do is use Config_DAQ_Event_Message with DAQ Event 1. This sets up a callback every time NI-DAQ acquires a certain number of samples. In the callback, you could take a timestamp and record it. Then, once your acquisition is done, you could post-process the timestamps for the data from what you know: 1) each data point is exactly the same distance apart (hardware timing); 2) you have several timestamps, so you can use them to estimate the start time, and then you can add the dt for each sample.
I think this will be a pretty good estimate for the level of accuracy you are looking for. If you need very accurate measurements, you may consider using a PXI-6608 synchronised with a GPS receiver. In that case, you can get time measurements accurate to 300 ns (depending on your GPS receiver).