Multifunction DAQ


syncing AO and AI in real time

Hello,

I have the following setup (I'm using Visual Studio 5 with C#; my card is the NI M Series PCIe-6259). I'm outputting the contents of a file through one AO channel on my card (using an async write function) and at the same time acquiring input from 16 AI channels. The data acquired from the AI channels should change according to the output data, and I would like to understand this change by comparing the two (the AI data and the AO data). For that reason, I must be sure that the acquired data is properly synced with the AO. I am worried about the two being out of sync, possibly due to one or the other (AO or AI) "dropping frames" as it were, e.g. the input skipping over a buffer of data somewhere along the line. I want everything to be as close to real time as possible, and I want to lose as little data as possible. Most importantly, I need the measure of time according to the AI and the measure of time according to the AO to be the same (synchronization).

I hope that is all clear. Does anyone have ideas about how best to go about this? Right now I just have two separate tasks, one for AO and one for AI. Each runs an async operation, then calls itself again, so they are essentially running independently. Initially, one's start (Task.Start()) is triggered by the other, but that doesn't really matter: I don't care how long before the AO task the AI task starts, as long as they are synced once both are running.

What would sharing sample clocks accomplish? I also had the idea of generating some kind of counter signal (a digital spike), synced to the AO data, which I could somehow record in the data file alongside the AI data, basically keeping time. Maybe I could use the internal counters and clocks on the card for this. I think I could just hook up a (counter, or DO?) output channel to an input channel and get something to that effect. Might that be useful? And is there a way to do it without having to run a wire from output to input like that, i.e. a route internal to the card?

Any help and other ideas would be very greatly appreciated. If you think you might have some help but are unsure about the details I have laid out, please ask me to clarify! I really do need some help.

Also, would my problems be easier to solve if I was using the MHDDK, and programming at the register level?

Thanks so much!
0 Kudos
Message 1 of 15
(6,097 Views)
Hello scorpsjl,
 
From your description, it seems that you need to synchronize your AI and AO.  This can easily be done by sharing a sample clock between the two tasks.  The only thing to be aware of is that since your AI circuitry has only one analog-to-digital converter, the channels will be multiplexed.  Only the first AI channel in your scan list will be exactly synchronized with your AO; the convert clock rate governs how quickly the subsequent AI channels are multiplexed.  This KnowledgeBase contains useful information about how the convert clock rate is determined and how to change it if necessary.  At slower sample clock rates, the convert clock runs fast enough that the data is nearly simultaneous.
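For reference, a minimal sketch of sharing the AO sample clock with an AI task using the NI-DAQmx .NET API might look like the following. The device name "Dev1", the rates, buffer sizes, and terminal name are placeholders; check the actual terminal names for your device in MAX:

```csharp
// Sketch only: assumes the NI-DAQmx .NET API (NationalInstruments.DAQmx)
// and a device named "Dev1". Not runnable without DAQ hardware.
using NationalInstruments.DAQmx;

Task aoTask = new Task("AO");
aoTask.AOChannels.CreateVoltageChannel("Dev1/ao0", "",
    -10.0, 10.0, AOVoltageUnits.Volts);
// AO owns the sample clock (internal timing).
aoTask.Timing.ConfigureSampleClock("", 44000.0,
    SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 1000);

Task aiTask = new Task("AI");
aiTask.AIChannels.CreateVoltageChannel("Dev1/ai0:15", "",
    AITerminalConfiguration.Differential, -10.0, 10.0, AIVoltageUnits.Volts);
// Drive the AI task from the AO sample clock terminal so both tasks share one clock.
aiTask.Timing.ConfigureSampleClock("/Dev1/ao/SampleClock", 44000.0,
    SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 1000);

// Start AI first: it sits idle until AO's clock starts ticking, so no samples are missed.
aiTask.Start();
aoTask.Start();
```

Starting the AI task before the AO task is what makes the scheme safe: the AI hardware simply waits for the first AO clock edge.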
 
As for the idea about the counter, I think synchronizing the AI and AO directly is more useful.  You either need the AI to start before the AO (provided you know exactly when the AO starts), or you need them to start simultaneously.  If you use the AO sample clock for both tasks, the AI will start when the AO starts and they will be synchronized.
 
Let us know if you have additional questions.
 
Regards,
Laura
0 Kudos
Message 2 of 15
(6,073 Views)
Thanks Laura.

I've gotten my program to work with a shared sample clock; I'll debug it later and hopefully it offers sufficient performance. Is an error always thrown when the card skips over / misses samples? That would ensure I'm not silently losing anything. I know one sometimes is.

However, my problem is that the two tasks I'm interested in, AI and AO, need to operate at different frequencies. Specifically, I want my AO channel running at about 44 kHz (44,000 samples/second) and my 16 AI channels sampling at around 10 kHz (10,000 samples/second). How can I manage this? Is there some way to set up an internal clock running at the least common multiple of the two frequencies and then have each task sample after a certain number of ticks of that clock? Alternatively, since my AI frequency doesn't have to be exactly 10 kHz (it could be 11 kHz, say), I could set the AO frequency to 44 kHz and then use its sample clock for the AI. However, I would still need to make the AI sample on only every fourth beat of that clock. Is there a way to do this?

Also, I'm curious about the exact mechanisms at work. The sample clock, I know, is a hardware mechanism. Presumably at each tick of the clock one input sample is acquired and one output sample is generated, via an ADC or DAC respectively. Then what happens? How are buffers involved, how is the operating system involved, and might this slow things down? In particular, since I assume not everything can happen "at once" at each beat, in what order do these things happen?

As for the multiplexing and inter-channel delay: as I've said, I plan to sample at around 10 kS/s on each of 16 channels. I'll look at the KnowledgeBase page you've cited, but for now, can you tell me roughly what interchannel delay I should expect? Again, my card is the M Series 6259, PCI Express.

Thanks a lot! If you can offer any advice, especially in response to the second paragraph where I actually have a programming question, it would be appreciated. I also have two more grab-bag questions, the first being very important to my application:

-What is the best way to create a graph with constant refresh of the incoming data (in C#)? I don't have Measurement Studio.

What is the difference between async read/write and plain read/write when it comes to AO and AI tasks? Presumably async lets you call a function at the end of each cycle, but how is this implemented?

thanks again.
-scorpsjl
0 Kudos
Message 3 of 15
(6,059 Views)

Hello scorpsjl,

 

Let me see if I can address your questions.

Is there always an error thrown when the card skips over / misses samples?

 

With a hardware-timed operation using a sample clock, the device will acquire or generate exactly one sample per sample clock tick.  The only way something can be skipped or missed is if the buffer fills up before your program reads out all the samples; if that happens, you will get a buffer overflow error.  With a generation, if you are operating in a mode where the data in the buffer cannot be regenerated, you will get a buffer underflow error if the device reaches a sample your program has not yet written to the buffer.


Specifically, I want my AO channel operating at about 44 khz (44,000 samples / second), and want my 16 AI channels sampling at around 10,000 samples a second (10 khz). How can I manage this?

 

The AI and AO sample clocks are both derived from the same master timebase on the device.  You can start both tasks with a start trigger so that they begin simultaneously, and then each task's rate is derived from that master timebase.

 

Is there some way to set up an internal clock running at a least common multiple of the two frequencies, and then have each task sample after a certain number of ticks of this clock?

 

You could do this using counters.  They would need to be set up for a pulse generation task: after so many ticks of the source, a pulse is generated, and that pulse is used as the sample clock for AI or AO.  You would need at least two counters, both using the same source.  This is essentially the same idea as starting the AI and AO at the same time, since the same master timebase is used.

 

Alternatively, since my AI frequency doesn't have to be exactly 10 khz (e.g. it could be 11 khz),  I could set the AO frequency to 44khz and then  use its sample clock for the AI. However, I would still need to make the AI operate at only every 4 beats of sample clock. Is there a way to do this?

 

Yes, you would use the AO sample clock as the source for a counter and set up a pulse generation task that outputs a pulse every 4 ticks of the source.
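A rough C# sketch of that divide-by-4 counter with the NI-DAQmx .NET API follows. The device name "Dev1" and the terminal names ("/Dev1/ao/SampleClock", "/Dev1/Ctr0InternalOutput") are assumptions; verify the routing names for your device in MAX:

```csharp
// Sketch only: assumes the NI-DAQmx .NET API and a device named "Dev1".
// Not runnable without DAQ hardware.
using NationalInstruments.DAQmx;

// Counter task: one output period per 4 ticks of the AO sample clock
// (2 ticks low + 2 ticks high), i.e. 44 kHz / 4 = 11 kHz.
Task ctrTask = new Task("Divider");
ctrTask.COChannels.CreatePulseChannelTicks("Dev1/ctr0", "",
    "/Dev1/ao/SampleClock",   // source of the ticks (assumed terminal name)
    COPulseIdleState.Low,
    0,                        // initial delay, in ticks
    2,                        // low ticks
    2);                       // high ticks
ctrTask.Timing.ConfigureImplicit(SampleQuantityMode.ContinuousSamples);

// AI task clocked by the counter's internal output terminal.
Task aiTask = new Task("AI");
aiTask.AIChannels.CreateVoltageChannel("Dev1/ai0:15", "",
    AITerminalConfiguration.Differential, -10.0, 10.0, AIVoltageUnits.Volts);
aiTask.Timing.ConfigureSampleClock("/Dev1/Ctr0InternalOutput", 11000.0,
    SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 1000);

aiTask.Start();
ctrTask.Start();
```

Because the counter's source is the AO sample clock itself, every AI sample lands exactly on an AO sample edge, which is the synchronization you asked for.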

Also, I'm curious as to the exact mechanisms at work. THe sample clock i know is a hardware mechanism. Presumably at each tick of the clock, one input and one output sample is acquired or output, via an ADC or DAC respectively. Then what happens? How are buffers involved, how is the operating system involved, and might this slow things down? In particular, since I'm assuming not everything can happen "at once" at each beat, in what order do these things happen?

 

The samples are transferred through the onboard FIFO and then into the PC buffer.  They are pulled from the PC buffer into your application when you call a DAQmx Read function.  This process is system dependent, but at your rates I do not foresee any problems.  I don't really understand your question about what order things happen in.

As for the multiplexing and inter-channel delay, as I've said, I plan to sample around 10kS/s * 16 channels. I'll look at the knowledge base page you've cited, but for now, can you tell me around what I should expect for interchannel delay? Again, my card is M-series 6259, pci-express.

 

6.25 microseconds.  At slower rates, the interchannel delay for this device is 11 microseconds (1 µs for the device plus 10 µs that the driver adds).  Once you have more channels than that convert rate can support, the convert clock rate becomes the sample rate × the number of channels, which is the range you are in.
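That 6.25 µs figure is just the reciprocal of the aggregate conversion rate, as a quick check shows:

```csharp
// Worked check of the convert-clock period at 10 kS/s per channel, 16 channels.
using System;

double sampleRate = 10000.0;                       // per-channel AI rate, S/s
int channelCount = 16;
double aggregateRate = sampleRate * channelCount;  // 160,000 conversions/s total
double convertPeriodUs = 1e6 / aggregateRate;      // microseconds between conversions
Console.WriteLine(convertPeriodUs);                // prints 6.25
```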

 

0 Kudos
Message 4 of 15
(6,040 Views)
-What is the best way to create a graph with constant refresh of the incoming data (In C#). I don't have Measurement studio.

 

This is not a question I know the answer to.  Perhaps another user can comment.

what is the difference between async read/write and just read/write when it comes to AO and AI tasks? Presumably async allows you to call a function at the end of each cycle, but how is this implemented?

 

An asynchronous read or write returns immediately and invokes your callback when the data is ready to be read or written.  A synchronous read or write blocks execution of the rest of your program.  For example, if you call a read function synchronously and ask for 100 samples, it blocks until all 100 samples have been read.
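As an illustration (a sketch, not official NI example code), a continuous asynchronous read with the .NET API typically chains Begin/End calls through a callback. The chunk size of 1000 samples and the task setup are assumptions:

```csharp
// Sketch only: assumes the NI-DAQmx .NET API and an AI task already
// configured for continuous sampling. Not runnable without DAQ hardware.
using System;
using NationalInstruments.DAQmx;

class ContinuousReader
{
    private readonly AnalogMultiChannelReader reader;

    public ContinuousReader(Task aiTask)
    {
        reader = new AnalogMultiChannelReader(aiTask.Stream);
        // Marshal callbacks back to the creating thread (e.g. the UI thread).
        reader.SynchronizeCallbacks = true;
    }

    public void Start()
    {
        // Returns immediately; OnRead fires when 1000 samples/channel are available.
        reader.BeginReadMultiSample(1000, OnRead, null);
    }

    private void OnRead(IAsyncResult ar)
    {
        double[,] data = reader.EndReadMultiSample(ar); // [channel, sample]
        // ... process or plot the chunk here ...
        reader.BeginReadMultiSample(1000, OnRead, null); // queue the next read right away
    }
}
```

Re-issuing BeginReadMultiSample from inside the callback is what keeps the PC buffer drained; if the callback ever stalls, the buffer eventually overflows.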

 

I hope this clarifies things for you.

 

Regards,

Laura


0 Kudos
Message 5 of 15
(6,037 Views)
Hello,

Thanks to Laura for her response! So, if I want one task to run at rate X and the other to sync to it but run at X/4 (operating once every four clock ticks), is there a simple way to do it, or do I need to use counters? And if I do, could you tell me the simplest counter setup? Thanks!

Finally, I really need to solve this graphing problem. I need to graph the data as it comes in. I have written a graphing module in C#, but it doesn't run fast enough and eventually I get a buffer error. How can I get around this!?
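One common pattern for this (a general C# suggestion, not something from DAQmx) is to decouple acquisition from drawing: the DAQ read callback only appends data to a queue, and a UI timer drains the queue, decimates, and redraws, so slow painting can never stall the reads and overflow the driver buffer. A minimal sketch:

```csharp
// Sketch of the producer-consumer split between the DAQ callback and the UI.
using System.Collections.Generic;

class PlotBuffer
{
    private readonly Queue<double[]> queue = new Queue<double[]>();
    private readonly object gate = new object();

    // Called from the DAQ read callback: cheap, never blocks on drawing.
    public void Enqueue(double[] chunk)
    {
        lock (gate) { queue.Enqueue(chunk); }
    }

    // Called from a UI timer (e.g. every 50 ms): drain everything queued so far,
    // then decimate and redraw from the returned chunks.
    public List<double[]> DrainAll()
    {
        var chunks = new List<double[]>();
        lock (gate)
        {
            while (queue.Count > 0) chunks.Add(queue.Dequeue());
        }
        return chunks;
    }
}
```

The key design choice is that the redraw rate (a few tens of Hz is plenty for the eye) is completely independent of the acquisition rate.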

Thanks again,
scorpsjl

0 Kudos
Message 6 of 15
(5,991 Views)

Hi scorpsjl,

The AI and AO sample clocks are both derived from the same master timebase on the device.  You can start both tasks with a start trigger so that they begin simultaneously, with each rate derived from that master timebase.  I would recommend this method for your synchronization: it is the easiest to implement, and because the same master timebase drives both tasks, they stay synchronized as long as they start at the same time.

 

If you would prefer a method that uses a counter, you can do that too.  The counter would need to be set up for a pulse generation (ticks) task, specifying two ticks high and two ticks low, so that the output level toggles every two ticks of the source.  The source for the pulse generation task would be the sample clock of one analog task, and the counter output would serve as the sample clock for the other task.

 

Regards,
Laura

0 Kudos
Message 7 of 15
(5,969 Views)
Hi Laura (or anyone else),

I'm trying to synchronize the tasks using a start trigger; however, I can't seem to figure out how. Can I set one task's start trigger to be the start of the other task? I see I can use an analog edge trigger, but that seems unnecessarily imprecise. I'd just like one task to start as soon as the other does.

Thanks a lot for any help.

-Sam

0 Kudos
Message 8 of 15
(5,934 Views)
Take a look at the following example:
 
C:\Program Files\National Instruments\LabVIEW 8.2\examples\DAQmx\Synchronization\Multi-Function.llb\multi-function sync-AI-AO
 
It ships with all versions of LabVIEW, not just 8.2.  In the Synchronization folder there are several other examples that could help you get started.
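Translated to C# (a sketch against the NI-DAQmx .NET API, not an official example), the same trigger routing as that LabVIEW example looks roughly like this; the terminal name "/Dev1/ai/StartTrigger" is an assumption for your device:

```csharp
// Sketch only: assumes the NI-DAQmx .NET API, a device named "Dev1", and two
// tasks ("aiTask", "aoTask") already configured with their own sample clocks.
// Not runnable without DAQ hardware.
using NationalInstruments.DAQmx;

static void StartSynchronized(Task aiTask, Task aoTask)
{
    // Make AO wait on the AI task's internal start-trigger signal.
    aoTask.Triggers.StartTrigger.ConfigureDigitalEdgeTrigger(
        "/Dev1/ai/StartTrigger", DigitalEdgeStartTriggerEdge.Rising);

    aoTask.Start(); // armed, waiting for the trigger
    aiTask.Start(); // emits ai/StartTrigger; both tasks begin on the same edge
}
```

No external wiring is needed: the ai/StartTrigger signal is routed internally by the driver.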
0 Kudos
Message 9 of 15
(5,925 Views)
I will take a look at those examples. However, they are in LabVIEW, and I'm writing in C# in Visual Studio. I can probably translate between the two to some extent. Still, do you have any immediate advice on this syncing in C#, or any C# example files?

Thanks,
scorpsjl
0 Kudos
Message 10 of 15
(5,920 Views)