Multifunction DAQ


Desperate: What's the max speed of a NON-simultaneous DAQ? (PCI-MIO-16E-1 (6070E))

Solved!

I took the PCI-MIO-16E-1 that I have and put it in my Pentium 4, 2.8 GHz single-core machine.  Then I used the "Continuous Acquire & Graph Voltage - Internal Clock" example in LabVIEW (there is an equivalent in C: ContAcq-IntClk.c).  I was able to run two channels at 625 kS/s each for over 18 hours by setting the "Samples to Read" property to 100,000 samples.  Setting a higher number for "Samples to Read" gives you a better software buffer to work with.

It looks like our Test Panels code isn't as optimized for multiple channels as it could be.  Running in an actual ADE, with an appropriately high number of Samples to Read, should give you better results.  The latest DAQ drivers should be just as efficient with this card, or probably more so.

If you want to do data logging, I would recommend DAQmx 9.0 or 9.0.2 for the TDMS streaming feature, which allows you to DMA your data straight from the card's on-board memory to a TDMS file on your hard drive, bypassing user-mode memory.  It's extremely CPU efficient.  There is also an option to place a copy in user-mode memory for display in your application.

 

Regards,

 

 

Seth B.
Principal Test Engineer | National Instruments
Certified LabVIEW Architect
Certified TestStand Architect
Message 11 of 23

Just to add to what Seth wrote: while playing around with a simulated setup, I read the DAQmx input buffer size property (DAQmxGetBufInputBufSize).  This was set to 100,000 samples per channel by default.  With this setting, data is DMAed fast enough to fill your buffer every 0.16 s.  If your application (or OS) hiccups for longer than this, i.e., your DAQmx read function is held off for longer than this, then you will overwrite data in the buffer.

Seth's suggestion was to read more samples from that buffer at a time.  Doing fewer, larger reads is often more efficient than many smaller reads, as Seth's results showed.  One other suggestion I would offer is to have DAQmx create a bigger software buffer for you.  You can do this using DAQmxSetBufInputBufSize.  This function takes a task handle and a uInt32 that specifies the requested buffer size in units of samples per channel.  If you set this to 1.25M, then your buffer would be large enough to accommodate two seconds' worth of data (assuming a 625 kS/s rate), so even if your application or OS were to hiccup briefly, there would be a good chance that there was enough room in the buffer to handle it gracefully.

 

I hope this helps,
Dan

Message 12 of 23

OK. First: thank you so much for proving that this card can acquire 2 channels at 625 kHz each! I was settling into thinking that I'd have to make two passes over my test, and keep looking for an affordable, more advanced card that could sample the inputs simultaneously, to upgrade to down the road.

 

Now, I've been playing a little, trying to translate Seth's advice about configuring LabVIEW into C. I tried increasing my "Samples to Read" by increasing the "sampsPerChanToAcquire" in my DAQmxCfgSampClkTiming call. It had been 2.5M. I set it to 100M.

 

DAQmxCfgSampClkTiming(taskHandle, "", 625000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 100000000);

 

I would have thought that 2.5M was well above the 100K that Seth suggested (which makes me question whether I'm setting the right thing), but the result is that I can now grab these samples. However, the program just... stops after 8-12 seconds. It's never the same, and it's not throwing an error, so I don't know what's wrong. All I know is that, as I kept increasing the buffer, I got it to take a little more data. I tried to go to 200M, and the library complained that the buffer was too big. (Fair enough!)

 

PLEASE keep bearing with me; I feel I'm so close. Questions:

 

1) Is this really the right way to call this function for what I want to do? Does this call to the sample clock give me 625 kHz _per channel_, or should I be setting the actual clock to 1.25 MHz and letting the driver figure out how to divide this between the two? (I'm starting to suspect that much of my testing has been with this value set higher than it should be, but I've been back and forth with it so much, I've lost track.)

 

2) Per the ANSI C documentation, under continuous-sampling mode, that setting ought to be configuring my input buffer, right? Do I need to make the extra call to DAQmxSetBufInputBufSize? (If so, I need to upgrade to DAQmx 9.0 from 7.5, because the call doesn't appear in 7.5's docs.)

 

I'm gonna throw in my code again, just for good measure. It's just a slightly-modified copy of the ContAcq-IntClk example. It seems that my only real variables are the call to set the sample clock, and how I read and write the data back out. (I have to write out 16-bit integers for my purposes.)

Message 13 of 23
Solution
Accepted by David Krider

Hey David,

 

1) Yes.  The driver will divide down for you.  To verify this behavior, notice that if you try to increase this rate, you should get an error back.

2) You are correct in that the DAQmxCfgSampClkTiming function's parameter is used as part of an algorithm to configure a default buffer size.  You can use the set function to specify a certain buffer size if you want to override the default selection.

 

Looking at your code, it looks like you are passing -1 for the number-of-samples parameter in DAQmxReadAnalogF64.  When you specify -1, it means to read whatever is currently available in the buffer.  The problem with that is that the call returns very quickly with a small data set, thereby requiring more read calls to read all of your data.  The increased number of read calls carries fairly considerable overhead.

 

As Seth mentioned, this is where you would want to set a specific number of samples to read each time.

DAQmxReadAnalogF64(taskHandle, -1, 10.0, DAQmx_Val_GroupByScanNumber,
            data, sizeof(data), &read, NULL)

 

If you change the -1 to something larger (like 100,000), this should have a positive effect on your throughput by reducing the number of times Read is called (and therefore the amount of overhead you incur inside the loop).  Note as well that writing data to a file inside the loop increases the time between reads and therefore might affect performance, as opposed to a multithreaded approach with a queue (so data can be written to disk while data is being acquired into memory).  That being said, at these rates, you should be able to keep up without running the operations in parallel.

 

As well, I understand the logic in downgrading your DAQmx version (since the card was released a while back), but I would recommend upgrading, since we have continuously made performance improvements over the years.  As a side note, we still test with your board internally; in fact, I have one of these in my machine (using DAQmx 9.0.2).

 

Finally, as Seth mentioned, a high-speed logging feature exists as of DAQmx 9.0 that might help you greatly here.  I notice that you're logging to a binary file, down-converting to a 2-byte representation.  I'm not sure about everything in your application, but if performance is your goal, note that TDMS is an open binary file format, and with this feature 2 bytes per sample are written to disk.

 

Hopefully this helps with your performance problems.

 

Message Edited by AndrewMc on 10-13-2009 12:35 PM
Thanks,

Andy McRorie
NI R&D
Message 14 of 23

Forgive me, but I'm still just a tad foggy on this one point: for acquiring 2 channels at 625 kHz each, should I set the sample clock (in the C call) to 1250000, or 625000?

 

I'm reinstalling 9.0 right now.

 

You and Seth both mentioned the high-speed logging thing, and I'm completely unfamiliar. I'll check it out.

Message 15 of 23

You would specify 625 kHz as your rate.  Notice that if you tried to request 700 kHz (for example), you would receive an error.

 

Here's a link for the high speed logging feature:

http://zone.ni.com/devzone/cda/tut/p/id/9574

 

Basically, all you need to do is call DAQmxConfigureLogging at some point before you start the task.  Then everything that you read will be logged to a file.  To read your TDMS file after logging, you can use National Instruments software like DIAdem, LabVIEW, LabWindows/CVI, or Measurement Studio.  We also provide free interfaces for third-party applications like Excel: http://zone.ni.com/devzone/cda/tut/p/id/9341
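A minimal C sketch of the sequence Andy describes (hedged: this requires the NI-DAQmx 9.0+ driver and NIDAQmx.h, so it is not runnable standalone; error handling is omitted, and the file path and group name are placeholders of my choosing, not values from the thread):

```c
#include <NIDAQmx.h>

/* Sketch: enable DAQmx 9.0+ TDMS logging on an already-configured task.
   DAQmx_Val_LogAndRead logs to disk AND returns data to the application;
   DAQmx_Val_Log streams to disk only, for the lowest CPU use. */
void enable_logging(TaskHandle taskHandle)
{
    DAQmxConfigureLogging(taskHandle,
                          "C:\\data\\acq.tdms",   /* placeholder path */
                          DAQmx_Val_LogAndRead,
                          "AcquiredData",          /* placeholder TDMS group name */
                          DAQmx_Val_OpenOrCreate);
    DAQmxStartTask(taskHandle);  /* configure logging before starting the task */
}
```

With DAQmx_Val_LogAndRead, the existing DAQmxReadAnalogF64 loop keeps working unchanged while the driver streams the same data to the TDMS file.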

Message Edited by AndrewMc on 10-13-2009 12:55 PM
Thanks,

Andy McRorie
NI R&D
Message 16 of 23

David,

 

As Andy mentioned, the sample rate is used to determine the default buffer size if you don't provide one.  I looked through the code that does this and found the following: for rates between 10 kHz and 1 MHz, we set the default buffer size to 100,000 samples per channel; for any rate greater than 1 MHz, we set it to 1,000,000 samples per channel.

I think this explains why you were able to get things working with one channel.  With only one channel (1.25 MHz sample rate), DAQmx was choosing a 1,000,000-sample buffer.  With two channels, however, your sample rate drops to 625 kHz, and by the logic above the default buffer size drops to 100,000 samples (which is consistent with the data I saw when I called DAQmxGetBufInputBufSize).

 

Reading the documentation for DAQmxCfgSampClkTiming, it says that sampsPerChanToAcquire is used to determine the buffer size when the mode is set to DAQmx_Val_ContSamps.  I'm not sure that is true, but I am sure that you can set the buffer size using DAQmxSetBufInputBufSize.

 

Hope that helps,
Dan

Message 17 of 23

Correction to my last post: we will use the sampsPerChanToAcquire input to adjust the buffer size, but only if it is greater than the default value picked.  Sorry for any confusion.

 

Dan

Message 18 of 23

OK, here's where I'm at:

 

DAQmxCfgSampClkTiming(taskHandle, "", 625000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1250000);

DAQmxReadAnalogF64(taskHandle, 62500, 10.0, DAQmx_Val_GroupByScanNumber, data, sizeof(data), &read, NULL);

 

As I understand this (and for those playing the home version), this gives me 2 channels of input at 625 kHz, with an "internal" buffer big enough for two seconds' worth of data per channel (1,250,000 samples at 625 kS/s). Out of this buffer, I read 62,500 samples per iteration of my read loop, and do something with the data.

 

Now, even though that's lower than the 100,000 that was suggested, it's still WELL above the ~100 samples per iteration that the machine was doing when I had "-1" as my numSampsPerChan, going as fast as this machine can. What I've found is that I couldn't make my data[] array larger than 125,000 elements, so I'm constrained to read in at half that. I don't recall this being a problem while I was trying this under DAQmx 8.0.1 under Linux, so I think this must be a VC++ Express thing.

 

The good news is that THIS IS WORKING! It runs for several minutes now. (It only needs to run for about 6 or 7 minutes at a time.) I'll have to verify I'm getting good data, but all the code seems to be in order now, and I think I'm starting to understand the relationship between all the critical variables.

 

So, once again, HUGE thanks to those who have commented here. I was serious about hitting someone's tip jar for the help. If it makes anyone feel better, this is a job I'm doing on my own time for a charity (though it doesn't feel like it, since I'm trying to keep up with your hints while at work), so I really can't express how much I appreciate not needing to either buy a more expensive card or sign up for a support contract.

 

This is just the first step: now I get to move on to actually controlling the hardware back and forth through some outputs. Also, I read a little on the fast-logging thing. It seems meant for LabVIEW; I don't know how easy it will be to get it going in C. Plus, I'll probably move back to Linux now that I know what I'm doing, and that seems to be a 9.x-or-better feature. So I'll probably look at going back to using the newer C example (with callbacks) and threading out the disk writing, as suggested, just to do it "correctly." 😉
Message 19 of 23

I'm glad we could be of help, David.  Thanks for the tip offer, but just put it toward the charity stuff and we'll call it good. 🙂  Feel free to post back if you have any further questions.

 

Regards,

Seth B.
Principal Test Engineer | National Instruments
Certified LabVIEW Architect
Certified TestStand Architect
Message 20 of 23