cpu usage at 100% when using high sample rate

Dear all,

 

I am using LabVIEW 8.5 to acquire data from a PCI-6251, and I want to collect data at the maximum sample rate (1,000,000 S/s). In my tests, when the sample rate is 70,000 S/s the CPU usage is only 5%, but when the sample rate rises to 80,000 S/s the CPU usage jumps to 100%. I have attached the programme; could anyone tell me whether I have missed something that causes this CPU usage problem? Thanks.

 

The PC has a 2.00 GHz CPU and 512 MB of RAM.

 

Thanks

Message 1 of 9

I just had a look at the spec sheet of the 6251, and it seems it can acquire at 1.25 MHz for a single channel and 1 MHz aggregate across multiple channels. Since you are using two channels, the maximum acquisition rate of the card for each channel will be 500 kHz.
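The per-channel limit for a multiplexed board like this can be sketched in a few lines of Python (the rate figures are taken from this discussion, not verified against the datasheet):

```python
# Rate limits for the NI 6251 as described in this thread.
SINGLE_CHANNEL_MAX = 1_250_000   # 1.25 MS/s when scanning one channel
AGGREGATE_MAX = 1_000_000        # 1 MS/s shared across multiple channels

def max_rate_per_channel(n_channels: int) -> int:
    """Maximum sample rate per channel for a multiplexed (single-ADC) board."""
    if n_channels <= 1:
        return SINGLE_CHANNEL_MAX
    # The one ADC is time-shared, so the aggregate rate is split evenly.
    return AGGREGATE_MAX // n_channels

print(max_rate_per_channel(2))  # 500000 -> 500 kS/s per channel
print(max_rate_per_channel(8))  # 125000 -> 125 kS/s per channel
```

With two channels this gives the 500 kHz per-channel figure quoted above, and with eight channels the 125 kHz figure Lyn mentions later in the thread.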

 

What do you mean by a sample rate of 70 kHz? How many samples are you reading in every iteration of DAQmx Read? And what exactly do you want to do with the data? Display all of it on a graph, as in your example code, or something else?

Adnan Zafar
Certified LabVIEW Architect
Coleman Technologies
Message 2 of 9

Thanks for your reply.

 

1. I understand the PCI card can acquire at 1 MHz aggregate, so if I use 8 channels the per-channel maximum rate is 125 kHz. The number of channels I use depends on the experiment: sometimes it is only one channel, sometimes 8. However many channels I use, I want to run at the maximum acquisition rate.

 

2. After I acquire the data, I first need to display part of it in the window; then I either save all the data to files or, after signal processing, save only the useful information. I can acquire at the maximum sample rate and save all the data to file, but the CPU usage sits at 100%, which is a little annoying. So I broke my programme apart to find which parts cost the most CPU, and found that the attached VI accounts for almost all of the CPU usage.

 

 

3. I ran tests at different sample rates on a single channel: when the sample rate is below 70 kHz it costs under 5% CPU, but once the rate is increased to 80 kHz the CPU usage suddenly hits 100%. The 70 kHz and 80 kHz figures don't mean much in themselves; I only used them to locate where the CPU usage changes.

 

I hope there is a better way than mine to do this work.

 

Lyn

Message 3 of 9

Does your VI contain a loop? Is there a delay in your loop? A loop without any delay will consume 100% CPU time. Even a delay of 0 ms will prevent 100% CPU usage.

 

- tbob

Inventor of the WORM Global
Message 4 of 9

Yes, there is definitely a delay in the while loop. I attached the code in my first post.

Message 5 of 9

Now I understand what you are asking. The reason I raised the channel-count issue was the default values of your controls: the way your VI looks right now, it seems you are reading from two channels at 1 MHz.

 

Anyway, what you are experiencing is the nature of the DAQmx driver. Since you are sampling at such high rates, the processor never goes to sleep; it yields time to other applications only when asked. Have a look at the following KB articles:

Default CPU Usage With NI-DAQmx Version 7.4
NI-DAQmx Program Causes 100% CPU Usage

 

The only thing I would change about your application is to remove the 'number of samples per channel' input at DAQmx Read. Let us know if you have other questions.

Adnan Zafar
Certified LabVIEW Architect
Coleman Technologies
Message 6 of 9

Hi there!

Do you need to use the "read all available samples" property? Since you have already set a "number of samples to read", I do not think it makes sense.

 

Setting this property to true makes each subsequent DAQmx Read pass whatever data is already in the hardware buffer to your application. This is not necessarily efficient: the moment one read completes, your application can begin another, potentially hogging your CPU. I think you would be better off setting this property to false and waiting for the HW buffer to fill before reading the data (as an additional benefit, you will also read a consistent number of samples).

 

It would be better to rely on the inherent DAQmx timing. For example, by setting the number of samples to read to 1/10 of the sample rate, you effectively limit your reads to just 10 per second.
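The timing arithmetic behind that suggestion can be sketched in plain Python (this is not DAQmx code, just the numbers; the 500 kHz rate is an assumed example):

```python
# Why reading a fixed block of rate/10 samples limits the loop to ~10
# iterations per second: each DAQmx Read blocks until the hardware buffer
# holds that many samples, so the CPU can idle between reads instead of
# spinning in a polling loop.
sample_rate = 500_000                 # S/s per channel (assumed example)
samples_per_read = sample_rate // 10  # 1/10 of the sample rate

reads_per_second = sample_rate / samples_per_read
seconds_per_read = samples_per_read / sample_rate

print(samples_per_read)   # 50000 samples handed over per DAQmx Read
print(reads_per_second)   # 10.0 loop iterations per second
print(seconds_per_read)   # 0.1 s each read can block (CPU sleeps)
```

The larger the block per read, the longer each read blocks and the less CPU the loop burns, at the cost of added latency before data appears on the graph.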

 

Thanks,

Rich Roberts
Senior Marketing Engineer, National Instruments
Connect on LinkedIn: https://www.linkedin.com/in/richard-roberts-4176a27b/
Message 7 of 9

Thanks Adnan and Rich,

 

About changing the "sample number": I tried it yesterday and it does work, but the consequence is that I get a huge number of small TDMS files (such as 22 KB, 30 KB, ...) when I save the data, which doesn't look good to me. I also tried setting the "number of samples to read" to 1/10 of the sample rate; it did not help, and the CPU usage was still at 100%.

 

If I save one second of maximum-rate data (1,000,000 S/s) in one TDMS file, the TDMS file is 1954 KB and the TDMS_INDEX file is 1 KB.

 

No matter how many channels I use, I want to run the PCI card at its maximum sample rate; I understand the maximum is 500 kHz for two channels.

 

I am off work today; I will study the inherent DAQmx timing next week and see what happens.

 

Lyn

Message 8 of 9

Hey Lyn,

 

One thing you might try as a sanity check is to run a shipping example. If your ultimate goal is to stream data to a TDMS file, there is a TDMS Logging feature that might help you achieve this easily. If you have DAQmx 9.0 or above, open the shipping example called "TDMS Streaming - Cont Log and Read Data.vi" under Hardware I/O>>DAQmx>>Analog Measurements>>Voltage in the Example Finder. For 500 kS/s, you might use (for example) 51,200 samples per channel.

 

Aside from writing raw data "under the hood" to the TDMS file (2 bytes per sample), which keeps your files small, this feature streams to disk at the fastest rate possible, generally with <5% CPU utilization. For optimum streaming performance, the "TDMS Streaming - Cont Log Data Only.vi" example shows streaming to disk without displaying a graph; in this mode, you could theoretically stream to disk at a throughput of over 2 GB/s.
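The "2 bytes per sample" figure lines up with the file size Lyn reported earlier in the thread; a quick Python sanity check:

```python
# Raw 16-bit (I16) samples occupy 2 bytes each, so one second at 1 MS/s
# on a single channel should produce roughly 2,000,000 bytes of data.
sample_rate = 1_000_000      # S/s (the card's aggregate maximum)
bytes_per_sample = 2         # raw I16 data, as logged by TDMS streaming
seconds = 1

size_bytes = sample_rate * seconds * bytes_per_sample
size_kib = size_bytes / 1024

print(size_bytes)        # 2000000 bytes of raw sample data
print(round(size_kib))   # 1953 -- close to the 1954 KB file Lyn observed
```

The ~1 KB difference is plausibly TDMS header and metadata overhead, which is why the raw-data logging mode keeps files so compact.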

 

Let me know if you have any questions on this feature.

Thanks,

Andy McRorie
NI R&D
Message 9 of 9