
Benchmarking PXIe based Data Acquisition


@Kevin_Price wrote:

I've been away from RT for a while, but I expect the more memory-safe, fixed-size RT FIFO might be a viable alternative to a standard queue?

 

I seem to recall Ben posting about filling (and then emptying) the RTFIFO one time during initialization.  Thereafter, you can be assured that no new memory allocations will need to happen.  (Or maybe he did this with regular queues, and the same assurance is inherently built into RTFIFOs when you create one?).

 

I'm just offering some breadcrumbs here; others with more recent RT experience can hopefully expand or correct as needed.

 

 

-Kevin P


Yes, that worked before RT FIFOs were invented, and it helped avoid jitter when things spun up. A similar approach can be used for writing to file (pre-write and then over-write).

 

But normal unbounded queues work fine after they get initialized ... provided they are emptied as fast as they are filled. If the consumer can't keep up, it does not matter which flavor of queue we use: it is going to either back up the DAQ or run out of memory.
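That trade-off can be sketched with a toy producer/consumer simulation (Python here purely for illustration; the tick loop, names, and chunk granularity are made up and not from the original LabVIEW code):

```python
import queue

def run(produce_per_tick, consume_per_tick, ticks, maxsize=0):
    """Simulate a DAQ producer loop feeding a logging consumer loop."""
    q = queue.Queue(maxsize)           # maxsize=0 means unbounded
    dropped = 0
    for _ in range(ticks):
        for _ in range(produce_per_tick):
            try:
                q.put_nowait("chunk")  # one DAQ read's worth of data
            except queue.Full:
                dropped += 1           # bounded queue: producer backs up
        for _ in range(min(consume_per_tick, q.qsize())):
            q.get_nowait()             # logging loop drains the queue
    return q.qsize(), dropped

# Consumer keeps up: the queue sits "almost completely empty".
print(run(produce_per_tick=2, consume_per_tick=2, ticks=1000))   # (0, 0)

# Consumer too slow, unbounded queue: backlog grows every tick (memory).
print(run(produce_per_tick=2, consume_per_tick=1, ticks=1000))   # (1000, 0)
```

With a bounded queue and the same slow consumer, `dropped` grows instead of `qsize` — the same red pill / blue pill choice described above.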

 

"red pill or blue pill?" you pick.

 

But in general, the following seems to ring true:

 

 "Queues are either almost completely full or completely empty".

 

Ben


@Ben wrote:

more thoughts...

 

If that code is what you are actually trying to use, then the math works out to 3,600,000 samples/s, or about 3.6 MS/s. That was possible on old Pentiums.

 

About 2 years ago I was pulling in 400MS/s continuously using a PXIe chassis.

 

Go with TDMS streaming and only use the queue to share data with a graph AFTER you get the acquisition running right.

 

Ben 


Hi Ben,

 

Just wanted to verify: I have a total sampling rate of 32 channels × 31 kS/s × 5 cards = 4960 kS/s. That is around 5 MS/s and should be doable on PXIe.

I tried DAQmx TDMS streaming, but the data was not getting logged properly. I will post the code and results in some time.
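As a sanity check on the arithmetic above (plain Python; the 2-byte figure assumes raw I16 samples, which is an assumption, not something stated in the thread):

```python
# Aggregate sampling rate for the setup described above.
channels, rate_per_channel_sps, cards = 32, 31_000, 5
total_sps = channels * rate_per_channel_sps * cards
print(total_sps)       # 4960000 samples/s, i.e. 4960 kS/s ~ 5 MS/s

# Rough disk-streaming load, assuming 2-byte raw I16 samples.
mb_per_second = total_sps * 2 / 1e6
print(mb_per_second)   # 9.92 MB/s
```

About 10 MB/s of sustained streaming is well within what a PXIe chassis and a modern disk can handle, which supports the "should be doable" expectation.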



First of all, thanks to the expert members for replying and sharing their knowledge.

 

Replying to mcduff:

As I stated earlier, combining multiple cards into a single task is not allowed on RT; I get an error. I also tried the same example with DAQmx TDMS logging and executed the code. The processor load is quite low compared to the other programs, but the data written to the file is compressed. I am attaching the image for your reference again.

I searched for some unnecessary compression property being used by the code but could not find one. The data being written is basically wrong.

This might be the ultimate solution, but somehow the data loss needs to be corrected first.

 

To Ben and others:

I made 3 cases for the program.

Case 1: Queue buffer size of -1

Case 2: Queue size of 1

Case 3: No queue and direct TDMS logging in single loop.

 

I am attaching a graph of the RT core loads, which I captured from DSM shared variables, for analysis. The cards are being operated at only 20 kS/s. There is some improvement with direct TDMS logging, but when I increase the sampling rate to 31 kS/s the load goes to 100%. Also, a queue size of -1 is not at all better than a size of 1.

 

Please let me know if I have created any confusion.

 

As suggested, I wrote a single loop for acquisition and logging without graphs. The data is being logged without any issues.

 

 




1. Right now your 5 tasks are not synced. I would recommend you designate one "master" task (which you need to start *last*) and have all the other tasks use its sample clock to drive their timing.

 

2. I don't know what the defaults or options are in RT, but there's a DAQmx Read property node that can affect CPU usage on Windows while DAQmx Read.vi is waiting for data. See if you can set it to "Sleep".

[Attached image: DAQmx Read property node showing the "read wait mode" setting — read wait mode.png]

 

3. The TDMS data might "look wrong" because you're reading raw I16 data from the board. This saves storage space but requires more work on your part: you have to apply the stored-on-board calibration curve manually. (DAQmx does this automatically when you read the data as DBL arrays or waveforms.)
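For illustration, here is a hedged sketch (Python; the coefficient values are invented, not from any real board) of applying a DAQmx-style scaling polynomial — the kind exposed by the AI.DevScalingCoeff property — to raw I16 codes:

```python
# Device scaling polynomial: volts = c0 + c1*x + c2*x^2 + c3*x^3,
# where x is the raw I16 code. The coefficients below are made-up
# examples; real ones come from the board's stored calibration.
coeffs = [0.0, 3.05e-4, 0.0, 0.0]

def scale(raw_code):
    # Horner's method: evaluate the calibration polynomial at the raw code.
    result = 0.0
    for c in reversed(coeffs):
        result = result * raw_code + c
    return result

raw_i16 = [-32768, 0, 16384, 32767]
volts = [scale(x) for x in raw_i16]
print(volts)   # roughly [-10.0, 0.0, 5.0, 10.0] for this made-up gain
```

Raw I16 storage halves the disk load versus DBL, which is why it looks attractive at these rates, but the scaling step has to happen somewhere before the data is meaningful.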

 

 

-Kevin P


@falcon98 wrote:


Replying to mcduff:

As I stated earlier, combining multiple cards into a single task is not allowed on RT; I get an error. I also tried the same example with DAQmx TDMS logging and executed the code. The processor load is quite low compared to the other programs, but the data written to the file is compressed. I am attaching the image for your reference again.

I searched for some unnecessary compression property being used by the code but could not find one. The data being written is basically wrong.

This might be the ultimate solution, but somehow the data loss needs to be corrected first.

 


See this link for combining multiple channels. (I would try it with identical cards first.)
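A hedged configuration sketch of that idea, using the Python nidaqmx API rather than LabVIEW (slot names, channel counts, and the rate are placeholders, and this needs real PXIe hardware to run):

```python
# One task spanning two identical PXIe cards: DAQmx routes the shared
# sample clock across the chassis backplane itself, so the cards stay
# synchronized without manual master/slave wiring.
import nidaqmx
from nidaqmx.constants import AcquisitionType

task = nidaqmx.Task()
# A comma-separated physical-channel string may span multiple devices.
task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:31,PXI1Slot3/ai0:31")
task.timing.cfg_samp_clk_timing(
    rate=31_000, sample_mode=AcquisitionType.CONTINUOUS)
task.start()
data = task.read(number_of_samples_per_channel=31_000)  # ~1 s of data
task.stop()
task.close()
```

Whether RT accepts a multi-device task depends on the cards being compatible (hence "try it with identical cards first"); if it errors out, the shared-sample-clock master/slave approach Kevin described is the fallback.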

 

The TDMS logging writes the data in a compressed (raw) format. When you want to read it back, use "double" as the data type; that is the correct format to use, and you cannot interpret the raw data from these files correctly any other way. (See this link.) I have never seen the data be incorrect when writing to files.

 

mcduff
