
DAQmx read analog signal - access buffer


I am reading multiple analog signals at 100 kHz using a DSA card in a PXI chassis with an RT controller. The main (timed) reading loop, which calls "DAQmx Read" and forwards the data to an RT FIFO, currently runs at 1 kHz, which creates a lot of load on the RT controller.

The point is that it would be enough to run that loop at, say, 20 Hz, were it not for a PID loop running in parallel that needs access to one of the analog channels and should run at >= 1 kHz.

 

Now my question is whether there is any possibility to access the DAQmx buffer from the PID loop. The idea would be to read the last 20 or so samples from the buffer without removing them and use those in the PID loop. The main reading VI would then still get a continuous stream of data from "DAQmx Read".

 

So basically I want to read that one specific channel from multiple locations in the code, where one location must not lose data whereas the PID loop just needs to "peek" at the current value.

The aim is to increase the PID rate which is currently not possible since the main reading loop requires too many resources for faster operation.

Is there any way to achieve this?

Message 1 of 9

Actually, running a DAQ card at 1 kHz produces a very light load on a PXI controller, especially if you are collecting multiple samples at a time.  It all depends on using LabVIEW's Data Flow model in an intelligent way and taking advantage of the inherent parallel processing that LabVIEW affords, using Producer/Consumer Design Patterns to maximize the throughput of your data.  This will also allow the kind of "samples from the buffer" access you describe, by operating in parallel with the continuous streaming of the data.
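For readers who want the shape of the Producer/Consumer pattern in text form, here is a minimal sketch in Python using the nidaqmx package (LabVIEW's version is graphical; the channel name and read size here are hypothetical, not taken from the poster's project):

```python
import queue
import threading

import nidaqmx
from nidaqmx.constants import AcquisitionType

data_q = queue.Queue()
stop = threading.Event()

def producer():
    # Acquisition loop: at 100 kHz, reading 100 samples per call comes out
    # to roughly one read per millisecond, with no explicit timed loop.
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:3")  # hypothetical
        task.timing.cfg_samp_clk_timing(100_000,
                                        sample_mode=AcquisitionType.CONTINUOUS)
        while not stop.is_set():
            data_q.put(task.read(number_of_samples_per_channel=100))

def consumer():
    # Processing loop: runs in parallel and never blocks the acquisition.
    while not stop.is_set():
        try:
            chunk = data_q.get(timeout=0.1)
        except queue.Empty:
            continue
        _ = chunk  # stand-in: forward to an RT FIFO, log, etc.

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()
```

The point is the decoupling: the producer loop's timing comes from the DAQmx buffer itself, and the consumer can run at whatever pace the rest of the system allows.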

 

In order to suggest how you should modify your code to take advantage of this parallelism, we need to see the code.  Please do not attach pictures -- we need to see the actual VIs.  If possible, include as much of the RT side of the LabVIEW Project as possible so we can follow all of the logic.  As this will probably exceed the "three-attachment" policy of the Forum, compress the folder containing your VIs and attach the single .ZIP file.

 

Bob Schor

Message 2 of 9
Solution
Accepted by topic author ehrlich

I do not know for sure whether the same rules apply to DSA cards as to the regular multiplexing DAQ cards I'm more familiar with.  It *can* be done with those (I've done it before), though there will be some special considerations for you under RT that I didn't have in a regular Windows app.

 

You would need to manipulate some settings in a DAQmx Read property node.  I'm pretty sure you'll want to wrap this stuff up in a non-reentrant "action engine" VI to serialize access to the DAQmx task refnum and buffer.  (Note: this sets up a timing-jitter issue in your more timing-critical loop whenever it tries to call the action engine while the other loop is already in the midst of calling it.  Can't be helped.)
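In text terms, that serialization might look like the following Python sketch (a lock standing in for LabVIEW's non-reentrancy; the two method bodies are spelled out in the later sketches):

```python
import threading

class DaqActionEngine:
    """Serializes all buffer access to one DAQmx task, analogous to a
    non-reentrant LabVIEW action engine.  A sketch; task setup not shown."""

    def __init__(self, task, window=20):
        self._task = task
        self._window = window
        self._lock = threading.Lock()  # one "action" at a time

    def peek(self):
        # Fast lane.  If the slow lane holds the lock, this call waits:
        # that is exactly the timing jitter noted above.
        with self._lock:
            return self._task.read(number_of_samples_per_channel=self._window)

    def read_lossless(self):
        # Slow lane: override RelativeTo/Offset, read the backlog, restore
        # the defaults (see the later sketches for the property values).
        with self._lock:
            ...
```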

 

The idea is that one action would enable lossless reading at more like 20 Hz, the other would give you a snapshot of the most recent data whenever called.  The latter one produces a series of "data windows" that may overlap one another or may have gaps between them.  It will *not* be useful to string these chunks of data together as though they were contiguous in time.

 

As I look at the DAQmx Read property node, I don't see all the things I expected to find.  Dunno if this got changed in recent years or if my memory is faulty.  The properties you can set are "RelativeTo" and "Offset".  I *thought* one of the enum values available for "RelativeTo" had a meaning like "First Unread Sample".  The value "Current Read Position" *can* have that meaning, but I would expect it to respond differently to the manipulations I'm suggesting.  

 

I remember having to tinker and experiment a fair bit to get a solid understanding of how the various property settings and status queries affected one another.  Plan to do the same.

 

I would optimize the action engine for the faster loop.  To do this, I would have initially configured the AI task with "RelativeTo" = "MostRecentSample" and "Offset" = -(# samples).   Notice the minus sign!  These will be your default task settings, which you'll override but only temporarily for your slow, lossless readings.  
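In text form, that initial configuration might look like this (a Python nidaqmx sketch; the same "RelativeTo" and "Offset" properties live on LabVIEW's DAQmx Read property node, and the channel name is hypothetical):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, ReadRelativeTo

N_WINDOW = 20  # number of samples the fast loop peeks at each call

task = nidaqmx.Task()
task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")  # hypothetical channel
task.timing.cfg_samp_clk_timing(100_000,
                                sample_mode=AcquisitionType.CONTINUOUS)

# Default task settings, optimized for the fast loop: every read starts
# N_WINDOW samples back from the most recent sample.  Note the minus sign.
task.in_stream.relative_to = ReadRelativeTo.MOST_RECENT_SAMPLE
task.in_stream.offset = -N_WINDOW
task.start()
```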

 

The calls to the fast, "data window" action will simply call DAQmx Read while specifying the appropriate # samples (should be the positive value of offset).  Each time you make this call, you count on the task already having the correct "RelativeTo" and "Offset" settings for your needs.
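Continuing the sketch above, the fast action is then a bare read:

```python
# Fast "data window" action: the properties are already set, so this just
# returns the N_WINDOW samples ending at the most recent one.
window = task.read(number_of_samples_per_channel=N_WINDOW)
```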

 

For your slow, lossless readings, you'll need to keep track of the total # of samples you've read so far, query "TotalSamplesAcquired", set up "RelativeTo" and "Offset" to start at the first unread sample, then call "DAQmx Read" while requesting exactly the right number of samples (all the ones that have been acquired but not yet read).  After that, revert "RelativeTo" and "Offset" to their defaults.  (Alternatively, a more flexible approach would be to query the offset at the very beginning of this action and restore it at the end, in case the # samples can vary.)
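Continuing the same sketch (imports and N_WINDOW as above), one plausible version of the slow action follows; treat it as a starting point for the experimentation described below, since the exact combination of settings is uncertain.  The samples_read_so_far counter is something the caller maintains:

```python
samples_read_so_far = 0  # running total, maintained across calls

def read_lossless(task):
    global samples_read_so_far
    # Read everything acquired since the last lossless read.
    total = task.in_stream.total_samp_per_chan_acquired
    n_unread = total - samples_read_so_far

    task.in_stream.relative_to = ReadRelativeTo.FIRST_SAMPLE
    task.in_stream.offset = samples_read_so_far  # first unread sample
    # Caution: Offset is an i32 in DAQmx; see the overflow discussion
    # later in this thread.
    data = task.read(number_of_samples_per_channel=n_unread)
    samples_read_so_far += n_unread

    # Revert to the defaults the fast lane depends on.
    task.in_stream.relative_to = ReadRelativeTo.MOST_RECENT_SAMPLE
    task.in_stream.offset = -N_WINDOW
    return data
```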

Sorry, I don't recall exactly what combo of settings I used to accomplish this, and as mentioned before, I have a nagging thought that one of the settings I had available at the time is not available now.  I expect there's still a way to get there from here, though, provided no special limitations are imposed by using a DSA board.  I doubt there are, as these buffer interactions happen in PC memory after the data has moved from the board to system RAM.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 3 of 9

@Bob_Shor: I attached the two core VIs; posting the whole project would be too difficult, but the only VIs missing here are one that transmits error/log messages and one that down-samples the output data before sending it through a network stream.  Neither is time critical.  Note that the "Read Data" VI actually reads data from two tasks, since I read data from a DSA card (100 kHz) and an M card (2 kHz).  They are synchronized by setting the sample clock timebase of the M card to the sample clock of the DSA card (not really important here).  "Read Data" and "Send Data" are called by a parent VI and run in parallel.

If I set the dt of the timed loop in "Read Data" to 1 ms, this loop alone takes up ~40-45 % of the CPU resources.  The "Send Data" VI accounts for another ~25 %.  Disabling the second DAQmx Read and the RT FIFO, and thus reading data only from the DSA card, makes the timed loop use ~18 % of the CPU resources.

 

@Kevin_Price: Thanks for the idea! I'll definitely look into that.

Message 4 of 9

I see three independent timing sources at work here.  You have a Timed Loop that runs once every msec, a DSA Task that reads (I'm assuming) 100 points at 100 kHz every msec, and an M Task that reads 2 points at 2 kHz every msec.  You explain that the DSA and M Tasks both run off a single clock, and I'm assuming that both are set to Continuous Sampling mode.

 

If the above are all true, why have a Timed Loop at all?  An "ordinary" While Loop will "clock" itself based on the fact that your two Tasks "wake up" and present data at a rate of 1 kHz (based on the DSA clock).

 

I assume that this task is your most time-critical one.  Something I did on my PXI system was to assign one core of the processor to this task, leaving the other cores to handle all of the other work.  I'm not 100% certain that I needed to do this, but I had a "fake Clock channel" in my RT loop that added a Clock signal as an additional (fake) A/D channel to detect missed samples, and I never missed a sample (the previous version of the code was notorious for doing this, but then again, the critical RT loop had tons of processing inside it ...).

 

Bob Schor (note the silent "c" in Schor)

Message 5 of 9

@Kevin_Price: So I implemented a scheme as you suggested and it is working nicely! CPU usage dropped to ~ 25 % and I am fine with the restrictions on the data returned (possible duplicates etc.). I attached a picture of the "slow lane", the "fast lane" basically just calls "DAQmx Read".

First, I read "Current Read Position" after acquiring the data and then used it as the offset with "RelativeTo" set to "First Sample".  But I then feared that this might become problematic when the I32 used for the offset overflows at some point (my code sometimes acquires for quite a while).  So now I am using the difference between successive "Current Read Position" values.  That should also allow a varying number of samples to be read during each cycle.
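My reading of that fix, as an untested Python nidaqmx sketch (continuing the earlier sketches; curr_read_pos is a 64-bit counter in the driver, so only the small difference ever reaches the 32-bit "Offset" property, and last_lossless_pos is a variable the caller keeps between reads):

```python
last_lossless_pos = 0  # u64 position recorded after each lossless read

# Overflow-safe slow lane: step back from the current read position by the
# difference of two u64 counters instead of an ever-growing absolute offset.
pos_now = task.in_stream.curr_read_pos      # advanced by every read, peeks included
backlog = int(pos_now - last_lossless_pos)  # stays small, fits an i32

task.in_stream.relative_to = ReadRelativeTo.CURRENT_READ_POSITION
task.in_stream.offset = -backlog            # back to the first sample not yet kept
data = task.read(number_of_samples_per_channel=backlog)
last_lossless_pos = task.in_stream.curr_read_pos
```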

Thanks for the help! I would not have thought that setting these options using the property node is actually fast.

Message 6 of 9

Glad it helped.  Have just a couple minutes for some quick thoughts & follow-up notes:

 

- good catch that the use of i32 for the offset means that overflow can realistically occur if you calculate relative to First Sample.  For future readers, TotalSamplesAcquired will overflow an i32 in just under 6 hours at 100 kHz: 2^31 samples / 100,000 samples/s ~= 21,475 s, which is just under 6 hours.

 

- you may have already dealt with this one, but a "fast lane" call too soon after task start will produce an error b/c you'll be asking for more samples from the recent past than the total that have been acquired so far.
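Reusing the names from the earlier sketch, a simple guard avoids that error:

```python
# Don't peek until the task has acquired at least one full window;
# otherwise DAQmx errors out asking for samples that don't exist yet.
if task.in_stream.total_samp_per_chan_acquired >= N_WINDOW:
    window = task.read(number_of_samples_per_channel=N_WINDOW)
```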

 

- I assume that the "fast lane" call is the True case where the visible False case is the "slow lane" call, right?  It's vital that it is.  The entire "slow lane" sequence of DAQmx calls must *NOT* be able to be interrupted by a "fast lane" call from parallel-executing code.  Again, if you have it in your True case, you have the protection you need.

 

- I see a separate call to a 2nd AI task that also seems to request "all available samples" using a -1 input.  It's possible that those samples won't represent the same interval of time as the samples from the 1st task.  You're ok from a syntax standpoint due to use of waveform arrays.  Also, on average and over the long run, the discrepancies will tend to cancel out rather than accumulate.  

You can use some schemes to try to make sure that the # samples read from each task is in the same 50:1 ratio as the sample rates (one sketch below), but you may need to be careful that these don't result in ever *waiting* for samples during the "slow lane" call.  That'll interfere with your ability to make high-rate "fast lane" calls.
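One such scheme, sketched with hypothetical task names (task_dsa at 100 kHz, task_m at 2 kHz), rounds the fast task's backlog down to a multiple of 50 so both reads cover the same span of time:

```python
# Read matched time spans from both tasks (50:1 sample-rate ratio).
n_fast = task_dsa.in_stream.avail_samp_per_chan  # ready on the 100 kHz task
n_fast -= n_fast % 50                            # round down to a multiple of 50

data_dsa = task_dsa.read(number_of_samples_per_channel=n_fast)
# Caution: this read may *wait* if the 2 kHz task lags behind, which is
# exactly the risk mentioned above.
data_m = task_m.read(number_of_samples_per_channel=n_fast // 50)
```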

 

 

-Kevin P

Message 7 of 9

Regarding the fast lane: I attached a second screenshot to clear things up a bit.  The True/False case structure is actually there to allow for different acquisition schemes.  The two DAQmx Read calls are for separate synchronized tasks on separate devices.  The point is that the channels read are user-defined, so not every device may be in use.

The actual fast lane is the outer case structure, which has two options: Read and Peek.  Read is the normal readout for recording data, and Peek is, well, peeking at the buffer.  Every now and then (determined by user settings, but roughly every 10 ms) the Peek is replaced with a Read, so there is no parallelism here.  The first call is always to Read, which configures everything properly.  This should also avoid the "call too soon" problem you mentioned.

 

The last point you addressed is taken care of further downstream in the code, where the individual waveforms are appended into larger ones and the data stream is written to disk.  I wrote some code to stitch everything together without losing data.  For display this is not an issue.

 

Message 8 of 9

Yep, sounds like you've got things covered, no lurking "gotchas".  It's nice to have the freedom to do "read all available" on both tasks so your slow lane reads execute quickly and thus minimize timing jitter between fast lane reads.

 

 

-Kevin P

Message 9 of 9