Multifunction DAQ


save to disk from continuous pretrigger analog acquisition

I am trying to read acquired data continuously but only write it to the disk at the reference trigger.  I base this on the KB articles https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000g0BXSAY&l=en-US and https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019MiHSAU&l=en-US.  I have run the example200306.vi from here https://forums.ni.com/t5/Example-Code/Archived-NI-DAQmx-Continuous-Analog-Input-with-Both-Start-and/....  I have also found and run examples “AI with Start and Stop Trigger.vi” and “Digital Start and stop Trigger.vi.”

 

I understand that the FIFO/onboard memory for the PXI-6133 card is 32 MS and that I can adjust the PC buffer size using the DAQmx Configure Input Buffer function.  At the DAQmx Read function, data is moved from the PC buffer to ADE memory.  The smaller the value of “number of samples per channel” on the DAQmx Read function, the faster my display table is updated.  It only stops being updated after the reference trigger is detected.

 

That is acceptable for the display, but I also need to write the “samples per channel” from the DAQmx Timing function to disk; that is, the pre- and post-trigger samples around the reference trigger.  Is there a DAQmx Task/Trigger/Channel/Read property that I could monitor to tell me when to write the data to disk?

Message 1 of 6

I don't think I've tried this kind of thing before, but I think I can piece together some info and clues from your links.

 

It sounds like you've already handled the part where you can maintain a live display of near-real-time data.  Based on the example code you linked, it looks like you use the "DAQmx Is Task Done" query to notice when the defined finite acquisition has completed.  Once it does, you can then use the DAQmx Read properties "RelativeTo" and "Offset" to retrieve the relevant data from your task buffer.

 

I'd first try setting RelativeTo=FirstPretriggerSample and Offset=0.  Other combos involving ReferenceTrigger or MostRecentSample and the correct corresponding offsets (note: they'll be negative numbers) seem like they should be able to give you the same result.
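Here's the indexing idea behind those RelativeTo/Offset combinations, sketched in plain Python (the names mirror the DAQmx property values, but the numbers and the `read_window` helper are purely illustrative, not the DAQmx API):

```python
# Illustrative sketch, not the DAQmx API: the three "RelativeTo" anchor
# choices all address the same finished finite record.  Suppose the task
# was configured with 4 pretrigger and 6 posttrigger samples, so the
# completed record is 10 samples long.
PRETRIG = 4
POSTTRIG = 6
record = list(range(100, 110))          # stand-in for the acquired record

def read_window(relative_to, offset, n):
    """Map a (RelativeTo, Offset) pair onto the 10-sample record."""
    anchors = {
        "FirstPretriggerSample": 0,
        "ReferenceTrigger": PRETRIG,           # first posttrigger sample
        "MostRecentSample": PRETRIG + POSTTRIG,
    }
    start = anchors[relative_to] + offset
    return record[start:start + n]

n = PRETRIG + POSTTRIG
# All three combinations return the identical full record; note the
# offsets for the latter two are negative, as mentioned above.
a = read_window("FirstPretriggerSample", 0, n)
b = read_window("ReferenceTrigger", -PRETRIG, n)
c = read_window("MostRecentSample", -(PRETRIG + POSTTRIG), n)
assert a == b == c == record
```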

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 6

I think I've found a way to get the data.  When I compute the number of samples it seems correct (see the enclosed snippet).  However, writing the data to disk (using producer-consumer) causes my Enqueue to time out, even when the timeout value for both Enqueue and Dequeue is -1 (wait forever) and the queue size is -1 (unbounded).

The Dequeue only receives the cluster from task[0].  The string in the cluster is the task name.  The array of waveforms varies in size: 8 channels of waveforms for tasks[0..3] and 113 channels of waveforms for task[4].  The number of samples per channel for each task is [400000, 4000000, 400000, 40000, 3500].  The timeout occurs at task[1]; data from task[0] gets written to disk.

I suspect the reason I'm getting the timeout is the number of samples per channel.  I've tried enqueuing just a single waveform but get the same problem.  Any suggestions on how to break up the waveform array so I don't get the timeout?
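For scale, here is a back-of-the-envelope estimate of how much waveform data each enqueued cluster represents, using the channel counts and samples-per-channel from above and assuming DBL waveforms (8 bytes per sample; the byte size is my assumption, not stated in the thread):

```python
# Rough payload-size estimate per enqueued cluster, from the numbers in
# the post: 8 channels for tasks[0..3], 113 channels for task[4],
# assuming 8-byte (DBL) waveform samples.
channels = [8, 8, 8, 8, 113]
samples_per_channel = [400_000, 4_000_000, 400_000, 40_000, 3_500]

payload_mb = [ch * n * 8 / 1e6 for ch, n in zip(channels, samples_per_channel)]
# task[1] alone is 8 * 4,000,000 * 8 bytes = 256 MB in a single queue
# element -- moving that much data per element is a lot of work, whatever
# the queue mechanics turn out to be.
```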

Message 3 of 6

The snippet isn't complete, but it's enough to notice a number of problems.

 

1. In your consumer loop, the first time you notice that anything's in the queue, you feed a True value to the loop terminator.  You do flush the queue contents and write them to a TDMS file first, but then your loop terminates and you release the queue.

 

2. So the next time your producer loop tries to do an Enqueue, you'll get an error (*not* a timeout error though).

 

3. Other things about the structure of your consumer are abnormal and prone to unexpected behavior.  Instead of checking the queue status for non-emptiness and then flushing it, the normal consumer usage is simply to call the Dequeue function.  You can give it a finite timeout if you like.
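The normal consumer pattern looks like this, sketched in Python with `queue.Queue` standing in for a LabVIEW queue (the sentinel-to-stop convention is one common way to shut the consumer down cleanly; it's an illustration, not your code):

```python
import queue
import threading

q = queue.Queue()               # unbounded, like a size -1 LabVIEW queue
SENTINEL = None                 # producer sends this to say "all done"
written = []

def producer():
    for chunk in ([1, 2], [3, 4], [5, 6]):
        q.put(chunk)            # Enqueue Element
    q.put(SENTINEL)             # signal that acquisition has finished

def consumer():
    while True:
        try:
            item = q.get(timeout=1.0)   # Dequeue with a finite timeout
        except queue.Empty:
            continue                    # timeout: just loop and retry
        if item is SENTINEL:
            break                       # normal shutdown, not an error
        written.extend(item)            # stand-in for the TDMS write

t_p = threading.Thread(target=producer)
t_c = threading.Thread(target=consumer)
t_p.start(); t_c.start()
t_p.join(); t_c.join()
```

The consumer never inspects queue status and never releases the queue on its own; it just blocks on Dequeue until data (or the shutdown signal) arrives.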

 

4. What I can see of the remaining code you captured in the snippet doesn't *look* like it's going to do the things you seemed to be after in the original post.  The first loop that should be giving a live display of data would depend on you having modified the DAQmx Read property "RelativeTo" to be "CurrentReadPosition".  Perhaps you did that in code that wasn't included, but in *that* case, you would *also* need to change the RelativeTo property back to "FirstPreTriggerSample" as you move from the first acquisition loop to the second.

 

Can you describe your app in considerably more detail?   And also post the full code?  (Please be sure to set all your controls to typical values and save them as default.)   So far, I'm kinda doubtful that you're actually doing what you think you're doing based on the snippet you posted and your description so far.

 

You seem to be attempting a fairly non-trivial app (reference-triggered finite acquisition with continuous live display while waiting for trigger).   You should come up with a way to *exercise* it with known inputs.  I'd make a separate vi that generates a predefined AO pattern (I tend to like a sawtooth for this stuff) to be acquired along with a carefully placed reference trigger signal (a 2nd channel with a 0-5-0 volt "pulse" placed at a specific phase of the sawtooth wave).   This gives you a way to *test* your acquisition code.
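A sketch of that exercise signal in Python (the sample rate, amplitudes, and trigger placement are my own illustrative choices): one sawtooth channel to acquire, plus a 0-5-0 V pulse channel whose rising edge lands at a known phase of the sawtooth, so the triggered record can be checked against the waveform you expect.

```python
fs = 1000                       # sample rate, S/s (illustrative)
period = 0.5                    # sawtooth period, seconds
n = 2 * fs                      # 2 seconds of data

t = [i / fs for i in range(n)]
saw = [((ti % period) / period) * 10.0 - 5.0 for ti in t]   # -5..+5 V ramp

trig_phase = 0.25               # quarter of the way up the ramp
trig_idx = int((period * (1 + trig_phase)) * fs)            # 2nd cycle
pulse = [0.0] * n
for i in range(trig_idx, trig_idx + 20):
    pulse[i] = 5.0              # 20-sample, 5 V "reference trigger" pulse

# Because the pulse placement is known, the sawtooth value at the
# trigger edge is known too: 25% up a -5..+5 ramp is -2.5 V.
assert abs(saw[trig_idx] - (-2.5)) < 1e-9
```

Feed something like this to an AO task, acquire it with the triggered AI task, and you can verify exactly where the retrieved record sits relative to the trigger.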

 

 

-Kevin P

Message 4 of 6

Thanks for catching the errors in the consumer loop.  I wasn't producing any errors in the producer loop, so I didn't catch the queue problems in the consumer loop.  Yes, prior to the snippet I modified the DAQmx Read property "RelativeTo" to be "CurrentReadPosition", and I did change the RelativeTo property back to "FirstPreTriggerSample" in the second loop.

I'm using simulated hardware: PXIe-6368 and PXI-6133 cards.  My TDMS files contain the correct number of points for each channel.  I'll have to test with real hardware now.

I'm also debating setting the "samples to read per channel" input on the DAQmx Read to 3.  It would make the display update faster (something my users always want).  Are there any serious ramifications to not leaving it at the default value?

Message 5 of 6

A standard rule of thumb for managing reads from an acquisition task is to read 1/10 sec worth of samples per iteration.  This lets you update your displays at about 10 Hz, which I consider to be *more than* fast enough for human users.
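That rule of thumb is just a one-line calculation (the sample rates below are illustrative, not taken from your tasks):

```python
# The 1/10-second rule of thumb: read a tenth of a second's worth of
# samples on each DAQmx Read call, giving ~10 display updates per second.
def samples_per_read(sample_rate_hz, updates_per_sec=10):
    """Samples to request per read for a given display update rate."""
    return int(sample_rate_hz / updates_per_sec)

assert samples_per_read(2_000_000) == 200_000   # e.g. a 2 MS/s channel
assert samples_per_read(10_000) == 1_000
# By contrast, reading only 3 samples per call at 10 kS/s would demand
# ~3333 loop iterations per second -- far beyond any useful display rate.
```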

 

No human can observe data, analyze and understand it, come to a decision, and then react by interacting with the program at a rate that even approaches 10 Hz.  From a pragmatic point of view, it's tough to justify more than 1 or 2 Hz.  But I'll admit that 10 Hz looks less visually choppy than 2 Hz.

 

There's an old adage in retail that "the customer is always right."  I don't necessarily agree.  Over the years I've heard quite a lot of customer software requests that weren't well thought out or reasonable.  If the 3-sample reads imply an update rate notably faster than 10 Hz, I'd recommend you give a little pushback -- ask the users *why* they think it matters to update so much faster than their eyes, brains, and muscles are capable of responding to.

 

I think you should start by getting everything working well with 5-10 Hz updates.  Only then should you consider *maybe* trying for faster updates, though I personally probably wouldn't.

 

 

-Kevin P

Message 6 of 6