LabVIEW

"Start Trigger" for encoder measurement AND AI

OK, it is not quite kosher yet.

The CoarseSpeed task is caching and resulting in a lag.  I think it is reading every cycle, unlike the pressure task, which is reading every other cycle.  The encoder I am using as a sample clock is a 2000-count encoder; I trigger off the Index and read 2000 points.  I think that means I am missing the following trigger and catching it on the next cycle (I could be wrong).

From CoarseSpeed I just want the most recent data point.

Also, you can see my logic in the WriteMeasureFile area of the code.  For some reason, when I try to record, it locks up, and I'm not sure why.  Am I asking it to write to disk faster than it can?

Any ideas?
~milq
Message 11 of 16

Hi,

I actually did the tweaks last weekend and just now added a few comments after your friendly reminder.  It turns out I had already set up Coarse Speed to try to return the single most recent frequency measurement, i.e., the one that should correlate to the rev's worth of pressure data.

A couple of significant things that were NOT addressed:

- file writing.  As a rule, file writes should not be performed in the same loop that acquires data.  File writing should run in parallel so it won't throttle your data acq.  The typical approach is for the data acq loop to write to a Queue and for a parallel-running file-writing loop to read from the same Queue.  I don't know exactly what you mean by "lock up", but I have little to offer there, as I've always actively avoided the Express VIs for file writing.

- finite vs. continuous acquisition.  You might conceivably be able to acquire and store data continuously instead of having to take a series of 1-rev snapshots.  You'd first need to move the file writing out of the data acq loop, though.
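Queues and parallel loops are graphical in LabVIEW, of course, but the producer/consumer shape is the same in any language.  Here's a rough sketch of the pattern in plain Python (all names are made up for illustration, and a list stands in for the file on disk):

```python
import queue
import threading

SAMPLES_PER_REV = 2000

def acquire_rev(i):
    # Stand-in for one rev's worth of DAQ data (pressure array + speed value).
    return {"rev": i, "pressure": [0.0] * SAMPLES_PER_REV, "speed": 100.0 + i}

data_q = queue.Queue()
written = []  # stand-in for the file on disk

def producer(n_revs):
    # Data-acq loop: read a rev, hand it off to the queue, never touch the disk.
    for i in range(n_revs):
        data_q.put(acquire_rev(i))
    data_q.put(None)  # sentinel: tells the writer loop to quit

def consumer():
    # File-writing loop: runs in parallel and drains the queue at its own pace.
    while True:
        item = data_q.get()
        if item is None:
            break
        written.append(item)  # real code would write to file here

writer = threading.Thread(target=consumer)
writer.start()
producer(10)
writer.join()
```

The point of the split is that a slow write only delays the writer loop; the acq loop just keeps dropping items on the queue.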

-Kevin P.

 

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 12 of 16
Wow!

You are good.

Compound Arithmetic in array initialization: very sneaky.
Shift registers for error clusters: why?
DAQmx Read Property Node: I didn't know what kind of options I had to choose from in there (I have taught myself everything so far).
Error cluster dataflow control: I initially saw the 2 tasks as independent; when I noticed there was a timing mismatch, I did not think to reroute the error line.

What kind of options do I have for a file save here?  I need two arrays and a double to be saved for as many as 10 consecutive loop iterations.

Is there a way to pull this outside of the loop with the current type of acquisition I am using?

Would continuous acquisition still allow me to start trigger for every rev?  I got the feeling that it would not.

Thank you SO much for your help.  I am going to try to start using the individual channel descriptors in my code (instead of global tasks); everything is a little messier that way, but also more visible.

I am appreciative beyond words,
~milq


Message 13 of 16

1. Shift registers for error clusters - using them is an ingrained habit of mine and a generally good practice, though not strictly necessary in all cases.  The purpose is to make sure that errors occurring late in one iteration of a loop get propagated into the next iteration.  It also makes sure that errors propagate from the left side to the right side in cases where For loops iterate 0 times (which can happen if they auto-index on an empty array).  If you have a While loop that terminates on the final state of the error cluster, you wouldn't strictly need a shift register.

2. DAQmx property nodes - yeah, I've been largely self-taught too, especially when it comes to a lot of the stuff buried a few sub-menus down.

3. Using the error cluster to enforce code execution sequencing is another very common "good practice," though there are definitely situations where maintaining separate error chains and parallel operations is more appropriate.  In your app, there was value in sequencing.

4. File save options -- too many options to go into, plus I'm no expert on many of them.  I typically make my own comma-delimited ASCII files for low data rates, and I've begun using the new TDMS storage functions for high data rates.  Dunno your rotation rates, but I wouldn't assume that ASCII is a good choice for you.  TDMS can be learned fairly quickly if you focus on writing just the data and don't try to manage a bunch of associated file/group/channel properties.

5. Pulling file operations outside the loop - the method would be to write the data into a Queue (there's a function palette for LabVIEW's built-in Queue containers).  A separate independent loop would read data out of the queue and write it to file.  Separating these functions to run in parallel won't make your CPU faster, but it can let the CPU be used more efficiently: file writes can occur while your data acq loop is waiting for another rev of data, without forcing the data acq loop to wait for the file writes to finish.

6. Continuous acq would *not* let you retrigger on every rev, but I'm not convinced you'd need to.  If you trigger off the Z-index 1 time, then always read exactly 2000 samples (exactly 1 rev worth) at a time, then each chunk of 2000 samples is exactly what you'd have gotten if you *could* have retriggered every time.
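To put some toy numbers behind point 6 (plain Python standing in for the buffer bookkeeping, not actual DAQmx calls): once the stream starts at the Z-index, fixed 2000-sample reads stay rev-aligned on their own, with no retriggering needed.

```python
# Toy model: a continuous sample stream whose first sample is the Z-index pulse.
# Sample k belongs to rev k // 2000, at angular position k % 2000.
SAMPLES_PER_REV = 2000

def read_chunk(stream, n=SAMPLES_PER_REV):
    # Stand-in for a DAQmx Read pulling n samples off the front of the buffer.
    return stream[:n], stream[n:]

stream = list(range(3 * SAMPLES_PER_REV))  # 3 revs' worth of sample indices
revs = []
while stream:
    chunk, stream = read_chunk(stream)
    revs.append(chunk)

# Each chunk spans exactly one rev: the first sample of chunk i is sample
# i*2000, i.e. the same angular position (the index pulse) every time.
```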

-Kevin P.

Message 14 of 16
One more question, your lordship:

Why did you use -1 for the offset instead of, say, 0?

I have poked around and tried to figure out why you would have done that.  It seems like you would want a 0 "offset" from "most recent sample".

Impart on me your wisdom,
~milq
Message 15 of 16

Your confusion is justified.  It comes down to the convention NI uses for what's meant by the position referenced by "RelativeTo".  They are consistent with their convention (as they need to be), but the original choice of that convention was a bit arbitrary.

In fact, the convention is that the position specified by "Most Recent Sample" points to the buffer location occurring *after* the 'Most Recent Sample.'  Thus reading with an offset of 0 ends up meaning, "read starting immediately *after* any samples you've already got."  In other words, give me the next future data.  An offset of -1 means, "read starting *at* the most recent sample you've already got."  When reading a single sample, that gets you the freshest sample without waiting.

I learned this from the help for traditional NI-DAQ many years ago.  I assumed and then verified the same convention under DAQmx.  I don't know if I ever dug into this topic under DAQmx help so I don't know if it's buried in there anywhere.
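If it helps, here's that bookkeeping mocked up in plain Python (a toy buffer, not actual DAQmx calls):

```python
# Toy model of the DAQmx convention: the "Most Recent Sample" position
# points to the buffer slot *after* the newest sample, so:
#   offset  0 -> the next (future) sample, i.e. you'd have to wait
#   offset -1 -> the newest sample already in the buffer, no waiting
buffer = [10.0, 11.0, 12.0, 13.0]   # acquired samples, newest last
most_recent_pos = len(buffer)       # one past the newest sample

def read_one(offset):
    idx = most_recent_pos + offset
    if idx >= len(buffer):
        return None  # stand-in for blocking until a future sample arrives
    return buffer[idx]
```

So `read_one(-1)` hands back the freshest value immediately, while `read_one(0)` would have to wait for data that doesn't exist yet.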

-Kevin P.

Message 16 of 16