I am absolutely sure that Kevin's advice is spot on, but you mention crank angle (from an engine?), so may I venture to suggest that the system is highly likely to suffer from cyclic irregularities, both from the firing of the engine and from the speed control mechanism.
If all you want is the speed, then take a set number of pulses into the buffer (I suggest multiples of two revolutions) and sum the pulse periods.
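To make that concrete, here's a minimal plain-Python sketch of the idea (not LabVIEW/DAQmx code; the names `PULSES_PER_REV` and `rpm_from_periods` are my own illustration): buffer the per-pulse periods over a whole number of revolutions, then average.

```python
# Hypothetical sketch of "sum the pulse periods over whole revolutions".
# PULSES_PER_REV is an assumed encoder resolution; adjust to your hardware.

PULSES_PER_REV = 360

def rpm_from_periods(periods_s, pulses_per_rev=PULSES_PER_REV):
    """Average RPM over the buffered pulses.

    periods_s: per-pulse periods in seconds, ideally covering a whole
    number of revolutions so the cyclic (firing) irregularities average out.
    """
    total_time = sum(periods_s)              # time spanned by the buffer
    revs = len(periods_s) / pulses_per_rev   # revolutions in the buffer
    return revs / total_time * 60.0          # rev/s -> rev/min

# e.g. 720 pulses (two revolutions at 360 PPR) at a uniform 1 ms period:
# 2 rev in 0.72 s -> ~166.67 RPM
```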
Just out of curiosity, how many pulses per rev does your encoder have?
The two are sync'd in time (I think). In the EncoderPosition task, I have it use the A signal as the sample clock (it's inside the task definition), and you can see I route the A terminal over to the sample clock for the Speed Task. One of my concerns here is how often (with what precision) I can use that one signal for all of these different purposes... I will take a look at the arm trigger.
What I see out of sync is the start times. Your error cluster enforces dataflow sequencing such that your EncoderPosition task starts before you configure and start the CoarseSpeed task. Any A edges that occur between the two start times will buffer measurements into the Pos task but not into the not-yet-started Speed task. And there's really no way to know exactly how many samples of offset you'll get. (I guess I'm kind of assuming that the encoder is already spinning when you start this vi so that A edges are coming in prior to running the vi).
The "Arm Start" trigger is the feature that will let you sync the two start times. You'll need to Start both tasks before issuing the trigger edge.
The finite/continuous choice is made in the task as well (I was worried that you guys would not be able to see what I was doing in there). I had it first set to use PulsesPerRev as the number of acquired points; you can still see those property nodes on the block diagram. For some reason, this was giving me buffer errors (I think because it would spin through and I could not get the data out of the buffer fast enough, but I am not sure).
I can't comment much here because I never use global tasks, so I don't know what might be configured there behind the scenes. I normally call the regular DAQmx Timing vi rather than the Timing property node, and in *that* context the "# of samples" input would relate to a buffer size. The # samples to read at a time is set in the call to DAQmx Read. Your Read calls leave that input unwired, so the default is used. I don't know whether it can use some global task default you had already set up, but whenever otherwise unspecified, the default behavior is "Read all available samples."
It was this error:
Error code: -200279
Description: Attempted to read samples that are no longer available. The requested sample was previously available, but has since been overwritten.
Possible cause: The application did not retrieve the data from the buffer fast enough, so the data was overwritten.
Solutions: Increasing the buffer size or reading the data more frequently may correct the problem.
Sounds like perhaps you've set the buffer size exactly equal to the # points to read at a time. There isn't time enough after filling the buffer to retrieve all the data before the next sample needs an available place to be stored. Try making the buffer, say, 10x the size of the # to read. That will let it, well, um, *buffer* your samples.
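Here's a toy illustration of why that helps (pure Python, not DAQmx; all names are my own): samples arrive continuously while the reader drains the buffer in chunks, and with capacity equal to the chunk size there is zero headroom for any jitter in read timing.

```python
# Toy model of a DAQ ring buffer: producer appends samples, reader
# drains whole chunks. Returns True if a sample ever arrives with the
# buffer already full (the -200279 "overwritten" situation).
from collections import deque

def overflows(capacity, chunk, arrivals_between_reads, total_samples):
    buf = deque()
    produced = 0
    while produced < total_samples:
        for _ in range(arrivals_between_reads):
            if len(buf) >= capacity:
                return True          # nowhere to store the new sample
            buf.append(produced)
            produced += 1
        while len(buf) >= chunk:     # reader drains one chunk at a time
            for _ in range(chunk):
                buf.popleft()
    return False

# capacity == chunk size: any timing slack overflows.
# capacity == 10 * chunk size: plenty of headroom.
```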
I just recently took the NI DAQ class; the period vs. frequency measurements really are just a division (no extra voodoo). I assumed it would be faster to let the driver do it, instead of executing whatever version of the division I would implement myself.
Sounds sensible to me. The question remaining in my mind is that the config seems different from what I'd do for a period measurement. I'd have a call to DAQmx Timing set to "Implicit (Counter)" because the input signal whose period I'd be measuring *is* the sampling clock. What you've got config'ed *looks like*
Are my read quantities really not sync'd? I thought that the SampQuant.SampPerChan property node did that for me (you can see I have both tasks lined up to do that). If this is not the case, then I need to figure out how to do it.
I'll describe my reasoning b/c I don't know about global tasks enough to try to take them into account. By my reasoning, the property "SampQuant.SampPerChan" will only affect the size of the buffer used for the data acq task, and not the # of values to read. Then, a call to DAQmx Read with no wire specifying "# to Read" will by default read whatever # happen to be available. There's nothing forcing the two tasks to have the same # available on every call so you're liable to get different #'s of samples back from the two tasks. This aspect of being out-of-sync is *in addition to* having the start times be out-of-sync.
Solution: Use Arm Start Trigger to get start times in sync. Set a buffer size that is much bigger than the # of samples to read at a time. Be sure both tasks sample off the same signal. Wire in the same # samples to read from both tasks in your loop.
And really, I would like to go to finite acquisition instead of continuous (the graph ends up looking dumpy on the other end, and you need to adjust the period to get it to look right at any given speed, which we really do not have the time to be doing). Having finite acquisition would be a big step in the direction I want to go.
I'd think you could switch over to finite sampling with minimal changes. Did you encounter some problem when trying?
[...most of a busy day passes...]
<Homer Simpson> D'oh! </Homer Simpson>
You're measuring an engine! Things can get much simpler now because your motion is unidirectional! I locked into the notion that position encoder = quadrature bidirectional position. Not so, not so. I'll leave the stuff above for the historical record or whatever, but the stuff below should be the better answer.
All you need is the Freq (or period) measurement, using some sort of 1-per-rev reference signal as an Arm Start Trigger. As you accumulate measurements, the array values are the speeds while the array indices imply cumulative distance traveled. Voila! Now you're down to 1 counter task and all that sync'ing nuisance goes away...
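A minimal sketch of that single-counter idea (plain Python, illustrative names only): since the motion is unidirectional, each period sample corresponds to exactly one encoder pulse, so sample index i implies a cumulative angle of (i + 1) / pulses_per_rev revolutions.

```python
# Single-task version: the measured per-pulse periods give you speed,
# and the sample indices themselves give you cumulative position.
def speeds_and_positions(periods_s, pulses_per_rev):
    """periods_s: measured period of each encoder pulse, in seconds."""
    # one pulse in p seconds -> 1/(p * PPR) rev/s -> 60/(p * PPR) RPM
    speeds_rpm = [60.0 / (p * pulses_per_rev) for p in periods_s]
    # sample i is the (i+1)-th pulse since the arm-start reference edge
    positions_rev = [(i + 1) / pulses_per_rev for i in range(len(periods_s))]
    return speeds_rpm, positions_rev
```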
Message Edited by Kevin Price on 10-27-2006 08:11 AM
You're right, it's a lot of array bashing, but at the rates I have indicated it's not very 'expensive' on the system and you will have heaps of free capacity for other things.
You might like to take some of the heartache out of the process and get the Order Analysis Toolkit. There is some pretty neat stuff in it, and there may be other tools in there that you don't yet know you need. On the negative side, it's yet another thing to have to spend that precious capital budget on (mine is generally near zero; I think next year they may start cutting off limbs :-) ) and the toolkit is very expensive.
What's the final purpose of the data? It would be cool to know a bit more.
Order Analysis Toolkit Link
Message Edited by Conseils on 10-27-2006 04:36 PM