Use Rising and Falling Edge of an Encoder's A & B pulse as the Sample Clock

Solved!

Hi,

 

I am trying to use an ABZ encoder as the sample clock for my LabVIEW VI. I would like to use the rising and falling edges of the A and B pulses to acquire a single synchronized reading on multiple channels (one digital, the rest analog) at each detected edge.

 

I am trying to acquire vibration data on an engine. My goal is to convert the data/results into the crank-angle domain. I have an encoder attached to the engine's crankshaft. The encoder has A & B outputs (720 pulses per revolution on each), with a 90-degree phase difference between them. The Z (index) pulse is synced with one of the cylinders such that Z goes high at the start of every engine cycle. If I can use the A & B pulses as my sample clock and take a data sample at each rising and falling edge, I can get a data set sampled every 0.125 degrees. I can use the Z pulse as an indicator of engine cycles. This will save me a lot of time in post-processing and will eliminate many other channels I previously had to use to get the same information.
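For anyone checking the arithmetic, the 0.125-degree spacing follows directly from the numbers above (a quick Python sketch, nothing hardware-specific):

# Angular spacing implied by the encoder figures above (720 PPR on A and B,
# counting both rising and falling edges of each).
pulses_per_rev = 720
edges_per_rev = 4 * pulses_per_rev      # 2 edges per A pulse + 2 per B pulse = 2880
deg_per_edge = 360.0 / edges_per_rev
print(deg_per_edge)                     # 0.125 degrees between consecutive edges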

 

I am using two cDAQ-9179 chassis; attached is a picture of the modules in each chassis. I am creating two separate tasks in LabVIEW (one for all analog channels and one for a digital input line for the encoder's Z pulse). I am using an NI-9401 with my encoder: A & B are connected to lines 0 and 1, and Z is connected to line 2. I am trying to use the change detection instance of DAQmx Timing.vi. I have wired lines 1 and 2 to the rising- and falling-edge physical channel inputs, and set it to continuous samples with 1 sample per channel. I basically copied an example I found on knowledge.ni for synchronizing multiple modules across different cDAQ chassis. I am also using an NI-9469 on each chassis to be able to share the sample clock.
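For reference outside LabVIEW, the attempted configuration looks roughly like this in the NI-DAQmx Python API (nidaqmx); the module names are placeholders, and, as the accepted answer below explains, pointing change-detection timing at the analog (Delta-Sigma) task is what raises the -200452 error quoted below:

# Sketch only: placeholder device names; plain AI channels stand in for the real
# accelerometer channels.  Change detection is a digital-input timing type,
# so configuring it on the AI task fails with error -200452.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task("Accelerometers") as ai_task:
    ai_task.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:3")
    ai_task.timing.cfg_change_detection_timing(
        rising_edge_chan="cDAQ1Mod5/port0/line0:1",   # A & B on the NI-9401 (placeholder name)
        falling_edge_chan="cDAQ1Mod5/port0/line0:1",
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=1,
    )   # raises nidaqmx.DaqError here or at task verification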

 

When I run the VI I get the following error:

 

Error -200452 occurred at DAQmx Timing (Change Detection).vi:4050001

Possible reason(s):

Specified property is not supported by the device or is not applicable to the task.

Property: ChangeDetect.DI.RisingEdgePhysicalChans

Task Name: Accelerometers

 

I am sure this is not the only issue in my current code, but it is probably the first one my VI runs into.

Message 1 of 17
Solution
Accepted by topic author osamafarqaleet

Short answer: you're not going to be able to do that directly in hardware.  You'll have to capture and post-process to get *approximately* what you're after.

 

The basic conflict is that you're using a lot of Delta-Sigma analog modules which must internally generate their own sample clock.  There's no way I know of to make them use a change detection event as a sample clock signal.

 

But I think there's a fairly easy workaround that'll give you similar data.

 

It sounds like ultimately, you want a relationship between crank angle and all your analog sensor samples.  If change detection worked like you hoped, you'd be capturing analog data right on the quadrature encoder edges, 1 sample per quad state, 8*360 samples per rev.  That'd be nice because the hardware would make sure that the samples were captured at equally spaced angular increments.

 

Another way to approach it is to export the AI task's sample clock and use it as a sample clock to capture encoder position.  You'll again have correlated angle and sensor data.  And it will be equally spaced, but now it'll be in time increments rather than angle.

   If you imagine graphing the data from each approach, sensor value vs. crank angle, the curves would be essentially identical.  They would trace the same path -- the only difference is that each was constructed from a different subset of all the points they pass through.

   So whichever way you *capture* the data, you can always post-process (i.e., interpolate) to produce a good estimate of what the other method would have measured.
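For illustration, a minimal sketch of that workaround using the NI-DAQmx Python API (nidaqmx) rather than LabVIEW, since this is text; the device, counter, and terminal names are placeholders to confirm in NI MAX:

# Illustration only: clock a counter-based angular encoder task from the AI
# task's sample clock so angle and sensor samples are time-correlated.
import nidaqmx
from nidaqmx.constants import AcquisitionType, AngleUnits, EncoderType

RATE = 10_000  # Hz; the Delta-Sigma module will coerce this to a supported rate

ai_task = nidaqmx.Task("AI")
ai_task.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:3")       # stands in for the accel channels
ai_task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)

enc_task = nidaqmx.Task("Encoder")
enc_task.ci_channels.add_ci_ang_encoder_chan(
    "cDAQ1/_ctr0",                        # a chassis counter (placeholder name)
    decoding_type=EncoderType.X_4,        # quadrature decode done in hardware
    pulses_per_rev=720,
    units=AngleUnits.DEGREES,
)
# In real code, also point the channel's A/B(/Z) input terminals at the NI-9401
# lines (ci_encoder_a_input_term, etc.) if the defaults don't match your wiring.
enc_task.timing.cfg_samp_clk_timing(
    RATE,
    source="/cDAQ1/ai/SampleClock",       # the AI task's exported sample clock terminal
    sample_mode=AcquisitionType.CONTINUOUS,
)

enc_task.start()   # start the clocked task first...
ai_task.start()    # ...so no edges of the shared clock are missed

# ... DAQmx Read from both tasks here ...
ai_task.stop(); enc_task.stop()
ai_task.close(); enc_task.close()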

 

You've also got some pending issues with your data acq and FFT loops:

1. You should definitely read more than 1 sample per iteration.  A typical starting point would be to read 0.1 seconds' worth of samples per iteration (which will in turn give you 10 Hz frequency resolution in your FFT analyses); a small helper sketch follows this list.

2. You should not have the wait timer in your FFT loop.  Once the data acq loop produces at a reasonable rate like 10 enqueues per second, the dequeue function will naturally start giving you data at that same 10 dequeues per second rate (on average).
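A tiny helper to make the chunk-size arithmetic in point 1 concrete (hypothetical function name, plain Python):

def samples_per_read(sample_rate_hz: float, seconds_per_read: float = 0.1) -> int:
    # 0.1 s worth of samples per DAQmx Read -> ~10 reads/s and, since the FFT
    # window is then 0.1 s long, roughly 1 / 0.1 s = 10 Hz frequency resolution.
    return max(1, int(round(sample_rate_hz * seconds_per_read)))

# e.g. samples_per_read(10_000) -> 1000 samples per channel per iteration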

 

I'm not at all familiar with the 9469 and what you'll need to do to sync Delta-Sigma modules across the two chassis.  Expect to need to do a fair bit of signal routing and special timing config to get it all working.  The basic idea is that all the Delta Sigma devices need to be using a common high freq signal as their timebase, which they internally divide down for use as a sample clock.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 17

If you are able to switch to a non-delta-sigma converter, you can use the output of your encoder as the sample clock for your analog acquisition:

 

https://forums.ni.com/t5/Example-Code/Using-Counter-Output-as-Sample-Clock-to-Measure-Digital-Events...

 

Message 3 of 17

Thank you so much for the answer, Kevin.

 

I did not know that you cannot use change detection with Delta-Sigma analog modules.

 

It seems like I will have to stick with post-processing for a while, since I am not sure there are C Series vibration modules out there that are not Delta-Sigma. Even if there were, at this point I don't think my budget allows for the purchase.

 

I think I will stick with recording all the encoder pulses raw and doing the math in post-processing. I also have an NI-9361 that I initially thought of using, but I think 50 kHz (the maximum sample rate this hardware configuration would allow) would not be fast enough to reliably calculate angular motion using the X4 decoding method. The engine runs at 2700 RPM (max speed), and my encoder will end up producing 32.4k pulses per second on each of A & B. 50 kHz seems too low (I prefer at least 4 times higher) to have confidence in the angular position data.

 

Normally I read 10% of the sampling rate (0.1 seconds' worth) per iteration, but I was not sure whether I should read only one sample at a time if I ended up taking one sample per edge detection. I did not know that the dequeue would automatically pace the loop as required.

Message 4 of 17

Thanks for the answer, Bert. I am not sure I can switch to non-delta-sigma modules this go-around, but I will look into the modules I typically end up using and see whether non-delta-sigma versions exist.

Message 5 of 17
...I think 50 kHz (the maximum sample rate this hardware configuration would allow) would not be fast enough to reliably calculate angular motion using the X4 decoding method. The engine runs at 2700 RPM (max speed), and my encoder will end up producing 32.4k pulses per second on each of A & B. 50 kHz seems too low (I prefer at least 4 times higher) to have confidence in the angular position data.

No, with a counter-based encoder position task, that kind of thinking about oversampling isn't necessary!   (Although it would be if you were capturing the encoder signals as 2 digital input lines, and then post-processing to work through the quadrature, etc.)

 

Make an encoder position task using a chassis counter.  The position will be tracked in hardware, and then you can sample it at a much slower rate if desired.  Using your figures, A & B each pulse at roughly 32 kHz, so you'll have a position count that increments at 4x that speed, about 128 kHz.  Every second, the count value will increase by about 128k.  If you had a sample rate of 1000 Hz, each successive sample would show a count increase of roughly 128 counts.  And so on.
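To put exact numbers on that (the paragraph above rounds 32.4 kHz down to 32 kHz), plain arithmetic only:

# Counts per sample at the figures quoted in this thread.
pulse_rate_hz = 2700 / 60 * 720          # 32,400 A (or B) pulses/s at 2700 RPM, 720 PPR
count_rate_hz = 4 * pulse_rate_hz        # X4 decoding: count changes on every A and B edge
sample_rate_hz = 1_000
counts_per_sample = count_rate_hz / sample_rate_hz     # ~129.6 counts between samples
degrees_per_sample = counts_per_sample * 0.125         # ~16.2 degrees of crank rotation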

 

Basically, the counter task handles all the quad decode and counting (without any misses) regardless of how often you sample that count value.  You can sample faster than your encoder rate, or way slower than your encoder rate -- either way the count value at the sample instant will be correct.

 

 

-Kevin P

Message 6 of 17

I never thought about it that way. I can simply make an angular position task using the encoder and the NI-9361 and record it at my desired sampling rate. I may not have a reading at every 0.125-degree interval, but for every reading I do have, I will have a corresponding angle with a maximum potential error of +/-0.125 degrees (which is totally acceptable in my current application).

 

I think I know how to sync two separate tasks on the same chassis/time clock, so I should be good there. I just need to make sure the timebase source and timebase rate are the same (i.e., they are both using the same clock) on both tasks, correct?

Message 7 of 17

I think I know how to sync two separate tasks on the same chassis/time clock, so I should be good there. I just need to make sure the timebase source and timebase rate are the same (i.e., they are both using the same clock) on both tasks, correct?

You've got the right idea, but the use of Delta Sigma devices makes things a little trickier.  I'm not sure you'll be able to export the internal timebase, but you should be able to export the resulting sample clock it derives internally.  

   However, you'll also need to deal with the sample delay inherent to a Delta Sigma device.  There's info in the spec sheet, but it's generally dominated by a digital filtering delay measured in terms of # samples, independent of sample rate.

   You may have to experiment a bit to figure out the right way to shift your data to compensate properly.

 

The instant when a Delta-Sigma sample is fully captured, the value it captures represents what the signal *was* a very short time ago at the input pin.  The instant when your encoder sample is captured, the value it captures represents what the count *is* right at that instant.

   I think but am not sure that the Delta Sigma device will export its first sample clock pulse at the *end* of this filter & conversion process, at the instant when its own capture is complete.  If I'm correct, your first AI sample and your first encoder sample would represent different points in time.

 

A simple way to approximately compensate will be to ignore the first N AI samples, where N is figured out from the device's spec sheet.
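A rough post-processing sketch of that compensation in NumPy; the delay value is an assumption you would take from the module's spec sheet or determine experimentally:

import numpy as np

def align_for_filter_delay(ai_data, enc_data, filter_delay_samples):
    # Drop the first N AI samples (N = the Delta-Sigma filter delay, from the spec
    # sheet or measured) and trim both arrays to a common length, so ai[i] and
    # enc[i] refer to approximately the same instant.  Approximate compensation only.
    ai = np.asarray(ai_data)[..., filter_delay_samples:]
    enc = np.asarray(enc_data)
    n = min(ai.shape[-1], enc.shape[-1])
    return ai[..., :n], enc[..., :n]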

 

 

-Kevin P

 

Message 8 of 17

Kevin,

 

I was going to ignore all the samples up to the first reset of the encoder position measurement, because every time the code restarts the encoder starts counting from 0 degrees, and there is no guarantee that the engine stops at TDC every time. So my plan was to skip the first partial cycle (and maybe even the next full cycle) and the last partial cycle of every recording I take, for my analysis and post-processing.

 

I do, however, have one problem. My sampling rate is set at 10 kHz, and I have tried reading 1000 and 2000 samples at a time, but after running for a bit the code gives me an error that the application cannot keep up with the hardware. Previously I never had this issue when reading 10% of the sampling rate at a time; I could leave my code running for days and it would be just fine. I am wondering whether it has to do with the large number of channels (56) at this sampling rate, whether my computer is just slow, or whether the code in my loop is taking too much time. I am not sure how I can trim the loop any further, though, since it is already doing the bare minimum I need it to do.

Message 9 of 17

Ignoring partial rotations at the beginning and end doesn't address the likely time-shift issue between Encoder data and Delta-Sigma based AI data.   You should parallel wire your Z-index pulse to an AI input, start your capture, then slowly rotate a rev or two.   Then go back and examine the data.  At what encoder sample # does the angle reset back to 0?   At what AI sample # does the pulse show up?   I think you'll find that those sample #'s are different, and that's the time shift issue I'm talking about.  You'd want to make them line up at the same sample # because both are representing the same event.

 

Also, at the moment you won't get consistent time-shift results for a 2nd reason -- the tasks aren't sync'ed up.  I'd recommend that you share the AI task's sample clock over to the Encoder task and then make sure you start the Encoder task *before* starting the AI task.
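One way to quantify that shift once the tasks share a clock is sketched below (NumPy; the 2.5 V threshold and the use of the largest negative angle jump as the reset point are assumptions):

import numpy as np

def estimate_shift(enc_angle, z_on_ai, z_threshold=2.5):
    # Sample index where the encoder angle wraps back toward 0 (largest negative jump),
    # versus the sample index where the Z pulse (wired in parallel to an AI channel)
    # first crosses the threshold.  The difference is the time shift, in samples.
    enc_angle = np.asarray(enc_angle)
    z_on_ai = np.asarray(z_on_ai)
    enc_reset_idx = int(np.argmin(np.diff(enc_angle)))
    ai_pulse_idx = int(np.argmax(z_on_ai > z_threshold))
    return ai_pulse_idx - enc_reset_idx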

 

Now then, on to the buffer overflow error.  You've got 50+ channels at 10 kHz and are doing FFT calcs in real time on the majority of that data.  That isn't trivial, so let's see what might be worth changing.

 

I generally view this in two parts.  The first thing is to make the DAQ loop as tight as possible, while using a producer / consumer pattern to defer further processing to an independent loop.  Sometimes this is the only thing needed.

   The second thing is to make sure that the overall amount of processing going on isn't choking the CPU.  Deferring to parallel loops won't cure this kind of problem.

 

Your DAQ loop is mostly pretty tight.  While I don't really think the TDMS writing function is a big part of your problem, I'd still move it into the consumer loop where you also do FFT's.  I would also move the setting of waveform properties inside the case structure (no need to do it when not logging, right?) over into that other loop.

   I'm a little leery of the "Insert into Array".  I'd probably make my queue datatype a typedef'ed cluster with an array of waveforms (for AI) and an array of DBLs (for the encoder).  Then you'd just wire straight from the DAQmx Read functions into a "Bundle by Name", and from there directly to Enqueue.  The point being: no wire branches, no data copies, no need to reallocate memory to expand an array of waveforms.

 

Finally, and this is most likely the biggest thing, I wouldn't do FFT's on all that data every iteration.  Those FFT's are just for the user display -- they're a *convenience* feature, not a necessity.

   I'd try something like updating only once a second or so.  That'll reduce quite a bit of the CPU load.   I'd also consider decimating the data down as another CPU-reducing strategy.
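Tying the loop advice together, here is an illustrative producer/consumer sketch in which Python threads and queue.Queue stand in for the two LabVIEW loops and the queue, purely to show the structure: a tight acquisition loop, logging every chunk in the consumer, and spectra refreshed only about once per second.  The rates, channel count, and random data are placeholders.

import queue
import threading
import time
import numpy as np

RATE = 10_000            # Hz (placeholder)
CHUNK = RATE // 10       # 0.1 s of samples per read
N_CHANNELS = 56

data_q = queue.Queue()
stop = threading.Event()

def producer():
    # Stand-in for the DAQ loop: read CHUNK samples/channel, enqueue, nothing else.
    while not stop.is_set():
        chunk = np.random.randn(N_CHANNELS, CHUNK)   # placeholder for the DAQmx Read
        data_q.put(chunk)
        time.sleep(CHUNK / RATE)                     # a real DAQmx Read would block instead

def consumer():
    last_fft = 0.0
    while not stop.is_set():
        try:
            chunk = data_q.get(timeout=1.0)          # dequeue paces this loop; no wait timer
        except queue.Empty:
            continue
        # ... log the chunk here (TDMS write in the real code), every iteration ...
        if time.monotonic() - last_fft >= 1.0:       # refresh spectra only ~once per second
            _spectra = np.abs(np.fft.rfft(chunk, axis=-1))
            last_fft = time.monotonic()

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
time.sleep(3)
stop.set()
for t in threads:
    t.join()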

 

 

-Kevin P

 

 

Message 10 of 17