


Adding an encoder to an existing application

Solved!

Yes, I tried it that way this morning, but it stops immediately with an error like:

"Possible reason(s):
The application is not able to keep up with the hardware acquisition.
Increasing the buffer size, reading the data more frequently, or specifying a fixed number of samples to read instead of reading all available samples might correct the problem."

I'm certainly wrong somewhere else...

 

Regards,

 

Julien

Message 11 of 20

Your error is the classic "buffer overflow" error.  It took me a minute to puzzle out why, but I'm pretty sure I can explain it now.  But first let's prevent it while still allowing you to read exactly 3600 samples per loop iteration.

 

The solution is to explicitly set up bigger buffers for all your tasks.  You can do this by wiring the 'samples per channel' input on your calls to DAQmx Timing.  And keep in mind that memory is cheap and plentiful, so go ahead and go big.  I regularly size DAQ buffers for somewhere in the range of 1-10 seconds worth of data.

    Your nominal rates appear to be in the 5-10 kHz range for both Ctr A and AI, and in the 50-100 kHz range for Ctr B.  So you could set up 5-second buffers for the top end frequencies using 50k and 500k samples/channel respectively.
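That sizing rule is just rate times seconds. A minimal Python sketch of the arithmetic (the rates are assumptions taken from the thread, not measured values):

```python
# Size a DAQ buffer to hold several seconds of data at the task's top rate.
# The rates below are assumed from the discussion, not queried from hardware.

def buffer_size(max_rate_hz, seconds=5):
    """Samples per channel needed to hold `seconds` of data at `max_rate_hz`."""
    return int(max_rate_hz * seconds)

ctr_a_and_ai = buffer_size(10_000)   # top of the assumed 5-10 kHz range
ctr_b        = buffer_size(100_000)  # top of the assumed 50-100 kHz range

print(ctr_a_and_ai, ctr_b)  # 50000 500000
```

In DAQmx those values would be wired to the 'samples per channel' input of DAQmx Timing for each task.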

 

Now for the explanation.  I did a similar configuration for a simulated X-series device, then queried the buffer size which I found to be 10k samples/channel minimum.  (If I wired a value > 10k to 'samples per channel', the queried buffer size would match.)

 

So when your loop makes its first call to Ctr A and AI to read 3600 samples, the functions are stuck waiting for something like 0.5 seconds for that many samples to arrive.  During that ~0.5 seconds, Ctr B is expected to be buffering up 25-50k samples due to the much higher Ctr B frequency.  That's much more than the default 10k buffer size, resulting in your buffer overflow error.  Setting the buffer size to 500k samples/channel will prevent it.

    Ctr A and AI may not strictly *need* bigger buffers than their defaults, but again, memory is plentiful and cheap so go ahead with the ~5 seconds worth I suggested.
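The timing argument above can be checked with quick arithmetic. A sketch, with illustrative rates chosen to match the thread's assumed ranges:

```python
# How many Ctr B samples pile up while a Read blocks waiting for 3600
# samples from a slower task? All rates here are illustrative assumptions.

read_count   = 3600      # fixed samples per Read on Ctr A / AI
slow_rate_hz = 7_200     # assumed ~5-10 kHz task -> ~0.5 s wait
fast_rate_hz = 72_000    # assumed ~50-100 kHz Ctr B task

wait_s  = read_count / slow_rate_hz   # time blocked inside the Read
backlog = int(wait_s * fast_rate_hz)  # Ctr B samples buffered meanwhile

print(wait_s, backlog)  # 0.5 36000 -- well past the ~10k default buffer
```

At these assumed rates the backlog alone exceeds the 10k default buffer before the first Ctr B read even happens, which is exactly the overflow described above.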

 

 

-Kevin P

 

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 12 of 20

Yes, that's a big step forward!

When running, "taille b" should be around 4096x10 (10 revs), while "taille a" should be 3600. But the value oscillates between 35000 and 50000.

That doesn't match the physics; the shafts between the encoders are tight.

Any ideas?

I saved the default values.

Many thanks

Message 13 of 20
Solution
Accepted by topic author CHAJ

That's more fluctuation than I'd have expected, but is it actually a problem for you?  The difference from the expected 4096*10 doesn't accumulate, does it?  If it only affects the live visual display without affecting the long-term cumulative data, it may not matter.

 

Part of the reason I chose to read "all available" for Ctr B is that I don't know the gear ratio between crankshaft and camshaft in your setup.  Unless it happens to be an integer, it's potentially pretty important to keep reading all available samples, because there won't be an exactly correct integer number to specify.  Whether you round up or down, you'd keep reading either too many or not enough samples.  Over time this would accumulate and eventually lead to a buffer overflow from at least one of the tasks.
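A tiny simulation shows why a fixed (rounded) read count leaves a growing residue when the true ratio isn't an integer. All numbers here are made up for illustration:

```python
# If Ctr B actually produces a non-integer number of samples per loop
# iteration, any fixed integer read count leaves a residue in the buffer
# that grows without bound. Numbers below are purely illustrative.

true_per_loop = 10_240.7   # samples Ctr B really produces per iteration
fixed_read    = 10_240     # nearest integer we could specify

backlog = 0.0
for _ in range(1000):      # 1000 loop iterations
    backlog += true_per_loop - fixed_read

print(backlog)  # ~700 samples stranded in the buffer after 1000 loops
```

Reading "all available" each iteration drains whatever is there, so no such residue can build up.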

 

The producer loop is pretty lean, so it's hard to identify where such large fluctuations might be coming from.  I guess one thing I might try is to put the reads in series instead of allowing them to operate partly in parallel.  Read the same fixed 3600 samples from both Ctr A and AI (in either order), and *then* read "all available" from Ctr B.  See if that helps with the fluctuation.

    An easy way to put the reads in series is to link them together using the error in/outs to enforce sequence via dataflow.

 

 

-Kevin P

Message 14 of 20
Solution
Accepted by topic author CHAJ

The code is attached.

 

Also attached: acquisition data between the two encoders, as a sample of the results.

Many thanks

Message 15 of 20
Solution
Accepted by topic author CHAJ

What exactly is the problem or question?  The spreadsheet data looks like a pretty successful run to me.  Whereas the data showing on the front panel of the latest "ver4" vi still use the small default buffer sizes of 1000, leading to the same buffer overflow error and data for Ctr B.

 

I assume the spreadsheet data was gathered on a run where you *did* set bigger buffer sizes as I suggested.  And the results confirm that the two sets of counter periods add up to the same total time, showing that the iteration-to-iteration fluctuations don't have any cumulative effect.

 

Just because I was kinda intrigued, I took the data over to the frequency domain (after interpolating to resample at a constant sample rate).  There's substantial similarity but also a few interesting differences for you to look into.  All in all though, further confirmation of proper data collection.

 

 

-Kevin P

 

Kevin_Price_1-1683425819144.png

 

 

Message 16 of 20

Dear Kevin,

 

I gave you "real" data to show you that it works very well.

 

Is your FFT of the two encoders something you can share?

Many thanks again.

 

Julien

Message 17 of 20
Solution
Accepted by topic author CHAJ

I went back and added some comments so you could follow the process.

 

Initially, I took your spreadsheet data, extracted just the time and RPM data for each of the two encoders, and saved it as default data in a couple 2D arrays in LabVIEW.  (Note that I transposed the arrays along the way so that time is row 0 and RPM is row 1.  In retrospect, the transpose was unnecessary.)

 

The attached vi takes that default data from the 2D array controls and interpolates into a corresponding RPM time series, just re-sampled at a constant sample rate in order to be turned into a LabVIEW waveform data type.  In each case, the sample interval is chosen to be the average interval from the original raw data.
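In LabVIEW this is done with the interpolation VIs; as a rough stdlib-Python sketch of the same idea (the function name and sample data are made up, and dt is the average original interval, as in the vi):

```python
# Resample irregularly timed (t, rpm) pairs onto a constant-dt grid by
# linear interpolation, so the result can be treated like a waveform.
# The input data below is synthetic, standing in for the spreadsheet data.

def resample_constant_dt(t, y):
    """Linearly interpolate (t, y) onto a grid with dt = average interval."""
    n = len(t)
    dt = (t[-1] - t[0]) / (n - 1)           # average original interval
    out_t = [t[0] + i * dt for i in range(n)]
    out_y = []
    j = 0
    for ti in out_t:
        while j < n - 2 and t[j + 1] < ti:  # advance to bracketing segment
            j += 1
        frac = (ti - t[j]) / (t[j + 1] - t[j])
        out_y.append(y[j] + frac * (y[j + 1] - y[j]))
    return out_t, out_y

t = [0.0, 0.9, 2.1, 3.0]                    # irregular encoder timestamps
rpm = [1000.0, 1010.0, 998.0, 1005.0]
grid_t, grid_rpm = resample_constant_dt(t, rpm)
print(grid_t)   # [0.0, 1.0, 2.0, 3.0]
```

Once the samples sit on a constant-dt grid, a standard FFT applies directly.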

 

 

-Kevin P

Message 18 of 20

Hi Everybody,

 

Attached are the code and MAX configuration.

 

I can't understand why the AI data are "empty". Everything else is working.

 

The problem could be in "Process Data -- Consumer Loop" point 16, but this is not clear to me.

 

Any suggestion or idea?

 

Thanks so much,

Message 19 of 20

Took me a couple minutes, but it looks like I got a little sloppy about handling the "sentinel" empty array data for AI.  The final empty array ends up *displacing* the previously accumulated data.   I see from the comment that an initial earlier method using "replace array subset" must have failed on the first iteration due to the initially-empty array.

 

Instead, we'll need to use up a little more block diagram space being more careful about handling that final sentinel value.  The loop that appends the new waveforms should be conditional on making sure the waveform array is not empty.
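The same guard, translated into a stdlib-Python sketch of a consumer loop (queue contents and names are illustrative; as in the VI, the sentinel is an empty array, and the fix is to stop without ever appending it):

```python
# Consumer loop that accumulates chunks from a queue, where an EMPTY
# chunk is the producer's shutdown sentinel. The bug was letting the
# final empty array into the accumulation step; the guard skips it.
from queue import Queue

q = Queue()
for chunk in ([1.0, 2.0], [3.0], []):  # [] is the shutdown sentinel
    q.put(chunk)

accumulated = []
while True:
    chunk = q.get()
    if not chunk:            # sentinel: stop WITHOUT touching the data
        break
    accumulated.extend(chunk)

print(accumulated)  # [1.0, 2.0, 3.0]
```

In the VI, that corresponds to making the waveform-append loop conditional on the incoming array being non-empty.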

 

Kevin_Price_0-1712145780355.png          

Kevin_Price_1-1712145805376.png

 

That oughta fix it.

 

 

-Kevin P

 

Message 20 of 20