04-15-2022 07:30 AM
I believe I have found some fundamental issues with the example provided at:
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000kEpeSAE&l=en-US
"Velocity Interval" can be configured to either provide low-speed resolution or fast update rates, but not both. 1[count]/(velocity interval [us]) defines the minimum speed resolution.
Scenario 1:
Given:
Velocity Interval [us] = 20 [usec]
Update rate = 1[sample]/20[usec]
Linear Encoder Scale = 100 [counts/mm]
Speed = 100 [counts] / 1E6 [usec] = 100 [counts/sec] = 1 [mm/sec]
Therefore:
Resolution = 1 [count] / 20 [usec] = 50,000 [counts/sec] = 500 [mm/sec]
horrible speed "Resolution" (Maybe I shouldn't even call this "resolution" because it's more like any speed above zero and below the Resolution gets rounded up)
good Update Rate
Thoughts? How do we get the example code updated, or better explained?
The preferred method would be 1 [count] / (x [usec]), where x is the measured time between changes in the count value.
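As a rough sketch of that period-based ("reciprocal") approach: instead of counting edges over a fixed interval, time-stamp each count change and divide one count by the elapsed time. The names and structure below are purely illustrative (not NI code); on the FPGA this would be a free-running tick counter latched on every encoder count change:

```python
# Assumed encoder scale from the original example
SCALE_COUNTS_PER_MM = 100  # counts/mm

def speed_from_period(delta_counts: int, delta_t_us: float) -> float:
    """Speed in mm/s from counts elapsed over the measured period."""
    counts_per_s = delta_counts / (delta_t_us * 1e-6)
    return counts_per_s / SCALE_COUNTS_PER_MM

# One count observed 10,000 us after the previous one -> 1 mm/s,
# a speed the fixed 20 us interval method would report as 0.
print(speed_from_period(1, 10_000))  # 1.0
```

The trade-off moves to the other end: at high speeds the period between counts becomes very short, so timer resolution limits accuracy instead.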
07-15-2022 05:50 AM
Hi breiter56,
Thanks for your post; it has helped me in tuning my application.
The behavior you have pointed out is a characteristic of the derivative approach to speed calculation, where selecting a time base (Velocity Interval) is a trade-off between resolution and response speed.
A shorter time-base yields more up-to-date data but with a lower resolution.
A longer time-base produces higher resolution, but the data is updated more slowly and may lag behind or miss speed changes.
Have you implemented a different algorithm on the FPGA for encoder speed evaluation that works better at low speed?
Regards