10-10-2007 11:02 AM
Shaun,
Apologies in advance for the second-guessing. It's fairly common around here for someone to post describing a fairly difficult / complex task they're trying to do, and then, after a couple rounds of questions, it turns out that a much simpler approach is actually quite acceptable. I just want to check this part out up front.
So, do you really definitely need to sample those analog signals at >100 kHz? Are they providing useful dynamic information at a high enough bandwidth to justify the need to handle >1 Meg/sec of samples?
That kind of data rate should surely be doable, but it's getting into the realm where you'll need to be more careful and deliberate about the details to make it robust. If all that extra information is truly valuable it'll be worth it, but if not -- maybe not.
A. A 14400 ppr encoder *probably* means you get 14400 quadrature cycles per rev on both channels A and B. In which case, you could resolve up to 57600 incremental positions per rev. Not everyone uses the terminology consistently though. It might mean a maximum resolving capability of 14400 positions, which would imply 3600-cycle/rev quadrature.
The 5 MHz rate limit for the change detect pulse sounds familiar, but I couldn't say with 100% certainty.
The general idea of setting up a change detection task is the only way I know of to produce a pulse at each quadrature position. A good plan if you need to gather position-correlated data at the maximum possible rate. Bear in mind that for most encoders, the increments within the 4 quadrature states will likely vary in size considerably more than, say, one rising A edge to the next rising A edge. So you may find that the issue of unequally-spaced samples is more trouble than it's worth.
B. Using either the A or B signal as an external scan clock is probably a simpler approach. It leaves open the option of using a counter to divide this down further by some integer if you wanted to reduce your sampling and data streaming rates.
As to the buffering, you should be able to configure your data acq buffer to be several Megasamples in size. Big enough to hold several seconds worth of data.
-Kevin P.
10-11-2007 12:07 PM
Thoughts in no particular order:
You may want to wire your encoder A,B into digital port 0 and use change detection anyway. Then you can easily change your sampling basis by choosing which DI line changes to detect. If rising and falling edges on both A and B, you get a sample at each quad position. If rising edges on A only, you get your latest plan -- lower position resolution, but higher rotation rates supported. This approach would also make it fairly easy to switch back and forth at run time, since all the underlying code would be the same.
Another subtle (potential) advantage: You can config your sampling to be sensitive to the trailing edge of the change-detect pulse. This makes sure that the encoder edge which caused the change detect pulse to fire will have also caused the counter register to increment/decrement before sampling occurs. Using an encoder edge directly as a sample clock for an encoder-measuring task risks a hardware race condition -- is the position count buffered before or after it increments (decrements)?
Timestamping: You've used up one counter to measure the encoder position, but you can use the other counter to timestamp the change detect pulses. There's more than one way, but I would configure to count edges of the 80 MHz internal timebase while using the change detect pulse as the sample clock. That gives you a direct measure of cumulative time. (Sorry, no help on syntax as I only use LabVIEW).
Note: the counter measurements may be an overall system bottleneck. They will demand near-continuous access to the PXI bus to push data from the board to system RAM because the board has only a 1-or-2 sample FIFO on board.
Buffer Management: Under LabVIEW, I would read from the tasks and send the data directly to a Queue. Some parallel code could then read from the queue and do whatever else (display? store to file?). I'm not sure what to suggest for CVI. The real key is to create a producer-consumer architecture with parallel code loops. Run your data acq with high priority as the producer, shuffling off the data from the NI-managed data acq buffer into a you-managed buffer. Run 1 or more consumers at lower priority that pull data from the you-managed buffer and display it, store it, whatever.
Under LabVIEW, parallel code is not only easy to create, it's almost hard to avoid. NI's "Queue" primitives work very well for this kind of producer-consumer pattern. Dunno what you've got available for CVI.
If you only want to service your data acq loop once per second, I'd recommend sizing the data acq buffers for at least 3 seconds of data. However, I'd be even more inclined to service the data acq loop 5-10x per second with maybe 1 second of buffer. This would reserve more memory for the you-managed buffer which, at lower priority, is more likely to need it. You could then choose to service the you-managed buffer at a slower rate (while extracting larger chunks at a time) if you want.
-Kevin P.