01-27-2026 05:18 PM
I have a resonant mirror which scans a line across the field of view of my microscope over 62.5 microseconds. I need to sample my detector over the field of view 250 times to create 250 pixels. This means ideally I would sample with my DAQ card every 0.250 microseconds.
I would like to take analog input data over 3 channels. However, I have a PCIe-6321 with an aggregate sampling rate of 250 kS/s, which works out to 83.3 kS/s across each of the 3 individual channels. This means the fastest I can take samples on a given channel is every 12 microseconds, significantly slower than I need. For every pass of the resonant mirror, I can only sample the field of view 5 times instead of the necessary 250 times, so I would have to scan each line of my field of view 50 times to acquire all 250 pixels.
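For reference, here is that arithmetic as a short Python sketch (all numbers are the ones stated above):

```python
import math

line_period_us = 62.5    # one resonant-mirror line scan
pixels_per_line = 250
channels = 3
aggregate_rate = 250e3   # PCIe-6321 multiplexed AI rate, S/s

ideal_period_us = line_period_us / pixels_per_line   # 0.25 us -> 4 MS/s per channel
per_channel_rate = aggregate_rate / channels         # ~83.3 kS/s
actual_period_us = 1e6 / per_channel_rate            # 12 us between samples
samples_per_line = math.floor(line_period_us / actual_period_us)  # 5 pixels per pass
passes_needed = math.ceil(pixels_per_line / samples_per_line)     # 50 passes per line
```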
Is there a way I can control the PCIe sampling to ensure that I'm sampling new pixels every subsequent line scan? My initial idea was to somehow delay subsequent read-ins by a set amount (0.250 microseconds) to shift the pixel I'm detecting over by one each time, and repeat this 50 times to scan each line fully. Is this possible?
Thanks for any help!
01-28-2026 12:56 AM
@ndaigle wrote:
Is there a way I can control the PCIe sampling to ensure that I'm sampling new pixels every subsequent line scan? My initial idea was to somehow delay subsequent read-ins by a set amount (0.250 microseconds) to shift the pixel I'm detecting over by one each time, and repeat this 50 times to scan each line fully. Is this possible?
No, it is not possible.
250 kS/s is a hardware limitation, set by the settling time of the ADC and multiplexer circuitry needed to capture each voltage correctly. Even if the driver allowed you to read faster than this, the returned data would be inaccurate.
You should get another DAQ that meets your 4 MS/s-per-channel sample rate requirement (12 MS/s aggregate for 3 multiplexed channels, or a simultaneous-sampling device at 4 MS/s per channel).
01-29-2026 12:39 PM
Thanks for the reply! I was reading about using pause triggers to delay sampling, and I thought that with the onboard 100 MHz timebase I could delay sampling by the 0.250 microseconds I need to shift the pixels over. You're saying something like this wouldn't work due to inaccuracy?
01-30-2026 08:08 AM
I think there are some complex and tedious things you could probably try in order to accomplish the shift. But my gut feeling is that "the juice isn't worth the squeeze."
Whatever dynamics you're trying to capture on a pixel-by-pixel basis will end up being distorted by the way you would be interlacing these scans. Those 250 pixels would represent responses that occurred over a much longer period of time than your single-line scan is meant to represent: instead of one 62.5-microsecond sweep, the assembled line would be built from 50 separate passes spanning at least 3.1 milliseconds. In the interest of nominally getting actual measurement data for each pixel location, you'd be smearing the data drastically across a larger time window.
In the end, I suspect you'd only be fooling yourself into thinking you can end up with "better data" that way.
Either accept the limitations and scan a lot slower, within the 250 kS/sec aggregate rate the card is capable of delivering, or bite the bullet and use a device capable of the much faster sample rate you're aiming for. (Like ZYOng already said.)
-Kevin P
P.S. For academic interest: to pursue the pixel-shift idea, one could manually reconfigure the AI Convert Clock before each individually shifted line scan. There's a delay-from-sample-clock property that may or may not be supported by your specific device; I simply don't know. If it isn't supported, you could generate your own finite pulse train with a counter and configure AI to use it as the convert clock; it would need to be triggered by the AI sample clock and also reconfigured for each individually shifted line scan. A rough sketch of the first approach is below.
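A minimal, untested sketch of that delay-property approach using the nidaqmx Python API, assuming the DelayFromSampClk attributes are accepted by the device. "Dev1", the ai0:2 channel list, and the PFI0 line-sync terminal are placeholders for whatever your hardware actually uses:

```python
# Sketch only: interlaced line acquisition by shifting the AI convert
# timing before each pass. Device name, channels, and trigger terminal
# are placeholders; whether a 6321 accepts these delays is unverified.
import nidaqmx
from nidaqmx.constants import AcquisitionType, DigitalWidthUnits, Edge

PIXEL_DT = 62.5e-6 / 250    # 0.25 us pixel period
SAMPLES_PER_LINE = 5        # what ~83.3 kS/s per channel allows per pass
N_PASSES = 50               # 5 pixels x 50 shifted passes = 250 pixels

lines = []
for k in range(N_PASSES):
    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0:2")  # 3 AI channels
        task.timing.cfg_samp_clk_timing(
            rate=250e3 / 3,                  # ~83.3 kS/s per channel
            sample_mode=AcquisitionType.FINITE,
            samps_per_chan=SAMPLES_PER_LINE,
        )
        # Start each finite pass off the mirror's line-sync pulse.
        task.triggers.start_trigger.cfg_dig_edge_start_trig(
            "/Dev1/PFI0", trigger_edge=Edge.RISING
        )
        # The pixel shift: delay conversions by k extra pixel periods.
        # DelayFromSampClk may be unsupported or range-limited here.
        task.timing.delay_from_samp_clk_delay_units = DigitalWidthUnits.SECONDS
        task.timing.delay_from_samp_clk_delay = (k + 1) * PIXEL_DT
        lines.append(task.read(number_of_samples_per_channel=SAMPLES_PER_LINE))
# De-interlacing lines[] back into pixel order is left out here.
```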
So, a lot of tedious work and pretty dubious potential to do much net good.