02-15-2024 08:15 AM
For context, I am currently building a laser scanning microscope to obtain fluorescence images of biological samples (see attached file, "schematic"). The relevant equipment consists of two galvo mirrors (Thorlabs GVS002) that move a laser across a sample and a photomultiplier tube (Thorlabs PMT1001) that captures fluorescence photons emitted by said sample. The mirrors are controlled by the voltage output of an NI USB-6001 DAQ, which also reads the voltage input from the PMT.
I have written a Python script that builds a heatmap of the intensity at each coordinate of the sample which, put together, gives us a somewhat decent microscopic image of our sample (see attached image '107s'). However, the iteration time is too long (nearly 100 s for higher-resolution images), which does not suit our application. The slowdown seems to come from the analog input reading function, since previous versions of the code that only made the laser move around had iteration times of 0.49 s.
After some research, I came across suggestions to adjust the sample rate of the PMT analog input. After experimenting with it for a while, however, I noticed that the higher I set the sampling rate, the more distorted my image becomes (see attached images '27s', '40s' and '80s'). I also cannot set sample rates higher than 2000, probably due to hardware specifications; beyond that I get the following error:

nidaqmx.errors.DaqReadError: The application is not able to keep up with the hardware acquisition.
Increasing the buffer size, reading the data more frequently, or specifying a fixed number of samples to read instead of reading all available samples might correct the problem.
Property: DAQmx_Read_RelativeTo
Corresponding Value: DAQmx_Val_CurrReadPos
Property: DAQmx_Read_Offset
Corresponding Value: 0
Task Name: _unnamedTask<2>
Status Code: -200279
Has anyone ever come across this, or noticed anything in my code or hardware compatibility that might be causing this problem? I highly appreciate any help. The code is attached below, with relevant comments throughout.
For information's sake, the USB-6001 DAQ card has a maximum sample rate of 50 kS/s for analog input and a maximum update rate of 5 kS/s for analog output. The galvo mirrors suggest a minimum update rate of 100 kS/s, but they seemed to work fine with the code that only made them scan the sample (again, in those cases, where the code did not read any input values, the iteration time was roughly 0.49 s).
02-15-2024 08:59 AM
I don't program in Python, but your main speed problem is that you're interacting with your tasks 1 sample at a time. To achieve rates like the 5 kHz AO max, you need to precalculate and write an entire buffer full of AO values all at once, then let the driver and hardware do the dirty work to deliver and generate the values.
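As a rough sketch of that buffered approach in nidaqmx-python (the device name "Dev1", the ±1 V scan range, and the helper names are placeholders, not from the original post -- adjust to your setup):

```python
import numpy as np

def raster_waveforms(nx, ny, v_amp=1.0):
    """Precompute one full frame of X/Y galvo voltages as a 2 x (nx*ny) array."""
    x = np.linspace(-v_amp, v_amp, nx)   # fast axis: one sweep per line
    y = np.linspace(-v_amp, v_amp, ny)   # slow axis: one step per line
    return np.vstack([np.tile(x, ny), np.repeat(y, nx)])

def write_frame(waveform, rate=5000.0, device="Dev1"):
    """Write the whole buffer in one call and let the hardware clock it out."""
    import nidaqmx
    from nidaqmx.constants import AcquisitionType
    n = waveform.shape[1]
    with nidaqmx.Task() as ao:
        ao.ao_channels.add_ao_voltage_chan(f"{device}/ao0:1")
        ao.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=n)
        ao.write(waveform, auto_start=True)   # one buffered write, not per-sample
        ao.wait_until_done(timeout=n / rate + 5.0)
```

The key difference from a per-sample loop is that `cfg_samp_clk_timing` makes the generation hardware-timed, so Python is no longer in the loop for every point.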
Similarly, you could either do a one-time AI read at the end of the task to collect your entire buffer of data at once, or you could accumulate it in a loop in reasonably sized chunks. A good starting point for loops and chunks is ~0.1 seconds worth of data per read.
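A minimal sketch of that chunked-read loop (again, "Dev1" and the function names are my own placeholders, and the ~0.1 s chunk size is just the starting point suggested above):

```python
import numpy as np

def chunk_samples(rate, seconds=0.1):
    """Samples per read for ~0.1 s chunks (at least 1)."""
    return max(1, int(rate * seconds))

def acquire(total_samples, rate=2000.0, device="Dev1"):
    """Read the PMT input in ~0.1 s chunks instead of one sample at a time."""
    import nidaqmx
    from nidaqmx.constants import AcquisitionType
    chunk = chunk_samples(rate)
    data = []
    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan(f"{device}/ai0")
        ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=total_samples)
        ai.start()
        remaining = total_samples
        while remaining > 0:
            n = min(chunk, remaining)
            data.extend(ai.read(number_of_samples_per_channel=n, timeout=10.0))
            remaining -= n
    return np.asarray(data)
```

Reading a fixed number of samples per call, rather than "all available", is also exactly what the -200279 error message recommends.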
Further, you'll want to do some config of your tasks that makes sure your AO and AI are synced. Do some searching here -- there have been a lot of threads about syncing AO and AI for galvo work. I've never done galvo work myself, but have been involved in several such threads over the years. Much of what you'll find may assume LabVIEW rather than Python, but the core ideas will be the same.
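One common sync pattern from those threads, sketched in nidaqmx-python: clock the AI task off the AO sample clock, so each PMT sample lines up with a mirror position. This assumes the device exports its AO sample clock terminal (worth checking against the USB-6001 specs); "Dev1" and the helper names are placeholders:

```python
def ao_clock_terminal(device="Dev1"):
    """Terminal name of the AO sample clock on a given device."""
    return f"/{device}/ao/SampleClock"

def make_synced_tasks(n_samples, rate=2000.0, device="Dev1"):
    """AI clocked off the AO sample clock; start AI first, AO last."""
    import nidaqmx
    from nidaqmx.constants import AcquisitionType
    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan(f"{device}/ao0:1")
    ao.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=n_samples)
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan(f"{device}/ai0")
    ai.timing.cfg_samp_clk_timing(rate, source=ao_clock_terminal(device),
                                  sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=n_samples)
    ai.start()      # AI sits armed, waiting for AO clock edges
    return ao, ai   # write the AO buffer, then ao.start() drives both
```

Starting AI before AO guarantees the reader doesn't miss the first clock edges.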
-Kevin P
02-16-2024 05:40 PM
I am not sure whether this method performs better, but you can try using a stream reader and writer. See nidaqmx-python/examples/pwr_hw_timed_stream.py
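For the AI side, a stream reader fills a preallocated NumPy buffer in place, which avoids the list-building overhead of `Task.read`. A minimal sketch (device name "Dev1" and the helper names are placeholders):

```python
import numpy as np

def alloc_buffer(n_samples):
    """Preallocated float64 buffer, as required by the stream reader."""
    return np.empty(n_samples, dtype=np.float64)

def read_stream(n_samples, rate=2000.0, device="Dev1"):
    """Hardware-timed AI read into a preallocated buffer."""
    import nidaqmx
    from nidaqmx.constants import AcquisitionType
    from nidaqmx.stream_readers import AnalogSingleChannelReader
    buf = alloc_buffer(n_samples)
    with nidaqmx.Task() as ai:
        ai.ai_channels.add_ai_voltage_chan(f"{device}/ai0")
        ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=n_samples)
        reader = AnalogSingleChannelReader(ai.in_stream)
        reader.read_many_sample(buf, number_of_samples_per_channel=n_samples,
                                timeout=n_samples / rate + 5.0)
    return buf
```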
If it still doesn't meet your expectations, you should consider ditching Python. Python is an interpreted language and is slower than C, C# or LabVIEW. See "C VS Python benchmarks", "Which programming language or compiler is faster"
02-19-2024 11:09 AM
I will try writing it in C or LabVIEW, then. Thank you.