Hi everyone,
It's my pleasure to write to this community. I'm new to LabVIEW as well as machine vision technology, so my question may be redundant and already discussed.
If that's the case, please share the most relevant link on the subject.
I have some questions to better understand the IMAQ software.
I'm using GigE line-scan cameras to grab video.
I'm acquiring as fast as 100,000 lines/second.
The camera is set to output 1024x4096 px frames.
Therefore, the actual frame rate is 100000/4096 ≈ 24.4 fps, almost 25 fps.
I'm using the low-level IMAQdxConfigureAcquisition.vi to grab this.
I'm currently practising with the ready-made Grab_and_Detect_Skipped_Buffers.vi example in Vision Acquisition (directory:
VisionAcquisition\NI_IMAQdx).
My understanding of the process is as follows.
With IMAQdx_create.vi we physically allocate space in RAM for the image data coming from the camera.
How big is this space in RAM? It depends on how many buffers I set; at least two are mandatory for grabbing, according to LabVIEW.
In my case, the images are grey-level with 8-bit depth, therefore 1024 x 4096 px x 1 byte/px = 4,194,304 bytes = 4 MB per frame.
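Just to double-check my arithmetic, here is the calculation written out (plain Python, nothing IMAQdx-specific; the values are the ones from my setup above):

```python
# Back-of-the-envelope check of the numbers above.
LINE_RATE = 100_000            # lines/second from the camera
FRAME_W, FRAME_H = 1024, 4096  # frame size in pixels (width x lines)
BYTES_PER_PX = 1               # grey-level, 8-bit depth

fps = LINE_RATE / FRAME_H                      # lines/s divided by lines/frame
frame_bytes = FRAME_W * FRAME_H * BYTES_PER_PX

print(f"frame rate: {fps:.1f} fps")                 # ~24.4 fps, almost 25
print(f"frame size: {frame_bytes / 2**20:.0f} MB")  # 4 MB per frame

# Total ring-buffer RAM for N buffers, assuming one buffer holds one frame
# (which is exactly what my Q1 below asks about):
for n_buffers in (2, 10, 50):
    print(f"{n_buffers} buffers -> {n_buffers * frame_bytes / 2**20:.0f} MB")
```

So if one buffer really does hold one whole frame, every extra buffer costs another 4 MB of RAM.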
Q1: is the whole 4 MB frame stored in one single buffer, or is more than one buffer needed to handle a single frame?
Q2: why do we need at least 2 buffers to grab?
Q3: if I am losing buffers, what am I actually losing?
Q4: how do I determine the right number of buffers to set in IMAQdxConfigureAcquisition.vi?
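To make the buffer questions above concrete, this is my current mental model of the ring buffer as a toy sketch (plain Python, not IMAQdx code; the buffer count, the producer/consumer rates, and the overwrite-on-wrap behaviour are all my assumptions):

```python
# Toy ring-buffer model: the "camera" writes one frame per tick into
# slot (frame % N_BUFFERS); the "application" reads one frame every
# PROCESS_TICKS ticks. When the writer laps the reader, the oldest
# unread frame is overwritten, i.e. that buffer is lost/skipped.
N_BUFFERS = 4       # hypothetical buffer count
N_FRAMES = 100      # frames produced by the camera
PROCESS_TICKS = 2   # app consumes only every 2nd tick (deliberately too slow)

next_to_read = 0    # index of the oldest frame the app has not processed yet
skipped = 0
for frame in range(N_FRAMES):
    if frame - next_to_read >= N_BUFFERS:
        # writer caught up with reader: oldest unread frame is overwritten
        skipped += 1
        next_to_read += 1
    if frame % PROCESS_TICKS == 0 and next_to_read <= frame:
        next_to_read += 1  # app processes one frame

print(f"skipped {skipped} of {N_FRAMES} frames")
```

In this model, if the app processes frames slower than the camera produces them, no finite number of buffers prevents skips forever; more buffers only absorb short bursts. Is that roughly what the Grab_and_Detect_Skipped_Buffers.vi example is demonstrating?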
Thank you in advance, and have a great day, everyone.
Alessandro