04-09-2017 07:48 PM
Hi,
I am working on developing a laser scanning algorithm. For this I have developed a VI that uses NI Vision Assistant (VA) and the Convert Pixel to Real World (CPR) VI. The steps I follow are:
1) Get a calibrated image as the output from VA. Wire this to the 'calibrated image in' of CPR.
2) Vision Assistant is also used to convert the reference image to a grayscale image, which is fed into the 'IMAQ ImageToArray' function. I then use a custom thresholding sub-VI to get the maximum-intensity pixels along each column. These are passed as an array of pixel coordinates to CPR, and I want to get the corresponding real-world points as output.
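For readers following the thread, the per-column peak detection performed by the custom thresholding sub-VI (thresh2_subvi.vi) can be sketched conceptually in Python with NumPy. The function name and the `threshold` parameter here are illustrative assumptions, not taken from the attached VI:

```python
import numpy as np

def max_intensity_per_column(gray, threshold=0):
    """Return (row, col) pixel coordinates of the brightest pixel in each
    column of a grayscale image, skipping columns whose peak intensity
    falls below `threshold` (i.e. columns with no laser signal)."""
    rows = np.argmax(gray, axis=0)           # row index of the peak in each column
    cols = np.arange(gray.shape[1])
    peaks = gray[rows, cols]                 # peak intensity per column
    keep = peaks >= threshold                # drop columns below the threshold
    return np.column_stack((rows[keep], cols[keep]))
```

The resulting (row, col) array plays the role of the pixel-coordinate array that is wired into the Convert Pixel to Real World VI.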
Please see the attached files.
My problem is that this VI is not working: the first Vision Assistant step always returns a zero-by-zero array, which is then passed to the Convert Pixel to Real World VI. Please provide your inputs on this fundamental issue.
04-10-2017 03:15 PM
Hi Vishwarath,
Where in your code are you seeing the 0 by 0 array being generated? The Vision Assistant only outputs an image reference, not an array. I see an array coming from the IMAQ ImageToArray function and a cluster of arrays coming from your thresh2_subvi.vi. Where are you seeing the incorrect output? It would also be helpful if you could post your actual code rather than just screenshots for troubleshooting.
04-10-2017 05:32 PM
Hello Matt,
Thanks for replying. Here is the location where I am getting the zero by zero array. The Convert Pixel to Real World step itself works: if I pass pixel values that correspond to the origin in my Vision Assistant calibration file, (0,0) is returned in my real-coordinates indicator.
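The origin check described here can be illustrated with a deliberately simplified linear calibration model. This is only a sketch: the actual IMAQ Convert Pixel to Real World VI uses the calibration learned in Vision Assistant, which can include perspective and distortion correction well beyond a shift-and-scale; the `origin_px` and `mm_per_px` parameters below are assumptions for illustration:

```python
import numpy as np

def pixel_to_real(points_px, origin_px, mm_per_px):
    """Map (row, col) pixel coordinates to real-world coordinates with a
    simple linear model: shift by the calibration origin, then scale.
    A pixel at the calibration origin maps to (0, 0), matching the
    sanity check described in the post."""
    pts = np.asarray(points_px, dtype=float)
    return (pts - np.asarray(origin_px, dtype=float)) * mm_per_px
```

Under this model, feeding in the origin pixel returns (0, 0), which is exactly the behaviour used above to confirm the conversion step works.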
04-10-2017 05:35 PM - edited 04-10-2017 05:52 PM
Please find the attached file (library) containing the VIs.
This is a sample test VI. My actual problem is to grab video from a camera and apply the process that works for a single image. There are a few issues I need help with:
a) The same image that is grabbed and calibrated should pass through the grayscale conversion step and then be wired into the 'IMAQ ImageToArray' VI. If I do that, LabVIEW returns an error saying there is an incompatibility in the image type.
b) I have seen the 'simple calibration VI' example, where the image input is taken from a pre-recorded video rather than performing a calibration step for each image grabbed by the camera. Is this a better alternative, or would my present approach do?
04-10-2017 07:37 PM
OK, I found the simple issue: I had wired the wrong terminals to the IMAQ reference VI and the Pixel to Real World Coordinate VI. I should use U8 instead of U16.
04-11-2017 10:48 AM
I would actually use one image. This lets you set up one calibration and then apply it to every image in your set. If you're going to scale this program in the future to use multiple image types, you would have a calibration image for each distinct set of images.
04-11-2017 01:39 PM
Hi Matt, Thanks for replying and helping me out.
Right, so I have done the scanning with one image. Now I need to feed many sequences of images into this algorithm to get my real-world coordinates in sequential order. I have taken the Read AVI File example and modified it to call my test VI as a subVI, and my aim is to put it in a loop for sequential execution.
Please see the attached pictures.
However, I am finding errors cropping up: the system says the input to IMAQ ImageToArray is not a valid image. Perhaps the loop is processing the information too fast? Should I use queueing?
Looking forward to your reply
04-11-2017 06:09 PM
I tried a bit more but still don't know why the VI does not process the images consecutively. I am using the IMAQ Dispose VI to remove the previous image from one iteration to the next.
04-12-2017 04:53 PM
I took a look at your Read AVI File2.vi. Note that only the code inside the For Loop executes on every iteration of the loop.
At a high level, can you explain what you are trying to accomplish? From your previous posts, my general idea is that you want to detect the maximum-intensity points along each column of the image and then output the coordinates of those points for each image you acquire.
But are you acquiring these images from an AVI file (as it appears in your code), or a camera?
Also, what are the error numbers you are seeing from the IMAQ conversion?
04-12-2017 05:49 PM
Hi Matt,
Yes. My objective is to perform the scanning experiment: I pass a recorded video of a laser line sweeping over a sand bed through my algorithm. I don't have the exact error codes right now (will check tonight), but I faced the issue when converting the image to grayscale. After some debugging, as you rightly suspected, I figured it out: the loop executes fine on the first iteration and I can see the coordinates, but on the second iteration the image is already grayscale, and the error pops up saying 'not a valid image'.
Thanks!
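The failure mode diagnosed above (converting an already-grayscale frame a second time inside the loop) can be guarded against conceptually as sketched below in Python/NumPy. This is an illustration of the idea, not the IMAQ implementation; the luma weights and function name are assumptions, and IMAQ's grayscale extraction works differently:

```python
import numpy as np

def ensure_grayscale(frame):
    """Convert a frame to grayscale only if it still has color channels.
    Guards against running the conversion twice on the same buffer across
    loop iterations, which is the 'not a valid image' failure mode
    described in the thread."""
    if frame.ndim == 3:                      # H x W x 3 color frame
        # simple luma approximation (ITU-R BT.601 weights)
        weights = np.array([0.299, 0.587, 0.114])
        return (frame[..., :3] @ weights).astype(np.uint8)
    return frame                             # already single-plane; pass through
```

In the LabVIEW loop, the equivalent fix is to make sure each iteration converts a fresh copy of the acquired frame (or checks the image type first) rather than reusing the buffer converted on the previous iteration.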