10-11-2012 01:40 AM
Hi Brad, when I did a rerun with the migrated LabVIEW code, the VI only executes the while loop and does not go through the initialize sequence again, but the camera name cam2 does not change inside the Inspect State Exec 2 VI. I will probably go with doing it all in VBAI and following the example API; it is just a matter of working that into my overall VI, which I might struggle with. Thank you for your assistance, you have been a great help.
Damien
10-11-2012 08:33 AM
So just to be clear: in "IVB IMAQdx - Resource Manager (for RT).vi", when you put a breakpoint in the Allocate Resource frame, is the "Session In" parameter to "IMAQdx Open Camera" the same in both cases? I have a strong feeling that for some reason it is different on the second run, and that this is why the acquisition uses a different camera. If you're interested in getting the generated code to work as expected, this is where I would look closely to see what happens and why the name is different on the second run. If the Session In string really is the same on both runs but the camera you acquire from is different, I would be very interested in hearing more about this (i.e. what are the session names of the camera you want and of the built-in camera it uses on the second run, what happens if you acquire from MAX, etc.), because that would be very strange.
Thanks,
Brad
10-15-2012 07:36 AM
Hey Brad
Apologies for the late reply. I disabled my laptop camera and it worked fine. I also had a problem re-running my code: because I had migrated the configuration files to the Program Files directory, each time I opened the VI I was missing the libraries. I solved this by creating a folder on the desktop.
I tried to implement your DMM VBAI-to-LabVIEW example using the API, but I found it too complicated for what I need. Is there a way I can use the example below, along with certain VIs, to achieve what I want?
I would use this VI inside a case structure for one test step and just use the VIs I need to analyse the data. For example, in one case I would apply 6V and would need to see 6V on the display and no LEDs; in the next case I would apply 75V and would need to see 3 LEDs and a reading between 73.5 and 76.5 on the display, and then output a pass or fail.
Damien
10-15-2012 08:25 AM - edited 10-15-2012 08:30 AM
If you want this to be your starting point, I would recommend you check out the Vision Assistant software, which is very similar to VBAI but integrates more easily with LabVIEW: it uses an Express VI and makes the image processing results easier to access. It doesn't have the higher-level vision functions (i.e. instead of a Detect Objects step, which does a threshold, particle filter and particle analysis, you would need to call these lower-level functions individually in Vision Assistant). It sounds like doing all your code in LabVIEW might be easier for you, and Vision Assistant makes it easy to move from a configuration environment (i.e. like VBAI) to LabVIEW and then customize your code from there.
I attached a Vision Assistant Example that uses two Express VIs (one to acquire and one to process/analyze the image). Double click on the Express VIs to change the acquisition to your camera and change the processing to add your OCR/Pattern Matching, and then you can have the processing Express VI return the number of LEDs found, the text, pattern matching type, etc. so the rest of your LV code can make a simple pass/fail decision.
Hope this helps,
Brad
10-15-2012 09:06 AM
Thank you Brad,
Do I need to keep the "binary destination image" string the same? One more question: could I use the logic you provided in the Calculator step of the DMM VBAI example to get my results from the second VI? You have been a great help!
10-15-2012 09:14 AM
You can change the name of the image, so it doesn't have to be "binary destination image". I think it will be easier not to use the LabVIEW code from the earlier DMM VBAI example I sent you (that assumed you were doing everything in VBAI and had arrays of expected values, etc.). Instead, take the simple Vision Assistant example I sent you and call those VIs in a loop: set the DAQ voltage to what you want, call the VIs, and check that you get the expected results from the Vision Assistant Express VI. You can use a case structure, as mentioned earlier, to switch which DC/AC voltage you drive and what the expected results should be, and you can edit the Vision Assistant Express VI to output the values to test against the expected results (for example, I output the number of LEDs, and you can update the Express VI to also output the OCR text for the voltage value).
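In case it helps to see the control flow spelled out, here is a rough sketch of that loop. LabVIEW is graphical, so take this Python-style sketch as an illustration only; set_dc_voltage and run_vision_assistant are hypothetical placeholders standing in for your DAQ output and the Vision Assistant Express VI, and the tolerances are made up.

```python
# Illustration only: a text sketch of the LabVIEW loop described above.
# set_dc_voltage() and run_vision_assistant() are placeholders, not real NI APIs.

# One entry per test step: voltage to drive, expected LED count, and the
# allowed range for the value read off the display by OCR (tolerances here
# are placeholders -- use whatever your spec calls for).
TEST_STEPS = [
    {"voltage": 6.0,  "expected_leds": 0, "display_range": (5.9, 6.1)},
    {"voltage": 75.0, "expected_leds": 3, "display_range": (73.5, 76.5)},
]

def run_tests(set_dc_voltage, run_vision_assistant):
    results = []
    for step in TEST_STEPS:
        set_dc_voltage(step["voltage"])                    # drive the DUT
        led_count, display_text = run_vision_assistant()   # acquire + process one image

        lo, hi = step["display_range"]
        try:
            display_value = float(display_text)            # OCR text -> number
        except (TypeError, ValueError):
            display_value = None

        passed = (
            led_count == step["expected_leds"]
            and display_value is not None
            and lo <= display_value <= hi
        )
        results.append((step["voltage"], passed))
    return results
```

Each entry in TEST_STEPS plays the role of one case in your case structure, and the two values compared against it (LED count and OCR text) are exactly the outputs you would wire out of the processing Express VI.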
Hope this helps,
Brad
10-15-2012 10:50 AM
Thanks very much for your help, Brad. I have tried to simulate it with my images and have attached what I have done, along with the images. I am trying to get the OCR output from the Express VI; how can I get this output? Thanks for your help.
10-15-2012 12:16 PM
Hey Brad, I got that working, thank you for the HUGE helping hand. I think I have all my answers thanks to you.
Damien
10-31-2012 02:04 AM
Hi Brad, I have posted this question but have had no response yet. I have attached a picture below. Like before, I need to count LEDs and perform character recognition, but the LED is very bright and I am unable to see it with Vision Assistant; I have attempted to use settings similar to the ones you provided, but I still cannot pick up the first LED. In the picture, 3 are on and the top one is off. Could you have a look and perhaps advise me, please?
Regards
Damien
10-31-2012 08:01 AM
I can't even see the three (or are there 4?) LEDs. I would recommend taking two images. One with the exposure turned way down so it's basically black except for the LEDs that are on; that way the light from the LEDs shouldn't saturate a large portion of the image and make it almost impossible to see the other LEDs. The other image can look like the one you attached, so you can read the LCD segments with OCR and get the voltage level. You should be able to change the exposure programmatically. There are also cameras that do this for you and return a single image that isn't overexposed in the bright areas. These are called High Dynamic Range cameras; they are more expensive, but they basically do what I just described, automatically taking two images at different exposures and combining them into a single image with more even lighting.
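To give you an idea of what the processing on the dark image would look like, here's a rough sketch using OpenCV in Python, just to illustrate the idea; in your case the equivalent would be a Threshold step followed by particle analysis in Vision Assistant, and the file names and threshold value below are placeholders.

```python
# Illustration of the two-image idea (file names and threshold are placeholders).
# On the low-exposure image only the lit LEDs should be bright, so a simple
# threshold plus a blob count tells you how many LEDs are on.
import cv2

dark = cv2.imread("low_exposure.png", cv2.IMREAD_GRAYSCALE)       # LEDs only
normal = cv2.imread("normal_exposure.png", cv2.IMREAD_GRAYSCALE)  # for OCR of the display

# Keep only the very bright pixels (the lit LEDs); 200 is a placeholder threshold.
_, binary = cv2.threshold(dark, 200, 255, cv2.THRESH_BINARY)

# Count connected bright blobs; label 0 is the background, so subtract it.
# In practice you would also filter out tiny blobs (the particle filter step).
num_labels, _ = cv2.connectedComponents(binary)
led_count = num_labels - 1
print("LEDs on:", led_count)

# The normal-exposure image ("normal") is then used separately for the OCR
# of the LCD digits, just like you are doing now.
```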
Another option, if you don't want to take two images, is to put a light filter over the part of the instrument with the LEDs; this will dramatically reduce the light that gets through so it's not so bright.
Hope this helps,
Brad