Hello all,
I have some additional questions about image arrays, mostly stemming from my own ignorance of how image arrays work in NI Vision. I've attached a bitmap showing what's going on.
I have my array of 10 images. I'm inspecting image index 0 to figure out where the regions of interest are, and I do this by running it through a grayscale threshold, a differentiation filter, and finally a geometric pattern match. I write that information out to be used later.
When I access the same image index 0 later in the program, the thresholding is still applied and I no longer have a raw image. I suspect this is because the array element is merely a reference to the image buffer, and any vision operations I perform are being applied in place to that same buffer.
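If it helps to be concrete, I think this is effectively what I'm doing, written out in NI Vision C-API terms (the function name and signature here are my assumption of the textual equivalent; in LabVIEW I'm wiring the same image reference into both the source and destination of the threshold and filter VIs):

    #include <nivision.h>

    void inspect_first_image(Image* images[10])
    {
        /* images[0] is only a reference to the acquired buffer. Using it as
           both source and destination overwrites the raw pixel data, which
           looks like exactly the behavior I'm seeing. */
        imaqThreshold(images[0], images[0], 128.0f, 255.0f, 1, 255.0f);
        /* ... differentiation filter and geometric match on the same buffer ... */
    }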
With that in mind:
- Is there something I can do to prevent this? Am I missing something?
- I thought about making a "shadow" array, but I'm using multiple thresholds for different inspections. I think I could keep my ten raw images in an "unused" array and copy whichever one I need into a working buffer before each inspection. I don't know the best way to do this and could use a code snippet from someone; roughly what I'm imagining is sketched after this list.
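Here is the copy idea as I picture it, again sketched with the NI Vision C-API names I believe exist (imaqCreateImage / imaqDuplicate / imaqDispose); in LabVIEW I'm assuming IMAQ Create plus IMAQ Copy would be the equivalent. Please correct me if this isn't the right approach, especially given my overhead concern:

    #include <nivision.h>

    void inspect_with_scratch_copy(Image* raw[10])
    {
        /* One reusable scratch buffer so every inspection starts from the raw
           image. Border of 3 pixels to keep the neighborhood filters happy. */
        Image* work = imaqCreateImage(IMAQ_IMAGE_U8, 3);

        /* Copy the raw image into the scratch buffer ... */
        imaqDuplicate(work, raw[0]);
        /* ... and run the destructive steps on the copy only. */
        imaqThreshold(work, work, 128.0f, 255.0f, 1, 255.0f);
        /* differentiation filter + geometric match would also use 'work' */

        /* raw[0] is untouched, so a later inspection with a different
           threshold can duplicate it again into the same scratch buffer. */
        imaqDuplicate(work, raw[0]);
        imaqThreshold(work, work, 60.0f, 200.0f, 1, 255.0f);

        imaqDispose(work);
    }

My thought is that reusing one scratch buffer keeps the memory footprint down, and the only extra cost per inspection is the duplicate, but I don't know whether that copy is cheap enough in practice.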
Anyhow - any ideas on how best to approach this would be appreciated. Inspection time is becoming a consideration for me as well, so I need to keep the overhead to a minimum.
Thanks gurus.
VisionGumby