max min math function in Vision Builder calculator

Lorne,

Sorry it has taken a while for me to get back to you. I was ill on Wednesday and unfortunately missed a day on this project. At least today I have some images to show you.

To answer your last questions:
1. Cameras don't have auto anything.
2. Lighting may not be brilliant, but has been OK for the previous system which uses similar resolution analogue cameras.
3. Cameras are firmly mounted from a floor to ceiling bar with camera mounts attached. Doesn't mean there is absolutely no vibration, but it has never been evident in the previous system output.

Before you look at the images, I should say that the graphs (from our interfaced quality system) are several days old, and some improvement has been made since then through implementing a 1st order Butterworth filter plus a peak hold mechanism on the data being sent to the quality system. Even so, the attached images illustrate the underlying variation I've been trying to describe. The 'proof of the pudding', so to speak, is the quality of the printed paper graph which is sent to our customers. Later today I will scan two examples and post them. One will be a 'good' graph from our old system and the other an example of the best achieved so far with the new arrangement. The printed graphs tend to look better than the screen presentations generally, as they contain fewer points and only show peak values. That is, no negative transitions appear in the printed graphs. Although I am technically happy enough with the graphs produced .... our quality people are not! I have continually assured them that the quality characteristics obtained from the 2 arrangements will be virtually identical .... but they remain unimpressed!
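[Editor's note: the smoothing described above (a first-order low-pass filter followed by a peak hold, so no negative transitions pass through) can be sketched as below. This is illustrative Python, not the actual Vision Builder AI calculator expressions; `alpha` is an assumed smoothing constant.]

```python
def first_order_lowpass(samples, alpha=0.2):
    """First-order IIR low-pass filter (a discrete analogue of the
    1st order Butterworth smoothing described in the post)."""
    filtered = []
    y = samples[0]
    for x in samples:
        y = y + alpha * (x - y)  # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        filtered.append(y)
    return filtered


def peak_hold(samples):
    """Pass on only the running maximum, so no negative transitions
    appear in the data sent downstream."""
    held = []
    peak = float("-inf")
    for x in samples:
        peak = max(peak, x)
        held.append(peak)
    return held
```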

Unfortunately, due to possible issues when production commences post-Christmas, I decided today to remove the new system for the time being and revert to our aging and rather temperamental old system. I'm going to try and set up a reliable testing arrangement in my office to continue investigating the system issues.

Anyway, to the images ....
'ncxtop.png' and 'ncxbottom.png' are snapshots taken this morning of the top and bottom camera views. They give an idea of the lighting levels. The cable ties are the gauge markers to be detected. I achieve this with simple edge detection through the centre of the sample.

'raw.doc' shows a relatively early screen image based on raw i.e. unfiltered extension measurements gained from the new extension measurement system. Our terminology for the system is 'NCX' meaning 'non-contact extensometer'. I had another image showing a graph based on averaged data, but it now seems to refuse to copy from the floppy!!!

In 'raw.doc' you'll see that the x axis is from 0 - 1.5%. This is percentage extension of 600mm. So 0.1% is 0.6mm. Field of view of the camera is around 150mm (probably a bit more). This gives 1 pixel = 0.23mm = 0.038%. This is close to the size of the stepwise variations in extension seen in the graph, and is what I need to reduce. As I've mentioned earlier, I expected to see less variation than this from the system. As you will see when I post the printed graphs, the printed output looks much cleaner, but still shows significant variation.
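[Editor's note: the resolution arithmetic above checks out, assuming a 640-pixel horizontal sensor resolution, which is consistent with the quoted 0.23 mm per pixel.]

```python
fov_mm = 150.0       # approximate field of view, per the post
pixels = 640         # assumed horizontal camera resolution
gauge_mm = 600.0     # gauge length between the markers

mm_per_pixel = fov_mm / pixels                   # ~0.23 mm per pixel
pct_per_pixel = mm_per_pixel / gauge_mm * 100.0  # ~0.039 %, close to the quoted 0.038 %
```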

I don't know if the attached images will give you any new clues, but at least they may make what I am trying to achieve a little clearer. Our old system seems to (by means unknown) produce a more precise extension measurement, but is extremely temperamental to both operate and calibrate! The NI system is a breeze to operate and calibrate, but I need to improve the precision obtained so far to satisfy our users and customers.

As always, your assistance is greatly appreciated!!
I now have the printed graphs as pdf files and will attach to the next post.

Thanks again,
Greg

Message 11 of 23

Lorne,

Attached are pdf files of graphs, which are usually signed by our testers and sent to customers along with additional test certification. 012730.pdf is a 'good' graph produced by our old system on 15th December. 000005.pdf was produced today using the NI system. Pay no attention to the 'REJECT' imprint. This sample has actually been stressed several times and no longer has the expected material properties. The 'quality' of the curve is the point in question.

Greg

Message 12 of 23

Greg,

Nice Test System.  Hopefully this will eventually save the quality people a lot of time.

Both ncxtop.png and ncxbottom.png do not look sharp enough.  When I look at ncxtop.png I can see between 3 and 4 pixels which represent the edge of the zip tie.  You should, in the proper environment, have your zip tie's edge completely defined within 2 pixels.  I get the 3 to 4 pixels by counting the pixels in the blur that defines the edge of the zip tie.

Two things to try:
1.  Increase the light applied to the zip tie (use a flashlight if you have to).  See if the noise is reduced, also check to see if the edge is better defined in the picture.

2.  Move the camera further away from or closer to the zip tie and make sure that the camera is focused on the zip tie and not the cable (or somewhere else).

I think that once the quality of the image improves to the point that the edge is defined within 2 pixels, you should start seeing the precision you want.

A couple of other questions...
1.  I noticed that the top and the bottom of the zip tie are in different locations than the middle of the zip tie.  What do you do to account for this variation?

2.  Since the zip tie has a noticeable thickness, if the camera is not pointed directly above the zip tie, you will see the side and the top of the zip tie with the camera.  If initially the camera was directly above the zip tie and you could only see the top, and then during the testing of the cable the side of the zip tie came into view, you would get a faulty reading.  How do you account for this?

Lorne Hengst
Application Engineer
National Instruments
Message 13 of 23
Lorne,
 
Must say I was pleasantly surprised to see your response at this time of year! I had thought you might be on holidays. Very big holiday season here at the moment.
 
Apparently my response is a little too long ... so have split into 2 posts.
 
I understand/agree with the points you've made, although some are more straightforward than others to implement or investigate.
Sharpness of the image has appeared a bit disappointing to me from the outset, although I must say (as you may expect) that it seems of a similar quality to the original system. Lenses are manually focused, so perhaps I can do better with a little effort. I haven't concentrated on achieving the absolute sharpest focus. I think part of the problem is that the sample is essentially 'round', so sharp focus for the whole image is difficult to achieve. I'll concentrate on the ties. When looking at the zoomed images I also noticed the 3-4 pixel transition. Despite this though, as the software has only ever detected a single edge, I've assumed that the edge strength threshold would provide consistent edge detection.
 
Applying more light to the sample is easy, as the lighting system has a dimmer control. Due to this, I did do some experimentation with increasing lighting levels, although I don't recall trying this under testing conditions. I think I was just investigating robustness of the edge detection software. Only problem I recall is that if lighting is too high, reflections/bright spots on the individual strand wires begin to be detected as edges. That is, idea has been to keep lighting low enough that these reflections aren't significant. I agree, this could do with further investigation.
 
Moving the camera closer or further away would be difficult to arrange, as the existing system has a permanent purpose built enclosure providing lighting and camera mounts. If I have to modify this aspect of the system it will be very difficult. However, I should be able to investigate this parameter on the test rig I intend setting up next week.
 
On the zip tie, the top and bottom not aligning with the middle isn't really an issue. The ties are applied by hand using a jig to ensure an approximate 600mm gauge length. Once the test commences, the actual separation of the two edges detected is determined and extension is calculated as a percentage of this 'initial' gauge length. That is, almost any edge transition along the zip tie will do, as long as it is consistently detected. However, it is intended to detect an edge at the centre of the sample. This allows for testing a range of products from approximately 9mm to 16mm diameter without adjusting the system. Have just realised that changing product size will also slightly affect image focus! For trial purposes I've only been working with one product size. Previous system has accommodated this OK without need to re-focus cameras.
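[Editor's note: the gauge-length scheme described above — measure the actual initial separation of the two detected edges at test start, then report extension as a percentage of it — reduces to a few lines. An illustrative sketch; the function and argument names are assumptions.]

```python
def extension_percent(initial_top_mm, initial_bottom_mm, top_mm, bottom_mm):
    """Extension as a percentage of the measured initial gauge length.

    Arguments are detected edge positions (mm) along the sample axis.
    Because the initial separation is measured when the test commences,
    the hand-applied ties only need to be *approximately* 600 mm apart."""
    initial_gauge = abs(initial_bottom_mm - initial_top_mm)
    current_gauge = abs(bottom_mm - top_mm)
    return (current_gauge - initial_gauge) / initial_gauge * 100.0
```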
 
Continued ....
 
Message 14 of 23
From previous post ....
 
Also, I think the ties are actually slightly tapered from top to bottom of the tie, so I'm not sure that a really clean 'top' view can be arranged. I believe the intention is that the cameras are mounted 600mm apart, although I haven't checked the separation. The present camera mounts have been in place since the original system was commissioned. So expectation is that the camera is looking directly down on the zip tie from above, as well as can be arranged. However, as the sample stretches by approximately 10mm during the test, this can only really be an approximate arrangement. The system needs to accommodate these slight variations. I could try black zip ties rather than white on this system? As long as contrast is high enough, this may provide a more definite edge than the white ties, which are (I think) actually a little translucent and tend to highlight shadows. Perhaps black ties would provide a sharper transition? What do you think?
 
I agree that movement of the tie during testing could produce faulty readings, but as mentioned above, I think the zip ties are actually slightly tapered. What this means is that the view from above doesn't change drastically during running of a test. As mentioned earlier, as the software only ever appears to detect a single edge (the profile always looks pretty good) I've been assuming that the edge will be consistently identified during the running of the inspection.
 
Anyway, I think I've covered all your points, and you've definitely given me some ideas for further investigation. This is a problem I want to think through as thoroughly as I can before having another shot at the production system. Apart from the characteristics you've suggested for investigation, do you have any clear idea of the resolution I could expect to obtain from the system under ideal conditions? I can't really find anything relating to this in the Vision Builder AI documentation. Only real reference I have found is in the IMAQ Vision Concepts manual, but I'm not clear whether all I've read there applies to the Vision Builder tools.
 
Again, thanks so much for your help.
 
Greg
Message 15 of 23
Greg,
 
"I could try black zip ties rather than white on this system? As long as contrast is high enough, this may provide a more definite edge than the white ties, which are (I think) actually a little translucent and tend to highlight shadows. Perhaps black ties would provide a sharper transition? What do you think?"
 
This might be a good idea.  Since you are trying to achieve subpixel accuracy, anything might help.
 
In Measurement & Automation Explorer, if you do a grab from your camera and zoom in on the edge of the zip tie, do you see noise?  In other words, are the individual pixels staying the same value or are they fluctuating when the camera is not moving and the cable is not moving?  I know if I do a grab on some of my cameras I see quite a bit of noise.  It sounds like you are seeing the exact same thing.
 
It looks like you are getting pixel accuracy, but that your camera is sending noisy images.  If you have a perfect image, edge detection will do a great job finding the edge between pixels, but with a noisy image the benefit of subpixel accuracy is negated.
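[Editor's note: the check suggested above — grab repeated frames of a stationary scene and see whether individual pixel values fluctuate — can be quantified as a per-pixel temporal standard deviation. A sketch, assuming the grabbed frames are available as 2-D grayscale arrays (e.g. saved out of MAX).]

```python
import numpy as np

def pixel_noise(frames):
    """Per-pixel temporal standard deviation across a stack of grabs.

    frames: sequence of equally sized 2-D grayscale arrays captured with
    nothing moving; any nonzero result is camera/digitisation noise."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    return stack.std(axis=0)
```

`pixel_noise(frames).max()` gives the worst-case fluctuation; noise comparable to the intensity gradient across the edge would swamp any subpixel interpolation.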
 
Did you say the current system had better accuracy?  If so, what camera did it use?  Can you describe the specifics of the system?
 
Thanks,
Lorne Hengst
Application Engineer
National Instruments
Message 16 of 23
Lorne,
 
The cameras in the original system are Mintron MTV-1801CB 640x480 analogue. Nothing special as far as I'm aware. Mintron is still around, although they don't seem to be a common/popular brand. No one else I've spoken to had ever heard of them! Although I don't think the cameras are anything special, the hardware and software were. I don't have board numbers with me at the moment, but the hardware for the vision part of the system is VME bus. Remember this was built 13 or so years ago, and was based on a system already in place at a related steel business. The VME hardware was relatively fast and was built as an embedded system. It interfaces serially to a PC, via a custom designed protocol, which provides an operator interface for setup, calibration etc. The PC then interfaces serially to our quality control system, passing on the measured extension readings. A pretty messy design by today's standards, but serial system interfaces were pretty big in the late 80's and early 90's. To be honest, we still rely on this type of interface quite a lot. In general, they are very robust!
 
Anyhow, interface technique is not the issue here. Where I know the system is quite 'special' is in the software on board the VME system. I suspect now that it was probably more special than I have realised, particularly considering how long ago it was developed. A discussion I had recently with someone aware of the original development suggested that reliable sub-pixel resolution as high as 1/8 or 1/16 of a pixel may have been achieved! This surprised me, but would certainly explain the earlier system's apparent performance.
 
I hadn't thought of doing a grab and zooming. I'll try that when I get a test rig set up, hopefully next Monday. The original system certainly appears to have far higher accuracy than the AI equivalent. The cameras are what I would understand to be analogue and digital equivalents, although I'm also aware that we're not really comparing apples with apples. The best indication of system precision differences I can give at the moment are the images I attached to earlier posts. The 'good' (smooth) graph is based on unfiltered data from the original system. The not so smooth graph is based on measurements from the AI system which have been through a first-order Butterworth filter. The filtering is done within AI, constructed using the AI calculator facility. According to my customers, that graph still has a fair way to go to be satisfactory.
 
I'll be working on this (probably exclusively) next week before I go on 2 weeks leave, to try and determine if accuracy improvements are likely to be achievable. However, I have to say I suspect I may be beaten by the available resolution of the system. I anticipate I would need to improve resolution achieved to date by a factor of about 4 to satisfy my customer.
 
Thanks again,
Greg
Message 17 of 23

Lorne,

Hi and Happy New Year!!

I'm back at work today and keen to see if I can make some improvement to the system. Have to say that indications so far aren't encouraging. As I mentioned last week, first task I set myself was to set up a test rig to give a clearer indication of normal system operation. I'll send a photo of the rig tomorrow, as soon as I can get hold of a camera. It is pretty basic! Just the camera on a standard camera tripod, facing a fine Vernier gauge we use for calibrating our mechanical extensometers. Anyway, the rig seems to work fine.

The data in the attached spreadsheet was obtained from a simple single camera application in Vision Builder AI. It detects the edge of a zip tie applied to the Vernier gauge. With the Vernier in its initial position the application is started. Then the Vernier is continuously and evenly moved, by hand, through a 5mm range. The AI application reports the raw X value of the edge pixel to COM1. These values were captured using HyperTerminal and then imported into Excel for presentation.

Unfortunately, the graphs for both cameras show quite distinct transitions of approximately 0.8 pixels. I have to say that I don't understand the cause of these large edge transitions. If you can give me any idea as to the likely cause or how I might be able to minimise them I would be extremely grateful. I will continue investigating effects of lighting and different gauge markers. These results are for a black zip tie on an aluminium body (see attached image). At the moment I'm only using ambient lighting. Contrast could be better, but the symptom indicated in the graphs is what I expect has also been evident in the plant installation.
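[Editor's note: one quick way to quantify the transitions described above is to parse the HyperTerminal capture and list the sample-to-sample jumps in the reported edge X value. An illustrative sketch; the one-numeric-value-per-line capture format is an assumption.]

```python
def edge_steps(log_lines):
    """Parse a captured log of edge X positions (assumed one numeric
    value per line) and return each sample-to-sample jump in pixels."""
    xs = []
    for line in log_lines:
        line = line.strip()
        if not line:
            continue
        try:
            xs.append(float(line))
        except ValueError:
            continue  # skip any non-numeric terminal residue
    return [b - a for a, b in zip(xs, xs[1:])]
```

A histogram of the returned jumps would show whether they cluster near the ~0.8 pixel step size seen in the graphs.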

Thanks,
Greg Shearer

Message 18 of 23

Lorne,

I may be on to something! Just for fun (more or less) thought I'd try pattern matching of the zip tie instead of edge detection. Unless I'm missing something .... the result is fantastic! This is the sort of outcome I'd hoped to achieve with edge detection. Any idea why one method should have so much better apparent resolution than the other? Are there likely to be any catches you're aware of associated with this technique? Test arrangement was unchanged from that for edge detection.
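[Editor's note: one plausible reason pattern matching resolves more finely than single-edge detection is that a match correlates an entire template against the image and interpolates the correlation peak, effectively averaging over many pixels instead of relying on one noisy threshold crossing. A minimal 1-D sketch of that idea — normalized cross-correlation with parabolic subpixel refinement — not Vision Builder's actual algorithm.]

```python
import numpy as np

def match_position(signal, template):
    """Locate a template in a 1-D intensity profile by normalized
    cross-correlation, refining the best match to subpixel precision
    with a parabolic fit through the three top correlation samples."""
    signal = np.asarray(signal, dtype=float)
    template = np.asarray(template, dtype=float)
    t = template - template.mean()
    n = len(template)
    scores = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = w - w.mean()
        denom = np.sqrt((w * w).sum() * (t * t).sum())
        scores.append((w * t).sum() / denom if denom else 0.0)
    scores = np.asarray(scores)
    k = int(scores.argmax())
    if 0 < k < len(scores) - 1:
        y0, y1, y2 = scores[k - 1], scores[k], scores[k + 1]
        curve = y0 - 2 * y1 + y2
        if curve != 0:
            k = k + 0.5 * (y0 - y2) / curve  # parabolic subpixel offset
    return k
```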

I'm not sure if this approach will work as well in the production system, but certainly looks encouraging!

Greg

Message 19 of 23
Greg,
 
Thanks for posting the pattern match spreadsheet.  Something strange is going on here.  I am going to look into this and will get back to you as soon as I can.
 
Lorne Hengst
Application Engineer
National Instruments
Message 20 of 23