Machine Vision

Chicken part identification

I am working with a Customer who needs to sort hard frozen chicken parts.

Thighs, wings, drumsticks, breasts, maybe at some future time, connected thigh and drumstick.

Parts currently come down the line on a conveyor at 25 FPM, 100-130 parts/minute. Parts will be randomly placed on the conveyor.

I can align the parts and create a gap between them, so a vision system would only need to look at one part at a time (which is how parts need to be fed to a sorter system anyway).

Since all parts except the thighs tend to be rectangular, the aligner will probably orient those parts with their long centerline along the direction of the conveyor. The thighs, being rounder, could end up at any angle of rotation.
Breasts, thighs, and wings tend to be flatter than drumsticks, so their footprints will be more definable. Drumsticks, due to their roundness, may have a broader range of footprints. All in all, I think there is enough differentiation between the parts to gain acceptable identification, and subsequent sortation.

There would need to be some scalability in the vision system, since not all chickens are the same size and multiple systems will be used here for various plants. Chickens range in size by growing region, and turkeys may be processed at some plants. I would think that an average outline profile of the various parts could be obtained by sampling a number of parts; the more parts sampled, the more accurate the identification and, subsequently, the sort.
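To make the sampling idea concrete, here is a rough, purely illustrative Python sketch of what I have in mind (no particular vision package; the feature choices and thresholds are just guesses): build an average feature profile per part type from sampled footprints, then assign each new part to the nearest profile.

    # Illustrative only: classify a part by comparing simple shape features of its
    # binary footprint against per-class averages built from sampled known parts.
    import numpy as np

    def shape_features(mask):
        """Return (area, aspect ratio, fill ratio) for a binary footprint mask."""
        ys, xs = np.nonzero(mask)
        area = float(len(xs))
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        aspect = max(height, width) / min(height, width)  # long side over short side
        fill = area / (height * width)                    # how "rectangular" the blob is
        return np.array([area, aspect, fill])

    def train_profiles(samples_by_class):
        """Average the feature vectors of sampled parts; more samples, steadier profile."""
        return {name: np.mean([shape_features(m) for m in masks], axis=0)
                for name, masks in samples_by_class.items()}

    def classify(mask, profiles, scale):
        """Nearest average profile wins; 'scale' normalizes features with different units."""
        f = shape_features(mask) / scale
        return min(profiles, key=lambda name: np.linalg.norm(f - profiles[name] / scale))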

Operating environment is 30-50 degrees F and, if everything is working right, non-condensing (if there is condensation, the Customer has a lot more to worry about than the camera). However, this will probably be a washdown area.

To minimize electronic hardware in the vicinity, I am thinking of using an IP camera and communicating with the sorter over Ethernet.

1. Is this a feasible application of machine vision?
2. If so, what could the maximum identification rate be?
3. Does PID fit into the picture here for improving long term identification accuracy?
4. Suggestions for vision equipment and software to use (washdown should be a consideration).
Message 1 of 6

Hello MrMark,

Based on what you have presented in this thread, I would suggest looking into our Vision Builder for Automated Inspection (VBAI) software.  This is a stand-alone software package which can solve most machine vision application challenges without the use of a programming language or complicated customization tools. Vision Builder AI includes NI Vision Acquisition software, a set of drivers and utilities that acquire, display, and save images from various camera standards (GigE, IEEE 1394, etc.).  You can find more information regarding this software package here.  

 

The identification rate you can achieve will depend on how you plan to process the images of your parts (pattern matching, golden template, geometric matching, etc.) and on how similar your products are to one another.
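As a rough sanity check on timing (simple arithmetic, not a benchmark of any particular hardware or algorithm):

    # Per-part time budget at the stated line rates.
    for parts_per_minute in (100, 130):
        budget_s = 60.0 / parts_per_minute
        print(f"{parts_per_minute} parts/min -> about {budget_s:.2f} s per part "
              f"for capture, processing, and the sort decision")

At 130 parts per minute that is roughly 0.46 s per part, so whichever inspection steps you choose need to fit inside that window.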

 

I am not completely sure how you want to implement PID control in your system (i.e., controlling the speed at which the products pass through, or intelligent learning so the system identifies parts more quickly over time). If it is the first case, PID control of the mechanical system is a direct addition that can be made. In the second case, you might also want to look into fuzzy logic if that is a concept your customer is comfortable with. Both control schemes are offered in our PID toolset. Here is the manual for this toolset, which will give a better idea of what options and features are available for architecting your application. For your reference, the PID toolset would be used with LabVIEW instead of VBAI.
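To illustrate only the second idea (this is not the PID toolset, just a hypothetical sketch of what "learning" the part profiles could look like): each confidently identified part nudges its class's stored average profile, so the system slowly tracks drift such as a change in bird size.

    # Hypothetical sketch, not NI toolset code. ALPHA controls how quickly the
    # stored class profiles adapt to new, confidently identified parts.
    ALPHA = 0.05  # small value -> slow, stable adaptation

    def update_profile(profiles, class_name, features, confidence, min_confidence=0.9):
        """Exponential moving average update of one class's average feature profile."""
        if confidence >= min_confidence:
            profiles[class_name] = (1 - ALPHA) * profiles[class_name] + ALPHA * features
        return profiles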

 

If you are interested in an IP camera, I would recommend using a GigE camera. More information is available here.

 

I hope this helps get you started in the right direction.  Please let us know if you would like further clarification or assistance regarding this issue.

Regards,

Vu D

Message 2 of 6
Thanks Vu D!

I've downloaded a copy of Vision Builder AI and have played with it a little bit. I am more used to programming AB PLCs from a few years ago, so I am having to brush up on my programming skills a little.

As for the PID, you are right about the second scenario; it is kind of a fuzzy logic situation.

Any feel for what a mid-range system like this should cost in hardware? Of the programmers you know, how many days or weeks do you think it would take to write and debug the initial program? And if I read the NI website correctly, each runtime license is about $400.00.

Do you see any pitfalls in identifying parts like this, based on what you can visualize as the footprint differences between the parts?

Again, thanks!
Message 3 of 6

Hi MrMark,

 

There are also several examples included in VBAI 3.0, found by navigating to C:/Program Files/National Instruments/Vision Builder AI 3.0/Examples, which will hopefully give you a good start with how the software works.

 

Regarding system costs, I would recommend contacting one of our technical sales representatives. They will be able to give you a more accurate quote for hardware/software cost depending on your application. 

 

Regarding development time, I would reserve a couple of days to initially learn the software and run through the examples. Vision Builder AI is a process-oriented program, in the sense that the design is based on the inspection steps and sorting procedures you choose to implement. Therefore, once you are familiar with the design aspect of the software, actually building your application should be fairly easy.

I think the pitfalls you have brought up are the main concerns. Since the leg does not have clear edges, it will be hard to perform edge detection; however, you might be able to use geometry or pattern matching. You might also take a look at the classification function; a description of classification is available in the NI Vision Concepts Manual. If you have a common identification point, you also have the option to reorient your object for inspection. Again, I would take a look at the different examples to get familiar with how to configure an inspection in VBAI 3.0.
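As an aside, one common way the reorientation step works is to estimate the blob's principal-axis angle from image moments and rotate the image before matching. Here is an illustrative sketch using OpenCV (VBAI has equivalent built-in steps, so no code is needed there):

    # Illustrative OpenCV sketch; assumes a non-empty 8-bit binary image.
    import math
    import cv2

    def reorient(binary_img):
        m = cv2.moments(binary_img, binaryImage=True)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]               # centroid
        angle = 0.5 * math.atan2(2 * m["mu11"], m["mu20"] - m["mu02"])  # principal axis, radians
        # Rotate about the centroid so the long axis lines up with the image x-axis
        # (the sign convention may need flipping for a given camera setup).
        rot = cv2.getRotationMatrix2D((cx, cy), math.degrees(angle), 1.0)
        h, w = binary_img.shape
        return cv2.warpAffine(binary_img, rot, (w, h))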

 

Take care,

 

Vu D

Message 4 of 6
Thanks again Vu D!

I'll look over everything you've pointed me to.

Just some final (I hope) comments and thoughts. I am thinking of plowing all the parts into a single file, so the camera's field of view would be a limited-width path, probably 6 inches or so wide. Because there will be a gap between parts (end to end), and no parts to either side, part identification within the camera's field of view would be limited to one part. I am thinking that a photo-eye could be used to trigger the camera to take a picture, but that may be more complex than what is required.
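Running the rough numbers (the 640-pixel sensor width and 1 ms exposure below are just assumptions for illustration):

    # Back-of-the-envelope resolution and motion-blur check.
    FOV_WIDTH_IN = 6.0                      # single-file lane width
    SENSOR_WIDTH_PX = 640                   # assumed camera resolution
    BELT_SPEED_IN_PER_S = 25 * 12 / 60.0    # 25 ft/min = 5 in/s
    EXPOSURE_S = 0.001                      # assumed 1 ms exposure

    in_per_px = FOV_WIDTH_IN / SENSOR_WIDTH_PX
    blur_px = BELT_SPEED_IN_PER_S * EXPOSURE_S / in_per_px
    print(f"{in_per_px:.4f} in/pixel, roughly {blur_px:.1f} pixel(s) of motion blur per exposure")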

Your thoughts?
Message 5 of 6
Hi MrMark,

 

There are several methods for triggering a capture (reading an encoder on the belt, triggering a laser using a data acquisition card, software triggering based on a constant line velocity, etc.). National Instruments has also partnered with the following companies, who specialize in providing machine vision consulting and integration with NI Vision products.
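For the photo-eye-plus-fixed-delay approach at your stated 25 FPM, the timing is simple arithmetic (the sensor offset below is a made-up mounting distance, not a recommendation):

    # Trigger delay for a photo-eye mounted upstream of the camera's field of view.
    BELT_SPEED_IN_PER_S = 25 * 12 / 60.0   # 25 ft/min = 5 in/s
    SENSOR_TO_FOV_CENTER_IN = 4.0          # hypothetical mounting distance

    delay_s = SENSOR_TO_FOV_CENTER_IN / BELT_SPEED_IN_PER_S
    print(f"Capture about {delay_s:.2f} s after the photo-eye fires")  # 0.80 s at 25 FPM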

 

They will be able to provide information about setting up a complete system.  If you have more detailed technical questions regarding our products, you can also go here to request direct support from one of our engineers.  Good luck with your application.

 

Best regards,

 

Vu D

Message 6 of 6