Find the Fruit Experiment
My wonderful son had just turned 18 months old. It is such an exciting age. He is getting remarkably good at identifying and naming "stuff" in the world around him. He is also getting a little too good at demanding more of that "stuff". But that's a different story.
A child’s ability to identify and name objects doesn't magically happen. Like most parents, I've spent countless hours flicking through picture books, repeatedly naming the objects on each page.
We take it for granted that our massive adult brains can see a picture of, for example, an apple... and just know it's an apple. But through training a little kiddie, I've realised just how complex object identification really is.
In one book an apple looks like this...
...in another book, an apple looks like this...
... while, in another book, an apple looks like this...
So what makes an apple an apple? The colour? The size? The shape?
Being an unashamed nerd, I wondered if it would be easier to teach a computer to identify pictures of fruit than an 18-month-old child.
A resounding YES!! It took me several weeks to teach my son a dozen types of fruit. Armed with a copy of LabVIEW, I was able to teach my PC to do the same thing in just 45 minutes.
Existing image processing technology (and, certainly, my silly applications) does NOT come close to the wonders of the human brain. My awesome son can already do many things that no technology can – including converting an apple into energy [suck on that PC!!].
This is just a fun, tongue-in-cheek experiment.
Experiment Part #1: Find Fruit in Static Images
Using LabVIEW to classify objects in static imagery is easy!! The Vision Development Module (VDM) installs all of the LabVIEW functionality required to load and process images - including dedicated functions for object classification through colour or shape. VDM also installs a stand-alone Classification Training application, which allows you to teach your application what different objects (or, in my case, fruit) look like.
NI Color Classification Training Interface
You simply create object classes (banana, apple, orange etc), then provide examples of what each class (fruit) looks like. The app then generates a classification file (.clf), which can be quickly loaded into a LabVIEW app.
The only real coding I had to do was some basic morphology, which allowed me to locate and separate the different objects (fruit) within the image.
This requires a 4 step process...
1. Remove the White Background. Apply a threshold to the image, to produce a binary image (ie. all pixel values are either 1 or 0). Clusters of 1's, known as Binary Large Objects (BLOBs), might be a fruit!
2. Remove Noise. Apply erosion to the BLOBs, to strip layers of pixels from the periphery of the BLOBs to remove noise (false fruit) and separate the BLOBs.
3. Fill Holes. Apply dilation to the image, to add pixels back to the inner and outer periphery of the BLOBs, thereby filling holes which could cause erroneous readings.
4. Optional: remove any objects touching the border of the image (ie. Fruit that are partially off-screen)
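My actual implementation uses the IMAQ morphology VIs in LabVIEW, but the 4-step pipeline above can be sketched in plain Python with NumPy. This is a minimal illustration, not the original code: the threshold level, the 3×3 structuring element, and the tiny test image are all my own assumptions.

```python
import numpy as np
from collections import deque

def threshold(gray, level=200):
    """Step 1: pixels darker than the white background become 1 (potential fruit)."""
    return (gray < level).astype(np.uint8)

def erode(binary):
    """Step 2: strip one layer of pixels from each BLOB (3x3 structuring element).
    A pixel survives only if its entire 3x3 neighbourhood is 1."""
    h, w = binary.shape
    p = np.pad(binary, 1, constant_values=0)
    out = np.ones_like(binary)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(binary):
    """Step 3: add a layer of pixels back to each BLOB, filling small holes.
    A pixel becomes 1 if any pixel in its 3x3 neighbourhood is 1."""
    h, w = binary.shape
    p = np.pad(binary, 1, constant_values=0)
    out = np.zeros_like(binary)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def clear_border(binary):
    """Step 4 (optional): remove BLOBs that touch the image border,
    i.e. fruit that are partially off-screen."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for y in range(h):                      # 4-connected component labelling (BFS)
        for x in range(w):
            if binary[y, x] and not labels[y, x]:
                count += 1
                labels[y, x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = count
                            q.append((ny, nx))
    edge = set(labels[0]) | set(labels[-1]) | set(labels[:, 0]) | set(labels[:, -1])
    keep = [i for i in range(1, count + 1) if i not in edge]
    return np.isin(labels, keep).astype(np.uint8)
```

On a white 7×7 test image containing a 3×3 dark square plus one stray dark pixel, the erode step removes the stray pixel and the dilate step restores the square to its original size.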
Once the locations of potential fruit are found, their positions (defined by regions of interest) can be passed to the prebuilt classification functions.
Experiment Part #2: Find Fruit in Live Camera Feeds
Next, I wanted the code to classify fruit in live images streamed from a camera – a significantly more complicated task. As anyone who has worked on a machine vision application will know, it is imperative that you have an appropriate camera, optics and lighting. I had none of those things.
Best Practice 1: Select a sensitive, high-res camera that provides full control over everything from focus and shutter speed, to gain and white balance.
My Camera: I used the crappy webcam integrated into my laptop. The webcam provided minimal control over acquisition parameters.
Best Practice 2: Use a quality ring light (to provide consistent illumination) and a diffuser (to minimise reflections).
My Lighting: The flickery office strip lights.
Due to the immense limitations of my setup, colour classification would not work. Instead, I moved to geometry-based classification - by recreating the classification file using the NI Particle Classification Training Interface (installed with VDM). Against all odds, this worked great!
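To give a feel for why geometry-based classification tolerates bad colour and lighting: it works from shape descriptors that don't change when a blob is moved, rotated or re-lit. This is a hand-rolled Python sketch of that idea, not NI's actual algorithm; the circularity feature, the perimeter estimate and the template values are my own illustrative assumptions.

```python
import math
import numpy as np

def shape_features(binary):
    """Position- and rotation-invariant descriptors for one binary BLOB:
    (area, circularity), where circularity = 4*pi*A / P^2 is ~1.0 for a
    disc and much smaller for elongated shapes like a banana."""
    area = int(binary.sum())
    # Crude perimeter estimate: blob pixels with at least one background 4-neighbour
    p = np.pad(binary, 1, constant_values=0)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]) & binary
    perimeter = area - int(interior.sum())
    circularity = 4 * math.pi * area / perimeter ** 2 if perimeter else 0.0
    return area, circularity

def classify(features, templates):
    """Nearest-neighbour match on circularity -- a toy stand-in for the
    lookup that the .clf classification file performs."""
    _, circ = features
    return min(templates, key=lambda name: abs(templates[name] - circ))
```

A round blob scores a circularity near 1.0 and a long thin blob nearer 0.3, regardless of where it sits in the frame or how bright the image is - which is roughly why my flickery-office-light setup still worked.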
Armed with my new particle classification file, I then replaced the IMAQ functions that load images from disk with the IMAQdx functions that streamed images from my webcam.
The resulting program was remarkably resilient to focus, lighting and even image rotation.
Steps to Run the Code
1. Download the attached zip file, and extract the contents onto your desktop
2. Open Fruit Classifier.lvproj. From the project, open Fruit Classifier (StaticImages).vi
3. Using the file path control at the bottom of the front panel, select one of the supplied fruit images (inside a subfolder called Images)
4. Run the code
Note: You are welcome to try the code with images of your own. This may require you to train the code to classify the new fruit. To do this, open Fruit (Colour Classifier)2.clf in the Color Classification Tool, open your new image, and tell the tool what the different fruit are.
The code is fairly neat and well documented, so should be easy to follow.
Update: As per request, I have just uploaded the code in LabVIEW 2013