University: Universidad Distrital Francisco José de Caldas, Bogotá, Colombia
Faculty: Tecnológica - Control Engineering
Team Member(s): Jonathan Eduardo Cruz Ortiz
Tels: (57)(1) 7360862 – 3124451894
Email Address: jecruzo@hotmail.com
jecruzo@correo.udistrital.edu.co
Title: IMPLEMENTATION OF AN OBSTACLE-AVOIDANCE SYSTEM USING NEURAL NETWORKS, APPLIED TO A HOLONOMIC MOBILE ROBOT.
Description:
I implemented a neural network (multilayer perceptron) for a typical mobile-robotics task, obstacle avoidance, running in real time thanks to the versatility and efficiency of the LabVIEW programming environment. All of this work was assembled and deployed on the holonomic mobile robot "ROVIO".
Products:
NI LabVIEW 2010
NI Vision Development Module
The Challenge:
Implement artificial neural networks in mobile robotic systems. In recent times science has made great strides in the field of artificial intelligence; it is today a subject of worldwide research with significant scientific results. We can find algorithms that try to mimic the behavior of our brain (neural algorithms), as well as algorithms based on the genetic and evolutionary processes of living organisms, hence the name "genetic algorithms".
All of this development means that today we can foresee a future with highly developed intelligent systems capable of performing multiple tasks.
So the challenge of this project is to implement all of this work on a mobile robot called "ROVIO", which is capable of a wide range of motion thanks to its omnidirectional wheels, and which carries an IP camera mounted on its body and an IR sensor, making it possible to implement the algorithms described above.
The Solution:
The main objective was to design a neural network (multilayer perceptron), chosen for its versatility and its ability to approximate nonlinear behavior, train it, and finally deploy it on the mobile robot ROVIO, where the full potential of such algorithms could be observed.
Before this, it was necessary to investigate the robot itself in order to control it completely, followed by the programming of the LabVIEW VIs. The project is divided into several parts.
1. The robot is basically an HTTP (web) server to which an IP camera and wheels have been attached for motion control. This part is therefore related to network protocols (TCP/IP) and, specifically, to CGI (Common Gateway Interface) scripts. A communication scheme was established between the robot (server) and a computer (client) running LabVIEW, which ultimately executes all control actions through HTTP requests and responses. All of this was developed with the LabVIEW network-protocols palette (Data Communication). See Figure 1.
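The request/response pattern above can be sketched outside LabVIEW as well. The snippet below (Python, for illustration only; the project itself uses the Data Communication palette) builds a movement command as a CGI request URL. The command and parameter names follow Rovio's published CGI API, but treat them as assumptions and check the robot's manual for the exact strings.

```python
# Client side of the robot/PC dialogue: the PC issues HTTP GET requests
# against the robot's CGI interface. Command names are assumptions based
# on Rovio's documented API (rev.cgi, Cmd=nav, action=18 = manual drive).
from urllib.parse import urlencode

ROBOT_IP = "192.168.10.18"  # hypothetical address of the Rovio web server

def drive_url(action, drive, speed):
    """Build the HTTP request URL for a movement command.

    'drive' selects the direction (e.g. forward, rotate left/right) and
    'speed' the motor speed; both are passed straight to the CGI script.
    """
    query = urlencode({"Cmd": "nav", "action": action,
                       "Drive": drive, "Speed": speed})
    return "http://%s/rev.cgi?%s" % (ROBOT_IP, query)

# The client would then issue a plain HTTP GET, for example:
#   import urllib.request
#   urllib.request.urlopen(drive_url(18, 1, 5))   # drive forward
print(drive_url(18, 1, 5))
```

In LabVIEW the same URL string is assembled and sent with the HTTP/TCP VIs of the Data Communication palette.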
2. The next part is processing the image obtained through the previous work, since at this point the robot could already be asked for the image it was capturing. This stage consists of obtaining that image, rebuilding the JPEG (the image data arrives in binary form), converting it to an IMAQ image so that LabVIEW recognizes it as its own, and applying a series of image-processing algorithms from the NI Vision Development Module. The main objective is to filter the image so that only the red figures remain and can then be analyzed by the neural network (recall that the network is only a rough approximation of our biological neural networks and can never match the processing performed in our brain, so its job should be made as easy as possible). Since the images obtained by the robot are not captured in a controlled environment (one where all environmental variables such as light and brightness are controlled so as not to disturb a vision system), this step is important. In the end, only the processed red figures are presented to the neural network as objects to avoid.
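The red-filtering step done with NI Vision can be illustrated with a minimal per-pixel threshold in Python/NumPy. This is a stand-in for the IMAQ color-threshold processing, not the actual VIs used, and the threshold values are illustrative assumptions.

```python
# Minimal stand-in for the red-filtering stage: keep pixels whose red
# channel is high and clearly dominates green and blue.
import numpy as np

def red_mask(rgb, r_min=120, dominance=1.5):
    """Return a boolean mask of 'red enough' pixels in an HxWx3 uint8 image."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    # A pixel counts as red if R exceeds r_min and is well above G and B.
    return (r >= r_min) & (r >= dominance * g) & (r >= dominance * b)

# Tiny synthetic image: one red pixel, one white, one dark.
img = np.array([[[200, 30, 30], [255, 255, 255], [10, 10, 10]]],
               dtype=np.uint8)
print(red_mask(img))  # only the first pixel is True
```

A real uncontrolled environment would need a more robust criterion (e.g. thresholding in HSL/HSV space, which is what color-threshold tools typically offer), precisely because light and brightness vary from frame to frame.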
3. The next part was to determine which data to present to the neural network. This includes the value of the IR (infrared) sensor and, from the processed image, specific points that tell the network where the figure is so that it can decide which action to take. These are the inputs to the neural network.
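The submission does not spell out which four image points are extracted, so the sketch below is a hypothetical construction: it assumes the red-pixel density in four vertical bands of the binary mask, plus the normalized IR reading, giving the five inputs the network expects. The band scheme and IR scaling are assumptions for illustration only.

```python
# Hypothetical assembly of the 5 network inputs: four "image points"
# (here assumed to be red-pixel densities in four vertical strips) plus
# the IR sensor reading, everything normalized to [0, 1].
import numpy as np

def network_inputs(mask, ir_value, ir_max=255.0):
    """mask: HxW boolean red mask; ir_value: raw IR sensor reading."""
    h, w = mask.shape
    bands = np.array_split(np.arange(w), 4)           # four vertical strips
    densities = [mask[:, idx].mean() for idx in bands]
    return np.array(densities + [ir_value / ir_max])  # 5 inputs

mask = np.zeros((4, 8), dtype=bool)
mask[:, 0:2] = True          # obstacle on the far left of the image
x = network_inputs(mask, ir_value=128)
print(x.shape)  # (5,)
```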
4. Finally, the design and implementation of the neural network. The multilayer perceptron was designed as follows:
Input layer: 5 neurons, corresponding to the IR sensor value and four image points.
Hidden layer: 10 neurons.
Output layer: 3 neurons, each telling LabVIEW which control action to take: turn right, turn left, or move forward.
Activation function: the sigmoid function.
Training: the backpropagation algorithm, with weight adaptation by gradient descent, validated against the squared error (SSE).
We then carried out the training; about 375 epochs were needed for the network to settle and produce the weight and threshold (bias) vectors, which were then used to build the neural network in LabVIEW.
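The 5-10-3 architecture and training scheme described above can be sketched in a few lines of NumPy. This mirrors the structure used in the project (sigmoid activations, backpropagation, gradient descent on the squared error), but the training data below are made up for illustration.

```python
# Minimal 5-10-3 multilayer perceptron trained by backpropagation with
# plain gradient descent on the squared error (SSE).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight matrices and threshold (bias) vectors: 5 inputs -> 10 hidden -> 3 out.
W1 = rng.normal(0, 0.5, (10, 5)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (3, 10)); b2 = np.zeros(3)

def forward(x):
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return h, y

def train_step(x, t, lr=0.5):
    """One backpropagation update; returns the squared error for (x, t)."""
    global W1, b1, W2, b2
    h, y = forward(x)
    # Output-layer delta (the sigmoid derivative is y * (1 - y)).
    d2 = (y - t) * y * (1 - y)
    d1 = (W2.T @ d2) * h * (1 - h)
    W2 -= lr * np.outer(d2, h); b2 -= lr * d2
    W1 -= lr * np.outer(d1, x); b1 -= lr * d1
    return float(((y - t) ** 2).sum())

# Toy example: obstacle on the left -> "turn right" (one-hot target).
x = np.array([1.0, 0.0, 0.0, 0.0, 0.5])
t = np.array([1.0, 0.0, 0.0])
errors = [train_step(x, t) for _ in range(375)]  # same epoch count as above
print(errors[0] > errors[-1])  # the SSE decreases as training proceeds
```

In the project itself, the trained weight and threshold vectors were exported and wired into the LabVIEW block diagram, so that only the forward pass runs on the robot in real time.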
Benefits of using LabVIEW
Finally, seeing the whole system in operation, one can conclude that LabVIEW offers great potential as a programming environment, one that undoubtedly streamlines this type of project. This is demonstrated by the development time: about three months, whereas projects of this kind usually take about a year as university research projects.
The IMAQ Vision functions undoubtedly made the project much easier, making the whole range of image-processing topics understandable from within LabVIEW; topics that would normally require nearly complete courses, usually taken at the senior undergraduate or master's level.
Finally, the LabVIEW graphical environment is pleasant to work in for this kind of application, allowing rapid and effective debugging in a friendly programming environment.
Hello there,
Thank you so much for your project submission into the NI LabVIEW Student Design Competition. It's great to see your enthusiasm for NI LabVIEW! Make sure you share your project URL with your peers and faculty so you can collect votes for your project and win. Collecting the most "likes" gives you the opportunity to win cash prizes for your project submission. If you or your friends have any questions about how to go about "voting" for your project, tell them to read this brief document (https://forums.ni.com/t5/Student-Projects/How-to-Vote-for-LabVIEW-Student-Design-Projects-doc/ta-p/3...). You have until July 15, 2011 to collect votes!
I'm curious to know, what's your favorite part about using LabVIEW and how did you hear about the competition? Great work!!
Good Luck, Liz in Austin, TX.