
Student Projects


Virtual Speak for Dumb (Student Design Competition 2013)

Contact Information

University: VIT University, Vellore

Team Members (with year of graduation): (1) Himesh Reddivari (2014), (2) Hithesh Reddivari (2013)

Faculty Adviser: M. Eswar Reddy

Email Address:

Submission Language : English

Project Information

Title: Virtual Speak for Dumb


This project helps people who cannot speak to communicate using a few hand gestures. It can also be used to recognise a person and to deliver speech in different languages.


LabVIEW 2011, Image Processing Toolkit, Arduino Diecimila.

The Challenge

Today, people who cannot speak face a serious problem communicating with others: the gestures they make cannot be understood by the general public. Although many approaches to this problem have been tried before, they failed at the implementation stage. Is there a system that speech-impaired people can use effectively?

The Solution


I have designed a system to solve the problem described above. The person using the system wears colour tags on his fingers. The system reads his intentions or feelings from his gestures and generates output accordingly. The output is a voice or speech signal already stored inside the system; a set of commonly used phrases is stored in advance.

With this kind of system, a speech-impaired person can communicate easily with the outside world; everything is at his fingertips. The system can also serve the general public, for example by helping someone speak in a different language.

Very low-cost sensors are used, and the entire system can be designed using NI software.

Detailed description:

The user wears colour tags on his fingers, as shown in Fig. 1. The entire setup is explained through the block diagram.


The camera is the primary input device. It captures the user's gestures and transmits the frames to LabVIEW, which processes the input image.
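The tag-detection step can be sketched outside LabVIEW as well. The following is a minimal illustration in Python (the project itself uses LabVIEW and the Image Processing Toolkit): a tag is counted as visible when enough pixels of a frame fall inside its colour range. The colour names, RGB bounds, and pixel threshold are all illustrative assumptions, not values from the original project.

```python
import numpy as np

def detect_tags(frame, color_ranges, min_pixels=50):
    """Return the colour tags visible in an RGB frame.

    frame: HxWx3 uint8 array. color_ranges maps a tag name to
    (low, high) RGB bounds. A tag counts as visible when at least
    min_pixels pixels fall inside its bounds.
    """
    visible = []
    for name, (low, high) in color_ranges.items():
        low, high = np.array(low), np.array(high)
        mask = np.all((frame >= low) & (frame <= high), axis=-1)
        if mask.sum() >= min_pixels:
            visible.append(name)
    return visible

# A synthetic 100x100 frame with a red patch standing in for one tag.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[10:30, 10:30] = (220, 30, 30)
ranges = {"red": ((180, 0, 0), (255, 80, 80)),
          "green": ((0, 180, 0), (80, 255, 80))}
print(detect_tags(frame, ranges))  # ['red']
```

In the real system this per-colour thresholding would run on each camera frame before the gesture-to-phrase mapping below.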

In the present case the user has three colour tags on his right hand, which gives 2^3 = 8 combinations; the all-zero combination 000 (no tag visible) is removed, leaving 7 usable patterns. Each combination is mapped to a distinct word or sentence, and when the user shows a combination, the mapped word or sentence is delivered as voice.

If the user employs both hands, with 4 fingers usable on each (thumbs are not considered), there are 2^8 - 1 = 255 combinations, each of which can be mapped to a distinct word.
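The mapping described above is just a bitmask lookup: each visible tag sets one bit, and the resulting code indexes a phrase table. A small Python sketch for the three-tag case (the phrases themselves are placeholders, not the project's actual vocabulary):

```python
# Three tags give 2**3 - 1 = 7 usable codes (000 = no gesture).
# With eight fingers the same scheme yields 2**8 - 1 = 255 codes.
phrases = {
    0b001: "hello",
    0b010: "thank you",
    0b011: "yes",
    0b100: "no",
    0b101: "I am hungry",
    0b110: "please help me",
    0b111: "goodbye",
}

def fingers_to_code(raised):
    """Pack a list of raised-finger flags (LSB first) into a bitmask."""
    return sum(bit << i for i, bit in enumerate(raised))

code = fingers_to_code([1, 1, 0])  # first two tags visible -> 0b011
print(phrases[code])               # yes
```

In the LabVIEW implementation the looked-up phrase would then be played back as the stored speech signal.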

Choice of language:

The user is also provided with a choice of language, so he can select the language his listener understands. This selection is made through the cell phone provided.

Choice of environment:

The user also has a choice of environment: he can select the set of words related to a particular situation, for example one word set for talking to a friend, another for a doctor, and so on. Once again, this input is given through the cell phone.
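The language and environment choices amount to selecting which phrase table the gesture codes index into. A hedged Python sketch of that lookup (the language names, environments, and phrases are all illustrative assumptions):

```python
# One phrase table per environment; one set of tables per language.
phrase_tables = {
    "english": {
        "doctor": {0b001: "I feel pain", 0b010: "I need medicine"},
        "friend": {0b001: "let us meet", 0b010: "call me later"},
    },
    # additional languages plug in the same structure
}

def lookup(language, environment, code):
    """Resolve a gesture code to a phrase in the chosen language/environment."""
    return phrase_tables[language][environment][code]

print(lookup("english", "doctor", 0b010))  # I need medicine
```

The cell-phone input described above would simply set the `language` and `environment` keys before gestures are interpreted.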

Use of microphone:

Consider the following situation: the user does not know the listener's name and signs a question asking for it. The moment the question is asked, the microphone starts recording the listener's spoken name and stores it in memory, while the camera records his face and updates the database. The user can then recall this person in the future.

Benefits using LabVIEW and NI tools:

LabVIEW and NI tools let me work directly on my idea. LabVIEW ships with a large set of examples, some of which I modified to use as sub-blocks of my project, and I did not need to worry about syntax errors as in text-based programming environments.

Level of completion: alpha

Time to build: 3 months

Additional revisions that could be made:

Additionally, a face recognition system and a voice recognition system could be added to the project. Face recognition would let the system identify faces and map particular words to each person.

The LabVIEW screenshots are provided:





Video Link: