myAlexa: Voice Controlled Robot Hand to Interpret Sign Language
Artificial intelligence is getting smarter and smarter; how dangerous could this become in the future? What AI might be able to do has been a huge concern for people over the last few years, but what if it could be used for the benefit of deaf people?
I integrated Amazon Alexa into LabVIEW for the first time in order to control a robot hand that converts speech into sign language. It can display numbers, letters and sequences of letters (words). With a robot hand that has additional motors to control the elbow and wrist, this could be extended to convert full sentences into sign language, making it much easier to communicate with people experiencing hearing difficulties. To see it in action, check out my video.
A Bit About the Developer
My name is Nour Elghazawy, and at the time I did this project I was on my Industrial Placement at National Instruments. I am studying Mechatronics Engineering at the University of Manchester, and I finished this project in one week. Working on the project improved my LabVIEW skills, which made it easier to become a Certified LabVIEW Developer, and it also improved my skills with Python and JSON.
If you have any questions or feedback, feel free to reach out on LinkedIn.
Why this project?
Speech recognition can be very useful to integrate into your project, from personal assistants to robotic systems and even smart homes. Implementing the whole artificial intelligence platform from scratch can take a long time, and even then the quality of the system will depend on the amount of data used to train the algorithm. Therefore, I built this proof of concept that integrates Amazon Alexa's already-trained speech recognition into LabVIEW. This allows you to control ANYTHING in LabVIEW just with your voice!
This was demonstrated with a voice-controlled robot hand that converts speech into sign language, which can be a first step towards allowing all of us to communicate in sign language with deaf people.
System Summary
Intents and slots are used on the Alexa side to interpret what you say. For example, to control finger number 1 there is an intent called robotfingerstate. This intent has utterances, which are the phrases you would say to Alexa, and the utterances contain slots, which are variables inside the utterances. An utterance can be "Open finger number one", which on the Amazon Cloud is defined as "{State} finger number {Number}", where {State} and {Number} are slots. State can be either Open or Close, and Number represents which finger is being used: 1, 2, 3, 4 or 5. So you can say "open finger number 1", "close finger number 1", "open finger number 2" and so on, and all of these utterances will trigger the same intent.
Whenever I say something that matches an intent, my Lambda function is triggered. Lambda is like the brain on the Amazon side: it knows which intent triggered it and puts a message in the SQS queue based on the intent and its slots. For example, if the user says "open finger number 1", the message will be onefingerstate,open,1.
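The Lambda function itself only needs a few lines of Python. Below is a minimal sketch of what it could look like, assuming a raw (non-SDK) Alexa request and boto3 for SQS; the queue URL is a placeholder, and the intent name, slot names and message format follow the description above rather than the exact code in the attached project.

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.REGION.amazonaws.com/ACCOUNT_ID/myAlexaQueue'  # placeholder

def lambda_handler(event, context):
    request = event['request']
    if request['type'] == 'IntentRequest' and request['intent']['name'] == 'robotfingerstate':
        slots = request['intent']['slots']
        state = slots['State']['value']    # "open" or "close"
        number = slots['Number']['value']  # "1" to "5"
        # Queue the command for LabVIEW to pick up, e.g. "onefingerstate,open,1"
        sqs.send_message(QueueUrl=QUEUE_URL,
                         MessageBody='onefingerstate,{},{}'.format(state.lower(), number))
        speech = 'Moving finger number {}'.format(number)
    else:
        speech = "Sorry, I didn't get that."
    # Standard Alexa JSON response so the skill replies and ends the session
    return {
        'version': '1.0',
        'response': {
            'outputSpeech': {'type': 'PlainText', 'text': speech},
            'shouldEndSession': True
        }
    }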
On the LabVIEW side, the code uses a Queued Message Handler architecture. It reads the message from the SQS queue, decodes it and updates a Network Published Shared Variable on the real-time target of the myRIO, which controls the motors of the hand.
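The decoding itself happens in LabVIEW block diagrams, so there is no text code to show, but the logic is roughly equivalent to this short Python sketch (the message format and finger numbering come from the example above; the SQS polling and shared-variable write are only represented by comments):

def decode_message(body):
    """Split a message such as 'onefingerstate,open,1' into its parts."""
    command, state, number = body.split(',')
    return command, int(number), state.lower() == 'open'

# In the QMH, each message read from SQS would be decoded like this and the
# result written to the Network Published Shared Variable that the myRIO
# real-time VI reads to drive the finger motors.
command, finger, is_open = decode_message('onefingerstate,open,1')
print(command, finger, is_open)   # onefingerstate 1 True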
Hardware
Software
How to use the code?
Create Lambda Function:
Create Alexa Skill:
Now go to LabVIEW and open the attached project.