Competition Year: 2016
University: NTNU, Norwegian University of Science and Technology
Team Members: Edwin de Pano, Jørgen Kopstad, Kawan Kandili & Sindre Bjørvik.
(Year of graduation: 2016)
Faculty Advisers: Dominik Osinski (email@example.com), Dag Roar Hjelme
Email Address: firstname.lastname@example.org
Country: Norway (NO)
A Wearable Real-Time Sensory Substitution System
NTNU, together with the Norwegian Association of the Blind and Partially Sighted, runs a research project aimed at developing a visual sensory substitution system that will help blind people with orientation and identification tasks in everyday life.
Our task as a bachelor's group was to design and build the hardware part of the system, and to re-design and implement the existing Colorophone code, written in LabVIEW, on a real-time FPGA target: the myRIO.
According to the World Health Organization (WHO), in 2014 an estimated 285 million people worldwide were visually impaired, and more than 39 million of them were blind. There is still no effective and widely accepted technology-based aid for these people. The products that do exist on the market have disadvantages: they are expensive and not widely accepted by users. Today, the most popular aid remains the white cane. Nine out of ten visually impaired people live in developing countries, which is why it is very important to create a solution that is accessible to them.
Recent discoveries in neuroscience show that our brains are task-oriented rather than sensory-oriented devices; in other words, the visual cortex can be engaged through proper auditory stimuli. There are not many sensory substitution devices that encode color as sound. Information about color gives visually impaired people access to information that could not be obtained before, helping with many everyday tasks such as travelling and object identification. Additionally, visually impaired people can be included in communication about the colorful world around us.
Figure 1: The Colorophone prototype
Sensing colors through sound has been a challenge that many scientists have tried to solve. Even Sir Isaac Newton had his own theory of how to substitute colors with sound. Newton believed there were seven primary colors: red, orange, yellow, green, blue, indigo and violet, and he matched these seven colors to the seven notes of the musical scale. However, this solution, like any other that tries to match light and sound frequencies in a one-to-one relation, has its disadvantages. One of them is the difficulty of recognizing the sound associated with a given color; only people with absolute pitch would be able to do it exactly.
The Colorophone sonification method, invented by Dominik Osinski, is inspired by the human visual system. Our color perception comes from comparing the responses of different color-sensitive cells called cones. We are equipped with three cone types, with peak sensitivities at the light wavelengths we call red, green and blue. If our "green" and "red" cones are active, we perceive the color as yellow. If the "blue" and "red" cones are active, we perceive the color as violet. If all three are active, we interpret the color as white.
Figure 2: The RGB colors
The Colorophone method associates the red color component with a high-frequency sound, the green component with a middle-frequency sound and the blue component with a low-frequency sound. White is coded as white noise. This kind of association allows us to represent every possible color as sound.
This system is innovative because it is a new way of listening to colors. With it, the user can listen to a large spectrum of colors without having to learn a large spectrum of frequencies: there are only three frequencies, one for each RGB component, plus white noise representing the white component given by the lowest of the three RGB values. This makes it easy to learn how to listen to colors.
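The mapping described above can be sketched in a few lines. This is a minimal illustration, not the actual LabVIEW implementation: the specific tone frequencies and the exact split between the chromatic tones and the white-noise component are our assumptions based on the description.

```python
import numpy as np

def sonify_rgb(r, g, b, fs=44100, duration=0.1, freqs=(800, 400, 200)):
    """Sketch of the Colorophone idea: each RGB component drives the
    amplitude of one sine tone (high for red, middle for green, low for
    blue), and the achromatic part shared by all three components is
    rendered as white noise.  The frequencies in `freqs` and the
    min-based white split are illustrative assumptions."""
    t = np.arange(int(fs * duration)) / fs
    white = min(r, g, b)                    # achromatic (white) component
    amps = [c - white for c in (r, g, b)]   # remaining chromatic parts
    signal = sum(a / 255 * np.sin(2 * np.pi * f * t)
                 for a, f in zip(amps, freqs))
    signal = signal + white / 255 * np.random.uniform(-1, 1, t.size)
    peak = np.max(np.abs(signal))           # normalize for audio output
    return signal / peak if peak > 0 else signal

yellow = sonify_rgb(255, 255, 0)  # red + green tones, no noise
```

For example, pure yellow activates only the "red" and "green" tones, mirroring how the red and green cones produce the perception of yellow, while pure white produces only noise.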
One might think that adding a proximity sensor, which delivers information about the distance to an object, would overload the user with sound, but the two sounds are easily distinguishable. The combination of the two makes it possible for a visually impaired person to look at an item from a distance and determine whether it is what he or she is looking for.
The prototype consists of a pair of glasses with a built-in camera, a proximity sensor, AfterShokz headphones and a processing unit (myRIO). The camera captures the image and sends digital RGB values to the processing unit, where they are used to create a waveform that is sent to the headphones. The proximity sensor measures distances from 10 to 180 cm in front of the glasses and provides the information needed to generate a ticking sound that ticks at a high rate at short distances and at a low rate at long distances. The sounds generated from the camera and proximity-sensor information are played through the AfterShokz headphones. AfterShokz are bone-conduction headphones that vibrate against the temples of the skull, so the user can hear sounds from the environment and from the speakers at the same time.
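The distance-to-ticking mapping can be sketched as a simple function. The 10-180 cm span comes from the prototype description; the linear mapping and the 1-10 Hz tick-rate range are illustrative assumptions, since the report does not specify the actual control law.

```python
def tick_rate(distance_cm, d_min=10.0, d_max=180.0,
              rate_near=10.0, rate_far=1.0):
    """Map a measured distance to a ticking rate in Hz: fast ticks when
    the object is close, slow ticks when it is far away.  The sensor's
    10-180 cm range is from the prototype; the linear mapping and the
    1-10 Hz bounds are assumptions for illustration."""
    d = max(d_min, min(d_max, distance_cm))  # clamp to the sensor range
    frac = (d - d_min) / (d_max - d_min)     # 0 = nearest, 1 = farthest
    return rate_near + frac * (rate_far - rate_near)

tick_rate(10)   # -> 10.0 (fast ticking, object very close)
tick_rate(180)  # -> 1.0  (slow ticking, object at maximum range)
```

Because the ticking rate and the color tones occupy very different perceptual channels (rhythm versus pitch/timbre), the two streams remain easy to tell apart.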
Figure 3: Overview of the whole system
The whole system works well and can almost be considered fully functional. The distance measurement subsystem we want to implement is still in a beta phase. We are still optimizing the system, but the color-to-sound conversion is finished.
In total, about 2000 hours were spent on the project, of which roughly 500 hours went into LabVIEW programming to get the myRIO system working.
Benefits of our solution:
There are many benefits to the Colorophone system. It can help visually impaired people with everyday tasks. That is our motivation: to give opportunities to those who have it harder than we do.
One possible future revision is adding beacons, so the user could more easily find places such as the neighborhood mall or the nearest bus stop. Another improvement that could be implemented is eye-tracking technology, which would give the user a more natural feeling of eyesight instead of having to move the head around.
The main software development environment the group used during this project was NI LabVIEW. One benefit is that LabVIEW made programming much simpler, since it gives users easy access to video processing. Another benefit is that the code is graphical, which provides a better overview of the program. LabVIEW is also well suited for prototyping because of its easy configuration and modification possibilities.
Figure 4: The front panel of the Colorophone program
On the front panel, the RGB values for the current color are shown on the right side, together with the amplitudes of the frequencies at the audio output. On the left side are settings where the volume for each frequency can be amplified. These settings are important because everyone hears differently.
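The per-frequency volume setting can be sketched as a simple gain stage applied to the channel amplitudes before synthesis. The linear gains and the clamping to the 0-255 amplitude range are our assumptions; the front panel only states that the volume of each frequency can be amplified.

```python
def apply_gains(rgb_amps, gains):
    """Scale each channel amplitude by a user-set gain, clamping the
    result to a 0-255 range.  Per-channel linear gains and the clamp
    are assumptions based on the front-panel description; the actual
    control law in the LabVIEW program is not specified."""
    return [min(255.0, max(0.0, a * g)) for a, g in zip(rgb_amps, gains)]

# Example: a user who hears the high (red) tone poorly boosts it and
# attenuates the low (blue) tone.
apply_gains([100, 100, 100], [1.5, 1.0, 0.5])  # -> [150.0, 100.0, 50.0]
```

A calibration step like this lets each user compensate for their individual hearing profile before the tones are mixed.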
Since Colorophone is a program that needs to run in real time, programming on an FPGA is the ideal way to execute this project. Running in real time means that everything happens within a determined period, usually no more than a few milliseconds. This is important because the user of Colorophone needs fast feedback about what is happening.
NI Multisim & NI Ultiboard
NI Multisim and NI Ultiboard are programs used to design circuit boards, and the combination of the two is excellent. Our first task was to design the voltage-to-frequency conversion circuit in Multisim, which was easy and straightforward. The circuit can then be exported to Ultiboard for 3D PCB design. Once the Ultiboard design is done, the board only needs to be milled and soldered. That is how easy it is!
We hope that our contribution to the Colorophone project will help in the development of an affordable sensory substitution device that will help visually impaired people.
Join us at www.colorophone.com
Figure 5: Poster