Example Program Drafts


Waterloo Labs- Karaoke on Fire

by Member DylanC 04-16-2012 10:22 AM - edited 01-30-2017 10:42 AM


This application was designed by the Waterloo Labs team to let two players be scored simultaneously on their singing ability while displaying the notes that they and the original singer are hitting.

This code requires:

LabVIEW 2011 or Later

DAQmx 9.3 or Later

Steps to Complete:

This system requires an NI myDAQ for each audio input. You will need a microphone with a standard 3.5 mm plug for each player's myDAQ.

This is the players' front panel. The players are first prompted for their names; once entered, they press Play to begin the game. The game-mode switch at the top lets the users select between VS mode, where each player is compared to the song, and Duet mode, where the players sing together and are scored on how well they match each other. Each player's score is located directly below their name, along with three LEDs that light up according to how well they are matching the note. The note being sung is displayed right next to the LEDs. The plot at the top shows the frequency being detected; since the x-axis is time, it also shows a history of the frequencies that were sung. The next plot shows the notes being sung. Keep in mind that these plots show all three channels at once, with the song in yellow, player 1 in red, and player 2 in blue. The chart below that is the high-score chart, which holds a record of all player names and scores in a text file saved on the computer.


This is the code for acquiring data from the myDAQ. It uses a producer/consumer architecture to maximize performance while we collect, process, and display data. Below is the code where we retrieve the microphone data from the singers and place it in their respective queues.
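The producer/consumer idea can be sketched outside LabVIEW as well. Here is a minimal Python illustration (not the actual VI code): a producer thread enqueues chunks of microphone data while a consumer thread, running in parallel, dequeues and processes them. All names and the placeholder processing are illustrative.

```python
# Illustrative producer/consumer sketch; in the real application the
# producer is a DAQmx read loop and the consumer does FFT processing.
import queue
import threading

def producer(q, samples):
    # Stand-in for the DAQmx read loop: push each chunk of mic data.
    for chunk in samples:
        q.put(chunk)
    q.put(None)  # sentinel: signals the consumer that acquisition is done

def consumer(q, results):
    # Runs in parallel with the producer; dequeues and processes chunks.
    while True:
        chunk = q.get()
        if chunk is None:
            break
        results.append(sum(chunk) / len(chunk))  # placeholder processing

q = queue.Queue()
results = []
fake_audio = [[0.1, 0.2], [0.3, 0.4]]  # stand-in for myDAQ sample chunks
t1 = threading.Thread(target=producer, args=(q, fake_audio))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
```

Decoupling acquisition from processing this way keeps the DAQ loop from being slowed down by the heavier analysis work.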


Expanding Daq Producer.vi:


Once we acquire this data, we queue it for the consumer loop to process. The consumer loop, which runs in parallel with the producer loop shown above, dequeues our time-domain data and converts it to the frequency domain using the Fast Fourier Transform, shown below. The scaling factor of 8 was tuned to the magnitude of the signal we were getting from the microphones.
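The pitch-detection step can be sketched as follows. This is a simplified Python/NumPy illustration, not the LabVIEW VI: it takes the magnitude spectrum of a chunk of samples and picks the strongest bin as the fundamental frequency. Real voices have harmonics that can fool simple peak-picking, so this is only the core idea.

```python
# Sketch of time-domain -> frequency-domain pitch detection with an FFT.
import numpy as np

def fundamental_frequency(signal, sample_rate):
    spectrum = np.abs(np.fft.rfft(signal))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[0] = 0.0                                  # ignore the DC bin
    return freqs[np.argmax(spectrum)]                  # strongest frequency

# Synthesize one second of a 440 Hz sine (A4) and recover its pitch.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
f0 = fundamental_frequency(tone, fs)
```

With one second of samples the FFT bins are 1 Hz apart, so the recovered `f0` is exactly 440.0 Hz for this test tone.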


Once we have the fundamental frequency, f0, we use the above code to determine which note the player is singing and how far away they are from that note. We display the closest note to what the player is singing, and we use the distance from the note for scoring.
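The frequency-to-note conversion can be sketched like this, assuming 12-tone equal temperament with A4 = 440 Hz (the LabVIEW implementation may differ in details). It returns both the nearest note name and the signed distance from it in semitones, which is the quantity used for scoring.

```python
# Sketch: map a fundamental frequency to the nearest equal-temperament note.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(f0):
    # Semitones from A4 (MIDI note 69), then round to the closest note.
    semitones = 12 * math.log2(f0 / 440.0)
    midi = round(69 + semitones)
    error = (69 + semitones) - midi   # distance in semitones, in [-0.5, 0.5]
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, error

name, error = nearest_note(440.0)   # an exact A4, error 0.0
```

The displayed note comes from `name`, while `error` says how sharp or flat the singer is relative to it.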

We do this for both players and compare their notes to the calculated notes of the song to determine the score while in VS mode. The score increases the closer to, and the longer, a player matches the note of the song. This is represented by the three LEDs next to each player's name.



For duet scoring we use the same scoring algorithm, except we compare the two singers to each other and score them based on how closely their notes match.
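The shared scoring idea for both modes can be sketched as follows. The thresholds and point values here are illustrative placeholders, not values taken from the original VI; the only point is that both modes use the same proximity-based function with a different target.

```python
# Sketch of proximity-based scoring shared by VS and Duet modes.
def note_score(distance, threshold=0.5):
    # Closer to the target note (distance in semitones) earns more points;
    # beyond half a semitone the singer scores nothing for this sample.
    if distance > threshold:
        return 0
    return round((1.0 - distance / threshold) * 10)

def vs_score(player_note_dist):
    # VS mode: score a player against the song's current note.
    return note_score(player_note_dist)

def duet_score(p1_semitones, p2_semitones):
    # Duet mode: same algorithm, but the target is the other singer.
    return note_score(abs(p1_semitones - p2_semitones))

a = vs_score(0.0)          # perfect match with the song
b = duet_score(69, 69.2)   # two singers 0.2 semitones apart
```

Accumulating this per-sample score over time gives the "closer and longer" behavior described above.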

Once we are done singing, we press the 'Done' button, which writes the players' scores to the high-score chart and ends the game.