
Student Projects


myDAQ Real-Time Audio Processing

By Jared Kirschner

________________

So, your project, what exactly is it?

The purpose of my project was to create an adaptable system for real-time processing of audio signals.  You can imagine many potential applications for this -- an auto-tuner for voice, an equalizer for music, a guitar pre-amp, a make-your-voice-sound-like-Darth-Vader-er... and so on.  To show some of what can be implemented in this system, my demo includes the following functionality:

  • Frequency shifting
  • Echoing
  • Encryption (scrambling)
  • Filtering

Anywhere an audio signal is passed along a cable, this system can be used.  Let's see what you can come up with.
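The actual processing stages are LabVIEW VIs, so they can't be quoted here as text, but the echo effect from the list above can be sketched in a few lines of Python.  The delay length and mix level below are illustrative values, not taken from the demo VI:

```python
# Sketch of a simple echo: add a delayed, attenuated copy of the
# signal to itself.  `delay` is in samples and `mix` is the gain
# applied to the delayed copy -- both are illustrative defaults.

def apply_echo(samples, delay=4, mix=0.5):
    out = list(samples)
    for i in range(delay, len(samples)):
        out[i] += mix * samples[i - delay]
    return out

# An impulse followed by silence: the echo shows up `delay`
# samples later at half the amplitude.
print(apply_echo([1.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
# [1.0, 0.0, 0.0, 0.0, 0.5, 0.0]
```

Each block of samples pulled off the queue could be run through a function like this before being written to the output buffer.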

What do I need?

In truth, not much.  If you opened this document, you probably already have everything you need:

  • myDAQ
  • LabVIEW (2010+)
  • Computer

Optional equipment -- anything that takes audio signals as an input or produces audio signals as an output.  Examples: headsets, microphones, and speakers.

How does it work?

In hardware, an audio cable is connected to the designated "Audio In" terminal of the myDAQ.  The myDAQ communicates with a computer via USB, which also provides it with power.  The VI running on the computer communicates with the myDAQ to acquire and process the data.  The processed data is then sent back to the myDAQ over USB and played out through the designated "Audio Out" terminal.

In software, a producer/consumer model is used.  The producer loop continuously acquires signals from the audio input channels of the myDAQ and adds them to the end of a queue.  The consumer (processor) loop pulls data from the front of the queue, processes it, and then writes it to a buffer on the myDAQ for the audio output channel.  This platform allows data acquisition, processing, and output to occur concurrently.  Within the processing stage, you can implement whatever functionality your idea requires.
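As a rough text-code analogue of that producer/consumer structure, here is a minimal Python sketch using a thread-safe queue.  The `produce` and `consume` functions and the toy gain-of-2 "processing" stage stand in for the myDAQ acquisition and output calls, which are not shown:

```python
import queue
import threading

# Thread-safe FIFO shared by the two loops, playing the role of
# the queue in the VI.
q = queue.Queue()

def produce(blocks):
    # Stand-in for the acquisition loop: push each block of samples
    # onto the end of the queue as it arrives.
    for block in blocks:
        q.put(block)
    q.put(None)  # sentinel marking the end of acquisition

def consume(process):
    # Stand-in for the processing loop: pull blocks from the front
    # of the queue, process them, and collect them for "output".
    out = []
    while True:
        block = q.get()
        if block is None:
            return out
        out.append(process(block))

# Toy run: two blocks of samples, "processing" is a gain of 2.
producer = threading.Thread(target=produce, args=([[1, 2], [3, 4]],))
producer.start()
results = consume(lambda b: [2 * x for x in b])
producer.join()
print(results)  # [[2, 4], [6, 8]]
```

Because the producer and consumer run in separate threads, acquisition never stalls while a block is being processed -- the same reason the VI keeps the two loops independent.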

How do I use it?

There isn't much to it.  Here's the hardware diagram for use in the context of a voice chat:

[Image: HW_Diagram.png -- hardware diagram for the voice-chat set-up]

From there, just run the VI.  The microphone and speaker can be replaced with other components depending on your application.  In this set-up, I am using the microphone on my headset to provide the audio signal.  The speaker on my headset is plugged directly into the appropriate jack on my computer.  The output of the myDAQ goes into the microphone jack on the computer.  This allows me to use my VI to alter my voice before it is sent to the person I am chatting with.

Another simple set-up is to use the speaker output of the computer as the audio input signal, and connect the speaker jack on a headset to the audio output of the myDAQ.  This will allow you to alter music playing on your computer.

Let's see it in action.

Demo time:

So where's the VI?

Everything you need is right here: (or wherever the attachment happens to show up)

