
Clear sound input read buffer

Greetings,

 

I'm having a problem that I think I could solve if I could clear the sound input buffer, but I'm not sure how to do it. Basically I need to sample ~40 ms worth of input at a time and send it over TCP/IP. All of this works: the data transfers, and the animation is fluid. The problem is that the sound buffer backs up, which causes two issues.

 

1. While the animation on the client's graph is fluid, a growing delay builds up. If I let it run for a couple of seconds and then blow into the microphone, it takes a while for that to show up on the graph. The longer the program runs, the worse it gets.

2. The buffer is finite, of course, so after a while it overflows and the program crashes.

 

I need a way for the data to be sampled and for that sample to be sent to the client within 150 ms or so. I think it would be fine if I could send my sample, clear the buffer, take a sample, send it, clear the buffer, and so on, but I'm not sure how to do that. I don't want to configure, sample, and clear the sound input on every iteration; there's too much overhead and I lose the fluid animation of the graph.
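For a rough sense of why the delay grows (the numbers below are purely illustrative, not measured from the actual VI): if each loop iteration reads 40 ms of audio but the iteration itself takes longer than 40 ms, the difference stays behind in the sound buffer and accumulates. A quick back-of-the-envelope sketch in Python:

    # Toy model: audio arrives in real time, but each loop iteration only
    # removes CHUNK_MS of it while taking ITER_MS of wall-clock time.
    CHUNK_MS = 40    # audio consumed per Sound Input Read call
    ITER_MS = 50     # hypothetical time per iteration (read + TCP send)

    backlog_ms = 0.0
    for second in range(1, 6):
        iterations = 1000.0 / ITER_MS                 # iterations per second of wall time
        backlog_ms += 1000.0 - iterations * CHUNK_MS  # audio arrived minus audio read
        print(f"after {second} s: ~{backlog_ms:.0f} ms of audio stuck in the buffer")

With those example numbers the displayed waveform falls a further ~200 ms behind real time every second, which matches the "the longer it runs, the worse it gets" behaviour.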

 

If someone can come up with a better solution I'm certainly open to that as well, but does anyone have any suggestions?

 

I'm attaching a picture of my block diagram of the server so you can get a better idea of what I'm talking about.

 

Thanks,

Pheria

Message 1 of 3

You are "producing" faster than you can "consume".

 

 

  1. Have you considered compressing the data prior to sending?
  2. Is UDP an option rather than TCP? The lower overhead will allow greater effective bandwidth, albeit lossy.
  3. Can you lower the sampling rate of your input?
  4. You could consider a producer/consumer architecture that divides the input sampling into a separate loop from the outbound network messaging (see the sketch after this list).
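To make item 4 concrete, here is a minimal text sketch of the producer/consumer idea. It's written in Python only because a LabVIEW diagram can't be pasted as text; the two stub functions stand in for Sound Input Read and TCP Write and are not real APIs. In LabVIEW this would be two parallel while loops connected by a queue (Obtain Queue / Enqueue Element / Dequeue Element).

    import queue
    import threading
    import time

    CHUNK_MS = 40
    audio_queue = queue.Queue()          # in LabVIEW: the queue between the two loops

    def read_sound_chunk():
        # Stub for Sound Input Read: fabricate 40 ms of 16-bit mono silence.
        time.sleep(CHUNK_MS / 1000.0)
        return bytes(2 * 22050 * CHUNK_MS // 1000)

    def producer(stop):
        # Acquisition loop: drain the sound buffer as fast as data arrives,
        # so the driver's internal buffer never backs up.
        while not stop.is_set():
            audio_queue.put(read_sound_chunk())

    def consumer(stop):
        # Network loop: stub for TCP Write. Any network slowness shows up as
        # depth in audio_queue instead of latency in the sound card buffer.
        while not stop.is_set():
            chunk = audio_queue.get()
            print(f"sent {len(chunk)} bytes, backlog {audio_queue.qsize()} chunks")

    stop = threading.Event()
    threading.Thread(target=producer, args=(stop,), daemon=True).start()
    threading.Thread(target=consumer, args=(stop,), daemon=True).start()
    time.sleep(1.0)
    stop.set()

The point of the split is that the acquisition loop never waits on the network, so the sound card buffer stays nearly empty even if the TCP side momentarily stalls.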
Message 2 of 3

Thank you for your advice, JackDunaway.

 

Most of the specifications for this project are fixed because it's a school project. We didn't cover compression at all, so I'm fairly certain it should be avoided. UDP is not an option, as the professor has specifically stated it must be TCP. When you say lower the sampling rate, do you mean the number of samples per channel at the sound device's configuration step, or the number of samples that are read? If I lower the number of samples read at Sound Input Read, I don't get enough information to populate my waveform on the client's end.

 

I'll probably give producer/consumer a try.

 

I decided to ask my professor this same question after posting here. So in case anyone else Googles a similar problem seeking advice, here is what he said:

 

When you call "Sound Input Configure", you define the size of the internal buffer. At the same time, sampling starts! From that moment you have to make sure that the buffer never overflows, i.e. that you always call "Sound Input Read" *before* the buffer is full. I recommend the following concept:
 - SERVER: runs a state machine that keeps a loop running which continuously reads the sound card buffer and checks whether the data contains a trigger event or not. On a trigger event, the state machine reads as much more data as is necessary to complete a sweep and then sends all of the data to the client.
 - CLIENT: runs a state machine that waits for an incoming sweep from the server (in the case of a trigger event) in order to display that data. After reception, it sends an acknowledgement back to the server, which also contains the current trigger configuration.
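To translate the server half of that into something concrete, here is roughly how the state machine could be structured. It's sketched in Python; read_chunk, send_to_client, trigger_level and sweep_len are placeholders for the Sound Input Read call, the TCP Write call, and the trigger settings carried in the client's acknowledgement. In LabVIEW this would be a case structure driven by a state enum inside a while loop.

    import enum

    class State(enum.Enum):
        WAIT_FOR_TRIGGER = 1
        COMPLETE_SWEEP = 2
        SEND = 3

    def server_loop(read_chunk, send_to_client, trigger_level, sweep_len):
        state = State.WAIT_FOR_TRIGGER
        sweep = []
        while True:
            if state is State.WAIT_FOR_TRIGGER:
                chunk = read_chunk()                  # keep draining the buffer
                if max(abs(s) for s in chunk) >= trigger_level:
                    sweep = list(chunk)               # trigger event: start a sweep
                    state = State.COMPLETE_SWEEP
            elif state is State.COMPLETE_SWEEP:
                sweep.extend(read_chunk())            # keep reading until the sweep is full
                if len(sweep) >= sweep_len:
                    state = State.SEND
            elif state is State.SEND:
                send_to_client(sweep[:sweep_len])     # ship the sweep to the client
                state = State.WAIT_FOR_TRIGGER        # then re-arm the trigger

Because every state calls read_chunk, the sound buffer gets emptied on every pass through the loop, which is what keeps it from overflowing.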

 

Message 3 of 3