As good as LabVIEW is, NI has not yet released the long-awaited "Predict the Future" function, so when talking about "real-time sound processing", we need to (pardon the outrageous pun) "get real" and realize (or "real"ize) that we can only output data based on previously-sampled data, i.e. there is necessarily a time delay in (digital) signal processing.
Having said that, you can provide continuous signal processing in real time, i.e. you can take in data at, say, 20 kHz and output a continuous stream of data also at 20 kHz, with some delay to allow for (a) sampling time, (b) sample-chunk size (the more data you work with, the more you are able to do, particularly with regard to removing noise, which goes down as the square root of the number of samples you average), and (c) whatever algorithm you are using to process the sound.
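That square-root-of-N noise reduction is easy to check numerically. Here is a quick sketch in plain Python (not LabVIEW, and the function name is just for illustration): averaging N samples of unit-variance noise shrinks the spread of the result by roughly 1/sqrt(N).

```python
import random
import statistics

random.seed(0)  # for reproducibility of this demo

def mean_of_samples(n, trials=2000):
    """Estimate the spread (std dev) of an n-sample average of unit-variance noise."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Averaging 100 samples should leave roughly 1/sqrt(100) = 1/10
# of the noise that a single sample carries.
```

So a 100-sample average is about ten times "quieter" than a single sample, which is why a wider chunk buys you better noise rejection at the cost of more latency.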
You'll want to take advantage of LabVIEW's Data Flow model, which permits parallel operations -- while one "chunk" of data (say, 1000 samples) is being acquired, the previous "chunk" is being processed and the processed chunk before that is being output. This particular trick is often accomplished by a Producer/Consumer Design Pattern. A Template for this Pattern (which will provide some explanation and Code You Can Use) can be found in the New... (the dots are important) File Menu entry in LabVIEW. Choose "From Templates" and look for Producer/Consumer Design Pattern (Data).
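In LabVIEW the Producer/Consumer pattern is built from parallel While Loops connected by Queues; as a rough textual analogy only (Python threads and queues standing in for the parallel loops, with made-up names and fake "samples"), the pipeline looks like this:

```python
import queue
import threading

CHUNK = 1000     # samples per chunk (illustrative)
N_CHUNKS = 5     # stop after a few chunks for the demo

acq_q = queue.Queue()   # acquired chunks waiting to be processed
out_q = queue.Queue()   # processed chunks waiting to be output

def producer():
    """Stands in for the acquisition loop: puts one chunk at a time on the queue."""
    for i in range(N_CHUNKS):
        chunk = [float(i)] * CHUNK          # fake "samples"
        acq_q.put(chunk)
    acq_q.put(None)                         # sentinel: acquisition finished

def consumer():
    """Stands in for the processing loop: runs in parallel with acquisition."""
    while True:
        chunk = acq_q.get()
        if chunk is None:
            out_q.put(None)                 # pass the sentinel downstream
            break
        processed = [2 * x for x in chunk]  # placeholder "processing"
        out_q.put(processed)

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()

results = []
while True:                                 # stands in for the output loop
    chunk = out_q.get()
    if chunk is None:
        break
    results.append(chunk)
```

While the producer is acquiring chunk N, the consumer is free to process chunk N-1 and the output loop to emit chunk N-2, which is exactly the overlap the Data Flow model gives you for free in LabVIEW.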
Bob Schor