
LabVIEW


Record and play back a "sample of sound" using the SPEEDY-33 (similar to sound recorder.vi)

Hi Guys,

 

I'm trying to record and play back a "sample of sound" using the SPEEDY-33 (via a 1D array). Although I understand the data flow of the sound recorder.vi example, there are a few things I don't really understand.

 

1) Why are a buffer (array) size of 4480 and a block size of 32 chosen? (Is this linked to the Elemental I/O node's sampling frequency of 8 kHz and frame size of 64, or to the memory size?)

 

2) During the playback state (inside DigitalVoiceRecorder_ArrSub.vi), there is a For Loop to recall the array elements. During the recording state (Replace Array Subset), however, no loop is required. Why?

 

3) Pack subVI (64 to 32 elements): what is meant by "take the lower 16 bits of a double element in the input array and concatenate it with the lower 16 bits of the next element"?

 

Thanks!

 

Regards,

Bee Kwang

Message 1 of 4

Hi Bee Kwang,

 

1) The DSP actually has 32k of memory, which theoretically means that 4480 is not the upper limit of the buffer size. A block size of 32 is specifically suited to this application as an optimization, though; the reasoning is given below.

 

2) Data is stored in an array. To play it back, you need the information stored in the array; in contrast, when recording you only need the array buffer itself. This is why array elements are recalled only on playback: when you record, you don't need the old contents, since you'd be replacing them anyway.
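The contrast above can be sketched in text form. This is a hypothetical Python illustration of the access patterns (the names and sizes are mine, not from the shipping VI); the record side overwrites a slice in place, mirroring LabVIEW's Replace Array Subset, while the playback side reads elements out one by one, mirroring the For Loop:

```python
def record_block(buffer, block, position):
    """Recording: overwrite a slice of the buffer in place (like
    Replace Array Subset). No per-element loop is needed, because the
    old contents are simply discarded."""
    buffer[position:position + len(block)] = block
    return buffer

def playback_block(buffer, position, block_size):
    """Playback: read the stored elements back out one by one, which
    is why the VI needs a For Loop here -- the data itself is the
    output."""
    return [buffer[position + i] for i in range(block_size)]

buf = [0] * 12                         # toy buffer (the VI uses 4480 elements)
record_block(buf, [10, 20, 30, 40], 0)
print(playback_block(buf, 0, 4))       # [10, 20, 30, 40]
```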

 

3) The main reason is that the DSP has only a 16-bit ADC, while a double in LabVIEW is inherently 64 bits. Although it's perfectly valid not to convert, doing so saves a lot of memory, which is a luxury on DSP targets. Since the ADC is 16-bit, each sample can be fully represented by a 16-bit data type; anything larger is wasted. Notice, though, that while the data is taken 16 bits at a time per element, two samples are concatenated into one value and stored in a 32-bit type. The reason, if you look at the architecture in the manual, is that the DSP's memory is organized as contiguous 32-bit words, so this is the most efficient way to store the data. (Storing it as, say, 16-bit values would defeat the purpose, since the upper half of each memory word would go unused.)

 

Best Regards,

 

Joshua de la Llana

Applications Engineer

NI ASEAN

Message 2 of 4

Hi Joshua,

 

Nice to see your support in this forum too. For this, I will give you a "Kudos". 🙂

 

1) Under the target information, I have checked that memory usage while sound recorder.vi is running is about 97.6%. If I increase the buffer size, I'm unable to run the VI (probably due to memory over-usage). If I decrease it, the recorded audio gets shorter. Also, how does this relate to the Elemental I/O node's sampling frequency of 8 kHz and frame size of 64? In other words, can I increase the duration of the recorded audio, use a higher sampling frequency, or use a higher frame size?

 

2) OK, now I understand the difference between the playback and record states. In DigitalVoiceRecorder_ArrSub.vi, though, I still don't understand why the index (e.g. 0×32, 1×32, etc.) is added to the loop count (e.g. 1, 2, etc.) to recall the elements of the stored 1-D array buffer. Will there be any misalignment in the indexes recalled during the record state?
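The indexing pattern being asked about can be restated as simple arithmetic. This is my hypothetical reading of the block diagram, not a transcription of it: if the buffer is traversed in blocks of 32 elements, then the flat array index is the block number times 32 plus the loop count, which is exactly the "0×32 + i, 1×32 + i, ..." pattern:

```python
BLOCK_SIZE = 32  # block size used by the example VI

def flat_index(block_number, loop_count):
    """Convert a (block, offset-within-block) pair into a flat 1-D
    array index. Record and playback stay aligned as long as both use
    this same mapping."""
    return block_number * BLOCK_SIZE + loop_count

# Block 0 covers indexes 0..31, block 1 covers 32..63, and so on.
print(flat_index(0, 2), flat_index(1, 2))  # 2 34
```

As long as both states compute indexes this way, there is no misalignment: the pair (block, offset) maps to exactly one flat index.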

 

3) I'm not too sure whether I've understood it correctly. Do you mean that it takes two 32-bit "data" values at a time (64 elements in total)? However, since the ADC is 16-bit, only the lower 16 bits of each element are useful (elements 0 to 15 and 32 to 48). So the lower 16 bits of the two "data" values are then concatenated together to form 32 bits of data, so that the DSP can process them in one cycle?

 

P.S. By the way, did you receive my email regarding the anti-aliasing filter and preamp gain issue?

 

Regards,

Bee Kwang

Message 3 of 4

Hi Bee Kwang,

 

1. I am not that knowledgeable about the internal architecture of the SPEEDY-33, but the extra memory might be allotted to processes other than the buffer. If this is indeed the limit, however, adjusting the sampling rate wouldn't make any difference other than further decreasing your record length (this relates directly to how fast your buffer is filled). I don't expect any difference from adjusting the frame size either; looking at the algorithm, it isn't taken into account at all.
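The record-length trade-off above can be made concrete with some back-of-the-envelope arithmetic. This sketch assumes (my assumption, consistent with the packing discussion later in the thread, but not stated explicitly anywhere) that each of the 4480 buffer elements holds two packed 16-bit samples:

```python
BUFFER_ELEMENTS = 4480     # 32-bit elements in the recording buffer
SAMPLES_PER_ELEMENT = 2    # assumed: two 16-bit samples packed per element
SAMPLE_RATE_HZ = 8000      # Elemental I/O sampling frequency

total_samples = BUFFER_ELEMENTS * SAMPLES_PER_ELEMENT
duration_s = total_samples / SAMPLE_RATE_HZ  # buffer fills at the sample rate
print(duration_s)  # 1.12
```

Under that assumption, roughly 1.12 s of audio fits in the buffer, and doubling the sample rate would halve the record length, which matches the reply's point that the rate only changes how fast the fixed buffer fills.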

 

2. The elements must be recalled in the same way as they were stored. As I said before, each memory location actually holds two data values; that is why the playback algorithm is coded the way it is.

 

3. I meant that each element initially arrives as 64-bit data, but since the ADC is only 16-bit, you can optimize data storage by decimating the information to a 16-bit data type. So, for each element, the 64-bit value is reduced to 16 bits and then stored, together with the next element, in a buffer with 32 bits per element.
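For playback, the same word has to be split apart again. This is a hypothetical Python sketch of the reverse of the packing step (the function name is mine), recovering the two 16-bit samples from one 32-bit buffer element:

```python
def unpack_word(word):
    """Split one 32-bit buffer element back into its two 16-bit
    samples: the first sample comes from the low half, the second
    from the high half."""
    return word & 0xFFFF, (word >> 16) & 0xFFFF

lo, hi = unpack_word(0xABCD1234)
print(hex(lo), hex(hi))  # 0x1234 0xabcd
```

This is also why the playback loop walks two samples per buffer element, as noted in point 2 above: one read from memory yields a pair of samples.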

 

I hope I was able to explain the concept clearly. Please let me know if there is still some confusion.

 

Best Regards,

 

Joshua de la Llana

Applications Engineer

NI ASEAN

Message 4 of 4