RF Measurement Devices


5660 QAM demod data rate

I am using a PXI 5660 as the RFSA and a PXI 5610 as the upconverter, with a 5421 as the AWG. I was testing the QAM modulation and demodulation examples. I generated a 16-QAM signal successfully at a symbol rate of 2 MS/s; the signal was received and the constellation plot looked fine. I then edited the example to count the number of symbols received, and now the count does not match my symbol rate. With the symbol rate at 2 MS/s, the count climbs slowly, taking about 5-10 seconds to reach 1x10^6 symbols, and my processor usage goes to 100% (the controller is a PXI 8186). All I want to do is send bits at 20 Mbps and then measure the total bits and the bit error rate. Can someone tell me where the problem is and how I can achieve my goal?
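As a sanity check on the numbers above, the relationship between symbol rate, modulation order, and bit rate can be sketched in a few lines. This is plain arithmetic, not any NI API; the values below are taken from the post (16-QAM, 2 MS/s, 20 Mbps target).

```python
import math

def bits_per_symbol(M):
    """Bits carried by one symbol of an M-ary constellation (e.g. 16-QAM -> 4)."""
    return int(math.log2(M))

def bit_rate(symbol_rate, M):
    """Bit rate in bits/s for a given symbol rate in symbols/s."""
    return symbol_rate * bits_per_symbol(M)

def required_symbol_rate(target_bit_rate, M):
    """Symbol rate needed to hit a target bit rate with M-ary modulation."""
    return target_bit_rate / bits_per_symbol(M)

print(bit_rate(2e6, 16))               # 16-QAM at 2 MS/s -> 8000000.0 bits/s
print(required_symbol_rate(20e6, 16))  # 20 Mbps with 16-QAM -> 5000000.0 symbols/s
```

Note that 16-QAM at 2 MS/s only delivers 8 Mbps; reaching the stated 20 Mbps goal would require a 5 MS/s symbol rate (or a denser constellation).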

Message 1 of 4

Sabe,

 

I have a few questions after going through the edited code you posted. First, you seem to have removed several of the configuration VIs that run in the outer while loop, as well as the NI-RFSA Initiate and Abort VIs inside the inner while loop. Why did you alter the configuration and operation of those portions of the code? Also, how are you implementing the running total of symbols received? Any specifics on how and why you changed the example will help me identify the current issue. If the system's processor usage is spiking, the alterations may have introduced issues that would explain this behavior.

 

Timothy S.
Senior Technical Support Engineer
Message 2 of 4

Thanks for the reply.

 

First of all, I did not remove any VI from the example; I used the example MT ni5660 QAM demod.vi as-is. The only change I made was to extract the symbols from the received complex waveform using an Unbundle function, then take the size of that array to get the number of symbols received in that iteration. The per-iteration counts are then summed across each iteration of the loop. I have attached a screenshot of my code indicating the edited part. That is the only change I made to the example, and nothing else.

 

If my approach is incorrect, or there is some other problem in the configuration, then please suggest another way to count the received symbols so that the count matches the configured symbols/sec.

Message 3 of 4

Thank you for the clarification on that point; that helps. I don't see anything in the code alterations that would inherently cause a jump in CPU usage, but if you could monitor CPU and memory usage over time (in Task Manager or another Windows tool), that would give us a better idea of what you're seeing. I'm going to put the hardware together on our end to see if I can identify what may be happening in this case.

 

The Modulation Toolkit functions you are using to demodulate do a lot of signal processing, so the time spent reading from them and updating values to keep up with the changing symbol/bit streams could be causing the timing issues you're seeing. I'd recommend looking at the bit streams themselves if possible, or better yet, trying the MT Calculate BER.vi function to get the information you're looking for.
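For reference, the core of a BER measurement is just a bit-for-bit comparison of the transmitted and received streams. The sketch below shows that idea generically, assuming the two streams are already aligned; it is not the implementation of MT Calculate BER.vi, which also handles stream alignment against a reference pattern.

```python
def bit_error_rate(reference_bits, received_bits):
    """Fraction of mismatched bits between two aligned, equal-length streams."""
    if len(reference_bits) != len(received_bits):
        raise ValueError("bit streams must be aligned and of equal length")
    errors = sum(r != x for r, x in zip(reference_bits, received_bits))
    return errors / len(reference_bits)

ref = [1, 0, 1, 1, 0, 0, 1, 0]
rx  = [1, 0, 0, 1, 0, 1, 1, 0]  # two flipped bits out of eight
print(bit_error_rate(ref, rx))  # -> 0.25
```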

 

I'll update you once I can get the system together to test on our end.

Timothy S.
Senior Technical Support Engineer
Message 4 of 4