02-07-2011 02:26 PM
Hello All,
I am currently using the SMT (Cont) Zoom FFT (I've used both versions of the Zoom FFT VIs) in a spectral display application. Unfortunately, when trying to "zoom" in on the incoming data I receive a spectrum that looks like the attached pictures. For the attached images the incoming data is real-valued, Fs = 800 kHz, Zoom Factor = 14, and Trim = 1.33. The center frequency, RBW, and span are as shown in the pictures.
I have application requirements that are not being met by the way the spectral data is currently displayed. If someone could help me out and let me know whether I'm configuring something incorrectly or whether this is just a limitation of the SM Toolkit, that would be great.
Other info that may be worthwhile: window = 4-term B-Harris, spectral lines = 1024, effective band = 0 to 0.4 fs, RBW definition = bin width, and overlap = 0% when using the Cont Zoom FFT VI.
Also, I drilled down into the SMT Cont Zoom FFT VI and discovered that it performs a multistage CIC filter followed by two FIR filters to achieve the desired decimation and output span. My hunch is that the CIC filter is not attenuating the out-of-band images, and they are aliasing back into the span of interest when decimating by large amounts (I'm not that strong a DSP person, though, so please correct me if I'm wrong). I can't drill down through the SMT Zoom FFT VI since it uses a DLL call to perform the decimation, FFT, and data formatting.
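In case it helps anyone picture the mechanism, here's a minimal numpy sketch of what I suspect is happening (my own toy illustration, not the SMT's actual CIC/FIR chain; the 8192-sample record and 70 kHz tone are made up):

import numpy as np

fs = 800e3            # original sample rate from my setup
D = 14                # zoom/decimation factor from my setup
fs_dec = fs / D       # ~57.14 kHz after decimation

t = np.arange(8192) / fs
f_oob = 70e3          # above fs_dec/2; should be filtered out before decimating
x = np.cos(2 * np.pi * f_oob * t)

y = x[::D]            # decimate with NO anti-alias filtering (worst case)

spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
f_bins = np.fft.rfftfreq(y.size, d=1.0 / fs_dec)
print("alias appears near %.1f kHz" % (f_bins[np.argmax(spec)] / 1e3))
# 70 kHz folds to |70e3 - fs_dec| ~ 12.9 kHz in the decimated spectrum

If the CIC stage leaves such tones insufficiently attenuated, they would land inside the zoomed span exactly like this.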
Any help is greatly appreciated.
Thank you,
Tim Sileo
02-07-2011 02:37 PM
Oh, and I also noticed that when I zoom in, the frequency of my injected signal is off (i.e., when injecting a CW tone at 60 kHz it shows up at 60.154 kHz, give or take, even though the 60 kHz tone has been verified as correct with a spectrum analyzer).
02-07-2011 07:00 PM
I do not have access to the Zoom FFT VIs to try anything, so my comments are based on the appearance of your images posted.
It looks like a Moiré pattern, which is similar to aliasing. Some of the problems you are having may be due to the number of samples or the sampling rate. The frequency bins in a standard FFT are spaced df = fs/N apart. If 60.154 kHz is closer to an integer multiple of fs/N than 60 kHz is, then the peak will show at that frequency. In that case you will also have spectral leakage; that is, some of the energy will appear in two or more bins. You mentioned fs = 800 kHz. You did not indicate the number of samples in each dataset, but I am guessing that the 1024 spectral lines may indicate that the number of samples is 1024 or 2048. Assuming 2048 makes df = 800000/2048 = 390.625 Hz. 154*df = 60.156 kHz and 153*df = 59.766 kHz, so a 60 kHz tone falls between bins.
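The same arithmetic as a quick snippet, using the guessed 2048-sample record:

# Plain Python check of the bin arithmetic above (numbers from this thread)
fs = 800e3
N = 2048                 # guessed record length
df = fs / N              # 390.625 Hz bin spacing
k = round(60e3 / df)     # nearest bin to a 60 kHz tone -> 154
print(df, k, k * df)     # 390.625  154  60156.25 -> the ~60.154 kHz peak you saw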
Zooming is not magic. It does not create any information which was not in the original samples. If the number of samples is insufficient, at some point zooming is working on very little information.
Lynn
02-08-2011 09:17 AM
Hi Lynn,
Those are good points about the number of samples. It's possible that this is my problem, but I thought this was something the SMT takes care of in either the Continuous Zoom or the Block Zoom VIs (I buffer up the samples required to achieve the desired RBW in the specified span for the Block Zoom). I can't explain the Zoom FFT process as well as the NI help document does, so here's the link: http://www.ni.com/pdf/manuals/370355g.pdf. Under the "Resolution Bandwidth, Spectral Lines, and Window" section it specifies the acquisition size calculations needed to achieve the desired resolution.
You were correct about the 2048 samples, as that is my incoming data stream size (I'm using UDP to receive the data). As mentioned, I buffer up the incoming packets of data for the Block Zoom FFT until I exceed the Acquisition Size returned by the SMT configuration VIs; then an FFT is performed and a center-frequency/span range of bins is returned for display. For the Continuous Zoom FFT VIs, the SMT decimates every incoming packet, buffers up the decimated samples (at the lower sample rate), and does a smaller FFT once enough decimated samples have been buffered up over multiple iterations. Either way, I would think that I have enough samples prior to the FFT to avoid the issue, but I guess not. I'm curious to inject directly on a bin frequency to see if the spectral leakage disappears, though...
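For reference, here's roughly how my Block Zoom buffering works, sketched in Python (names like on_udp_packet are made up, np.blackman stands in for the 4-term B-Harris window, and the real acquisition size comes from the SMT configuration VIs, not this fs/RBW approximation):

import numpy as np

fs = 800e3
rbw = 390.625  # Hz; with "RBW definition = bin width", record length ~ fs/RBW
acq_size = int(round(fs / rbw))  # 2048 here

buf = np.empty(0, dtype=np.float64)

def on_udp_packet(samples):
    """Accumulate incoming packets; FFT once a full record is buffered."""
    global buf
    buf = np.concatenate((buf, samples))
    if buf.size >= acq_size:
        record, buf = buf[:acq_size], buf[acq_size:]
        spectrum = np.abs(np.fft.rfft(record * np.blackman(acq_size))) ** 2
        return spectrum  # then select the center-freq/span range of bins
    return None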
Thanks for the info,
Tim Sileo
02-08-2011 03:46 PM
Another engineer's idea light came on and helped narrow down the issue for me... it has to do with discontinuities in my data. Since my data stream is coming in at such high rates, I don't have enough time to do the DSP on it and keep up with the real-time stream of incoming data (they are already in parallel, but I have another data stream at 7.5 MSps, so there's no hope of keeping up with that on a Windows box). So, the phase discontinuities between the packets of data being buffered up for the FFT are what's causing the signal images. I tested this with a very quick change to the code, and for the time period during which I can keep up with the incoming data the spectrum looks correct. However, I'll need to change my code a bit in order to buffer up data in non-disjointed "snapshots" of the incoming real-time data before doing the FFT. Currently, the data gets buffered up continuously for a little while and then starts to become disjointed.
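Here's a small numpy sketch (made-up record sizes, not my actual code) that reproduces the effect: concatenating non-adjacent packets of a pure tone breaks phase continuity and raises the spectral floor around it.

import numpy as np

fs = 800e3
pkt = 2048
t = np.arange(24 * pkt) / fs
x = np.cos(2 * np.pi * 60e3 * t)
packets = x.reshape(24, pkt)

contiguous = packets[:8].ravel()        # 8 back-to-back packets
disjointed = packets[::3][:8].ravel()   # every 3rd packet: data dropped in between

for name, rec in [("contiguous", contiguous), ("disjointed", disjointed)]:
    s = np.abs(np.fft.rfft(rec * np.hanning(rec.size)))
    s /= s.max()
    floor_db = np.median(20 * np.log10(s + 1e-12))
    print("%s: median spectral floor %.0f dB" % (name, floor_db))
# the disjointed record shows a much higher floor / images around the tone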
Regards,
Tim S.
02-08-2011 03:59 PM
Tim,
That would certainly cause problems.
I glanced at the manual for the SMT. Without the VIs to run things, coming to a full understanding from the manual will be difficult and I will probably not have the time to dig into it.
Lynn
04-25-2011 08:58 AM
So, just in case someone else is experiencing issues such as this, I'll give an update. Since the previous posts I've come to realize that the spectral leakage issues I'm having are not entirely related to the Spectral Measurements Toolkit.
Since my time-domain data is received via UDP in this particular application, there comes a point (i.e., a high enough sample rate) at which the data stream is no longer continuous. I learned that at my 7.5 MSps data rate (4 bytes/sample, so 30 MB/s) the OS was actually dropping UDP packets. I improved my UDP reader code to read faster and write to a queue only when the DSP needs a flood of data, but there were still dropped packets within that "grab" of data. This was due to a default Windows socket buffer size of only 8192 bytes, while each of my UDP packets is 8220 bytes. So, unless my UDP read function was actively waiting for data, Windows would drop new data on the floor.
I found http://digital.ni.com/public.nsf/allkb/D5AC7E8AE545322D8625730100604F2D, which allowed me to increase the socket buffer depth. This helped a little, but when the display is set to a very high resolution (i.e., a large FFT size), thousands of UDP packets are required, and at that point data is still being dropped when the UDP socket buffer fills up. As I discovered, there is no way to flush this OS socket buffer, and at my high sample rate even the act of putting the UDP data string onto a queue slows down the UDP read loop enough for the socket buffer to fall behind.
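For anyone hitting the same wall: the linked KB article adjusts the same OS knob as the SO_RCVBUF socket option. A Python equivalent (port number hypothetical; shown only to illustrate the mechanism, my app does this through LabVIEW) looks like this:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS for a much deeper receive buffer so bursts of 8220-byte
# datagrams aren't dropped while the reader is busy elsewhere.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
sock.bind(("0.0.0.0", 50000))  # hypothetical port

actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("OS granted receive buffer of", actual, "bytes")  # OS may cap the request

while True:
    data, addr = sock.recvfrom(8220)  # one sample packet per datagram
    # hand off to the DSP queue as fast as possible; any stall here risks drops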
Ultimately, it looks like dropped UDP packets will be unavoidable in my situation. This is not because of the network, since I only have one switch in the path and, to my knowledge, there has never been a dropped UDP packet on our local 10 GbE network (I can verify this with our other UDP clients, which are Linux machines). I blame Windows for this one...
There is, however, a DSP solution to the discontinuous UDP packet problem, but I did not have time to implement it. It enables high frequency resolution while avoiding most of the issues caused by phase discontinuities between blocks of time-domain data. It is a tiered channelizer approach in which each packet of data is windowed and FFT'd (or each packet is STFT'd with a smaller sliding window/FFT operation). Buffering successive FFT outputs into a 2D array gives one axis of frequency bins and the other axis a time series of samples for each bin. Another FFT can then be performed along the time axis for each frequency bin to achieve higher resolution. This process can be repeated to achieve very high resolutions, and as long as the signal doesn't need to be reconstructed, taking the mag^2 of this data for display should avoid any phase discontinuities and eliminate the spectral leakage I was seeing in my application. It's worth noting that I've greatly simplified this DSP "channelizer" method. I've tried to implement it with some STFTs and For Loops but haven't had success yet. I know it works, though, since it has been implemented in C/MATLAB code at my company.
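For completeness, here's a heavily simplified numpy sketch of the idea (my own toy version, not the C/MATLAB implementation mentioned above; no overlap, and the window choices are arbitrary):

import numpy as np

def tiered_spectrum(packets):
    """packets: 2D array, one row per contiguous UDP packet of samples."""
    M, P = packets.shape
    # Stage 1: one windowed FFT per packet -> coarse frequency bins.
    stage1 = np.fft.rfft(packets * np.hanning(P), axis=1)
    # Stage 2: FFT down each coarse bin's time series -> ~M-times finer resolution.
    stage2 = np.fft.fft(stage1 * np.hanning(M)[:, None], axis=0)
    # mag^2 for display, per the approach described above (no reconstruction).
    return np.abs(stage2) ** 2

# e.g. 64 packets of 2048 samples each:
psd = tiered_spectrum(np.random.randn(64, 2048))
print(psd.shape)  # (64, 1025): fine-offset bins x coarse bins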