I am currently working on a project that involves LabVIEW Communications. I need to capture data from an FM signal (e.g. 95.5 MHz) at a very fast rate (10 MS/s or greater) using an FM Rx and a USRP. I have slightly modified the "Spectral Monitoring (History)" example to capture the FM-modulated signal, but it captures the data in frames at a very slow rate (~500 Hz). I am very new to LabVIEW Communications. I notice that the intensity graph is captured in matrix form and can be saved as a CSV. I have made the samples/frame as low as I can without the program glitching. I want as many frames per second as possible.
My Rx receives 88–108 MHz signals. My USRP is rated from 50 MHz to 2.2 GHz.
Every time the USRP receives a new bit of modulated data, I want LabVIEW to capture it. Ideally, I would want a complete 500 ms of data, such that if I re-transmitted that modulated data at the original frequency (e.g. 95.5 MHz), it could be captured and deciphered by another receiver.
Attached is a screenshot of the program I am looking at.
Can you provide more context on what you want to do with the data you're acquiring? Are you trying to acquire the data and demodulate it? Are you just wanting to look at the power spectrum?
We do have an example for demodulating FM signals. It can be found under Examples >> Hardware Input and Output >> NI-USRP >> Modulation Toolkit >> FM Demodulation.
I want to record the FM data across multiple receivers on multiple PCs, and then use TDOA (Time Difference of Arrival) to calculate the location of a transmitter, perhaps a kilometer away. If I demodulate the data, the resulting data will only be at 44.1 kHz, correct? At that rate, given the speed of light, the accuracy will only be around 6.8 km, much too inaccurate for what I want.
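For reference, here's the back-of-envelope arithmetic behind those accuracy numbers (this is a rough sanity check only; it assumes the best-case timing resolution is one sample period, ignoring interpolation, SNR, and clock sync effects):

```python
# Rough TDOA range-resolution estimate: the best-case resolution is one
# sample period times the speed of light (ignores interpolation, SNR, etc.).
C = 299_792_458.0  # speed of light in m/s

def range_resolution(sample_rate_hz: float) -> float:
    """Best-case TDOA range resolution in meters for a given sample rate."""
    return C / sample_rate_hz

print(range_resolution(44_100))  # demodulated audio rate -> roughly 6800 m
print(range_resolution(10e6))    # 10 MS/s -> roughly 30 m
```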
I have found a VI that can capture up to about 3 MS/s, as can be seen in the attached images, but I cannot seem to capture data at a higher rate than that, as an error is returned stating the data is out of sync (error image is attached). Any further help would be greatly appreciated.
Thanks for the additional context. There are a couple of possible causes for the error you're seeing. It could be related to the performance of the PC, the code architecture, or Ethernet speeds. It may simply not be possible to acquire at the rate you're attempting, but there are a few things you can try to optimize performance. I've linked some help documentation below that goes over those modifications.
Thank you for the information. I decided to add code that writes to a binary file every time a new set of samples is taken (perhaps freeing up the buffer within the USRP?). Now, instead of 3 MS/s, I am able to achieve upwards of 20 MS/s. This gives me a best-case accuracy of 15 meters. Perhaps with additional techniques I can further improve the accuracy to within 5 meters.
I do have one remaining error at 20 MS/s: I can only save a maximum of about 1M samples, after which I receive the attached error. If I attempt to save more than roughly 1M samples, the code runs continuously and never finishes.
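For the offline TDOA step, I plan to read the binary log back in roughly like this (a sketch only; it assumes the log is interleaved little-endian float32 I/Q pairs, which would need adjusting to match whatever format the write-to-binary VI actually produces):

```python
# Read a binary IQ log back into complex samples for offline processing.
# Assumed file layout: interleaved float32 pairs (I, Q, I, Q, ...) in
# little-endian byte order. Adjust dtype/order to match the real log format.
import numpy as np

def load_iq(path: str) -> np.ndarray:
    raw = np.fromfile(path, dtype=np.float32)
    return raw[0::2] + 1j * raw[1::2]  # recombine interleaved I/Q into complex
```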
I don't think the increased sample rate is related to the binary file. Comparing your current code to the previous version, I noticed that the old code had an Initiate VI inside the while loop. That was probably slowing down your code, and removing it probably had the biggest effect on your sampling rate.
The buffer on the USRP is emptied when the data is transferred to the PC. Adding functionality to write to binary file does not affect the buffer on the USRP.
As for the error you're seeing, I think it's probably still related to the processing on the PC not being able to keep up with your acquisition rate. Anything you display on the front panel takes additional processing power. Since the graphs and displays you have are in the same loop as your read, they can slow down the acquisition. To streamline this, I'd recommend getting rid of all the graphs and only logging the data to your binary file. Basically, make the code in the loop as simple as possible so nothing slows down the acquisition loop, and see how much data you're able to acquire at that point.
Comment out (with a Disable Structure) the write-to-file and check your speed. If the writing drags the speed down, you need to move it to a separate loop and feed it through a queue. This has the potential to fill up memory if it runs long enough, but you only mentioned a few seconds, so it shouldn't be a problem.
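The two-loop queue design being suggested can be sketched outside LabVIEW as well. Here's a Python analogue (illustrative only, not USRP code; the acquisition side is faked with a stand-in data block):

```python
# Producer/consumer sketch: the acquisition loop only reads and enqueues;
# a second loop drains the queue and does the (slow) file I/O.
import queue
import threading

q: "queue.Queue[bytes]" = queue.Queue()
SENTINEL = b""  # empty block signals end of acquisition

def acquire(n_blocks: int) -> None:
    """Producer: stand-in for the acquisition loop (no file I/O here)."""
    for i in range(n_blocks):
        block = bytes([i % 256]) * 4096  # fake one fetch of IQ data
        q.put(block)                     # hand off to the logging loop
    q.put(SENTINEL)

def log_to_file(path: str) -> None:
    """Consumer: drains the queue and writes blocks to disk."""
    with open(path, "wb") as f:
        while (block := q.get()) != SENTINEL:
            f.write(block)

writer = threading.Thread(target=log_to_file, args=("iq.bin",))
writer.start()
acquire(100)
writer.join()  # file now holds 100 blocks of 4096 bytes
```

The key point is that the producer never blocks on disk I/O; the queue absorbs bursts as long as memory holds out.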
I am now able to achieve 50 MS/s with about 2000 samples, even with all writing and displaying of data disabled. At any sampling rate, there is no advantage to disabling or re-enabling the writing feature or plot display outside of the loop. If I attempt a higher sampling rate, the coerced sampling rate stays at 50 MS/s, even if I set it to 1001M. So it appears either LabVIEW or the USRP has a built-in limit on the sampling rate (perhaps determined by a small test on the components just before executing the code).
I am still wondering exactly why I can read 2000 samples at 50 MS/s, but if I attempt to read more than 2000 samples at that rate, the code runs forever and never finishes, even when no graphs are displayed and no data is saved to a file.
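As a sanity check on whether the link itself could be the bottleneck, here's a rough calculation (this assumes a Gigabit Ethernet connection and 4 bytes per complex sample on the wire, i.e. 16-bit I + 16-bit Q; neither assumption is confirmed for my particular USRP model or driver settings):

```python
# Back-of-envelope check: can the host link sustain the requested stream rate?
# A short burst that fits in the device buffer can succeed even when the
# sustained rate exceeds what the link can carry.
GIGE_BYTES_PER_SEC = 125e6  # 1 Gbit/s, ignoring protocol overhead

def wire_rate_bytes_per_sec(sample_rate_hz: float, bytes_per_sample: int = 4) -> float:
    """Sustained byte rate needed to stream at the given sample rate."""
    return sample_rate_hz * bytes_per_sample

print(wire_rate_bytes_per_sec(20e6) / 1e6)  # 80.0 MB/s  -> fits within GigE
print(wire_rate_bytes_per_sec(50e6) / 1e6)  # 200.0 MB/s -> exceeds GigE
```

If those assumptions hold, 50 MS/s can only work for a burst small enough to fit in the onboard buffer, which would be consistent with the ~2000-sample limit.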