I am basing my design on http://zone.ni.com/devzone/cda/epd/p/id/6193. My requirement is to build a repeater (with a 10 millisecond delay) with 20 MHz of bandwidth. It seems to work OK up to 1 MHz, but when I increase the bandwidth the repeater stops working.
In the example I am using as a base (http://zone.ni.com/devzone/cda/epd/p/id/6193), streaming is done, but in the example it is NOT necessary to abort the session to continue writing; using the 5641 and streaming I need to abort the write session and then continue the execution.
Is there a way to repeat an incoming signal without aborting the session (using just the 5641)?
I think I can explain better:
I need to build a repeater that delays UMTS BS signals and then repeats them with a variable delay from 0 ms to 10 ms (20 MHz BW). I am OK if I can repeat the signal coming from one BS (5 MHz BW). Right now I am not able to repeat signals with a BW higher than 1 MHz.
I am using just the NI PXIe-5641R (later on I will also use a downconverter and an upconverter). I am working with the host API. What I am doing is the following:
1) Read groups of samples from the ADC as fast as possible (in a while loop without delay)
2) Put the samples in a queue
3) Delay the DAC 10 ms, so it starts after the ADC has put some samples in the queue
4) Read samples from the queue and send them to the DAC (I guess I am sending them to the FPGA buffer). If there are no samples in the queue, the host waits until there is a group of samples.
I am using chunks of 10k or 100k samples (the size of the group of samples I am reading and putting in the queue); I know this value should not be too small or too big (in order to increase speed).
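The host-side pattern in steps 1)–4) is a classic producer/consumer loop. A minimal Python sketch of the idea (the actual code is LabVIEW; `read_adc_chunk` and `write_dac_chunk` are hypothetical stand-ins for the ni5640R fetch and write VIs, and the demo is bounded to a fixed number of chunks instead of running forever):

```python
import queue
import threading
import time

CHUNK = 10_000      # samples per read, as in the post
DELAY_S = 0.010     # 10 ms start-up delay for the DAC
N_CHUNKS = 50       # bounded for the demo; the real loops run forever

sample_queue = queue.Queue()
received = []

def read_adc_chunk(n):
    # hypothetical stand-in for the ni5640R fetch VI: returns n I/Q samples
    return [0j] * n

def write_dac_chunk(samples):
    # hypothetical stand-in for the ni5640R write VI
    received.append(len(samples))

def producer():
    # steps 1-2: read from the ADC as fast as possible and enqueue
    for _ in range(N_CHUNKS):
        sample_queue.put(read_adc_chunk(CHUNK))

def consumer():
    # step 3: let the queue fill for 10 ms before the DAC starts
    time.sleep(DELAY_S)
    # step 4: dequeue and write; Queue.get() blocks while the queue is empty
    for _ in range(N_CHUNKS):
        write_dac_chunk(sample_queue.get())

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

In LabVIEW the two functions correspond to two parallel while loops sharing a queue reference; the blocking dequeue is what makes the host "wait until there is a group of samples."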
Up to 1 MHz using 10k chunks the system seems to work correctly. At the beginning the queue grows until the DAC starts working, and then the elements in the queue are taken as soon as they are queued, so there are no elements left in the queue.
When I increase the sampling frequency (higher than 1 MHz) the queue grows even when the DAC is working (in fact I don't know why I am not getting an underflow error, since my DAC and ADC rates are the same; how can I have the same rates and still be able to keep samples in a queue?). If I increase the chunk size I get a timeout error in the DAC; it seems the samples are not transmitted fast enough to the DAC.
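For a sense of scale, here is a rough data-rate calculation (assuming 4 bytes per complex sample, i.e. 16-bit I + 16-bit Q; the actual sample width on the wire may differ):

```python
BYTES_PER_SAMPLE = 4  # assumption: 16-bit I + 16-bit Q per complex sample

rates = {}
for iq_rate in (1e6, 5e6, 20e6):
    # the repeater moves every sample twice: ADC -> host, then host -> DAC
    one_way_mb = iq_rate * BYTES_PER_SAMPLE / 1e6
    rates[iq_rate] = one_way_mb
    print(f"{iq_rate/1e6:4.0f} MS/s: {one_way_mb:.0f} MB/s each way, "
          f"{2*one_way_mb:.0f} MB/s total through the host")
```

At 1 MS/s the host only has to sustain about 8 MB/s combined, which is comfortable; at 20 MS/s it is on the order of 160 MB/s combined plus the cost of queue copies, which can easily outrun a single host-side loop even when the bus itself could carry it.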
Do you think I am doing something wrong, or is this the maximum rate my system can stand? Do you think there is a way to improve the system?
As I said, I am basing my design on http://zone.ni.com/devzone/cda/epd/p/id/6193. In that example streaming is done without needing to abort the session to continue writing, but using the 5641 and streaming I need to abort the write session and then continue the execution. Is there a way to repeat an incoming signal without aborting the session (using just the 5641)?
I am open to using async FPGA code, but in this case I am having other problems:
- Using a feedback node (VI component) with a delay of 100,000, the system is not able to compile (out of memory after working 10 hours). My controller (NI PXIe-8108) has 4 GB of RAM.
- Using the Discrete Delay function (VI component): this component is implemented as a lookup table (so it is not implemented in the RAM), and the delay I want is too big to be handled by the FPGA (it is not able to compile because my design doesn't fit in the device).
- Maybe I could use a FIFO, but in this case I don't see any way to control a variable delay (I guess the FIFO is implemented in the FPGA RAM). The FIFO VI doesn't allow choosing an element.
Any suggestion? HELP!!!
Carlos J Rueda
It sounds like a great starting point for you would be the frequency translation FPGA example. I understand that you just want a delay, so you will need to remove the frequency translation code, but it will still have the basic framework for an acquire-and-playback system, which is what you are looking for. If my understanding of your needs is correct, you will simply need to delay the output for a set number of clock cycles, which can be done a couple of different ways. The most straightforward (and least robust) is just adding a pipelining step in the form of a feedback node to the data path; this will delay the data for one single-cycle timed loop iteration. However, if the delay becomes too large you will have an overflow/underflow condition. You mention that you can repeat the signal, so you may also look into writing to memory.
5640R examples can be found here : C:\Program Files\National Instruments\LabVIEW 2010\examples\instr\ni5640R\FPGA\PXIe-5641R\ni5640
More information on DRAM can be found here: C:\Program Files\National Instruments\LabVIEW 2010\examples\instr\ni5640R\FPGA\DRAM Examples.txt
Looking at your code, it appears that you are using the instrument driver. The example that Jaced posted only applies if you also purchased the LabVIEW FPGA software. Still, I have a few ideas.
First, one of the limitations of using the instrument driver is that you are unable to manipulate the code that is running on the FPGA. This means you are bound by the speed at which your system can transfer the data to the host and back to the FPGA.
If you do have LabVIEW FPGA, you could modify the code to do the delay on the FPGA.
Also, the span/IQ rate that you wire into the acquisition and generation VIs actually gets coerced to the next highest rate that the ADC or DAC can handle. These don't necessarily line up, so if you were to request 10 M on both you may get 10 M on one and 12.5 M on the other. You can use the Get Actual IQ Rate VIs to check this. If they are being set slightly differently, that could be causing your underflow or overflow.
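To illustrate how such a coercion mismatch would produce exactly the symptom described (a queue that grows while both loops run, with no underflow), here is the arithmetic with hypothetical coerced rates:

```python
adc_rate = 12.5e6   # hypothetical coerced ADC IQ rate
dac_rate = 10.0e6   # hypothetical coerced DAC IQ rate
chunk = 10_000      # samples per queue element, as in the post

# the producer outruns the consumer by the rate difference,
# so this many samples per second never leave the queue
surplus = adc_rate - dac_rate
print(f"queue grows by {surplus:.0f} samples/s = {surplus/chunk:.0f} chunks/s")
```

With these assumed rates the DAC never starves (so no underflow), yet the queue gains 250 chunks every second, which matches the "queue grows even when the DAC is working" observation.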
Thank you for the reply, Jaced.
I have already checked these templates. There are two ways to solve my problem: 1) using the instrument driver, or 2) using the FPGA module.
I think you are suggesting to use the FPGA module (with the instrument driver it is not possible?). Using the FPGA module I have the following problems:
I am not able to compile a feedback node with a delay of 100,000 (using a clock of 100 M I need 100,000 cycles to delay 10 ms). I am using the templates and creating a sub-VI with practically just the feedback node, but I am not able to compile this sub-VI. Another problem with this solution is that I am not able to vary the delay.
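For what it's worth, the cycle count depends on which rate the delay loop actually runs at, and a register-based feedback-node delay implies storage proportional to the delay depth, which may explain the failed compile. A quick check (assuming 32-bit I/Q samples, which is an assumption, not a spec value):

```python
def delay_cycles(delay_s, rate_hz):
    # loop iterations needed to realize delay_s at rate_hz
    return int(delay_s * rate_hz)

full_clock = delay_cycles(10e-3, 100e6)  # at the full 100 MHz FPGA clock
iq_rate    = delay_cycles(10e-3, 10e6)   # at a 10 MS/s IQ rate

# a feedback node of depth N implies on the order of N registers of the
# data width; with 32-bit samples even the smaller count is enormous
register_mbit = iq_rate * 32 / 1e6

print(full_clock, iq_rate, register_mbit)
```

Even the 100,000-deep case implies roughly 3.2 Mbit of flip-flop storage if synthesized as registers, far beyond what the fabric offers as registers; memory (block RAM or DRAM) is the natural resource for a delay this deep.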
"However if the delay becomes too large you will have an overflow/underflow condition." I don't understand why I could get an overflow. I thought that if the delay is large it doesn't affect overflow conditions, because initial values are being transmitted, so the input/output rate is constant.
On Monday I am going to check the DRAM examples that you suggest.
- I will check your suggestion about the input and output rates
- Yes, I think I am limited by the speed of my system
- I have the LabVIEW FPGA Module
So far I don't see how to solve my problem. I think using the RAM in the FPGA is the only way, but I don't see which VI I could use.
Thank you for replying
If you right-click on your target in the project tree, then select New >> Memory, you should be able to change the implementation type to DRAM. Then you can use the Memory Method Node from the Memory and FIFO palette to access the DRAM. It uses a random-access protocol, but I believe you could use it for your purposes.
Also, if you prefer to use a CLIP node (Component Level Intellectual Property) to access the DRAM, the 5641R has both a Random access and a FIFO CLIP. A CLIP is basically custom VHDL for a particular purpose integrated into LabVIEW. The CLIP may be harder to use, but there is a FIFO implementation already built for you. Only one method for accessing DRAM can be used at a time (either CLIP or the built-in method nodes), so you will have to change your project if you want to use a CLIP. To do this, right click on the target in the project tree and select properties >> Dram properties. There should be an implementation dropdown where you can change it from DRAM memories to CLIP. Once you have done this, an icon for the DRAM clip will appear in your project. You can right click that icon, go to properties, and choose your implementation (Random access or FIFO). FlexRIO provides some very helpful examples for getting you started. Try searching the Example Finder for "FlexRIO" (with no quotes).
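A variable delay on top of a random-access memory can be modeled as a circular buffer whose read pointer lags the write pointer by the desired number of samples, so the delay is just a runtime parameter. A hypothetical Python model of the idea (the real implementation would use Memory Method Nodes or the CLIP inside the FPGA loop, not host code):

```python
class CircularDelayLine:
    """Model of a variable delay built on random-access memory."""

    def __init__(self, size):
        self.mem = [0] * size   # stands in for the DRAM / block RAM bank
        self.size = size
        self.wr = 0             # write pointer, advances once per sample

    def process(self, sample, delay):
        # delay must be < size; the read address lags the write address
        rd = (self.wr - delay) % self.size
        out = self.mem[rd]           # sample written `delay` ticks ago
        self.mem[self.wr] = sample   # store the new sample
        self.wr = (self.wr + 1) % self.size
        return out

dl = CircularDelayLine(8)
outs = [dl.process(x, delay=3) for x in range(6)]
print(outs)  # [0, 0, 0, 0, 1, 2]
```

Because `delay` is read on every sample, it can be changed on the fly (e.g. from a front-panel control) anywhere between 0 and the buffer size, which is exactly the 0–10 ms variable delay requirement.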
Hope that helps.
Oh, and if for some reason you do not have the FlexRIO examples, you should consider installing the full version (with examples) of the NI-RIO driver. The ni5640R driver only installs a "Lite" version of the driver that may or may not include FlexRIO examples. The full driver is free and can be found here: http://search.ni.com/nisearch/app/main/p/bot/no/ap
Thank you, I have the examples and I have already checked them (I couldn't check before because I don't have access to the equipment on weekends).
The examples use an I/O method to access the memory, but using the 5641R as the FPGA target I cannot access the memory in this way; I guess that with the 5641R I should use the FIFO and Memory VIs. Currently I am doing some testing.
One question: Program Files (x86)\National Instruments\LabVIEW 2010\examples\instr\ni5640R\FPGA\DRAM examples.txt says that "The NI PXIe-5641R contains a single 128MB bank of DRAM," but the specification of the 5640R says "8,784 kilobits of block RAM." Are these different memories, or is something wrong here?
So there are two separate RAMs on the device. There is some memory built into the FPGA itself (the 8,784 kilobits of block RAM), and there is an additional DRAM chip that is not part of the FPGA but is connected to it. This DRAM is much larger (and somewhat slower) than the RAM built into the FPGA. Basically it was put on the device for applications similar to yours.
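A quick sizing check (assuming 4 bytes per I/Q sample; the sample width is an assumption, not a spec value) shows why the DRAM is the natural home for a 10 ms delay at high IQ rates:

```python
dram_bytes = 128 * 2**20       # the 128 MB DRAM bank
block_ram_bits = 8784 * 1024   # the 8,784 kilobits of on-chip block RAM
bytes_per_sample = 4           # assumption: 16-bit I + 16-bit Q

dram_samples = dram_bytes // bytes_per_sample
block_ram_samples = block_ram_bits // (bytes_per_sample * 8)

print(dram_samples, "samples in DRAM")
print(dram_samples / 20e6 * 1e3, "ms of delay at 20 MS/s")
print(block_ram_samples, "samples in block RAM (if fully dedicated)")
```

By this estimate the DRAM bank holds well over a second of samples even at 20 MS/s, while the block RAM holds roughly 14 ms at 20 MS/s even if fully dedicated, and in practice it is shared with FIFOs and other logic in the design.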
You should be careful about what type of example you are looking at. There are examples for both CLIP implementations and the native LabVIEW memory primitive: "Simple External Memory FIFO" is a CLIP example, while "Memory Integrity Test" uses the native implementation. I also see two potential reasons why you are having trouble accessing the built-in memory. First, make sure you are using the "Memory Method Node" if you choose the native memories, and the "I/O Method Node" if you are using a CLIP implementation. Second, if you choose the native memory implementation with memory method nodes, make sure the properties for the memory are set to DRAM, not to block RAM. Like I said above, DRAM is slower, but it will give you more space.
Note: DRAM can be accessed through the CLIP or native memory method nodes, but Block RAM can only be accessed through memory method nodes.