RF Measurement Devices


Host-based streaming in VST

Solved!

I had a chance to take a quick look at your VI before leaving for the holidays, and this is what I found: you are not checking the status of both streams, so you are not detecting input or output stream overflows. Actually, I think you are checking one of the two, but in the loop for the other stream. That was interesting.

 

If you are not checking the status of the streams, then I'm not surprised you are missing packets. Two ideas:

 

1. Modify your loops to still check the stream status, but only once every 100 or 1000 iterations.

2. Do #1, but use a version of the Check Stream Status VIs that does not check the RF status.
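Suggestion #1 above can be sketched as a decimated status check in the host fetch loop. This is a minimal software model, not the real VST API: `fetch_packet` and `check_stream_status` below are dummy stand-ins for the session's read/write and Check Stream Status calls.

```python
# Sketch of suggestion #1: decimate the status check so the loop body
# stays fast. fetch_packet and check_stream_status are placeholder
# stand-ins (assumptions), not the real NI driver calls.

CHECK_INTERVAL = 1000  # poll status once every 1000 iterations

def fetch_packet(i):
    return i  # placeholder for the real DMA read/write

def check_stream_status():
    return False, False  # placeholder (overflow, underflow) flags

status_checks = 0
for i in range(10_000):
    fetch_packet(i)                 # move data every iteration
    if i % CHECK_INTERVAL == 0:     # status check is decimated
        status_checks += 1
        overflow, underflow = check_stream_status()
        if overflow or underflow:
            raise RuntimeError(f"stream fault at iteration {i}")
```

The point of the decimation is that 10,000 data iterations incur only 10 status reads, so the per-iteration cost of the loop stays dominated by data movement.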

 

You can also make the DMA FIFOs deeper, which should help you sustain higher streaming rates.

 

I think you can maintain stream rates over 40 MS/s if you apply suggestion #1 or #2.

 

Other things you can try:

 

- Make the host DMA buffer sizes larger (64 MB or more is needed for very high streaming rates), ideally a multiple of 4096 bytes

- Try doing your reads/writes in clean multiples of 4096 bytes
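The two 4096-byte suggestions above amount to rounding sizes up to a page boundary. A hedged helper (not part of any NI API) that does this:

```python
# Round a requested host DMA buffer or transfer size up to a multiple
# of 4096 bytes, per the alignment suggestions above. This is an
# illustrative helper, not an NI driver function.

PAGE = 4096

def align_up(nbytes, align=PAGE):
    # Classic round-up-to-multiple trick: add (align - 1), then
    # truncate down to the nearest multiple.
    return ((nbytes + align - 1) // align) * align

# A 64 MB buffer is already a clean multiple of 4096:
print(align_up(64 * 1024 * 1024) == 64 * 1024 * 1024)  # -> True
# An odd request gets bumped to the next page boundary:
print(align_up(100_000))  # -> 102400
```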

 

Check whether your host code is making any unnecessary data copies or doing inefficient work that consumes too much CPU.

 

I don't have much experience on this subject, but given your requirement of continuous streaming at 1.024 MS/s, that should be very doable with the VST.

 

Good luck.

Message 11 of 17

Dear JMota,

 

Thank you for your help.

 

Happy holidays!

 

HC

Message 12 of 17

Any ideas on how to introduce a delay between TX and RX at the microsecond level?

The changes would have to be made to the FPGA program, not the host program.

 

I already increased the FIFO lengths eightfold so that neither overflows nor underflows during the delay period. I'm trying to create a 10 microsecond delay between TX and RX, so that TX starts 10 microseconds after RX begins to accumulate data.
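As a sanity check on that FIFO sizing, here is a rough count of how many samples accumulate during the delay window. The 120 MHz sample clock is an assumption taken from the rate mentioned later in this thread; substitute your actual rate.

```python
# Rough FIFO sizing for the TX hold-off: how many samples pile up
# while TX is delayed? Assumes a 120 MHz sample clock (mentioned
# later in this thread) -- adjust for the real rate.

sample_rate_hz = 120e6   # assumed sample clock
delay_s = 10e-6          # 10 microsecond TX delay

# round() guards against floating-point slop in the product
samples_buffered = round(sample_rate_hz * delay_s)
print(samples_buffered)  # -> 1200
```

So the FIFO must absorb on the order of 1200 extra samples for a 10 µs delay, and proportionally more for the 100 µs upper end of the range.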

 

I've already tried many methods to create the delay, but none is working, mainly because the FPGA target program is in a single-cycle timed loop, which does not support the Wait function.

Message 13 of 17

I'm not sure I fully understand the use case here, but here are a few methods to delay data on the FPGA:

 

1. Use LV FPGA's Discrete Delay block:

 

http://zone.ni.com/reference/en-XX/help/371599G-01/lvfpga/ht_discrete_delay/

 

 

2. Implement your own delay block using a block RAM or DRAM (use DRAM for very large delays)

 

Keep a write pointer and a read pointer into your memory, offset from each other by the amount of delay you want to apply.
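The write/read-pointer idea in method #2 can be modeled in software as a ring-buffer delay line. On the FPGA this would be a block RAM (or DRAM) with two address counters; the Python below is only a sketch of the addressing, with made-up depth and delay values.

```python
# Software model of method #2: a ring buffer whose read pointer trails
# the write pointer by DELAY samples. Depth and delay are illustrative.

DEPTH = 16   # memory depth (a power of two keeps the wrap cheap)
DELAY = 5    # delay in samples

mem = [0] * DEPTH
out = []
for t, sample in enumerate(range(100, 112)):  # 12 input samples
    mem[t % DEPTH] = sample                   # write pointer
    rd = (t - DELAY) % DEPTH                  # read pointer trails by DELAY
    out.append(mem[rd] if t >= DELAY else 0)  # zeros until primed

print(out)  # first DELAY outputs are 0, then the input delayed by 5
```

The output stream is the input stream shifted by exactly `DELAY` samples, with zeros emitted while the buffer fills; in hardware the two counters would simply be initialized `DELAY` apart.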

 

 

3. Use a FIFO and do not start popping elements out of it until X cycles have elapsed.

 

The Simple VSA/VSG Sample Project ships with the VI FPGA\SubVIs\Delay FIFO (U8).vi, which you might be able to leverage if you need to reset the FIFO during a run.
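Method #3 can be sketched as a counter that gates the FIFO reads. This is a software model only; on the FPGA the counter and comparison would sit in the same loop as the FIFO read node, and the cycle count X would come from the user-set delay.

```python
# Software model of method #3: push into a FIFO every cycle, but hold
# off reads for the first X cycles. X = 4 is an illustrative value.

from collections import deque

X = 4                       # cycles to wait before the first pop
fifo = deque()
out = []
for cycle in range(10):
    fifo.append(cycle)      # data keeps arriving every cycle
    if cycle >= X:          # gate: no pops during the first X cycles
        out.append(fifo.popleft())

print(out)  # output lags input; the FIFO retains X elements in flight
```

Because pushes never stop, the FIFO settles at a steady occupancy of X elements, which is exactly the delay expressed in samples, so no data from the hold-off period is lost.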

 

Message 14 of 17

Thank you for your reply.

 

What I'm trying to do is receive the signal as soon as possible, but start transmitting only after a user-customisable delay (of 1 to 100 microseconds). The data from the delay period must not be lost: basically, the design stores the data for the first few microseconds and then starts transmitting.

 

Your suggestions were actually helpful, but I encountered some problems. I still haven't tried the first option.

However, in the case of block RAM or DRAM: the block RAM uses an embedded shift register, so LabVIEW does not allow me to set the read address to 0 while the write address is (say) 1000; it says the design requires a feedback node or shift register. In the case of DRAM, I can specify a read address, but there is no option for independent read and write addresses. So in both cases, the first value written would also be the first value read, hence there won't be any delay.

 

The FIFO example you mentioned actually looked quite promising. However, that example was created for the host program, which gives me millisecond resolution, not microseconds. If I use an incrementer and comparator to implement the same mechanism in the FPGA program, LabVIEW gives an error saying that the FPGA program does not support an incrementer. Hence, I am unable to create a custom trigger for popping FIFO elements after a certain number of clock cycles.

Message 15 of 17

I'm sorry, I actually tried the first method as well.

 

Since I'm reading at 120 MHz, creating a delay of 80 microseconds would mean chaining 20 sequential Discrete Delay functions (since each can delay at most 512 clock cycles). Implementing 20 such functions and controlling them individually does not seem efficient. However, I have implemented it and am currently compiling the bitfile. I'll let you know if it works.

It may not be efficient, but something is better than nothing.
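The numbers behind the chained-Discrete-Delay approach work out as follows (the arithmetic actually gives 19 blocks as the minimum; 20 simply leaves a little headroom):

```python
# Worked numbers for the chained Discrete Delay approach: how many
# clock cycles of delay are needed at 120 MHz for 80 microseconds,
# and how many 512-cycle Discrete Delay blocks that requires.

import math

clock_hz = 120e6        # FPGA sample clock
delay_s = 80e-6         # desired delay
max_per_block = 512     # maximum delay of one Discrete Delay block

cycles = round(clock_hz * delay_s)          # round() avoids FP slop
blocks = math.ceil(cycles / max_per_block)  # blocks needed in series
print(cycles, blocks)  # -> 9600 19
```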

 

Thanks 

Message 16 of 17

Edit 1: The compilation with the Discrete Delay function failed. I don't know why, but it gives a timing violation error regardless of whether I use just one Discrete Delay function or multiple.

 

Edit 2: I found out how to set the address for the DRAM read. I have one question, though: what happens when the DRAM gets full? Does the program crash, or will the DRAM start overwriting the previous values? I'm asking because my program needs to run for many hours.
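One way to sidestep the "what happens when the DRAM fills" question, regardless of how the hardware behaves at the boundary, is to make the wraparound explicit: compute every address modulo the memory depth so the buffer is a ring and old samples are overwritten by design. The sketch below models only the addressing; it is not a statement about NI's DRAM behavior.

```python
# Explicit ring addressing for a long-running delay buffer: a
# free-running counter wraps via modulo, so the address never exceeds
# the memory depth no matter how many hours the program runs.
# DEPTH is tiny here for illustration; real DRAM is far larger.

DEPTH = 8

def ring_addr(counter):
    return counter % DEPTH   # wraps cleanly forever

addrs = [ring_addr(c) for c in range(20)]
print(addrs)  # cycles 0..7 repeatedly, never out of range
```

With both the write counter and the delayed read counter wrapped this way, the read/write offset from the delay scheme is preserved across every wrap.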

Message 17 of 17