NI FlexRIO Network


DRAM CLIP Mode vs. Memory Item Mode

Hello,

I'm experiencing some issues with the DRAM CLIP mode.

I would like to apply a delay to an input signal by writing it into the DRAM and shifting the read address depending on the chosen delay value.

Since my delay has to have an 8 ns resolution, I need access to every memory address. For that reason, I have to use the CLIP mode, since the memory item mode gives access to 64 bits at once (128 in my case for the PXI 5646).
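The address-shifting idea can be sketched as a circular buffer whose read address trails the write address. Here is a minimal Python model of that behavior (the real design runs in LabVIEW FPGA against the DRAM, so the class and names here are hypothetical illustrations only):

```python
# Minimal model of a DRAM-based delay line (hypothetical names; the real
# design targets the LabVIEW FPGA DRAM, not Python).
class DramDelayModel:
    """DRAM used as a circular delay line: the read address trails the
    write address by `delay_samples` locations."""

    def __init__(self, depth, delay_samples):
        assert 0 < delay_samples < depth
        self.mem = [0] * depth
        self.depth = depth
        self.delay = delay_samples
        self.write_addr = 0

    def tick(self, sample):
        """Store one input sample and return the sample written
        `delay_samples` ticks earlier (0 until the line fills)."""
        self.mem[self.write_addr] = sample
        read_addr = (self.write_addr - self.delay) % self.depth
        out = self.mem[read_addr]
        self.write_addr = (self.write_addr + 1) % self.depth
        return out
```

With one sample stored per 8 ns clock tick, a delay of N locations corresponds to N × 8 ns, which is why per-address access matters here.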

As a first step, I implemented the memory controller and the read/write loop. I write each memory address as its own data and compare it to the read address in order to validate the memory integrity before applying any delay.
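That address-as-data validation step can be modeled like this (`write_fn` and `read_fn` are hypothetical stand-ins for the DRAM write and read ports):

```python
def integrity_check(mem_depth, write_fn, read_fn):
    """Address-as-data memory test: write each address's own value to it,
    then read everything back and report mismatching addresses.
    `write_fn(addr, data)` and `read_fn(addr)` are hypothetical stand-ins
    for the DRAM write and read ports."""
    for addr in range(mem_depth):
        write_fn(addr, addr)
    return [a for a in range(mem_depth) if read_fn(a) != a]
```

An empty result means every address stored and returned its own value, so the controller and addressing are sound before any delay logic is layered on top.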

I succeeded in implementing the memory controller with the memory item mode, based on the memory throughput example.

I'm trying to switch the code to the CLIP mode, but I have some problems with the synchronisation signals.

Is there an example that implements the four-wire protocol with the CLIP mode, as is done with the memory item mode?

Thanks for any help.

Mourad FAKHFAKH

CLAD


Re: DRAM clip Mode Vs Memory Item mode

Hi mourad_FAKHFAKH, 

I have checked the example "Memory Throughput Test.lvproj" and found that LabVIEW FPGA memory items also support a 128-bit data width, as does the 5646R. I guess you checked one of the FPGA VIs under 795xR. If you check one of the FPGA VIs under 796xR, you will find the LabVIEW FPGA memory items used with a 128-bit width.

Just out of curiosity, what determines the start of the delayed signal: some kind of trigger? And what range of delay do you want to add? If the amount of delay is very small, you have to switch the input data path from the DRAM FIFO to a block RAM FIFO, because DRAM has a longer latency than BRAM.

Thanks,


Re: DRAM clip Mode Vs Memory Item mode

Hi O.Fujioka,

Thank you for your quick response.

I have already implemented the memory controller with the memory item mode, and it supported the 128-bit data format. The issue is that every memory element is a 128-bit word. Since my data is in a 32-bit format, one memory element contains 4 packed samples. As a consequence, this memory representation doesn't let me address every individual sample, and the delay resolution is 4 times coarser than what I need. In contrast, I read that the CLIP mode allows access to every memory address, but there is more work to do to get a functional memory controller interface.
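To illustrate the packing constraint, here is a sketch with hypothetical helpers (assuming sample 0 sits in the lowest 32 bits of the word; the actual bit ordering depends on the design):

```python
def pack_word(samples):
    """Pack four 32-bit samples into one 128-bit memory word
    (assumed sample order: sample 0 in the lowest 32 bits)."""
    assert len(samples) == 4
    word = 0
    for i, s in enumerate(samples):
        word |= (s & 0xFFFFFFFF) << (32 * i)
    return word

def unpack_word(word):
    """Split a 128-bit memory word back into its four 32-bit samples."""
    return [(word >> (32 * i)) & 0xFFFFFFFF for i in range(4)]
```

Because reads are word-addressed in the memory item mode, shifting the read address by one location always moves the stream by four samples at once; sample-level delays need extra logic on top.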

The delayed signal's start is triggered, and the delay range reaches a second, so I don't think I have any solution other than using DRAM with the CLIP mode. The latency problem of the DRAM can be avoided by using a pipelined architecture with FIFOs communicating with the memory controller.


Re: DRAM clip Mode Vs Memory Item mode

Now I see what you are experiencing.

To achieve what you want, you need to "shift" the data stream when you read the packed 32-bit IQ data from DRAM. As you have mentioned, we can adjust the amount of delay in 16 ns resolution by changing the timing of the start of the read. To achieve a higher resolution, you have to use the shift-stream trick. There are two kinds of VIs for shifting a data stream. One is included in FIDL (https://decibel.ni.com/content/docs/DOC-15799). You can check the shift stream function in simulation.
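As a rough model of what such a shift does (a hypothetical Python stand-in, not the FIDL VI itself): each output word is taken from a window spanning the previous and current words, so the packed stream can be realigned by a sub-word number of samples.

```python
def shift_stream(words, shift, samples_per_word=4):
    """Realign a packed sample stream by `shift` samples
    (0 <= shift < samples_per_word): each output word is a window over
    the previous and current words. Hypothetical helper, not the FIDL VI."""
    assert 0 <= shift < samples_per_word
    out = []
    prev = [0] * samples_per_word  # pipeline preload
    for w in words:
        window = prev + list(w)  # 2 * samples_per_word samples
        out.append(window[samples_per_word - shift:
                          2 * samples_per_word - shift])
        prev = list(w)
    return out
```

With shift = 0 the stream passes through unchanged; each increment of `shift` delays the stream by one more sample within the word, complementing the coarse word-address delay.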

As for the amount of delay, I was asking what the minimum delay your application needs is. I understand that DRAM can delay the signal by up to its (maximum depth) - 1, considering shift stream. However, if you want to set the amount of delay to a small number such as 100 ns, DRAM cannot handle such a short delay, and you need to switch to another BRAM FIFO, which may be placed in parallel.

Thanks,


Re: DRAM clip Mode Vs Memory Item mode

Adding a shift-stream block seems like a good idea to me. However, I hope it won't cause timing violations once the whole design is assembled with the RF chain. If such a problem occurs, do you think it is possible to implement the design using the CLIP mode?

It is possible for me to have a long initial delay, so DRAM would be sufficient by itself. However, it would be good to make the design capable of a short initial delay.


Re: DRAM clip Mode Vs Memory Item mode

Shift stream supports the four-wire protocol, and it is very unlikely to cause timing violations. If you would like to persist with the random-access CLIP, you would have to do a bit of implementation work using this CLIP to achieve handshaking inputs/outputs equivalent to those of the memory item. To replace the input valid of a memory-item DRAM write, you can take the AND of Command_Write_Enable and Command == 0. To replace the ready for input of a memory-item DRAM write, you can check Command_FIFO_Full. The rest of the signals can be replaced with similar mappings of the CLIP nodes.
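The signal mapping described here might be sketched as a Boolean model like the following (the real wiring is done with CLIP nodes on the LabVIEW FPGA diagram; only the signal names Command_Write_Enable, Command, and Command_FIFO_Full come from the reply, and the rest is a hypothetical illustration):

```python
WRITE_COMMAND = 0  # Command == 0 selects a DRAM write, per the reply above

def clip_write_handshake(input_valid, command_fifo_full):
    """Boolean model of mapping the memory item's write handshake onto
    the random-access CLIP's command port. Returns the modeled
    (ready_for_input, command_write_enable) pair."""
    # "ready for input" of the memory-item write <- NOT Command_FIFO_Full
    ready_for_input = not command_fifo_full
    # "input valid" <- Command_Write_Enable AND (Command == WRITE_COMMAND);
    # only assert the write when the command FIFO can accept it
    command_write_enable = input_valid and ready_for_input
    return ready_for_input, command_write_enable
```

The key point of the mapping: the CLIP never sees a write request while its command FIFO is full, which is exactly the backpressure the memory item's four-wire handshake would provide.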


Re: DRAM clip Mode Vs Memory Item mode

O.Fujioka,

Thanks for your help; it will be very useful to me.
