
compile error when using DRAM

Hi elsayed03,

Do you know what your Memory Depth is configured to in your project? I suspect the memory depth is set to a fairly large number, which, in short, is why such a large memory block is being reserved when the compiler can't optimize it away. Is there a way you can zip and post your whole project instead of just the VI?

Message 11 of 15

I thought of that. Initially, I had the memory configured to the largest possible depth (256 MB for each bank), but then I tried setting it to the minimum (1 MB), and it's exactly the same error (with the same number of memory blocks used).
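
For scale, here is a quick back-of-the-envelope sketch (plain Python) of what those two depth settings mean in addressable words. The word widths are purely illustrative assumptions, not the configured width in this project:

```python
# Back-of-the-envelope: how many addressable words each Memory Depth setting
# corresponds to, for a couple of candidate word widths. The widths here are
# illustrative assumptions only.
for depth_mb in (256, 1):
    total_bits = depth_mb * 1024**2 * 8
    for word_bits in (64, 256):
        print(f"{depth_mb} MB at {word_bits}-bit words: "
              f"{total_bits // word_bits:,} words")
# 256 MB: 33,554,432 x 64-bit words or 8,388,608 x 256-bit words
# 1 MB:      131,072 x 64-bit words or    32,768 x 256-bit words
```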

 

This is my first time posting a whole project. I saved the project in a new directory and then zipped that. It's a bit awkward... the actual project is in post\Users\AMNLexp\Documents\Project\10-Tap 8-bit Camera with DRAM

Message 12 of 15

Hey elsayed3,

I went through your code again and was able to generate the same error log that you were. I'm seeing that your code requires 20% more block RAM than the FPGA you are using has available.

 

I noticed one somewhat significant difference between the example code and your code. In the original code, the Pack 80 to 256 and Pack 256 to 64 CLIP was socketed CLIP, and everything was held on the DRAM banks of this specific FPGA. You are using the DRAM banks as addressable memory, and the Packs are being housed on the FPGA fabric. Additionally, you end up taking everything out of the DRAM bank, passing it into a FIFO, and then reading out of that FIFO in a different part of the code. In essence, you are creating multiple instances of the same memory objects.

Would it be possible for you to reorganize some of the code so that, instead of writing to DRAM and then to a FIFO, you either just write to DRAM or just write to the FIFO? You wouldn't necessarily need to cluster the information in the same way. I'm just seeing a lot of copies of the same information being passed around, which uses more resources than you need. For example, skip the DRAM random-access memory entirely: use the DRAM as socketed CLIP and choose the implementation as a 128-bit FIFO. This will get you the same net result without requiring the additional FIFO. I don't know how much space this will free up, but it will be some. You could still use redundant FIFOs on the two DRAM banks and get rid of those two local FIFOs.
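
To make the comparison concrete, here is a rough host-side sketch (plain Python, not LabVIEW; the function names are just for illustration) of the two data paths. Writing to addressable DRAM and then copying into a separate FIFO keeps two copies of every word in flight, while configuring the DRAM itself as a FIFO keeps one, with the same net result:

```python
from collections import deque

def dram_plus_local_fifo(words):
    """Current path: DRAM as addressable memory, plus a second on-fabric FIFO."""
    dram = {}                       # DRAM used as addressable memory (copy 1)
    for addr, w in enumerate(words):
        dram[addr] = w
    fifo = deque()                  # extra local FIFO (copy 2 in flight)
    for addr in sorted(dram):
        fifo.append(dram[addr])
    return list(fifo)

def dram_as_fifo(words):
    """Suggested path: the DRAM bank itself configured as the only FIFO."""
    dram_fifo = deque(words)        # single copy of the data
    return list(dram_fifo)

data = list(range(8))
assert dram_plus_local_fifo(data) == dram_as_fifo(data)  # same net result
```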

 

I did see that you slowed down your timed loops, which eliminated the second error we were originally getting. The original example code used over half of the available resources on the FPGA, and it utilized the DRAM for the socketed CLIP. What you are trying to do is transplant that socketed CLIP onto the FPGA fabric and, on top of that, create an additional FIFO to handle what you've placed in DRAM. You can use the DRAM however you would like, but I suspect using it as a FIFO may be the best solution. Like I said, I'm not sure how many resources this will free up, but it will certainly help.

Message 13 of 15

Hey elsayed3,

I also noticed one of your other threads:

 

hdl files in examples

http://forums.ni.com/t5/LabVIEW/hdl-files-in-examples/m-p/1999777#M658378

 

where you were asking about changing the utilization of the DRAM banks.  It looks like you got some information on using the DRAM, and that using it as a FIFO to the Host is useful to avoid using too many resources.  I definitely understand that you are trying to do some image processing on the FPGA side, but I'm not sure that you have enough resources on this specific FPGA to do what you are looking to do.


Is there any way you can do the processing on the RT controller? Can you divide up the workload? I'm just not sure that you have hardware that is going to suit your needs, and I don't want you to waste your time trying to optimize code that still ends up not being suitable for your hardware.

Message 14 of 15

Thanks for the details. I think you understand what I am trying to do, but just to make sure we are on the same page, I will explain.

 

In a nutshell, I want to do edge detection in real time. Edge detection requires a 3x3 window, so I need to store large parts of the image (at least a couple of full lines have to be buffered before any window can be formed). That is why I believe it is best to store the image in DRAM.
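
For reference, here is what that 3x3 window math looks like as a host-side NumPy sketch. This uses a Sobel magnitude, which is one common 3x3 edge detector, purely as an illustration of the window requirement, not as the FPGA implementation:

```python
import numpy as np

def sobel_magnitude(img):
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])          # horizontal-gradient kernel
    ky = kx.T                            # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            win = img[r:r + 3, c:c + 3].astype(np.int32)  # the 3x3 window
            out[r, c] = np.hypot((win * kx).sum(), (win * ky).sum())
    return out

frame = np.random.randint(0, 256, size=(100, 200), dtype=np.uint8)  # test frame
edges = sobel_magnitude(frame)
print(edges.shape)   # (98, 198): the output shrinks by the window border
```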

 

The camera outputs 10 pixels at a time, each pixel being 8 bits, for a total of 80 bits of data. This is being written to the Pack 80 To 256 CLIP. When data is available on the output terminals of this CLIP, it comes in 256-bit words, or 32 pixels. I tried getting rid of the DRAM altogether from the original example, and the VI works fine, so the DRAM being used as a FIFO is not actually necessary. Perhaps it's necessary if I have slow host memory / CPU (so the DMA becomes the bottleneck). Anyway, I thought about which point in the VI is (theoretically) the best place to take the data from: should I take 80-bit data (straight out of the Camera Link interface), 256-bit data (out of the Pack 80 To 256 CLIP), or 64-bit data (out of the Pack 256 to 64 CLIP)? Given that, after a read request command, the data from the DRAM becomes available after 2 or 3 clock cycles, using 80- or 64-bit transfers would be too slow.
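
To pin down the numbers in that packing step, here is a small bit-packing sketch in plain Python. The little-endian bit ordering is an assumption; the real Pack 80 To 256 CLIP may order the pixels differently:

```python
# Sketch of the 80 -> 256 repacking: 10 pixels x 8 bits arrive per camera
# clock and are regrouped into 256-bit (32-pixel) words for the DRAM port.
def pack_80_to_256(camera_words):
    """camera_words: iterable of 80-bit ints; yields 256-bit ints."""
    acc, bits = 0, 0
    for w in camera_words:
        acc |= w << bits                 # append the new 80 bits on top
        bits += 80
        while bits >= 256:
            yield acc & ((1 << 256) - 1)  # emit the low 256 bits
            acc >>= 256
            bits -= 256

# 16 camera words (1280 bits) fill exactly 5 DRAM words (5 * 256 = 1280),
# so nothing is left over to flush in this example.
words = list(pack_80_to_256([(i + 1) & ((1 << 80) - 1) for i in range(16)]))
print(len(words))  # 5
```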

 

I am not reading the data from DRAM in the same order as it is being written. The data comes in line by line and gets written to DRAM as such. However, when reading, I read 32 (width) * 3 (height) pixels and then make 32 3x3 windows. I pass the pixels from the 32 windows to some math that calculates the likelihood of the center of each window being an edge.
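
Concretely, here is a small NumPy sketch of turning one 3-line by 32-pixel block read back from DRAM into 3x3 windows. Note that an isolated 32x3 block only yields 30 complete windows; getting 32 would need the two border columns to be stitched in from the neighbouring blocks, which is an assumption about how the blocks overlap:

```python
import numpy as np

# One block read back from DRAM: 3 consecutive lines x 32 pixels.
block = np.arange(3 * 32, dtype=np.uint8).reshape(3, 32)

# Slide a 3x3 window across the block. Without overlap from neighbouring
# blocks this yields 30 windows; the border columns need extra pixels.
windows = [block[:, c:c + 3] for c in range(32 - 2)]
print(len(windows), windows[0].shape)  # 30 (3, 3)
```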

 

I had tried to do this without the DRAM, but I ended up using too many resources (because I store too much on the FPGA fabric). Perhaps I can try to redo that, configured only for small (200 x 100) images. If that works, I can then try to increase the image size and see the maximum I can store without using the DRAM (or using it only as a FIFO). I was hoping I could get the system to work at the largest image size (2040 x 1088), but I can compromise, particularly since we need very high speed (5K+ FPS), which requires a much smaller image size. I'll let you know how that goes.
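
For a sense of scale, here is the raw frame-buffer arithmetic for those two image sizes at 8 bits per pixel (plain Python; the comment about what fits in on-chip block RAM is only a rough rule of thumb, not a figure for this specific target):

```python
def frame_bits(width, height, bits_per_pixel=8):
    return width * height * bits_per_pixel

for w, h in [(200, 100), (2040, 1088)]:
    kib = frame_bits(w, h) / 8 / 1024
    print(f"{w}x{h}: about {kib:.0f} KiB per frame")
# 200x100   -> about 20 KiB   (plausible to hold in on-chip block RAM)
# 2040x1088 -> about 2168 KiB (which is why DRAM, at 256 MB per bank, helps)
```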

 

By the way, I am still not quite sure what the problem is. If I open a new project altogether and simply try to write/read data (perhaps at random addresses) to the DRAM using addressable memory locations, just that takes up a huge number of resources. I don't understand why, but let's just try to work around it instead of fighting the tools.

Message 15 of 15