04-04-2018 03:31 PM
Check out the LinuxRT code below (FPGA code available on request).
As the size of the requested chunk goes up, memory use also goes up; when we request smaller amounts, memory use goes back down.
Ideally, I'd like to allocate everything ahead of time, and then feel comfortable that as long as my array size doesn't go above the preallocated amount, I'll never run out of memory. Another benefit is that I could monitor the total memory use and confirm it doesn't increase; if it ever did, I could disable sections of code one at a time and quickly isolate the leak.
Is that possible with the above code?
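To illustrate the pattern I'm after in plain C terms (this is not the actual LabVIEW/LinuxRT code; fifo_read_available() is a hypothetical stand-in for whatever produces the data), the buffer would be sized once for the worst case, and only the fill level would vary per iteration:

/* Minimal sketch of the preallocate-once pattern. */
#include <stddef.h>
#include <stdint.h>

#define MAX_ELEMS 65536  /* worst-case chunk size, allocated once */

static int16_t buffer[MAX_ELEMS];  /* fixed footprint for the whole run */

/* Hypothetical data source: fills dst with up to max elements,
 * returns how many it actually wrote. */
extern size_t fifo_read_available(int16_t *dst, size_t max);

void process(const int16_t *data, size_t n);  /* consumer */

void acquisition_loop(void)
{
    for (;;) {
        /* Reuse the same buffer every iteration; only the fill
         * level varies, so total memory use never grows. */
        size_t n = fifo_read_available(buffer, MAX_ELEMS);
        process(buffer, n);
    }
}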
04-04-2018 05:01 PM
The simple solution is to always request the same number of samples every iteration. Then your memory use will not change.
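In other words, something like this sketch (fifo_read_blocking() is just a hypothetical name for a read that waits until exactly N samples are available):

/* Fixed-size read every iteration: constant memory footprint. */
#include <stddef.h>
#include <stdint.h>

#define CHUNK 4096  /* same request every iteration */

extern void fifo_read_blocking(int16_t *dst, size_t n);
void process(const int16_t *data, size_t n);

void fixed_loop(void)
{
    static int16_t buffer[CHUNK];  /* allocated once, never resized */
    for (;;) {
        fifo_read_blocking(buffer, CHUNK);  /* always the same size */
        process(buffer, CHUNK);
    }
}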
04-04-2018 05:45 PM
In my case there's a highly variable amount of data coming in at any given time. Sometimes it's almost nothing, and waiting for a "full packet" would mean some of that data goes stale before it's read.
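What I'd want instead, sketched in the same hedged C terms (fifo_elements_waiting() and fifo_read() are hypothetical names), is to drain whatever is queued each iteration into the same preallocated buffer, so small trickles are handled immediately without growing memory:

/* Drain a variable-rate source without waiting for a full packet. */
#include <stddef.h>
#include <stdint.h>

#define MAX_ELEMS 65536

extern size_t fifo_elements_waiting(void);        /* how much is queued */
extern size_t fifo_read(int16_t *dst, size_t n);  /* read up to n elements */
void process(const int16_t *data, size_t n);

void drain_loop(void)
{
    static int16_t buffer[MAX_ELEMS];  /* allocated once, reused forever */

    for (;;) {
        size_t waiting = fifo_elements_waiting();
        if (waiting == 0)
            continue;  /* nothing queued; real code would sleep here */

        /* Take everything that's queued (capped at the buffer size),
         * so small amounts are processed right away rather than
         * going stale while we wait for a "full packet". */
        size_t n = waiting < MAX_ELEMS ? waiting : MAX_ELEMS;
        size_t got = fifo_read(buffer, n);
        process(buffer, got);
    }
}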