02-24-2015 07:42 PM
Hello,
I am configuring a fractional decimation filter. I calculated the taps in
Matlab and created a valid .coe file. Using the Xilinx FIR Compiler (v5.0), I
configured the filter appropriately (the code is attached). When I compile and
run, the FIR filter has its "rfd" output always set to FALSE, indicating that
the core is never ready to accept new data.
This behavior (RFD always FALSE) holds whether I run top_level.vi in
simulation mode or compile it and run it on the actual hardware.
Filter details:
Fixed fractional decimation filter, configured as follows:
a) Interpolation rate: 10
b) Decimation rate: 13
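As a sanity check on the expected data rates for the configuration above, here is a toy pure-Python model of a 10/13 fractional decimator (zero-stuff by L = 10, FIR-filter, keep every M = 13th sample). The taps below are crude placeholders, not the Matlab-generated .coe coefficients:

```python
# Toy model of a fractional decimator with L/M = 10/13, the overall
# rate change described above. Tap values are placeholders (a crude
# averaging kernel), NOT the actual .coe coefficients.
L = 10   # interpolation (upsampling) factor
M = 13   # decimation (downsampling) factor

def fractional_decimate(x, taps, L=L, M=M):
    """Upsample by L (zero-stuffing), FIR-filter, downsample by M."""
    # Zero-stuff: insert L-1 zeros between input samples.
    up = []
    for s in x:
        up.append(s)
        up.extend([0.0] * (L - 1))
    # Direct-form FIR convolution (same length as 'up').
    y = []
    for n in range(len(up)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * up[n - k]
        y.append(acc)
    # Keep every M-th filtered sample.
    return y[::M]

# Placeholder low-pass: L unity taps, so the zero-stuffing loss of
# 1/L in level is compensated by the tap gain.
taps = [1.0] * L

x = [1.0] * 130                    # 130 input samples
y = fractional_decimate(x, taps)
print(len(y))                      # 130 * 10 / 13 = 100 output samples
```

With a constant input, the model also confirms the gain: every output sample settles at 1.0.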
I have created a dummy project just to reproduce the problem, and attached it here.
NI Software : LabVIEW FPGA Module version 2014
NI Hardware : FlexRIO device PXIe-7966R
LV Version: 2014
Thanks,
Aditya
02-24-2015 07:59 PM
Update: If I configure the filter as an integer interpolation filter or an integer decimation filter, it works in simulation (I have not tried it on the actual hardware).
But when it is configured as a fractional decimator, this problem shows up.
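For context, the core consumes a sample only on a clock where both nd (new data) and rfd (ready for data) are asserted, so if rfd never goes high, no data is ever taken. A toy model of that handshake (the core's internal acceptance timing here is invented purely for illustration, not taken from the FIR Compiler datasheet):

```python
# Minimal model of an nd/rfd handshake: the driver may assert nd only
# while the core holds rfd high. The acceptance pattern below (13
# samples per 20-clock cycle) is invented for illustration.
class ToyFirCore:
    def __init__(self, accept_per_cycle=13, cycle_len=20):
        self.accept_per_cycle = accept_per_cycle  # samples taken per cycle
        self.cycle_len = cycle_len                # clocks per filter cycle
        self.clk = 0
        self.taken = []

    @property
    def rfd(self):
        # Ready during the first 'accept_per_cycle' clocks of each cycle.
        return (self.clk % self.cycle_len) < self.accept_per_cycle

    def tick(self, nd, din):
        if nd and self.rfd:          # sample consumed only on nd AND rfd
            self.taken.append(din)
        self.clk += 1

core = ToyFirCore()
samples = list(range(26))
i = 0
for _ in range(100):                 # drive the core for 100 clocks
    if i < len(samples) and core.rfd:
        core.tick(nd=True, din=samples[i])
        i += 1
    else:
        core.tick(nd=False, din=0)   # hold off while rfd is low

print(core.taken == samples)         # every sample delivered in order
```

The point of the model: with rfd stuck low (as in the reported bug), the `taken` list would stay empty no matter how the driver behaves.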
03-01-2015 11:36 AM
Hi Aditya,
I found the following white paper, which explains the polyphase structure used to implement this kind of filter:
http://www.ni.com/white-paper/9260/en/
Hope you find it helpful.
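As a quick illustration of the polyphase idea from that white paper, here is a pure-Python check (with made-up taps and input, not the original filter) that splitting a decimate-by-M filter into M subfilters gives exactly the same output as naively filtering and then discarding samples:

```python
# Polyphase decimation check: y[m] = sum_k h[k] * x[m*M - k], computed
# two ways. Taps and input are arbitrary test data.
M = 13
h = [((-1) ** k) / (k + 1.0) for k in range(39)]   # 39 dummy taps (3 per phase)
x = [float((3 * n) % 7) for n in range(200)]       # dummy input signal

def naive(x, h, M):
    # Full-rate FIR, then keep every M-th output sample.
    y = []
    for n in range(0, len(x), M):
        y.append(sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0))
    return y

def polyphase(x, h, M):
    # Split taps into M subfilters: e_p[i] = h[i*M + p].
    e = [h[p::M] for p in range(M)]
    out_len = (len(x) + M - 1) // M
    y = []
    for m in range(out_len):
        acc = 0.0
        for p in range(M):
            for i, c in enumerate(e[p]):
                idx = m * M - (i * M + p)          # x[m*M - k] with k = i*M + p
                if 0 <= idx < len(x):
                    acc += c * x[idx]
        y.append(acc)
    return y

same = all(abs(a - b) < 1e-9
           for a, b in zip(naive(x, h, M), polyphase(x, h, M)))
print(same)
```

The polyphase form does the same arithmetic but only at the low output rate, which is why hardware generators like FIR Compiler use it.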