

Digital Shaping on FPGA

Solved!

Dear experts,

 

Intro:

I am currently working on a digital shaping algorithm to be programmed on the 7966R FPGA in LabVIEW 2012 SP1. I have designed the algorithm to work offline and need some assistance with modifying it to run on the FPGA target. The goal of the algorithm is to extract signals very close to the noise threshold. The signals have a sharp rise followed by a longer exponential decay with a known time constant.
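
To make the signal shape concrete, here is a rough Python sketch of the kind of waveform I am describing (the amplitude, time constant, and noise level are placeholder values, not numbers from my actual setup):

import numpy as np

# Illustrative signal model: a step-like rise followed by an exponential
# decay with a known time constant, sampled at 40 MHz, buried in noise.
fs = 40e6                     # sample rate (Hz)
tau = 5e-6                    # decay time constant (s), placeholder value
n = 2048
t = np.arange(n) / fs
t0 = 500                      # sample index where the pulse arrives
pulse = np.zeros(n)
pulse[t0:] = 1.0 * np.exp(-(t[t0:] - t[t0]) / tau)
waveform = pulse + np.random.normal(0.0, 0.3, n)   # signal near the noise floor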

 

Background:

The algorithm involves convolving a digitized waveform (sampled at 40 MHz with the 5734 AI) against a triangle/cusp kernel. The working offline convolution code is attached as SubVI-KernelConvolution.vi. To implement this on an FPGA, I have been using one of Knoll's iterative algorithms, shown in Knoll_Algorithm.png. I have both an offline and an FPGA/FIFO-based version of this algorithm; the offline version is attached as SubVI-IterativeAlgo.vi. The subVI used on the FPGA implements the same algorithm but uses FIFOs for the delays instead of indexing into the array.
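
In Python terms, the offline step amounts to something like the following (a rough sketch continuing from the waveform above, not the attached VI; the kernel half-length is an arbitrary placeholder):

# Direct (offline) convolution against a symmetric triangle kernel;
# a cusp kernel would be built the same way from exponential segments.
k = 200                                              # kernel half-length, placeholder
kernel = np.concatenate([np.arange(1.0, k + 1), np.arange(k, 0.0, -1)])
kernel /= kernel.sum()                               # normalize so the baseline is preserved
shaped = np.convolve(waveform, kernel, mode="same")

The iterative version produces the same output as a running recursion, where each new sample is formed from a few delayed input samples and the previous outputs; on the FPGA those delays are exactly what the FIFOs implement.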

 

Problem:

When I run the iterative shaping VI on sample data, it sometimes reproduces the results from the convolution VI, as seen in Signal_BaselineRecovery.png and NoSignal_BaselineRecovery.png, but sometimes fails to recover the baseline, as seen in Signal_BaselineDecline.png and NoSignal_BaselineIncrease.png. In all four screenshots, the raw waveform is shown in the upper graph; in the lower graph, the result from SubVI-KernelConvolution.vi is in white and green, and the result from SubVI-IterativeAlgo.vi is in red. There does not seem to be a preference for converging above or below the baseline, and this occurs both when there is an event and when there is not. The data was taken with a pulse generator, so the waveforms are identical up to digitizer noise.

 

Question:

Can anyone identify the problem with my algorithm or my implementation of it? I have a suspicion that I am overlooking something trivial. Additionally, if anyone knows of a better way to implement the convolution, I am all ears. As mentioned above, I have LabVIEW 2012 SP1 but can get 2017 if a newer version is required.

 

Thank you very much,

Alex

Message 1 of 13
(3,247 Views)

Those VIs are saved as 2017.

Message 2 of 13
(3,205 Views)

As mentioned, I have access to 2017 (have it on my personal computer) but have 2012 on the DAQ.

Message 3 of 13
(3,191 Views)

Can't help with VIs saved in 2017. I gave it a try, expecting 2012, but couldn't open them. It was just a heads-up so others know before spending time finding that out. That's all.

Message 4 of 13
(3,184 Views)

Alex,

 

I would resave the VIs as 2012 versions and repost them. That way as many people as possible will be able to help out.

Michael Bilyk
Former NI Software Engineer (IT)
Message 5 of 13
(3,152 Views)

Attached are 2012 versions.

Message 6 of 13
(3,143 Views)

I think you expect those feedback nodes to be reset to 0 each time the VI starts, but they are not. They only get initialized on first call or on compile/load, depending on their configuration, and I think both are inappropriate for your use case.

 

Unless you call those VIs dynamically (through VI Server), those feedback nodes will retain their value over different cycles. You should use a shift register instead. 
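
As a loose text-code analogy (Python here, purely illustrative, not exact LabVIEW semantics): an uninitialized or first-call-initialized feedback node behaves like state that survives between runs, whereas an initialized shift register starts from a known value on every run.

# Feedback node at its default configuration: state persists across runs of the subVI.
class Shaper:
    def __init__(self):
        self.prev = 0.0            # set once, on "first call" / load
    def step(self, x):
        y = x + self.prev          # picks up whatever the last run left behind
        self.prev = y
        return y

# Initialized shift register: the running state is reset every time the VI runs.
def process_record(samples):
    prev = 0.0                     # re-initialized on every call
    out = []
    for x in samples:
        y = x + prev
        prev = y
        out.append(y)
    return out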

 

Not sure what else is wrong (if it is wrong), but this seems off...

Message 7 of 13
(3,125 Views)
Solution
Accepted by topic author Alex1777

The problem turned out to be a difference in notation between papers from different fields (physics vs. engineering).

Message 8 of 13
(3,099 Views)

I'd still look into those feedback nodes. Unintended data transfer over iterations is a bug waiting to happen.

Message 9 of 13
(3,090 Views)

Hello,

 

I've written a new version of the algorithm that performs an iterative convolution between the data and a truncated cusp (see the attached SubVI-Truncated-Cusp.vi). This VI performs the same convolution as the SubVI-KernelConvolution.vi posted above; this is shown in ConvolutionAlgorithm.png, where the red trend is the result of SubVI-KernelConvolution.vi and the white trend is the result of SubVI-Truncated-Cusp.vi. The offset was added just to differentiate the two, as otherwise they would overlap completely.

 

However, I have found a new error, depicted in ExponentialDivergence.png. Eventually (when run on a longer dataset) the algorithm fails, resulting in an exponential divergence. While debugging, I found the cause to be small errors of order 1e-16 in the subtraction. Because the lower shift register is multiplied by a number bigger than 1 (i.e., exp(1/T) for some time constant T), this error propagates and eventually grows to order 1. Since the intended VI (when put on an FPGA using FIFOs) will run at 40 Msamples/s for several months, it will eventually blow up numerically regardless of what time constant is chosen.
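
To put a rough number on how fast this happens, here is a back-of-the-envelope check in Python (the time constant below is an arbitrary placeholder, not my actual shaping parameter):

import math

eps = 1e-16                        # order of the subtraction error
T = 200.0                          # time constant in samples, placeholder
gain = math.exp(1.0 / T)           # factor applied to the lower shift register each sample, > 1
n_blowup = math.log(1.0 / eps) / math.log(gain)   # samples until the error reaches order 1
print(n_blowup)                    # ~7.4e3 samples
print(n_blowup / 40e6)             # ~0.18 ms of data at 40 Msamples/s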

 

At this point I have not put too much thought into this problem, but I believe it will require some compromise in precision and some optimization by hand. Any suggestions?

 

Much appreciated!

 

Alex

Message 10 of 13
(3,080 Views)