LabVIEW

"Unscale" Polynomial Fit coefficients taken with scaled data

Hi,

 

I was having a lot of trouble fitting my data with the General Polynomial Fit VI because the data was ill-conditioned. Scaling my X and Y values solved that: Y was scaled by multiplication, and each X was centered by subtracting the mean of all X values from it. However, because this is scientific data, I need coefficients expressed in terms of the true (unscaled) values. The problem I'm having is converting the correctly determined coefficients from the scaled fit back so they describe my true data set. Initially I thought a Taylor series coefficient determination would solve this, but it does not: the values get close for order 1 and maybe 2, but after that they're all over the place, and even the lower orders are not correct. Does anyone know how to unscale parameters determined from scaled data? I have spent days searching, with answers either specific to R programming or claiming it is not possible, and the few research papers I read on this had nothing of use. I may be searching the wrong fields/topics? If anyone has any insight or experience with this it would be GREATLY appreciated.

Message 1 of 11

Isn't it just a little algebra?  You've defined: 

X' = X-mean(X)

Y' = k*Y

 

Then you do your polynomial fit in terms of X', Y':

Y'_fit = a + b*X' + c*(X')^2 + d*(X')^3

 

So now don't you just substitute your definitions of X' and Y' into that fit equation?
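To make that substitution concrete, here is a minimal numeric check in Python for a quadratic (I can't paste G code here; the numbers a, b, c, m, k are made up, not from the thread):

```python
# Hypothetical numbers: a, b, c are the scaled-fit coefficients,
# m is the X offset (the mean), k is the Y multiplication factor.
a, b, c = 2.0, -1.5, 0.75
m, k = 10.0, 100.0

# Substituting X' = x - m and Y = Y'/k into Y' = a + b*X' + c*X'**2
# and collecting powers of x gives the unscaled coefficients directly:
A = (a - b*m + c*m**2) / k      # constant term
B = (b - 2*c*m) / k             # coefficient of x
C = c / k                       # coefficient of x**2

# Both forms agree at arbitrary x:
for x in (0.0, 3.7, 12.5):
    y_from_scaled_fit = (a + b*(x - m) + c*(x - m)**2) / k
    y_from_unscaled   = A + B*x + C*x**2
    assert abs(y_from_scaled_fit - y_from_unscaled) < 1e-12
```

The same expand-and-collect step works at any order; only the bookkeeping grows.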

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 11

Hi,

 

Unfortunately it's more complicated than that. The polynomial is of variable order (user selected) and can be as high as order 40, let's say. The Y is easy, as all I need to do is multiply all coefficients by k. The X is not, since the entire polynomial must be recentered. Do you know of an algorithm that determines the coefficients of a recentered polynomial of order n?
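For reference, the general recentering is just the binomial theorem applied term by term. A Python sketch (the function name and interface are mine, not from any NI library): it expands (1/k) * Σ cᵢ (x − m)ⁱ into plain powers of x in O(n²) operations. Be warned that near order 40 the binomial coefficients and powers of m become huge and cancel catastrophically in double precision, which is one reason high-order unscaled coefficients come out "all over the place".

```python
from math import comb

def recenter(scaled_coeffs, m, k=1.0):
    """Rewrite (1/k) * sum_i c_i * (x - m)**i as plain coefficients of x**j.

    scaled_coeffs[i] is the coefficient of (x - m)**i from the fit done
    on centered X data and Y data multiplied by k.
    """
    n = len(scaled_coeffs)
    out = [0.0] * n
    for i, ci in enumerate(scaled_coeffs):
        for j in range(i + 1):
            # (x - m)**i contributes comb(i, j) * (-m)**(i - j) to x**j
            out[j] += ci * comb(i, j) * (-m) ** (i - j)
    return [c / k for c in out]
```

For example, `recenter([1.0, 2.0, 3.0], m=2.0)` returns `[9.0, -10.0, 3.0]`, since 1 + 2(x−2) + 3(x−2)² = 9 − 10x + 3x².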

Message 3 of 11

No, I don't know of any particular algorithm.  But a polynomial of order up to 40?!?!?!?!?

 

What phenomenon could possibly require a 40th power polynomial to describe?!?!?!?!?

 

I'm not an expert in the field of data fitting, but I'd be looking at things like cubic splines before I used an overall polynomial fit that required an order more than 4 or 5.

 

 

-Kevin P

Message 4 of 11

This is a little tedious, but not hard, given that LabVIEW has polynomial composition. I happened to have some code I wrote several years ago. It fits a polynomial to some data, maps the X and Y data onto different intervals with a linear transformation, fits a polynomial to the new X' and Y' data, and then uses polynomial composition to map back to the original X and Y values. The two sets of coefficients are compared and shown to be the same: on the front panel of the attached VI, please look at the 'new coefficients (mapped)' and 'new coefficients (fit)' indicators for the same coefficient values.

-Jim

Message 5 of 11

I realized after looking at the example that the mapping was a bit confusing, so I rewrote it to be clearer. The attached VI generates X and Y data from a known polynomial and adds a little noise, then fits a polynomial to the X and Y data. The X data is mapped to [-1, 1] using Scale.vi, and the Y data is similarly mapped to [-1, 1]. Let the scaled X and Y data be called X' and Y'. The VI fits a polynomial to X' and Y', then uses polynomial composition to map the coefficients back.
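I don't have the VI in front of me here, but the composition step Jim describes can be sketched in text form (Python, since G code doesn't paste; the function name is mine). If the fit was done on X' = a·X + b, such as a linear map onto [-1, 1], composing the fitted polynomial with that affine map yields the coefficients in original X units:

```python
def compose_affine(p, a, b):
    """Coefficients (ascending order) of p evaluated at a*x + b.

    Horner-style composition: repeatedly multiply the running result
    by the linear polynomial (a*x + b) and add the next coefficient.
    """
    result = [0.0]
    for c in reversed(p):
        new = [0.0] * (len(result) + 1)
        for i, r in enumerate(result):
            new[i] += r * b          # constant part of the multiply
            new[i + 1] += r * a      # x part of the multiply
        new[0] += c
        result = new
    return result[:len(p)]           # trim the always-zero top term
```

For example, composing p(t) = 1 + t² with t = 2x + 1 gives `compose_affine([1.0, 0.0, 1.0], 2.0, 1.0)` → `[2.0, 4.0, 4.0]`, i.e. 2 + 4x + 4x².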

 

-Jim

Message 6 of 11
@solidw20 wrote:

However, because this is scientific data I need coefficients representative of the true values.


Polynomial coefficients are not scientific, except maybe some low-order terms. Doing a 40th-order polynomial fit is just plain silly. You can make a blackbox subVI that does the scaling, and a similar one that calculates f(x) based on the scaling and the scaled terms.
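One way to read the "blackbox" suggestion: never unscale the coefficients at all, and wrap the scaling into the evaluation instead. A rough Python sketch of such a wrapper (a LabVIEW subVI would do the same thing; the names here are mine):

```python
def make_model(scaled_coeffs, x_mean, y_scale):
    """Return f(x) in original units without ever unscaling coefficients.

    Evaluates the scaled-space polynomial at (x - x_mean) by Horner's
    method, then undoes the Y multiplication.
    """
    def f(x):
        t = x - x_mean
        y = 0.0
        for c in reversed(scaled_coeffs):
            y = y * t + c
        return y / y_scale
    return f
```

For example, with scaled coefficients [1, 2] (i.e. 1 + 2·t), x_mean = 5 and y_scale = 10, `make_model([1.0, 2.0], 5.0, 10.0)(6.0)` returns (1 + 2·1)/10 = 0.3. This avoids the numerical blow-up of expanding a high-order recentered polynomial.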

 

(If I were ever sent a scientific paper for review that lists 40 polynomial coefficients, in any scaling, I would probably reject it for overinterpretation.)

 

(What is the purpose of all this? My suspicion is that you are probably the same guy who had the recent thread deleted where I already explained some of the issues and spent a lot of time helping with code. If this is true, I will not participate in this thread or help out.)

Message 7 of 11

Hello, I'm not sure what you're referring to; this is my first time posting in this forum. I did follow another thread where I think it was actually you who suggested normalizing the data by subtracting the mean. The Y scaling was because the variance was too large. Either way, you may or may not participate in this thread; suit yourself. If you do intend to help, would you mind explaining what a blackbox subVI is?

Message 8 of 11

Hey Jim, 

 

It seems you did something similar to what I am intending. Could you attach the file saved for an older version of LabVIEW, though? I'm operating on LV 2016 due to its increased functionality.

Message 9 of 11

Here is the LabVIEW 2016 version of the VI. 

 

I agree with the concerns expressed by Kevin Price and altenbach regarding the order of your polynomials, if you are seriously considering 40th order. Scientific modeling aside, please be aware that the regression routines may not be accurate past 9th order or so, depending on your data.

 

-Jim

Message 10 of 11