
What use is "residue" in fitting VIs?

I've been using the linear fit VI for a while now and I still don't know what the "residue" output is for.

 

Why doesn't the VI output an array of residuals? I understand that there is an Express VI for fitting that does this, but that VI is very cumbersome and not really useful in the program I'm running.

 

But I'm curious. What use is the value of the "residue" anyway? Who uses it and for what? 

Message 1 of 10

@VariableRange wrote:

I've been using the linear fit VI for a while now and I still don't know what the "residue" output is for.




Well, the help describes it in detail: it is the weighted mean error of the fitted model. You use it to gauge the quality of the fit and, e.g., to compare fits.

 

For example, if a quadratic polynomial fit gives you a significantly lower residue than a linear fit, you are probably not dealing with a straight line but with some banana. 😉 If the residues are about the same, a linear fit model might be sufficient to describe the data in the simplest scenario.

Message 2 of 10

I know what it is, but I don't see how it's very useful. Sure, it's a rough gauge of the goodness of fit, but the number itself, as a gross average, is not very useful. An array of residuals as an output would be much more useful. Oh well, never mind me.

Message 3 of 10

@VariableRange wrote:

I know what it is, but I don't see how it's very useful. Sure, it's a rough gauge of the goodness of fit, but the number itself, as a gross average, is not very useful. An array of residuals as an output would be much more useful. Oh well, never mind me.


Well, you could simply subtract the best-fit array from the data array and you are already pretty close. 😉

 

(Why should the primitive do it for you even though most users don't need it most of the time? Just to waste memory and CPU?)

 

If you often need a certain modified functionality, create your own subVI with some extra code in it. That's what I do.
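As an illustration of that suggestion, here is a minimal sketch in Python rather than G (the data values are made up, and the closed-form line fit stands in for the Linear Fit VI):

```python
# Hypothetical example data (not from the thread)
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.1, 1.9, 3.2, 3.9]
n = len(x)

# Closed-form unweighted least-squares line: y ~ slope*x + intercept
mx = sum(x) / n
my = sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

best_fit = [slope * xi + intercept for xi in x]

# The array of residuals the OP wants: data minus best fit
residuals = [yi - fi for yi, fi in zip(y, best_fit)]

# The scalar "residue" in the unweighted case: the mean squared error
residue = sum(r * r for r in residuals) / n
```

The subtraction is one line once you have the best-fit array, which is exactly the kind of thing that fits naturally in a small wrapper subVI.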

Message 4 of 10

It's funny that anyone would think of Labview as being conservative of memory or CPU. 

Message 5 of 10

@VariableRange wrote:

It's funny that anyone would think of Labview as being conservative of memory or CPU. 


It would be even funnier if somebody didn't think that. 😄

 

Anyone can write a program that consumes 100% CPU, but why should LabVIEW prevent them from doing so, even if it is not really smart coding?

 

 

(from here)

 

CPU

Of course all modern computers have multiple CPU cores, and if the mathematical problem is large, we actually want to utilize all available resources. Due to the inherently parallel nature of LabVIEW, programs can be written to seamlessly scale to any number of available CPU cores. A program should run fine on an Intel Atom based netbook, but should take off like a rocket once it runs on, e.g., a dual Xeon with 16 hyperthreaded cores. Here are some benchmarks of my LabVIEW based spectral fitting program (e.g. the dual Xeon is >130x faster than the Atom!). And yes, if it is not in the process of fitting, the CPU use is negligible. 😉

 

We want: (1) near-zero CPU use if nothing needs to be done and (2) full use of all available CPU cores if needed. LabVIEW lets you do all that easily!
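The same two goals can be sketched outside LabVIEW; here is a hypothetical Python analogy (a blocking queue for the idle case, a thread pool for the parallel case, with a made-up workload):

```python
from concurrent.futures import ThreadPoolExecutor
import os
import queue

def chunk_sum(chunk):
    # CPU-bound stand-in for a real computation
    return sum(i * i for i in chunk)

jobs = queue.Queue()
jobs.put(list(range(1000)))

# Goal (1): a blocking get() sleeps instead of polling, so idle CPU use is ~0%
work = jobs.get()

# Goal (2): fan the work out across all available cores. (For pure-Python
# CPU-bound code a process pool would be needed because of the GIL; LabVIEW
# parallel loops scale directly, so this is only an analogy.)
n = os.cpu_count() or 1
chunks = [work[i::n] for i in range(n)]
with ThreadPoolExecutor(max_workers=n) as pool:
    total = sum(pool.map(chunk_sum, chunks))
```

The design point is the same in both environments: block on events when idle, and split large independent work into per-core chunks when busy.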

 

 

Memory

With some experience, memory use is very predictable, and there are many ways to minimize it (inlining, DVRs, in-place algorithms, smart coding, correct representations, etc.). Of course, anything that is shown on the front panel also needs data copies for the display and for the transfer buffer, etc. For this reason, one should keep large intermediary data on the block diagram (e.g. in a wire or shift register!) and not on the front panel (e.g. in local variables, etc.). Also keep the front panels of subVIs closed, and there should be few problems unless you use code elements (e.g. property nodes) that force the front panel to be loaded.
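The in-place idea is not LabVIEW-specific; as a hypothetical analogy in plain Python (made-up data), an in-place update reuses the existing container while an out-of-place operation allocates a new copy:

```python
# Hypothetical data, purely for illustration
data = [float(i) for i in range(1000)]
original_id = id(data)

# In-place: the same list object is mutated, no second large list survives
data[:] = [v * 2.0 for v in data]

# Out-of-place: a brand-new list is allocated alongside the original
copy = [v * 2.0 for v in data]
```

This is roughly the distinction the compiler's "inplacer" exploits on wires: when it can prove a buffer is no longer needed, it reuses it instead of copying.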

 

The "inplacer" in the LabVIEW compiler is extremely sophisticated, so you might want to read up here.

 

(If you work with very large data, get a massive computer and LabVIEW 64bit ;))

 

 

Of course you can probably do slightly better by doing everything in assembler (IF you are very skilled at it!), but it would probably take you asymptotically forever to complete the program... 😄

 

Message 6 of 10

@VariableRange wrote:

I know what it is, but I don't see how it's very useful. Sure, it's a rough gauge of the goodness of fit, but the number itself, as a gross average, is not very useful. An array of residuals as an output would be much more useful. Oh well, never mind me.


 

Not very useful?!?

 

 

Although I have been regarded as an engineer who is a subject matter expert in applying statistics to engineering problems, I am not a statistician. I did, however, use my electives in grad school to study applied statistics.

 

I would also not use LabVIEW for regression analysis or other statistical analysis unless I needed to dynamically use the results in a larger program. There are so many other tools for statistical analysis. STATISTICA does a nice job at most things statistical, but it is pricey, and the versions I have used (9.x) are awkward at best when performing stepwise linear regression. STATISTICA also allows a user to convert key/mouse strokes into a Visual BASIC script to automate analyses. JMP is the tool that most companies provide their engineers for all-around statistical analysis, and it provides an EXCELLENT interface for stepwise regression. I prefer the look-and-feel of STATISTICA over JMP, however. If you are just after performing regression on inputs and reviewing the outputs, Excel does this and does not require an additional software purchase. The canned analysis routines work nicely. Most of the time, I use Excel's matrix operations to do the math manually.

 

 

Now on to the matter of your post:

 

The help file that Altenbach referred you to identifies that "residue" is the mean square error (MSE) of the fit. The Wikipedia page for Linear Regression (http://en.wikipedia.org/wiki/Linear_regression) gives the following:

 


y_i = \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i
    = \mathbf{x}_i^{\mathrm{T}} \boldsymbol{\beta} + \varepsilon_i,
    \qquad i = 1, \ldots, n

The MSE is an estimate of the variance of the distribution of errors (the epsilons). How is this not useful?!? It is a measure of the amount of variation in your responses that is not explained by the regression model: the width of Altenbach's graphical "banana".
I saw in your other thread that you and Altenbach were discussing what weights to apply to your data. You need to know what your data looks like in order to determine what weights to apply, should you choose to use weighting factors. I cannot recall an instance, however, when I have used Weighted Least Squares analysis. More often, true outliers are eliminated from the analysis or the data is transformed so that extremes of the distribution do not have more leverage than other portions. Using a recommendation for data analysis you found on a web-based encyclopedia without determining whether the technique is proper for your data is poor engineering practice.
 
I apologize for this soap-box post. The misuse of statistical techniques is one of my pet peeves. Statistical analysis techniques are things that an engineer should have in his/her toolbox. Just as there are different screwdrivers and wrenches that require some knowledge to identify which to use, there are different regression techniques. Does it require a Phillips head or a flat head? Metric or SAE? Male or female? What size?
 
Sorry, boss. My model for the behavior of the Space Shuttle's booster O-rings was generated using this weighted least squares technique I found on Wikipedia that negated the stiffness of the material at low temperatures, because I wanted to reduce the leverage of data points at below-freezing temperatures. It's always warm in Florida, anyway. What? Seven people died while millions, including school children across the country and around the world, were watching on live TV?
 
Jeffrey Zola
Message 7 of 10

@Jeffrey_Zola wrote:

The help file that Altenbach referred you to identifies that "residue" is the mean square error (MSE) of the fit. 

Not quite, it also includes weighting.

Message 8 of 10

@altenbach wrote:

@Jeffrey_Zola wrote:

The help file that Altenbach referred you to identifies that "residue" is the mean square error (MSE) of the fit. 

Not quite, it also includes weighting.


But you wouldn't blindly apply weights because a sage member of an internet community pointed you to a Wikipedia article that identified that particular weighting, would you?

 

The "residue" appears to be the quantity that is minimized by the least squares fit. I therefore do not see how it can be regarded as "useless." And in my experience, I have never used weights. I have, however, used various transformations of both predictors and responses to allow for better fitting of the data and to reduce the number of outliers that need to be excluded. But I have always reviewed the results of the transformations to ensure that I was satisfied with what they were doing.

 

I agree with the OP's comment about wanting an output of residuals. Techniques to evaluate regression models include reviewing the distribution, order, etc., of the residuals.
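As a hypothetical sketch of such diagnostics (the residual values below are made up), two very simple checks are that the residuals are centered on zero and that they do not run in long same-sign streaks when taken in order:

```python
# Hypothetical residual array, purely for illustration
residuals = [0.02, -0.05, 0.04, -0.01, 0.03, -0.04, 0.01, -0.02]
n = len(residuals)

# Residuals from a least-squares fit should average out near zero
mean_r = sum(residuals) / n

# Count sign changes between consecutive residuals: a well-behaved
# sequence flips sign often, while long runs of one sign suggest a
# systematic trend the model is not capturing
sign_changes = sum(1 for a, b in zip(residuals, residuals[1:]) if a * b < 0)
```

In practice one would also plot residuals against the predictors and against fitted values, which is exactly why an array output is more informative than a single scalar.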

 

Jeff

Jeffrey Zola
Message 9 of 10

@Jeffrey_Zola wrote:

But you wouldn't blindly apply weights because a sage member of an internet community pointed you to a Wikipedia article that identified that particular weighting, would you?

Well, nobody said anything about "blindly". There are many realistic scenarios where useful weights can be determined exactly. For example, if you have an array of photon counters, the counting (Poisson) variance in each channel is proportional to the count, so the weight is inversely proportional to the count. If you have a set of data from the same instrument where each point was signal-averaged for a different duration, you know the relative weights. Some instruments have inherent known variations in precision, such as an angular dependence.
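Such exactly-known weights slot directly into the weighted normal equations; here is a minimal Python sketch (the data and variances are invented for illustration, with weight = 1/variance as in the counting case above):

```python
# Hypothetical data roughly following y = 2x, with known per-point variances
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.1, 5.9, 8.0]
var = [4.0, 16.0, 36.0, 64.0]
w = [1.0 / v for v in var]          # weight = 1 / variance

# Weighted normal equations for y ~ slope*x + intercept
S = sum(w)
Sx = sum(wi * xi for wi, xi in zip(w, x))
Sy = sum(wi * yi for wi, yi in zip(w, y))
Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))

slope = (S * Sxy - Sx * Sy) / (S * Sxx - Sx * Sx)
intercept = (Sy - slope * Sx) / S

# A weighted mean squared error, analogous to the VI's "residue" output
residue = sum(wi * (yi - slope * xi - intercept) ** 2
              for wi, xi, yi in zip(w, x, y)) / S
```

The noisier points (larger variance) simply pull less on the fit, which is the whole point of weighting when the variances are genuinely known.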

Message 10 of 10