LabVIEW


Inverse Error Function

Any ideas how I can calculate the inverse error function in LabVIEW 7?

I know you can do this in LabVIEW 8 using a Mathscript node, but that is not an option here. And since this is going to be running on an RT target, I can't use any other script engine either.

Surely this has been done before by someone out there? Or maybe there's just some fancy mathematical trick involving other functions included in LabVIEW 7? I suppose I could just use the Maclaurin expansion if necessary.
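(For reference, the Maclaurin-series route mentioned here looks roughly like this in text form - Python as a stand-in for a LabVIEW diagram, with the standard coefficient recurrence for inverse erf; the function name and term count are my own choices, not anything from the thread.)

```python
import math

def erfinv_series(z, n_terms=40):
    """Approximate the inverse error function via its Maclaurin series.

    Standard recurrence for the series coefficients:
        c_0 = 1,  c_k = sum_{m=0}^{k-1} c_m * c_{k-1-m} / ((m+1)(2m+1))
    with  erfinv(z) = sum_k c_k/(2k+1) * (sqrt(pi)/2 * z)^(2k+1).
    Converges for |z| < 1, but increasingly slowly as |z| -> 1.
    """
    c = [1.0]
    for k in range(1, n_terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1))
                     for m in range(k)))
    t = math.sqrt(math.pi) / 2.0 * z
    return sum(ck / (2 * k + 1) * t ** (2 * k + 1)
               for k, ck in enumerate(c))
```

Round-tripping through erf (Python's `math.erf` standing in for LabVIEW's) is an easy sanity check: `math.erf(erfinv_series(0.5))` should come back very close to 0.5.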

Message Edited by kehander on 03-10-2006 03:20 PM

Message 1 of 8
Since LabVIEW has erf(x), you could just use a quick Newton-Raphson to implement the inverse function. Seems to converge very fast (4 iterations for accuracy=1e-8, input=0.5 and initial guess=0). 🙂
 
See attached (LabVIEW 7.0). I patched this together from scratch in a few minutes, so please verify correct operation. 😉
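(The attached VI can't be reproduced here, but the described approach translates to text roughly as follows - a Python sketch with a finite-difference slope, as the quick draft uses; the iteration cap, tolerance, and names are my additions, so verify correct operation just as the post asks.)

```python
import math

def erf_inverse(y, guess=0.0, tol=1e-8, dx=1e-6, max_iter=50):
    """Invert erf(x) = y by Newton-Raphson with a finite-difference slope."""
    x = guess
    for _ in range(max_iter):
        residual = math.erf(x) - y
        if abs(residual) < tol:
            return x
        # Central finite difference stands in for the analytic derivative.
        slope = (math.erf(x + dx) - math.erf(x - dx)) / (2.0 * dx)
        x -= residual / slope
    raise ArithmeticError("did not converge within max_iter iterations")
```

From a guess of 0, `erf_inverse(0.5)` settles to about 0.4769 in a handful of iterations, consistent with the convergence reported above.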
Message 2 of 8
Here's a slightly better version that shows more digits in the controls and indicators.
Message 3 of 8
Whoa, neat trick. I suppose that would work, but I'm thinking the Maclaurin series would end up being just as accurate and substantially less computationally intensive. Did you get that algorithm from somewhere in particular, or was it mostly off the top of your head?
Message 4 of 8
Well, Newton-Raphson is a very simple and well-known algorithm; see the Wikipedia entry. I looked into it quite a while ago while devising a solution for one of the LabVIEW coding challenges: finding the N'th root of integers with up to 10,000 digits. 😉
 
I simply took the skeleton code of one of my drafts and put the erf(x) in it. 🙂
 
Yes, there are a few series approximations. To see if they are faster you would just need to make a quick benchmark.
 
If you don't need high precision, you could even just use a lookup table with a few dozen entries, then interpolate linearly.
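(In text form, that lookup-table idea might look like this - a Python sketch, exploiting the fact that erf is monotonic so its tabulated values can be bisected; the table range, spacing, and names are my assumptions.)

```python
import bisect
import math

# Build a small erf table once: 61 entries over [-3, 3].
XS = [-3.0 + 0.1 * i for i in range(61)]
YS = [math.erf(x) for x in XS]  # sorted, since erf is monotonic

def erf_inverse_lut(y):
    """Low-precision inverse erf via table lookup + linear interpolation."""
    i = bisect.bisect_left(YS, y)
    if i <= 0 or i >= len(YS):
        raise ValueError("y outside the tabulated range")
    x0, x1 = XS[i - 1], XS[i]
    y0, y1 = YS[i - 1], YS[i]
    # Linear interpolation between the two bracketing table entries.
    return x0 + (y - y0) * (x1 - x0) / (y1 - y0)
```

With 0.1 spacing this lands within about a milliunit of the true inverse near the middle of the range, which is plenty for the "don't need high precision" case.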
 
Don't underestimate Newton-Raphson, it is very efficient.
 
 

Message Edited by altenbach on 03-13-2006 09:40 AM

Message 5 of 8
Just a few remarks about the code in "InvErf_1.vi":


1) Because of the Fundamental Theorem of Calculus, we can compute an explicit derivative here, and there is no need for resorting to finite difference ratios.


In other words, because of the FTC, the derivative of erf(x) is known to be

[2 / sqrt(pi)] exp(-x²)
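(In code form, that analytic slope drops straight into the Newton update - a Python sketch with `math.erf` standing in for LabVIEW's erf; the tolerance and names are mine.)

```python
import math

def derf(x):
    """Exact derivative of erf(x), via the Fundamental Theorem of Calculus."""
    return 2.0 / math.sqrt(math.pi) * math.exp(-x * x)

def erf_inverse_exact_slope(y, x=0.0, tol=1e-12, max_iter=50):
    """Newton-Raphson on erf(x) - y using the analytic derivative."""
    for _ in range(max_iter):
        residual = math.erf(x) - y
        if abs(residual) < tol:
            return x
        x -= residual / derf(x)
    raise ArithmeticError("did not converge within max_iter iterations")
```

No finite-difference increment to tune, and one fewer erf evaluation per iteration.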

2) It is very easy to create [admittedly pathological] functions which cause an infinite loop in Newton-Raphson. For instance, you can create a "Twin Peaks" scenario with a zero between the two peaks, but with slopes near each peak which point off towards the other peak, so that if you make the mistake of starting immediately beneath one of those special points, Newton-Raphson will just bounce you back and forth forever between the two peaks, and you'll never work your way down to the zero in the valley.


Bottom line: because Newton-Raphson can give you infinite loops [even in the presence of a known zero], it's imperative that you put some sort of upper bound on the number of iterations in your "while" loop so that your program won't spin out of control if it runs into this sort of pathological behavior.


3) This point is very subtle, but bear with me: Because of the [ABSOLUTELY MADDENING!!!] nature of floating-point numbers [as opposed to fixed-point numbers], THE GRANULARITY OF FLOATING-POINT NUMBERS IS NON-CONSTANT!!!


As you move up and down the "Real Axis", the granularity of floating-point numbers changes rather drastically - down around zero, a floating-point implementation will have fairly good [small, tight] granularity, but as you move out towards positive infinity, the granularity quickly bulges way out to the point that it becomes useless for any serious numerical computations.
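(This variable granularity is easy to see directly - a Python sketch using `math.ulp`, which reports the spacing between adjacent doubles and requires Python 3.9+; the magnitudes chosen are just illustrative.)

```python
import math

# One "unit in the last place" - the gap to the next representable double -
# grows with magnitude: tight near 1.0, coarse out at 1e16.
print(math.ulp(1.0))   # 2**-52, about 2.22e-16
print(math.ulp(1e16))  # 2.0 -- any increment below 1.0 is simply lost
```

So a constant "accuracy" of, say, 1e-8 is meaningful near the origin but smaller than the representable spacing out at 1e16, exactly the trap described above.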


In your algorithm, you've got both a constant "accuracy" and a constant "derivative increment", which shouldn't be much of a problem in the general vicinity of regions of known granularity. But if you find yourself pushed out into regions of possibly lousy [large, loose] granularity, then it's easy to construct examples where an algorithm like yours could [at least in theory] find itself in an infinite loop, because the local granularity of the floating-point numbers is much larger than the constant increments against which you are comparing differences of those floating-point numbers. For instance, it's easy to fiddle with the "Twin Peaks" example above so that an approach like yours converges in regions of tight floating-point granularity, but when translated out to regions of loose floating-point granularity, finds itself locked in an infinite loop.


In the case of Newton-Raphson as it's used in the Inverse Function Theorem, this is particularly infuriating because you may not know a priori where the inverse came from - it could be that the function value lives in a region with very tight granularity of floating-point numbers, but that the inverse lives way off in some faraway region with very loose granularity.


Anyway, if you ever get really serious about numerical analysis, then you need to think about compensating for variable granularity in an algorithm like this [or proving a priori that you are necessarily confined to regions of tight granularity, so that any potential problems with granularity can be ignored].


4) Finally, while erf(x) doesn't have the kinds of "Twin Peaks" pathologies I used above [erf(x) being monotonically increasing], it is nevertheless the case that in both the positive and negative directions, erf(x) VERY QUICKLY approaches its horizontal asymptotes.


Horizontal asymptotes, in turn, mean that the slopes f'(xn) are heading down towards zero [as above, by the FTC, the slopes decay on the order of exp(-x²)], and at some point these slopes could very well be rounded down to zero [and with non-constant granularity of the floating-point numbers, it's a real pain-in-the-neck to know exactly when the metal will decide to round the slopes down to zero].


And once you get a zero derivative in Newton-Raphson & the Inverse Function Theorem, you've got one heckuva mess on your hands.


So if you find yourself trying to compute the inverse error function for error function values down towards -1, or up towards +1, then you need to be very careful that the metal doesn't round your derivatives down to zero and completely ruin your algorithm.
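(The rounding-to-zero hazard is concrete - a Python sketch; the cutoff magnitude and the guard are my additions. By x = 28 the analytic slope has left even the subnormal range and is exactly 0.0, so an unguarded Newton step would divide by zero.)

```python
import math

def derf(x):
    """Analytic slope of erf; underflows to exactly 0.0 for large |x|."""
    return 2.0 / math.sqrt(math.pi) * math.exp(-x * x)

def guarded_newton_step(x, y):
    """One Newton step toward erf(x) = y, refusing to divide by a dead slope."""
    slope = derf(x)
    if slope == 0.0:
        raise ZeroDivisionError("slope underflowed to zero; clamp the domain")
    return x - (math.erf(x) - y) / slope
```

In practice this means clamping the search domain, or rejecting inputs too close to ±1, before the iteration ever sees a zero slope.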


Message 6 of 8

Thanks for the detailed analysis. Yes, this was only a quick draft, and I agree it needs a few tweaks for a final version, for example a limit on the number of iterations. My version had one, but I dumbed it down for this thread for simplicity because erf is so well behaved. In newer LabVIEW versions this is most easily done with a FOR loop with a conditional terminal: wire e.g. a 20 to N and stop early if the condition is met, generating an error if the condition was never met.

 

I agree that using analytic derivatives is preferred if you know them, but that's not always the case, and it really complicates the scalability. You want to be able to use it on a "blackbox y=f(x)" VI even if you don't know the formula used; sometimes analytical derivatives aren't possible at all. Ideally, you want a multipurpose subVI that accepts a VI reference and uses the Call By Reference node inside - you don't want to rewrite the entire solver for each model function. Basically, we want a subVI that takes (1) a y value input, (2) a guess input, and (3) a VI reference (pointing to a VI with a DBL input and a DBL output and possibly error handling). The outputs would be (4) x = f⁻¹(y), (5) error, and possibly (6) y = f(x) for verification.
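(As a textual analog of that "subVI taking a VI reference" design - a Python sketch where a plain callable plays the role of the strictly typed VI reference; the signature and defaults are my assumptions.)

```python
import math

def invert(f, y, guess=0.0, tol=1e-10, dx=1e-6, max_iter=50):
    """Invert an arbitrary black-box y = f(x) by capped Newton-Raphson.

    f is any callable DBL -> DBL, the textual analog of a VI reference.
    Returns (x, f(x)) so the caller can verify the round trip, mirroring
    outputs (4) and (6) described above; non-convergence raises an error.
    """
    x = guess
    for _ in range(max_iter):
        residual = f(x) - y
        if abs(residual) < tol:
            return x, f(x)
        # Finite-difference slope, since the black box exposes no derivative.
        slope = (f(x + dx) - f(x - dx)) / (2.0 * dx)
        x -= residual / slope
    raise ArithmeticError("no convergence: bad guess or pathological f")

# The same solver inverts any model function without being rewritten:
x, y_check = invert(math.erf, 0.5)
```

Swapping `math.erf` for `math.tanh` (or any other monotonic black box) needs no change to the solver, which is the whole point of the VI-reference design.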

 

For mathematical proofs, you want near-infinite precision, but real-world signal processing is typically limited to around a dozen bits of information content, and approximations are entirely sufficient.

 

If you feel up to it, please write a "fuller featured" version and post it here. 😉

Message 7 of 8

> 'If you feel up to it, please write a "fuller featured" version and post it here.'

 

You know, I've been toying with that idea, but just thinking about trying to compensate for the nonconstancy of the granularity of the floating-point numbers gives me the heebie-jeebies - that could take a long, long time [and then there is the question of whether the implementation is using 64-bit floats or 80-bit floats, and whether you can even determine the answer to that question from within the confines of the LabVIEW runtime environment alone].

 

BTW, if you ever want to get really depressed about this stuff, then spend a while at this page:

http://www.cs.berkeley.edu/~wkahan/ 

In particular, don't read this paper if you want to sleep well at night:

http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

PS: I've got a little "Twin Peaks" example which I might be able to post.

 

PPS: There's something else which occurred to me this afternoon which is really bothering me now - maybe I should start a new thread about it - but I'm wondering what LabVIEW does with the Waveform data type if the initial timestamp pushes the time domain out to the point that the "dt" in the Waveform is smaller than the granularity of the surrounding floating-point numbers - i.e. what happens to a Waveform if "t + dt" is indistinguishable [as a floating-point number] from "t"?
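(That "t + dt" worry is easy to demonstrate with plain doubles - a Python sketch; the magnitudes are hypothetical, chosen only so that dt falls below the local floating-point spacing.)

```python
# With an absolute timestamp around 1e16, the spacing between adjacent
# doubles is 2.0, so a sub-second dt is swallowed whole:
t0 = 1.0e16
dt = 0.5
print(t0 + dt == t0)  # True: "t + dt" is indistinguishable from "t"
```

Whether LabVIEW's Waveform type actually hits this depends on how it represents t0 and dt internally, which the sketch deliberately doesn't assume.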

 

Ugh - God, I hate numerical analysis - it will absolutely drive you insane if you start thinking about all of this stuff.

 

Message 8 of 8