6. Minimum Maximum Error

Linear Least Squares is a popular optimization method because it is well understood mathematically and can be implemented in a numerically stable and easy fashion (assuming that you consider performing a singular value decomposition to be easy).

However, minimizing the sum of squared errors is not necessarily the best solution for all applications. In particular, for approximating the square root, minimizing the maximum error makes more sense, since it relates directly to the actual error one will encounter.

An analytical series expansion using Chebyshev polynomials could be used to approximate a minmax solution, but a direct numerical solution seems easier (assuming that you consider multidimensional non-linear function optimization to be easy).
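One way to carry out such a direct numerical solution is sketched below, under a few assumptions (a [1,2) mantissa domain, a 4096-point sample grid, and the TA2 coefficients as the starting point): minimize the sampled maximum error with a crude shrinking-step coordinate search. A Remez exchange or a general-purpose optimizer would be the more robust choice.

  /* Rough sketch of a "direct numerical solution": minimize the sampled
     maximum error of a + f*(b + f*c) against sqrt(f) with a crude
     shrinking-step coordinate search.  The [1,2) mantissa domain, the grid
     density and the TA2 starting point are assumptions of this sketch. */
  #include <stdio.h>
  #include <math.h>

  #define SAMPLES 4096

  static double maxerr(double a, double b, double c)
  {
      double worst = 0.0;
      for (int i = 0; i <= SAMPLES; i++) {
          double f = 1.0 + (double)i / SAMPLES;            /* f in [1,2] */
          double e = fabs(a + f * (b + f * c) - sqrt(f));
          if (e > worst) worst = e;
      }
      return worst;
  }

  int main(void)
  {
      double p[3] = { 0.375, 0.75, -0.125 };   /* start from TA2; note c is
                                                  negative in this form     */
      double best = maxerr(p[0], p[1], p[2]);

      for (double step = 0.01; step > 1e-12; step *= 0.5) {
          int improved = 1;
          while (improved) {
              improved = 0;
              for (int k = 0; k < 3; k++) {
                  for (int s = -1; s <= 1; s += 2) {
                      double q[3] = { p[0], p[1], p[2] };
                      q[k] += s * step;
                      double e = maxerr(q[0], q[1], q[2]);
                      if (e < best) { best = e; p[k] = q[k]; improved = 1; }
                  }
              }
          }
      }
      printf("g = %.17g + f*(%.17g + f*%.17g), max error %.3g\n",
             p[0], p[1], p[2], best);
      return 0;
  }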

The 2nd degree polynomial which minimizes the maximum error between g and sqrt(f) is:

  g = 0.37586120206482448 + f*(0.72507003492630417 - f*0.10425346659441884)  MinMax

For comparison, the polynomials previously considered are:

  g = 0.38190217717175667 + f*(0.71612039556246465 - f*0.10126647163745968)  LLSQ

  g = 0.375               + f*(0.75                - f*0.125)                TA2

  g = 0.5                 + f*0.5                                            TA1

  g = 0.5                 + f*0.45710678118654757                            RP1

The MinMax Error plot looks the same as the LLSQ plot, so it is not shown.
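A quick way to make the comparison concrete is to sample each polynomial's worst-case deviation from sqrt(f) over the assumed [1,2) mantissa domain, for example:

  /* Sanity check: worst-case deviation of each fit g = a + f*(b - f*c)
     from sqrt(f), sampled over the assumed [1,2) mantissa domain. */
  #include <stdio.h>
  #include <math.h>

  static double poly_maxerr(double a, double b, double c)
  {
      double worst = 0.0;
      for (int i = 0; i <= 4096; i++) {
          double f = 1.0 + i / 4096.0;
          double e = fabs(a + f * (b - f * c) - sqrt(f));
          if (e > worst) worst = e;
      }
      return worst;
  }

  int main(void)
  {
      printf("MinMax %.6g\n", poly_maxerr(0.37586120206482448, 0.72507003492630417, 0.10425346659441884));
      printf("LLSQ   %.6g\n", poly_maxerr(0.38190217717175667, 0.71612039556246465, 0.10126647163745968));
      printf("TA2    %.6g\n", poly_maxerr(0.375, 0.75, 0.125));
      printf("TA1    %.6g\n", poly_maxerr(0.5, 0.5, 0.0));
      printf("RP1    %.6g\n", poly_maxerr(0.5, 0.45710678118654757, 0.0));
      return 0;
  }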

If Goldschmidt's Method is to be used to refine the square-root estimate, we instead need an initial approximation g ~= 1/sqrt(f), which leads to the following minmax 2nd degree polynomial:

  g = 1.92746778475579550 + f*(-1.23591106754666491 + f*0.31909704434158537)

  and 1/sqrt(x) ~= ldexp( g, -e/2 ) = 1/sqrt(f*2^e)
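One way the pieces might be combined, assuming the exponent e is forced to be even (so that e/2 is an integer and f lands in [0.5,2)) and using a single Goldschmidt step to refine the estimate:

  /* Sketch of the whole pipeline: force the exponent even, evaluate the
     1/sqrt(f) polynomial, apply one Goldschmidt step, then rescale with
     ldexp.  The even-exponent split and the single refinement step are
     assumptions of this sketch. */
  #include <stdio.h>
  #include <math.h>

  static double approx_sqrt(double x)
  {
      int e;
      double f = frexp(x, &e);            /* x = f * 2^e, f in [0.5,1)     */
      if (e & 1) { f *= 2.0; e -= 1; }    /* make e even, so f in [0.5,2)  */

      /* initial estimate y ~= 1/sqrt(f) from the minmax polynomial above */
      double y = 1.92746778475579550 +
                 f * (-1.23591106754666491 + f * 0.31909704434158537);

      /* one Goldschmidt iteration: g -> sqrt(f), h -> 1/(2*sqrt(f)) */
      double g = f * y, h = 0.5 * y;
      double r = 0.5 - g * h;
      g += g * r;                         /* update h too if iterating again */

      return ldexp(g, e / 2);             /* sqrt(x) = sqrt(f) * 2^(e/2)   */
  }

  int main(void)
  {
      for (double x = 0.01; x < 1000.0; x *= 3.7)
          printf("x = %-10g approx = %-12.9g libm = %-12.9g\n",
                 x, approx_sqrt(x), sqrt(x));
      return 0;
  }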