#include <boost/math/special_functions/gamma.hpp>
namespace boost{ namespace math{

template <class T>
calculated-result-type lgamma(T z);

template <class T, class Policy>
calculated-result-type lgamma(T z, const Policy&);

template <class T>
calculated-result-type lgamma(T z, int* sign);

template <class T, class Policy>
calculated-result-type lgamma(T z, int* sign, const Policy&);

}} // namespaces
The lgamma function is defined as the natural logarithm of the absolute value of the gamma function:

lgamma(z) = log(|tgamma(z)|)
The second form of the function takes a pointer to an integer, which if non-null is set on output to the sign of tgamma(z).
The final Policy argument is optional and can be used to control the behaviour of the function: how it handles errors, what level of precision to use etc. Refer to the policy documentation for more details.
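For example, a minimal sketch showing the plain call, the sign-recovering overload, and an explicitly supplied (default-constructed) policy might look like this:

#include <boost/math/special_functions/gamma.hpp>
#include <boost/math/policies/policy.hpp>
#include <iostream>

int main()
{
   // log(|tgamma(4.5)|):
   double l1 = boost::math::lgamma(4.5);

   // Also recover the sign of tgamma(-2.5) via the int* overload:
   int sign = 0;
   double l2 = boost::math::lgamma(-2.5, &sign);

   // Pass a (default) policy explicitly:
   double l3 = boost::math::lgamma(4.5, boost::math::policies::policy<>());

   std::cout << l1 << "\n" << l2 << " (sign of tgamma: " << sign << ")\n" << l3 << "\n";
}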
There are effectively two versions of this function internally: a fully generic version that is slow, but reasonably accurate, and a much more efficient approximation that is used where the number of digits in the significand of T corresponds to a certain Lanczos approximation. In practice, any built-in floating-point type you will encounter has an appropriate Lanczos approximation defined for it. It is also possible, given enough machine time, to generate further Lanczos approximations using the program libs/math/tools/lanczos_generator.cpp.
The return type of these functions is computed using the result type calculation rules: the result is of type double if T is an integer type, or type T otherwise.
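These rules can be illustrated with a few compile-time checks (a sketch only, not part of the library's own tests):

#include <boost/math/special_functions/gamma.hpp>
#include <type_traits>

// An integer argument is treated as double:
static_assert(std::is_same<decltype(boost::math::lgamma(2)), double>::value, "int -> double");
// A floating-point argument returns its own type:
static_assert(std::is_same<decltype(boost::math::lgamma(2.0f)), float>::value, "float -> float");
static_assert(std::is_same<decltype(boost::math::lgamma(2.0L)), long double>::value, "long double -> long double");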
The following table shows the peak errors (in units of epsilon) found on various platforms with various floating point types, along with comparisons to various other libraries. Unless otherwise specified any floating point type that is narrower than the one shown will have effectively zero error.
Note that while the relative errors near the positive roots of lgamma are very low, the lgamma function has an infinite number of irrational roots for negative arguments: very close to these negative roots only a low absolute error can be guaranteed.
Table 6.3. Error rates for lgamma

| | Microsoft Visual C++ version 12.0 | GNU C++ version 5.1.0 | GNU C++ version 5.1.0 | Sun compiler version 0x5130 |
|---|---|---|---|---|
| factorials | Max = 0.914ε (Mean = 0.167ε) | Max = 0ε (Mean = 0ε) | Max = 0.991ε (Mean = 0.311ε) | Max = 0.991ε (Mean = 0.383ε) |
| near 0 | Max = 0.964ε (Mean = 0.462ε) | Max = 0ε (Mean = 0ε) | Max = 1.42ε (Mean = 0.566ε) | Max = 1.42ε (Mean = 0.566ε) |
| near 1 | Max = 0.867ε (Mean = 0.468ε) | Max = 0ε (Mean = 0ε) | Max = 0.948ε (Mean = 0.36ε) | Max = 0.866ε (Mean = 0.355ε) |
| near 2 | Max = 0.591ε (Mean = 0.159ε) | Max = 0ε (Mean = 0ε) | Max = 0.878ε (Mean = 0.242ε) | Max = 0.878ε (Mean = 0.241ε) |
| near -10 | Max = 4.22ε (Mean = 1.33ε) | Max = 0ε (Mean = 0ε) | Max = 3.81ε (Mean = 1.01ε) | Max = 3.81ε (Mean = 1.01ε) |
| near -55 | Max = 0.821ε (Mean = 0.419ε) | Max = 0ε (Mean = 0ε) | Max = 0.821ε (Mean = 0.513ε) | Max = 1.59ε (Mean = 0.587ε) |
The main tests for this function involve comparisons against the logs of the factorials, which can be independently calculated to very high accuracy.
Random tests in key problem areas are also used.
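The reference values are computed independently to much higher accuracy than the tests are run at; the idea, reduced to a sketch in plain double arithmetic (so the reference sum here is itself only accurate to roughly double precision), looks like this:

#include <boost/math/special_functions/gamma.hpp>
#include <cmath>
#include <iostream>

int main()
{
   // lgamma(n + 1) == log(n!) == log(2) + log(3) + ... + log(n).
   for (int n : {5, 20, 100})
   {
      double sum_of_logs = 0;
      for (int k = 2; k <= n; ++k)
         sum_of_logs += std::log(static_cast<double>(k));

      std::cout << "n = " << n
                << ", difference = " << (boost::math::lgamma(n + 1.0) - sum_of_logs) << "\n";
   }
}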
The generic version of this function is implemented using Stirling's approximation for large arguments:
For small arguments, the logarithm of tgamma is used.
For negative z the logarithm version of the reflection formula is used:
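For reference, this follows from taking logarithms of absolute values in Euler's reflection formula Γ(z)Γ(1-z) = π / sin(πz), which gives the identity:

lgamma(z) + lgamma(1 - z) = log(π) - log(|sin(πz)|)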
For types of known precision, the Lanczos approximation is used; a traits class boost::math::lanczos::lanczos_traits maps type T to an appropriate approximation. The logarithmic version of the Lanczos approximation is:
Where L_e,g is the Lanczos sum, scaled by e^g.
As before the reflection formula is used for z < 0.
When z is very near 1 or 2, then the logarithmic version of the Lanczos approximation suffers very badly from cancellation error: indeed for values sufficiently close to 1 or 2, arbitrarily large relative errors can be obtained (even though the absolute error is tiny).
For types with up to 113 bits of precision (up to and including 128-bit long doubles), root-preserving rational approximations devised by JM are used over the intervals [1,2] and [2,3]. Over the interval [2,3] the approximation form used is:
lgamma(z) = (z-2)(z+1)(Y + R(z-2));
Where Y is a constant, and R(z-2) is the rational approximation, optimised so that its absolute error is tiny compared to Y. In addition, small values of z greater than 3 can be handled by argument reduction using the recurrence relation:
lgamma(z+1) = log(z) + lgamma(z);
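A sketch of that reduction, with std::lgamma standing in as a placeholder for the [2,3] rational approximation (purely for illustration), might be:

#include <cmath>

// Reduce z > 3 down into [2, 3] using
// lgamma(z) = log(z - 1) + lgamma(z - 1),
// which is the recurrence lgamma(z + 1) = log(z) + lgamma(z)
// read in the other direction.
double lgamma_by_reduction(double z)
{
   double shift = 0;
   while (z > 3)
   {
      z -= 1;
      shift += std::log(z);
   }
   // Placeholder: the real code would call the [2,3] rational approximation here.
   return shift + std::lgamma(z);
}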
Over the interval [1,2] two approximations have to be used; the one for small z uses:
lgamma(z) = (z-1)(z-2)(Y + R(z-1));
Once again Y is a constant, and R(z-1) is optimised for low absolute error compared to Y. For z > 1.5 the above form wouldn't converge to a minimax solution but this similar form does:
lgamma(z) = (2-z)(1-z)(Y + R(2-z));
Finally for z < 1 the recurrence relation can be used to move to z > 1:
lgamma(z) = lgamma(z+1) - log(z);
Note that while this involves a subtraction, it appears not to suffer from cancellation error: as z decreases from 1 the -log(z) term grows positive much more rapidly than the lgamma(z+1) term becomes negative. So in this specific case, significant digits are preserved, rather than cancelled.
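A small numerical sketch shows this behaviour directly (using std::lgamma for both sides, purely for illustration):

#include <cmath>
#include <iostream>

int main()
{
   // For 0 < z < 1: lgamma(z) = lgamma(z + 1) - log(z).
   // As z decreases towards 0, -log(z) grows large and positive while
   // lgamma(z + 1) stays small, so the subtraction does not cancel.
   for (double z : {0.5, 0.1, 0.001})
   {
      std::cout << "z = " << z
                << "  lgamma(z+1) = " << std::lgamma(z + 1)
                << "  -log(z) = " << -std::log(z)
                << "  via recurrence = " << (std::lgamma(z + 1) - std::log(z))
                << "  direct = " << std::lgamma(z) << "\n";
   }
}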
For other types which do have a Lanczos approximation defined for them, the current solution is as follows: imagine we balance the two terms in the Lanczos approximation by dividing the power term by its value at z = 1, and then multiplying the Lanczos coefficients by the same value. Now each term will take the value 1 at z = 1, and we can rearrange the power terms in terms of log1p. Likewise, if we subtract 1 from the Lanczos sum part (algebraically, by subtracting the value of each term at z = 1), we obtain a new summation that can also be fed into log1p. Crucially, all of the terms tend to zero as z -> 1:
The C_k terms in the above are the same as in the Lanczos approximation.
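The gain from feeding the rearranged terms through log1p can be seen in isolation; the following sketch (unrelated to the actual Lanczos coefficients) contrasts log1p(x) with the naive log(1 + x) for a tiny x:

#include <cmath>
#include <iomanip>
#include <iostream>

int main()
{
   // Forming 1 + x first rounds away the low-order bits of x;
   // log1p(x) avoids that rounding and keeps full relative accuracy.
   double x = 1e-12;
   std::cout << std::setprecision(17)
             << "log1p(x)   = " << std::log1p(x) << "\n"
             << "log(1 + x) = " << std::log(1 + x) << "\n";
}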
A similar rearrangement can be performed at z = 2: