%module "Math::GSL::Roots"
%include "gsl_typemaps.i"
%include "typemaps.i"
%{
    #include "gsl/gsl_types.h"
    #include "gsl/gsl_roots.h"
%}
%include "gsl/gsl_types.h"
%include "gsl/gsl_roots.h"
%perlcode %{
@EXPORT_OK = qw/
               gsl_root_fsolver_alloc
               gsl_root_fsolver_free
               gsl_root_fsolver_set
               gsl_root_fsolver_iterate
               gsl_root_fsolver_name
               gsl_root_fsolver_root
               gsl_root_fsolver_x_lower
               gsl_root_fsolver_x_upper
               gsl_root_fdfsolver_alloc
               gsl_root_fdfsolver_set
               gsl_root_fdfsolver_iterate
               gsl_root_fdfsolver_free
               gsl_root_fdfsolver_name
               gsl_root_fdfsolver_root
               gsl_root_test_interval
               gsl_root_test_residual
               gsl_root_test_delta
               $gsl_root_fsolver_bisection
               $gsl_root_fsolver_brent
               $gsl_root_fsolver_falsepos
               $gsl_root_fdfsolver_newton
               $gsl_root_fdfsolver_secant
               $gsl_root_fdfsolver_steffenson
             /;

%EXPORT_TAGS = ( all => [ @EXPORT_OK ] );

__END__
=head1 NAME

Math::GSL::Roots - Routines for finding roots of arbitrary one-dimensional functions.

=head1 SYNOPSIS

    use Math::GSL::Roots qw/:all/;

=head1 DESCRIPTION

Here is a list of all the functions in this module (a short usage sketch follows the list):

=over
=item * C<gsl_root_fsolver_alloc($T)> - This function returns a pointer to a newly allocated instance of a solver of type $T. $T must be one of the solver-type constants included with this module. If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of $GSL_ENOMEM.

=item * C<gsl_root_fsolver_free($s)> - This function frees all the memory associated with the solver $s.

=item * C<gsl_root_fsolver_set($s, $f, $x_lower, $x_upper)> - This function initializes, or reinitializes, an existing solver $s to use the function $f and the initial search interval [$x_lower, $x_upper]. $f must be a code reference taking a single argument, of the form C<sub { my $x = shift; ... }>. For example, C<sub { my $x = shift; ($x-3.2)**3 }> is a valid value for $f.

=item * C<gsl_root_fsolver_iterate($s)> - This function performs a single iteration of the solver $s. If the iteration encounters an unexpected problem then an error code will be returned (Math::GSL::Errno must be loaded to access these constants):

    $GSL_EBADFUNC - the iteration encountered a singular point where the function or its derivative evaluated to Inf or NaN.
    $GSL_EZERODIV - the derivative of the function vanished at the iteration point, preventing the algorithm from continuing without a division by zero.

=item * C<gsl_root_fsolver_name($s)> - This function returns the name of the algorithm used by the solver $s.

=item * C<gsl_root_fsolver_root($s)> - This function returns the current estimate of the root for the solver $s.

=item * C<gsl_root_fsolver_x_lower($s)> - This function returns the current lower value of the bracketing interval for the solver $s.

=item * C<gsl_root_fsolver_x_upper($s)> - This function returns the current upper value of the bracketing interval for the solver $s.
=item * C<gsl_root_fdfsolver_alloc($T)> - This function returns a pointer to a newly allocated instance of a derivative-based solver of type $T. If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of $GSL_ENOMEM.

=item * C<gsl_root_fdfsolver_set($s, $fdf, $root)> - This function initializes, or reinitializes, an existing solver $s to use the function and derivative $fdf and the initial guess $root. $fdf must be a code reference of the same form as above. For example, C<sub { my $x = shift; ($x-3.2)**3 }> is a valid value for $fdf.

=item * C<gsl_root_fdfsolver_iterate($s)> - This function performs a single iteration of the solver $s. If the iteration encounters an unexpected problem then an error code will be returned (Math::GSL::Errno must be loaded to access these constants):

    $GSL_EBADFUNC - the iteration encountered a singular point where the function or its derivative evaluated to Inf or NaN.
    $GSL_EZERODIV - the derivative of the function vanished at the iteration point, preventing the algorithm from continuing without a division by zero.

=item * C<gsl_root_fdfsolver_free($s)> - This function frees all the memory associated with the solver $s.

=item * C<gsl_root_fdfsolver_name($s)> - This function returns the name of the algorithm used by the solver $s.

=item * C<gsl_root_fdfsolver_root($s)> - This function returns the current estimate of the root for the solver $s.
=item * C<gsl_root_test_interval($x_lower, $x_upper, $epsabs, $epsrel)> - This function tests for the convergence of the interval [$x_lower, $x_upper] with absolute error $epsabs and relative error $epsrel. The test returns $GSL_SUCCESS if the following condition is achieved,

    |a - b| < epsabs + epsrel min(|a|,|b|)

when the interval x = [a,b] does not include the origin. If the interval includes the origin then min(|a|,|b|) is replaced by zero (which is the minimum value of |x| over the interval). This ensures that the relative error is accurately estimated for roots close to the origin.
This condition on the interval also implies that any estimate of the root r in the interval satisfies the same condition with respect to the true root r^*,

    |r - r^*| < epsabs + epsrel r^*

assuming that the true root r^* is contained within the interval.

=item * C<gsl_root_test_residual($f, $epsabs)> - This function tests the residual value $f against the absolute error bound $epsabs. The test returns $GSL_SUCCESS if the following condition is achieved,

    |$f| < $epsabs

and returns $GSL_CONTINUE otherwise. This criterion is suitable for situations where the precise location of the root, x, is unimportant provided a value can be found where the residual, |f(x)|, is small enough.

=item * C<gsl_root_test_delta($x1, $x0, $epsabs, $epsrel)> - This function tests for the convergence of the sequence ..., $x0, $x1 with absolute error $epsabs and relative error $epsrel. The test returns $GSL_SUCCESS if the following condition is achieved,

    |x_1 - x_0| < epsabs + epsrel |x_1|

and returns $GSL_CONTINUE otherwise.

=back
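
The following is a minimal usage sketch built only from the functions and constants documented above. The target function, the bracketing interval [1, 6], the iteration limit and the tolerances are arbitrary example values, and checking of the status codes returned by the individual calls is omitted for brevity ($GSL_CONTINUE comes from Math::GSL::Errno).

    use Math::GSL::Roots qw/:all/;
    use Math::GSL::Errno qw/:all/;

    # Find a root of f(x) = (x - 3.2)**3 inside [1, 6] with Brent's method.
    my $solver = gsl_root_fsolver_alloc($gsl_root_fsolver_brent);
    gsl_root_fsolver_set($solver, sub { my $x = shift; ($x - 3.2)**3 }, 1, 6);

    my $status = $GSL_CONTINUE;
    for (my $iter = 1; $iter <= 100 && $status == $GSL_CONTINUE; $iter++) {
        gsl_root_fsolver_iterate($solver);
        my $root  = gsl_root_fsolver_root($solver);
        my $lower = gsl_root_fsolver_x_lower($solver);
        my $upper = gsl_root_fsolver_x_upper($solver);

        # Stop once the bracketing interval is tight enough.
        $status = gsl_root_test_interval($lower, $upper, 0, 1e-6);
        printf "iteration %d: root ~ %.7f in [%.7f, %.7f]\n",
               $iter, $root, $lower, $upper;
    }
    gsl_root_fsolver_free($solver);
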
This module also includes the following constants:

=over

=item * C<$gsl_root_fsolver_bisection> - The bisection algorithm is the simplest method of bracketing the roots of a function. It is the slowest algorithm provided by the library, with linear convergence. On each iteration, the interval is bisected and the value of the function at the midpoint is calculated. The sign of this value is used to determine which half of the interval does not contain a root. That half is discarded to give a new, smaller interval containing the root. This procedure can be continued indefinitely until the interval is sufficiently small. At any time the current estimate of the root is taken as the midpoint of the interval.

=item * C<$gsl_root_fsolver_brent> - The Brent-Dekker method (referred to here as Brent's method) combines an interpolation strategy with the bisection algorithm. This produces a fast algorithm which is still robust. On each iteration Brent's method approximates the function using an interpolating curve. On the first iteration this is a linear interpolation of the two endpoints. For subsequent iterations the algorithm uses an inverse quadratic fit to the last three points, for higher accuracy. The intercept of the interpolating curve with the x-axis is taken as a guess for the root. If it lies within the bounds of the current interval then the interpolating point is accepted, and used to generate a smaller interval. If the interpolating point is not accepted then the algorithm falls back to an ordinary bisection step. The best estimate of the root is taken from the most recent interpolation or bisection.

=item * C<$gsl_root_fsolver_falsepos> - The false position algorithm is a method of finding roots based on linear interpolation. Its convergence is linear, but it is usually faster than bisection. On each iteration a line is drawn between the endpoints (a,f(a)) and (b,f(b)) and the point where this line crosses the x-axis is taken as a “midpoint”. The value of the function at this point is calculated and its sign is used to determine which side of the interval does not contain a root. That side is discarded to give a new, smaller interval containing the root. This procedure can be continued indefinitely until the interval is sufficiently small. The best estimate of the root is taken from the linear interpolation of the interval on the current iteration.

=item * C<$gsl_root_fdfsolver_newton> - Newton's method is the standard root-polishing algorithm. The algorithm begins with an initial guess for the location of the root. On each iteration, a line tangent to the function f is drawn at that position. The point where this line crosses the x-axis becomes the new guess. The iteration is defined by the following sequence,

    x_{i+1} = x_i - f(x_i)/f'(x_i)

Newton's method converges quadratically for single roots, and linearly for multiple roots. A plain-Perl sketch of this step, and of the secant step below, appears after this list.
=item * C<$gsl_root_fdfsolver_secant> - The secant method is a simplified version of Newton's method which does not require the computation of the derivative on every step.
On its first iteration the algorithm begins with Newton's method, using the derivative to compute a first step,

    x_1 = x_0 - f(x_0)/f'(x_0)

Subsequent iterations avoid the evaluation of the derivative by replacing it with a numerical estimate, the slope of the line through the previous two points,

    x_{i+1} = x_i - f(x_i)/f'_{est}   where   f'_{est} = (f(x_i) - f(x_{i-1}))/(x_i - x_{i-1})

When the derivative does not change significantly in the vicinity of the root the secant method gives a useful saving. Asymptotically the secant method is faster than Newton's method whenever the cost of evaluating the derivative is more than 0.44 times the cost of evaluating the function itself. As with all methods of computing a numerical derivative the estimate can suffer from cancellation errors if the separation of the points becomes too small.

On single roots, the method has a convergence of order (1 + sqrt(5))/2 (approximately 1.62). It converges linearly for multiple roots.
=item * C<$gsl_root_fdfsolver_steffenson> - The Steffenson method provides the fastest convergence of all the routines. It combines the basic Newton algorithm with an Aitken “delta-squared” acceleration. If the Newton iterates are x_i then the acceleration procedure generates a new sequence R_i,

    R_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_{i})

which converges faster than the original sequence under reasonable conditions. The new sequence requires three terms before it can produce its first value so the method returns accelerated values on the second and subsequent iterations. On the first iteration it returns the ordinary Newton estimate. The Newton iterate is also returned if the denominator of the acceleration term ever becomes zero.

As with all acceleration procedures this method can become unstable if the function is not well-behaved.

=back
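
To make the Newton and secant update rules above concrete, here is a plain-Perl sketch of a single step of each. It does not call the GSL solvers; the example function f(x) = (x - 3.2)**3, its derivative, and the starting point are arbitrary choices.

    # Example function and its analytic derivative (arbitrary choices).
    my $f  = sub { my $x = shift; ($x - 3.2)**3 };
    my $df = sub { my $x = shift; 3 * ($x - 3.2)**2 };

    # Newton step: x_{i+1} = x_i - f(x_i)/f'(x_i)
    my $newton_step = sub {
        my ($x) = @_;
        return $x - $f->($x) / $df->($x);
    };

    # Secant step: f' is replaced by the slope through the two previous points,
    # f'_{est} = (f(x_i) - f(x_{i-1}))/(x_i - x_{i-1})
    my $secant_step = sub {
        my ($x_prev, $x) = @_;
        my $slope = ($f->($x) - $f->($x_prev)) / ($x - $x_prev);
        return $x - $f->($x) / $slope;
    };

    # A few Newton steps starting from x = 5 approach the root at 3.2.
    my $x = 5;
    $x = $newton_step->($x) for 1 .. 5;
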
For more information on these functions, please refer to the official GSL
documentation: L<http://www.gnu.org/software/gsl/manual/html_node/>

Tip: search Google for C<site:http://www.gnu.org/software/gsl/manual/html_node/ name_of_the_function_you_want>.
=head1 AUTHORS

Jonathan Leto <jonathan@leto.net> and Thierry Moisan <thierry.moisan@gmail.com>

=head1 COPYRIGHT AND LICENSE

Copyright (C) 2008 Jonathan Leto and Thierry Moisan

This program is free software; you can redistribute it and/or modify it
under the same terms as Perl itself.

=cut