r"""Adaptive quadrature routines for one-dimensional integrals.
This module provides access to several adaptive quadrature routines
for performing one-dimensional integrals. The main routine is
:func:`integrate`, which performs a one-dimensional integral. It
allows the user to specify properties of the integrand to facilitate
an accurate computation.
In particular, the nature of the decay at `inf` can be described by
using a subclass of :class:`Decay` such as :class:`PowerDecay` or
:class:`ExpDecay` to effect a transformation for dealing with these
tails.
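For example, a minimal sketch (the exact value of the integral is
:math:`\pi/4`; see the :class:`PowerDecay` examples below for details)::

    f = lambda x: 1./(1. + x**2)
    b = PowerDecay(np.inf, n=2, x0=1.0)   # f ~ x^{-2} for x > 1
    res, err = integrate(f, 1.0, b)       # \int_1^oo dx/(1+x^2) = pi/4
    assert abs(res - np.pi/4) < 1e-12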
Orthogonal Polynomials
----------------------
Here we review some properties of orthogonal polynomials and define
notation useful for quadratures. We consider polynomials orthogonal
with respect to a measure :math:`\d{\alpha(x)}` and define the induced
inner product:
.. math::
\braket{f, g} = \int_{a}^{b} f(x)g(x)\d{\alpha(x)} =
\int_{a}^{b} f(x)g(x) W(x)\d{x}
Formally we shall work with orthonormal polynomials :math:`p_n(x)` --
:math:`\braket{p_m, p_n} = \delta_{mn}` -- though the standard
representations :math:`P_n(x)` typically are not normalized:
.. math::
\braket{P_m, P_n} = \delta_{mn}\sqrt{h_m h_n}.
The polynomials are characterized in terms of the coefficients
:math:`a_n`, :math:`b_n`, :math:`c_n`, :math:`h_n`, :math:`e_n`, and
:math:`\lambda_n`; and the functions :math:`W(x)`, :math:`Q(x)`, and
:math:`L(x)`. In addition, for a given degree :math:`n`, one has
quadrature abscissa :math:`x_{n;i}` -- which are the roots of
:math:`p_n` -- and weights :math:`w_{n;i}`:
.. math::
&P_{n+1} = (a_n x + b_n) P_n - c_n P_{n-1},\\
&h_n = \braket{P_n,P_n},\\
&Q(x)P_n'' + L(x)P_n' + \lambda_n P_n = 0,\\
&P_n = \frac{1}{e_n W(x)}\diff[n]{}{x}[W(x)Q^{n}(x)],\\
&p_n(x_{n;i}) = 0,\\
&\int_{a}^{b} f(x)W(x)\d{x} = \sum_i w_{n;i} f(x_{n;i})
where the last integral is exact for polynomials :math:`f(x)` of
degree :math:`2n-1` or less. The weights :math:`w_{n;i}` are sometimes
called the "Christoffel numbers" and may be denoted :math:`\lambda_{n;i}`.
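As a quick numerical check of this exactness claim (a sketch for the
Legendre case, :math:`W(x)=1` on :math:`(-1,1)`, using
:func:`sp.special.orthogonal.p_roots`)::

    y, w = sp.special.orthogonal.p_roots(5)     # 5-point Gauss-Legendre
    # Exact for degree <= 2n-1 = 9:
    assert abs(np.dot(w, y**8) - 2./9) < 1e-13  # \int x^8 dx = 2/9
    assert abs(np.dot(w, y**9)) < 1e-13         # odd moments vanish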
Additional useful relationships are
.. math::
&\begin{aligned}
P_n &= k_n x^n + k'_n x^{n-1} + \cdots, &
a_n &= \frac{k_{n+1}}{k_{n}}, &
b_n &= a_n\left(\frac{k'_{n+1}}{k_{n+1}}
- \frac{k'_{n}}{k_{n}}\right), &
c_n &= a_n\frac{k_{n-1}h_{n}}{k_{n}h_{n-1}}
\end{aligned},\\
&R(x) = W(x)Q(x) = \exp\left(\int\frac{L(x)}{Q(x)}\d{x}\right),\\
&\sum_{i=0}^{n-1} w_{n;i}p_j(x_{n;i})p_k(x_{n;i}) = \delta_{jk}
\quad \forall j,k < n,\\
&w_{n;i} = \int_{a}^{b}
\left[\frac{p_n(x)}{(x-x_{n;i}) p'_n(x_{n;i})}\right]^2\d{\alpha(x)}
= -\frac{h_n}{p_{n+1}(x_{n;i})p'_n(x_{n;i})}
.. todo:: Check formulae for the weights with :func:`h_roots` which
seems to work.
Here we summarize some of the known and useful analytic results:
+--------+------------------------+-----------------------+----------------------+------------------------------+---------------------------+------------------------------------+--------------+-----------------+------------+-----------------+
|Name |:math:`(a,b)` |:math:`W(x)` |:math:`a_n` |:math:`b_n` |:math:`c_n` |:math:`h_n` |:math:`e_n` |:math:`\lambda_n`|:math:`Q(x)`|:math:`L(x)` |
+========+========================+=======================+======================+==============================+===========================+====================================+==============+=================+============+=================+
|Hermite |:math:`(-\infty,\infty)`|:math:`e^{-x^2}` |`2` |`0` |`2n` |:math:`2^n n!\sqrt{\pi}` |:math:`(-1)^n`|`2n` |`1` |`-2x` |
+--------+------------------------+-----------------------+----------------------+------------------------------+---------------------------+------------------------------------+--------------+-----------------+------------+-----------------+
|Laguerre|:math:`(0,\infty)` |:math:`x^{\beta}e^{-x}`|:math:`\frac{-1}{n+1}`|:math:`\frac{2n+1+\beta}{n+1}`|:math:`\frac{n+\beta}{n+1}`|:math:`\frac{\Gamma(n+\beta+1)}{n!}`|:math:`n!` |`n` |`x` |:math:`\beta+1-x`|
+--------+------------------------+-----------------------+----------------------+------------------------------+---------------------------+------------------------------------+--------------+-----------------+------------+-----------------+
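These table entries can be spot-checked numerically (a sketch for the
Hermite row, using :mod:`scipy.special`)::

    from scipy.special import orthogonal
    n = 4
    t = np.linspace(-1, 1, 5)
    # Recurrence H_{n+1} = (a_n x + b_n) H_n - c_n H_{n-1} with
    # a_n = 2, b_n = 0, c_n = 2n:
    assert np.allclose(orthogonal.hermite(n+1)(t),
                       2*t*orthogonal.hermite(n)(t)
                       - 2*n*orthogonal.hermite(n-1)(t))
    # Norm h_n = 2^n n! sqrt(pi), computed with an exact quadrature:
    y, wy = orthogonal.h_roots(n+1)
    h_n = np.dot(wy, orthogonal.hermite(n)(y)**2)
    assert np.allclose(h_n, 2**n*math.factorial(n)*math.sqrt(np.pi))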
Quadrature Formulae
-------------------
The general idea of a quadrature is to express an integral as
.. math::
\int_{a}^{b} f(x)\d{\alpha(x)} = \int_{a}^{b} f(x) W(x)\d{x}
\approx \sum_{i} w_i f(x_i)
where :math:`W(x)` is a weighting function, :math:`w_i` are a set
of weights and :math:`x_i` are a set of abscissa. In general, given :math:`n`
weights and :math:`n` abscissa, we have :math:`2n` degrees of
freedom, and so we can in general choose these so that the integral is
exact for special forms of :math:`f(x)`: typically, polynomials of
degree :math:`2n-1` or less. Since the quadrature is linear, this is
realized through the :math:`2n` equations:
.. math::
\sum_{i=0}^{n-1} w_i P_m(x_i) = \int_{a}^{b} \d{\alpha(x)}\; P_m(x)
\Biggr|_{m=0}^{2n-1} = m_m
where :math:`m_m` is the moment of the degree-`m` polynomial
:math:`P_m(x)`. (One can simply use :math:`P_m(x) = x^m`, but the
system is typically better conditioned if one chooses orthogonal
polynomials.)
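For instance (a sketch comparing the conditioning of the two basis
choices on :math:`[0,1]`, assuming numpy's
:func:`numpy.polynomial.legendre.legvander` is available)::

    x = np.linspace(0, 1, 12)
    V_mono = x[:, None]**np.arange(12)                     # monomial basis
    V_leg = np.polynomial.legendre.legvander(2*x - 1, 11)  # shifted Legendre
    # The orthogonal-polynomial Vandermonde is far better conditioned:
    print np.linalg.cond(V_mono), np.linalg.cond(V_leg)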
We start with the problem of determining the weights for a given set
of abscissa: given the abscissa, the weights can be determined to
satisfy :math:`m` of these "normal" equations through a linear system.
We want as high an order as possible, but with all the weights being
positive. (One can relax this somewhat by formally minimizing
:math:`\sigma = \sum_i\abs{w_i}/\abs{\sum_i w_i} \geq 1` to minimize
roundoff error from cancellation when computing the integral.) One
strategy might be to solve as many equations as possible while
minimizing the norm of `w`. This leads to the solution
.. math::
w = \mat{V}_x(\mat{V}_x^T\mat{V}_x)^{-1}m
where :math:`m` is the vector of moments and :math:`V_x` is the incomplete
Vandermonde matrix at the points :math:`x`:
.. math::
\begin{aligned}
\mat{V}_x &= \mat{V}_p(x) = \begin{pmatrix}
P_0(x_0) & P_1(x_0) & \cdots & P_{m-1}(x_0)\\
P_0(x_1) & P_1(x_1) & \cdots & P_{m-1}(x_1)\\
\vdots & \vdots & \ddots & \vdots\\
P_0(x_{n-1}) & P_1(x_{n-1}) & \cdots & P_{m-1}(x_{n-1})\\
\end{pmatrix}, &
m &= \begin{pmatrix}
\int_{a}^{b} P_0(x) \d{\alpha(x)}\\
\int_{a}^{b} P_1(x) \d{\alpha(x)}\\
\vdots\\
\int_{a}^{b} P_{m-1}(x) \d{\alpha(x)}\\
\end{pmatrix}.
\end{aligned}
If an all-positive solution exists, then this solution will be all
positive. To see this, consider the singular value decomposition of
:math:`\mat{V}_x^T`:
.. math::
\mat{V}_x^{T} = \mat{U} \cdot
\begin{pmatrix} \mat{D} & \mat{0} \end{pmatrix}\cdot
\begin{pmatrix} \mat{V}^T\\ \mat{A}^T \end{pmatrix}
= \mat{U}\cdot\mat{D}\cdot\mat{V}^T
We may write the solution as
.. math::
w = \mat{V}\cdot\mat{D}^{-1}\cdot\mat{U}^T\cdot\vect{m}
+\mat{A}\cdot\alpha
where the projection of `w` in the subspace spanned by :math:`\mat{V}`
is fixed by the equation and the projection in the complementary
subspace spanned by :math:`\mat{A}` is arbitrary (:math:`\alpha`).
The set of all valid `w` forms an affine subspace. With a little
thought about this in lower dimensions, one can convince oneself that
if any points lie within the given "quadrant" of positive `w`, then
the point closest to the origin will satisfy this (i.e., the point `w`
with minimal norm). This is the solution with :math:`\alpha=0`, which
is given by the normal equations above.
.. note:: This assumes some properties about the nature of the
set of solutions that are typically true for the integration problem
with reasonable (positive) weights. This should be clarified at
some point.
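As a concrete check of the normal-equation formula (a sketch: for
three abscissa on :math:`[0,1]` the system is square, and the formula
reproduces Simpson's rule)::

    x = np.array([0., 0.5, 1.])
    V = x[:, None]**np.arange(3)               # V_x with monomial columns
    m = 1./(np.arange(3) + 1)                  # moments of x^k over [0,1]
    w = np.dot(V, np.linalg.solve(np.dot(V.T, V), m))
    assert np.allclose(w, [1./6, 2./3, 1./6])  # Simpson's rule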
This formula is acceptable only for low orders; otherwise, the
Vandermonde matrix quickly becomes ill-conditioned. A better solution
is to use a robust algorithm such as that in Demmel and Koev,
"Accurate SVDs of polynomial Vandermonde matrices involving
orthonormal polynomials", Linear Algebra and its Applications 411
(2006) 382--396. There they point out that the Vandermonde matrix we
need :math:`\mat{V}_x = \mat{E}\cdot\mat{V}_y` can be expressed in
terms of the square Vandermonde matrix :math:`\mat{V}_y =
\mat{V}_{p}(y)` where :math:`y_i` are the roots of :math:`P_n(y_i)=0`
for the order `n` orthogonal polynomial through the Lagrange
interpolation formula
.. math::
\mat{E}_{ij} =
\prod^{n-1}_{\substack{
k=0\\
k\neq j}}
\frac{x_i - y_k}{y_j - y_k}.
For classical orthogonal polynomials, the roots :math:`y_i` can be
accurately calculated from the recurrence relationships (see below).
Some algebra shows that our weights can be expressed in terms of the
classical weights as
.. math::
w_x = \mat{E}\left(\mat{E}^T\mat{E}\right)^{-1} w_y.
This can also be directly computed from the SVD or QR decompositions
of :math:`\mat{E}^T`. As pointed out in A. J. Cox and N. J. Higham,
"Stability of Householder QR factorization for weighted least squares
problems", Numerical Analysis 1997 (Dundee), Pitman Res. Notes
Math. Ser. 380, pp. 57--73, Longman, Harlow, 1998, if we first sort
the rows in order of decreasing :math:`L_\infty` norm, then the
standard implementation of the QR decomposition is stable. We do this
in :func:`pqr`. Thus:
.. math::
E = P\cdot Q\cdot R\\
w_x = P\cdot Q \cdot (R^{T})^{-1}\cdot w_y
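A minimal sketch of this transform (assuming Gauss-Legendre data from
:func:`sp.special.orthogonal.p_roots`; the row-sorted QR here is what
:func:`pqr` below packages up)::

    y, wy = sp.special.orthogonal.p_roots(5)     # exact 5-point rule
    x = np.linspace(-1, 1, 9)                    # desired abscissa
    k = np.arange(5)
    E = np.array([[np.prod((xi - y[k != j])/(y[j] - y[k != j]))
                   for j in k] for xi in x])     # Lagrange interpolation
    p = np.argsort(-np.max(abs(E), axis=1))      # row-sort for stability
    Q, R = sp.linalg.qr(E[p, :])
    wx = np.empty(len(x))
    wx[p] = np.dot(Q[:, :5],
                   sp.linalg.solve_triangular(R[:5, :].T, wy, lower=True))
    assert np.allclose(np.dot(wx, x**4), 2./5)   # exact degree-4 moment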
This requires the following knowledge:
1) The Vandermonde matrix should be formed from orthogonal
polynomials :math:`P_n(y)`:
.. math::
V_p(x) = \begin{pmatrix}
P_0(x_0) & P_1(x_0) & \cdots & P_{m-1}(x_0)\\
P_0(x_1) & P_1(x_1) & \cdots & P_{m-1}(x_1)\\
\vdots & \vdots & \ddots & \vdots\\
P_0(x_{n-1}) & P_1(x_{n-1}) & \cdots & P_{m-1}(x_{n-1})\\
\end{pmatrix}.
Note that one can scale out the weights if needed, so this is not
an arduous requirement in most cases.
2) The algorithm requires the roots :math:`y_i` of the highest order
:math:`P_m(y)`:
.. math:: P_m(y_i) = 0.
3) The algorithm requires the Christoffel numbers :math:`\lambda_i`:
.. math::
\sum_i \lambda_i P_j(y_i)P_k(y_i) = \delta_{jk}.
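For the Legendre case these three ingredients are readily available (a
sketch; note the normalization :math:`h_j = 2/(2j+1)` of the standard
:math:`P_j`)::

    x = np.linspace(-1, 1, 9)
    V = np.polynomial.legendre.legvander(x, 4)   # 1) Vandermonde in P_n
    y, wy = sp.special.orthogonal.p_roots(5)     # 2) roots, 3) weights
    Py = np.polynomial.legendre.legvander(y, 4)
    h = 2./(2*np.arange(5) + 1)
    assert np.allclose(np.dot(Py.T*wy, Py), np.diag(h))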
Here is an example of computing weights for given abscissa for
integrals between 0 and 1.
>>> def m_(n):
...     return 1./(np.arange(n).reshape((n,1))+1)
>>> def X_(x,n):
...     return x**np.arange(n).reshape((n,1))
>>> def w(x,n):
...     X = np.mat(X_(x,n))
...     l = np.linalg.solve(X*X.T, m_(n))
...     return X.T*l
As an example, let's consider order-two quadratures over the interval
[0,1] containing the endpoints and one interior point. An analysis of
Simpson's rule shows that positive weights exist if the midpoint lies
strictly between 1/3 and 2/3:
>>> np.min(w(np.array([0,0.9/3,1]),3)) # doctest: +ELLIPSIS
-0.0555555555...
>>> round(np.min(w(np.array([0,1/3,1]),3)),14)
0.0
>>> np.min(w(np.array([0,1/2,1]),3)) # doctest: +ELLIPSIS
0.1666666...
>>> round(np.min(w(np.array([0,2/3,1]),3)),14)
0.0
>>> np.min(w(np.array([0,2.1/3,1]),3)) # doctest: +ELLIPSIS
-0.055555555555...
Now consider the following numerical explorations:
.. plot::
m_ = lambda n: np.mat(1./(np.arange(n)+1)).T
X_ = lambda x, n: np.mat(x**np.arange(n).reshape((n,1)))
def w(x,n):
X = X_(x,n)
l = np.linalg.solve(X*X.T, m_(n))
return X.T*l
def Q(x):
for n in xrange(1, len(x)+1):
if min(w(x,n)) < 0:
n = n-1
break
U,d,Vt = np.linalg.svd(X_(x, n+2))
return np.max(abs(np.mat(U[:,n:]).T*m_(2*n)))
.. plot::
# Gauss points for [-1,1]
def m_(n):
n = np.arange(n) + 1
m = (1.0 - (-1)**n)/n
return np.mat(m).T
X_ = lambda x, n: np.mat(x**np.arange(n).reshape((n,1)))
def w_(x,n):
X = X_(x,n)
l = np.linalg.solve(X*X.T, m_(n))
return X.T*l
def f(x):
'''Return the error assuming the first half of x are the
abscissa and the second half are the weights.'''
x = np.asarray(x)
n = int(len(x)/2)
X = np.asarray(X_(x[:n], 2*n))
w = np.asarray(x[n:]).ravel()
m = np.asarray(m_(2*n)).ravel()
return np.dot(X, w) - m
def find_n(x):
for n in xrange(len(x)):
if np.min(w_(x,n+1)) < 0:
return n
return n+1
def extend(x):
n = int(len(x)/2)
x = np.array([-1] + x[:n].tolist() + [1])
x = (x[:-1] + x[1:])/2
        m = find_n(x)
        x = x.tolist() + list(w_(x, m).flat)
return np.asarray(x)
def J(x):
n = int(len(x)/2)
j = np.arange(2*n, dtype=float).reshape((2*n,1))
m = np.arange(n, dtype=float).reshape((1,n))
w = x[n:].reshape((1,n))
x = x[:n].reshape((1,n))
J = np.hstack([j*x**(j-1)*w, x**j])
J[0,:n] = 0
return J
from mmf.solve.broyden import Broyden
def step(B):
        n = int(len(B.x)/2)
x = np.array(B.x)
x[:n] = np.where(x[:n] < -1, -1, x[:n])
x[:n] = np.where(x[:n] > 1, 1, x[:n])
if np.min(x[n:]) < 0:
m = find_n(x[:n])
x[n:] = w_(x[:n],m).flat
B.update(f(x), x)
def init(x):
x = np.array(x)
B = Broyden(x0=np.array(x), G0=f(x), max_step=0.01)
h = 1e-6
for n in xrange(len(B.x)):
x[n] += h
B.update_J(np.array(x), f(x))
x[n] -= 2*h
B.update_J(np.array(x), f(x))
x[n] += h
return B
def get(N):
'''Return the `N` point quadrature.'''
x = np.array([0,2])
for n in xrange(1,N):
x = extend(x)
B = init(x)
while 1e-12 < np.max(abs(f(B.x))):
print n+1, np.max(abs(f(B.x))), B.x[:N], B.x[N:]
step(B)
x = B.x
return x[:N], x[N:]
def getJ(N):
'''Return the `N` point quadrature.'''
x = np.array([0,2])
for n in xrange(1,N):
x = extend(x)
while 1e-12 < np.max(abs(f(x))):
print n+1, np.max(abs(f(x))), x[:N], x[N:]
x = x - np.linalg.solve(J(x), f(x))
return x[:N], x[N:]
n = 5
    x0 = np.hstack([np.linspace(-1,1,n+2)[1:-1],
                    np.ones(n)/n])
B = Broyden(x0=x0, G0=f(x0))
"""
from __future__ import division
import warnings
import math
import numpy as np
import scipy as sp
import scipy.weave
import scipy.linalg
import scipy.integrate
try:
import cvxopt
import cvxopt.solvers
except ImportError:
warnings.warn("Could not import cvxopt.")
cvxopt = None
from mmf.math.integrate import IntegrationError
from _mquad import mquad
__all__ = ['Decay', 'PowerDecay', 'ExpDecay', 'IntegrationError',
'get_weights', 'get_weights_y', 'integrate', 'quad', 'mquad',
'h_roots']
_abs_tol = 1e-12
_rel_tol = 1e-8
_eps = np.finfo(float).eps
_tiny = np.finfo(float).tiny
class Decay(float):
r"""Represents information about the asymptotic form of indefinite
integrals."""
class PowerDecay(Decay):
r"""Represents a power-law decay.
The function should behave as
.. math::
\lim_{x\rightarrow \pm\infty} f(x) = \frac{a}{\abs{x}^{n}}
where this behaviour becomes dominant for
:math:`\abs{x}>\abs{x_{0}}`.
Parameters
----------
n : float
Power of decay :math:`x^{-n}`
x0 : float
Point beyond which decay is the dominant behaviour. This must
have the same sign as the value (and must not be zero).
Notes
-----
The transformation used maps :math:`\abs{x}\in [x_0,\infty)` to
:math:`y\in[0,1)`.
.. math::
x &= x_{0}(1-y)^{\frac{1}{1-n}} = x_{0}e^{\frac{\ln(1-y)}{1-n}},\\
y &= 1 - \left(\frac{x}{x_{0}}\right)^{1-n},\\
\int_{x_{0}}^{\infty}\d{x}\; f(x) &= \int_{0}^{1}\d{y}\; g(y),\\
g(y) &= f\Bigl(x(y)\Bigr)\frac{x_{0}}{n-1}e^{\frac{n\ln(1-y)}{1-n}}.
If :math:`f(x)\rightarrow ax^{-n}` as :math:`x\rightarrow \infty`,
then :math:`g(y)` approaches a constant as :math:`y\rightarrow 1`
which is very easy to integrate.
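Concretely (a sketch: for :math:`f(x) = x^{-2}` with :math:`x_0=1` and
:math:`n=2`, the transformed integrand is identically one)::

    d = PowerDecay(np.inf, n=2, x0=1.0)
    g = d.get_g(lambda x: 1./x**2)
    assert np.allclose(g(np.linspace(0, 0.99, 5)), 1.0)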
Examples
--------
>>> f = lambda p:1./(p**2 + 1)
>>> a = PowerDecay(-np.inf, x0=-1.0, n=2)
>>> b = PowerDecay(np.inf, x0=1.0, n=2)
>>> a.y_x(a.x_y(0.5))
0.5
>>> a.y_x(-1.0)
0.0
>>> b.y_x(1.0)
0.0
>>> quad(f, -np.inf, -1.0) # doctest: +ELLIPSIS
(0.785398163397448..., 1.2888...e-10)
>>> quad(a.get_g(f), a.y_x(-np.inf), a.y_x(-1.0)) # doctest: +ELLIPSIS
(0.785398163397448..., 8.7196...e-15)
>>> quad(f, 1.0, np.inf) # doctest: +ELLIPSIS
(0.785398163397448..., 1.2888...e-10)
>>> quad(b.get_g(f), b.y_x(1.0), b.y_x(np.inf)) # doctest: +ELLIPSIS
(0.785398163397448..., 8.71967...e-15)
This is what integrate does automatically:
>>> integrate(f, a, -1.0) # doctest: +ELLIPSIS
(0.785398163397448..., 8.71967...e-15)
>>> integrate(f, 1.0, b) # doctest: +ELLIPSIS
(0.785398163397448..., 8.71967...e-15)
"""
def __new__(cls, x, n, x0=0.0):
return float.__new__(cls, x)
    def __init__(self, x, n, x0=0):
float.__init__(self)
if np.sign(self) != np.sign(x0):
raise ValueError("x and x0 must have the same sign.")
if not np.isreal(x0):
raise ValueError("x0 must be real.")
self.n = n
self.x0 = np.real(x0)
    def x_y(self, y):
r"""Return `x(y)` where :math:`y \in [0,1)` and
:math:`x\in[x_0,\infty)`."""
return self.x0*np.exp(np.log1p(-y)/(1.0 - self.n))
    def y_x(self, x):
r"""Return `y(x)` where :math:`y \in [0,1)` and
:math:`x\in[x_0,\infty)`."""
return 1.0 - np.exp((1.0 - self.n)*np.log(x/self.x0))
    def get_g(self, f):
r"""Return the function `g(y)` which, when integrated over
:math:`y \in [0,1)` is the same as integrating the original
function `f(x)` over :math:`x\in[x_0,\infty)`."""
tmp1 = 1.0 - self.n
tmp2 = self.n/tmp1
def g(y, tmp1=tmp1, tmp2=tmp2, x0=self.x0, x_y=self.x_y):
r"""Function over :math:`y \in [0,1)` that integrates to
the same values of `f(x)` over :math:`x\in[x_0,\infty)`."""
return -f(x_y(y))*x0/tmp1*np.exp(tmp2*np.log1p(-y))
return g
class ExpDecay(Decay):
r"""Represents an exponential decay.
The function should behave as
.. math::
\lim_{x\rightarrow \pm\infty} f(x) = a e^{-(b\abs{x})^{n}}
where this behaviour becomes dominant for
:math:`\abs{x}>\abs{x_{0}}\sim b^{-1}`.
Parameters
----------
b : float
    Inverse length-scale: i.e., decay of the form :math:`\exp(-(bx)^{n})`.
x0 : float
    Point beyond which decay is the dominant behaviour. This must
    have the same sign as the value. If not specified, it is set
    to `b` with the appropriate sign.
Notes
-----
The transformation used maps :math:`\abs{x}\in [x_0,\infty)` to
`y\in[0,1)`.
.. math::
x &= x_{0}-b^{-1}\ln(1-y),\\
y &= 1 - e^{b(x_{0}-x)},\\
\int_{x_{0}}^{\infty}\d{x}\; f(x) &= \int_{0}^{1}\d{y}\; g(y),\\
g(y) &= \frac{f\Bigl(x(y)\Bigr)}{b(1-y)}.
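Concretely (a sketch: for :math:`f(x) = e^{-x}` with :math:`b=1` and
:math:`x_0=1`, the transformed integrand is the constant :math:`e^{-1}`)::

    d = ExpDecay(np.inf, b=1.0, x0=1.0)
    g = d.get_g(lambda x: np.exp(-x))
    assert np.allclose(g(np.linspace(0, 0.99, 5)), np.exp(-1.0))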
Examples
--------
>>> f = lambda p:3/np.sqrt(np.pi)/(np.exp((3.0*p)**2))
>>> a = ExpDecay(-np.inf, b=3)
>>> b = ExpDecay(np.inf, b=3)
>>> a.y_x(a.x_y(0.5)) # doctest: +ELLIPSIS
0.5...
>>> a.y_x(-3.0)
0.0
>>> b.y_x(3.0)
0.0
>>> quad(f, -np.inf, 0) # doctest: +ELLIPSIS
(0.49999999..., 2.2113658...e-09)
>>> quad(a.get_g(f), a.y_x(-np.inf), a.y_x(0)) # doctest: +ELLIPSIS
(0.5000000..., 1.177...e-09)
This is what integrate does automatically:
>>> integrate(f, a, 0) # doctest: +ELLIPSIS
(0.5, 4.8318...e-13)
"""
def __new__(cls, x, b, x0=None):
return float.__new__(cls, x)
    def __init__(self, x, b, x0=None):
if x0 is None:
x0 = b*np.sign(x)
if np.sign(self) != np.sign(x0):
raise ValueError("x and x0 must have the same sign.")
if not np.isreal(x0):
raise ValueError("x0 must be real.")
self.b = b
self.x0 = np.real(x0)
    def x_y(self, y):
r"""Return `x(y)` where :math:`y \in [0,1)` and
:math:`x\in[x_0,\infty)`."""
return self.x0 - np.log1p(-y)/self.b
    def y_x(self, x):
r"""Return `y(x)` where :math:`y \in [0,1)` and
:math:`x\in[x_0,\infty)`."""
return 1.0 - np.exp(self.b*(self.x0 - x))
    def get_g(self, f):
r"""Return the function `g(y)` which, when integrated over
:math:`y \in [0,1)` is the same as integrating the original
function `f(x)` over :math:`x\in[x_0,\infty)`."""
def g(y, b=self.b,x_y=self.x_y):
r"""Function over :math:`y \in [0,1)` that integrates to
the same values of `f(x)` over :math:`x\in[x_0,\infty)`."""
return f(x_y(y))/b/(1.0-y)
return g
def integrate(f, a, b, abs_tol=None, rel_tol=None,
points=None, singular_points=None, **kwargs):
"""Return (res, err) where `res` is the integral of the function
`f(x)` from `a` to `b` and `err` is an estimate of the absolute
error.
.. math::
\int_{a}^{b}f(x)\d{x}
    Parameters
    ----------
    f : callable
        The integrand `f(x)`.
    a, b : float
        Limits of integration. These can be finite or infinite, and
        may be instances of :class:`Decay` describing the asymptotic
        behaviour of the integrand at an infinite limit.
    abs_tol, rel_tol : float
        Desired tolerance goals.
points : list of floats
        Points where the integrand changes behaviour. If there is a
        sharp feature, it should be located by these points (the
        adaptive routine might otherwise miss it). These points will
        be explicitly included in the integral, so they should not be
        singular points.
singular_points : list of floats
List of singular points of the integrand. These will be
excluded from the integral but approached with caution.
"""
if abs_tol is None:
abs_tol = _abs_tol
if rel_tol is None:
rel_tol = _rel_tol
if b < a:
res, err = integrate(f=f, a=b, b=a,
abs_tol=abs_tol, rel_tol=rel_tol)
return -res, err
kwargs['epsrel'] = rel_tol
kwargs['epsabs'] = abs_tol
kwargs['full_output'] = 0
if points is None:
points = []
if isinstance(a, Decay):
if a.x0 <= a:
# prevent unneeded transformation.
a = float(a)
if isinstance(b, Decay):
if b <= b.x0:
# prevent unneeded transformation.
b = float(b)
if isinstance(a, Decay) and isinstance(b, Decay):
kwargs['epsabs'] /= np.sqrt(3)
# Partition integral into three parts
r1, e1 = quad(a.get_g(f), a.y_x(a), a.y_x(a.x0),
points=map(a.y_x, points), **kwargs)
r2, e2 = quad(f, a.x0, b.x0, points=points, **kwargs)
r3, e3 = quad(b.get_g(f), b.y_x(b.x0), b.y_x(b),
points=map(b.y_x, points), **kwargs)
res = r1 + r2 + r3
err = np.sqrt(e1*e1 + e2*e2 + e3*e3)
return res, err
elif isinstance(a, Decay):
if b < a.x0:
return quad(a.get_g(f), a.y_x(a), a.y_x(b),
points=map(a.y_x, points), **kwargs)
else:
kwargs['epsabs'] /= np.sqrt(2)
r1, e1 = quad(a.get_g(f), a.y_x(a), a.y_x(a.x0),
points=map(a.y_x, points), **kwargs)
r2, e2 = quad(f, a.x0, b, points=points, **kwargs)
res = r1 + r2
err = np.sqrt(e1*e1 + e2*e2)
return res, err
elif isinstance(b, Decay):
if b.x0 < a:
return quad(b.get_g(f), b.y_x(a), b.y_x(b),
points=map(b.y_x, points), **kwargs)
else:
# Partition integral into two parts
kwargs['epsabs'] /= np.sqrt(2)
r1, e1 = quad(f, a, b.x0, points=points, **kwargs)
r2, e2 = quad(b.get_g(f), b.y_x(b.x0), b.y_x(b),
points=map(b.y_x, points), **kwargs)
res = r1 + r2
err = np.sqrt(e1*e1 + e2*e2)
return res, err
else:
return quad(f, a=a, b=b, points=points, **kwargs)
def _quad(f, a, b, check=False, *varargin, **kwargs):
"""Call :func:`scipy.integrate.quad` and check the result
using :func:`mquad` if `check` is `True."""
ans = sp.integrate.quad(f, a, b, *varargin, **kwargs)
if check:
kw = {}
kw['abs_tol'] = kwargs.get('epsabs', _abs_tol)
kw['points'] = kwargs.get('points', [])
res = mquad(f, a, b, **kw)
if not (abs(ans[0] - res) < 100*max(kw['abs_tol'], ans[1])):
import pdb;pdb.set_trace()
return ans
def quad(f, a, b, *varargin, **kwargs):
r"""
    An improved version of :func:`scipy.integrate.quad` that does some
    argument checking and deals with `points` properly.
Return (ans, err).
Examples
--------
>>> def f(x): return 1./x**2
>>> (ans, err) = quad(f, 1, np.inf, points=[])
>>> abs(ans - 1.0) < err
True
>>> (ans, err) = quad(f, 1, np.inf, points=[3.0, 2.0])
>>> abs(ans - 1.0) < err
True
"""
kwargs.setdefault('limit', 100000)
kwargs.setdefault('epsabs', _abs_tol)
kwargs.setdefault('epsrel', _rel_tol)
points = kwargs.pop('points', None)
if points is not None:
points = set(points)
points = [p for p in points if p < b and a < p]
if 0 == len(points):
points = None
else:
points = list(points)
points.sort()
kwargs['full_output'] = True
if points is None or (np.isfinite(a) and np.isfinite(b)):
result = _quad(f, a, b, points=points,
*varargin, **kwargs)
if 3 == len(result):
# Okay, return ans, err
ans, abs_err = result[:2]
return ans, abs_err
else:
# There was an error. Treat this with warnings and/or
# exceptions.
ans, abs_err, info, message = result[:4]
if len(result) > 4:
message = message + "\n" + result[4]
raise IntegrationError(message, ans, abs_err)
return ans, abs_err
else:
n_subintervals = 1
if not np.isfinite(a):
n_subintervals += 1
if not np.isfinite(b):
n_subintervals += 1
res = 0.0
err2 = 0.0
kwargs['epsabs'] /= np.sqrt(n_subintervals)
if np.isfinite(a):
a_ = a # Set endpoint for next step
else:
r, e = quad(f, a, points[0], *varargin, **kwargs)
res += r
err2 += e*e
a_ = points[0]
assert np.isfinite(a_)
if np.isfinite(b):
r, e = quad(f, a_, b, points=points,
*varargin, **kwargs)
res += r
err2 += e*e
else:
r, e = quad(f, a_, points[-1], points=points,
*varargin, **kwargs)
res += r
err2 += e*e
r, e = quad(f, points[-1], b, *varargin, **kwargs)
res += r
err2 += e*e
return res, math.sqrt(err2)
class GaussKronrod(object):
"""Here we consider a two-stage adaptive set of quadrature points
including a single endpoint at `0`: `[0, x1]` and `[0, x0, x1, x2]`.
We can generate these using maple::
eq:={w0+w1 = 1,
x1*w1 = 1/2,
x1**2*w1 = 1/3,
ww0+ww1+ww2+ww3 = 1,
x0*ww1+x1*ww2+x2*ww3 = 1/2,
x0^2*ww1+x1^2*ww2+x2^2*ww3 = 1/3,
x0^3*ww1+x1^3*ww2+x2^3*ww3 = 1/4,
x0^4*ww1+x1^4*ww2+x2^4*ww3 = 1/5,
x0^5*ww1+x1^5*ww2+x2^5*ww3 = 1/6}:
Digits:=50:
evalf(solve(eq));
{w0 = 0.25,
w1 = 0.75,
ww0 = 0.076388888888888888888888888888888888888888888888889,
ww1 = 0.38274911909514405004857414139328741766445013714012,
ww2 = 0.38942307692307692307692307692307692307692307692308,
ww3 = 0.15143891509289013798561389279474677036973789704792,
x0 = 0.25358983848622454129451073169882552661143894923792,
x1 = 0.66666666666666666666666666666666666666666666666667,
x2 = 0.94641016151377545870548926830117447338856105076208}
By using the endpoint, we can make sure that, if the function is
special there (i.e. there is a significant feature there), then we
will keep approaching it until that feature is resolved.
    The two rules are of degree 2 and 5 respectively (meaning that the
    first quadrature rule is exact for polynomials of degree 2 and the
    second for polynomials of degree 5).
Examples
--------
>>> gk = GaussKronrod()
>>> p2 = np.poly(np.random.rand(2))
>>> p5 = np.poly(np.random.rand(5))
>>> f2 = lambda x: np.polyval(p2,x)
>>> f5 = lambda x: np.polyval(p5,x)
>>> F2 = np.diff(np.polyval(np.polyint(p2),[0,1]))[0]
>>> F5 = np.diff(np.polyval(np.polyint(p5),[0,1]))[0]
>>> abs((np.dot(gk.w2, f2(gk.x2))) - F2) < 1e-16
True
>>> abs((np.dot(gk.w4, f5(gk.x4))) - F5) < 1e-16
True
"""
sr3 = math.sqrt(3)
x2 = np.array([0, 2/3])
w2 = np.array([1/4, 3/4])
    x4 = np.array([0, (3 - sr3)/5, 2/3, (3 + sr3)/5])
w4 = np.array([11/144,
125/468 + 125/1872*sr3,
81/208,
125/468 - 125/1872*sr3])
#######################################################
# Tools for building quadratures.
# Here are some brute force tools for building quadratures.
def VandermondSVD(x, y, l, P):
    r"""Return the singular value decomposition of the Vandermonde
    matrix
.. math::
V_p(x) = \begin{pmatrix}
P_0(x_0) & P_1(x_0) & \cdots & P_{m-1}(x_0)\\
P_0(x_1) & P_1(x_1) & \cdots & P_{m-1}(x_1)\\
\vdots & \vdots & \ddots & \vdots\\
P_0(x_{n-1}) & P_1(x_{n-1}) & \cdots & P_{m-1}(x_{n-1})\\
\end{pmatrix}.
where :math:`P_i(x)` are orthogonal polynomials of orders
:math:`i\in[0,1,\cdots m]` and :math:`P_m(x)` is the order `m`
polynomial with `m` Christoffel numbers :math:`l_i` and `m` roots
:math:`y_i`:
.. math::
\begin{aligned}
\left.P_m(y_i)\right|_{i=0}^{m-1} &= 0, &
\sum_{i=0}^{m-1} l_i P_j(y_i)P_k(y_i) &= \delta_{jk}.
\end{aligned}
Parameters
----------
x : array of length `n`
Abscissa for Vandermond matrix.
y : array of length `m`
Roots of :math:`P_m(x)`.
l : array of length `m`
Christoffel numbers of order `m`.
P : function
`P(i)` should be the i'th orthogonal polynomial.
Notes
-----
Uses the algorithm in Demmel and Koev, "Accurate SVDs of
polynomial Vandermonde matrices involving orthonormal polynomials",
Linear Algebra and its Applications 411 (2006) 382--396.
"""
    n = len(x)
    m = len(y)
    E = np.ones((n,m), dtype=float)
h = np.ones(n, dtype=float)
g = np.ones(m, dtype=float)
for i in xrange(n):
for j in xrange(m):
for k in xrange(m):
if k != j:
E[i,j] = E[i,j]*(x[i] - y[k])/(y[j] - y[k])
for i in xrange(n):
h[i] = np.prod(x[i] - y[:])
for j in xrange(m):
g[j] = np.prod(y[j] - y[:] + _tiny)/_tiny
def pqr(M, econ=False):
r"""Return `(p, Q, R)` where `M = (Q*R)[p,:]`, `Q` is unitary, and
`R` is upper trapazoidal (upper triangular if `M` is square). `p`
is a list of indices specifying the permutation required to
row-order `M` in order of decreasing :math:`L_\infty` norm so that
the QR factorization as performed in LAPACK is stable (see
A. J. Cox and N. J. Higham, "Stability of Householder QR
factorization for weighted least squares problems", Numerical
Analysis 1997 (Dundee), Pitman Res. Notes Math. Ser. 380,
pp. 57--73, Longman, Harlow, 1998.)
Examples
--------
>>> M = np.random.rand(30,20)
>>> p, Q, R = pqr(M)
>>> Q.shape
(30, 30)
>>> R.shape
(30, 20)
>>> np.allclose(np.dot(Q,R)[p,:],M)
True
Notes
-----
    Uses :func:`sp.linalg.qr` rather than :func:`np.linalg.qr` to get
    the full basis.
"""
p_i = np.argsort(-np.max(abs(M), axis=1))
if econ:
mode = 'economic'
else:
mode = 'full'
Q, R = sp.linalg.qr(M[p_i,:], mode=mode)
# invert permutation
p = np.empty(len(p_i), dtype=int)
p[p_i] = np.arange(len(p_i))
return p, Q, R
def get_weights_y(x, y, wy):
r"""Return the weights `wx` for a quadrature of order at least `n >=
len(y)` for the abscissa `x` give the `n` exact quadrature
abscissa `y` and weights `wy` defining a quadrature of order `2n`.
Parameters
----------
x : array
Abscissa at which quadratures weights `wx` are desired.
y : array
Abscissa for the Gaussian quadrature. I.e., the roots of the
`n`'th orthogonal polynomial :math:`P_n(y_i) = 0`. This should
be computed to machine precision.
wy : array
Weights (Christoffel numbers) corresponding to the abscissa `y`
for the Gaussian quadrature. These should be computed to
machine precision.
Examples
--------
>>> x = np.linspace(-1, 1, 102)[1:-1]
>>> wx_ = [2]
>>> n = 0
>>> while min(wx_) >= 0:
...     # Find maximum order with positive weights.
...     n += 1
...     y, wy = sp.special.orthogonal.p_roots(n)
...     wx = wx_
...     wx_ = get_weights_y(x, y, wy)
>>> n = n - 1
>>> print n
18
>>> np.allclose(np.dot(wx, x**16), 2./17)
True
>>> np.allclose(np.dot(wx, x**17), 0)
True
Notes
-----
Forms the matrix :math:`E`
.. math::
       \mat{E}_{ij} = \prod^{n-1}_{\substack{
           k=0\\
           k\neq j}}
       \frac{x_i - y_k}{y_j - y_k}.
    that interpolates from `y` to `x` using the Lagrange interpolation
    formula. It then forms the QR factorization using row-sorting:
.. math::
\mat{E} = P\cdot\mat{Q}\cdot\mat{R}
where :math:`\mat{R}` is square. The interpolated coefficients are:
.. math::
w_x = P\cdot Q \cdot (R^{T})^{-1}\cdot w_y
    We use :func:`sp.linalg.lu_solve` to solve :math:`R^T x = w_y`, but
    this expects its argument in the factored form `R^T = L*U` where `L`
    has a unit diagonal, so we need to rescale `R` slightly.
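    A sketch of the rescaling on a hypothetical 2-by-2 `R`: since
    :math:`R^T` is lower triangular, :math:`R^T = LU` with :math:`L` the
    columns of :math:`R^T` scaled to unit diagonal and :math:`U` the
    diagonal itself::

        R = np.array([[2., 3.],
                      [0., 5.]])               # hypothetical triangular factor
        d = np.diag(R)
        lu = np.tril(R.T, -1)/d + np.diag(d)   # packed LU, unit-diagonal L
        b = np.array([1., 2.])
        xx = sp.linalg.lu_solve((lu, np.arange(2)), b)
        assert np.allclose(np.dot(R.T, xx), b)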
"""
n = len(x)
m = len(y)
i = np.arange(n).reshape((n,1,1))
j = np.arange(m).reshape((1,m,1))
k = np.reshape(j, (1,1,m))
E = np.prod(np.where(k == j,
1,
np.divide(x[i] - y[k], y[j] - y[k])),
axis=2)
#E_ = np.ones((n,m), dtype=float)
#for i in xrange(n):
# for j in xrange(m):
# for k in xrange(m):
# if k != j:
# E_[i,j] = E_[i,j]*(x[i] - y[k])/(y[j] - y[k])
#print abs(E-E_).max()
p, q, r = pqr(E)
r = r[:m,:]
d = np.diag(r)
l = np.tril(r.T, -1)/d.reshape(1, m) + np.diag(d)
t = sp.linalg.lu_solve((l, np.arange(len(wy))), wy)
A = q[:, :m]
# This is the exact piece
wxp = np.dot(A, t)
if m < n and min(wxp) < 0 and cvxopt:
# There is some freedom, so try to make weights as positive as
# possible.
B = q[:, m:]
        # Now solve the linear programming problem:
        #   maximize sum(wx) where wx = wxp + B*x >= 0.
        # cvxopt minimizes c.T*x subject to G*x + s = h, s >= 0.
        # Since the moment conditions fix sum(wx), the objective is
        # essentially arbitrary and this is a feasibility problem:
        # c.T = ones*B, G = -B, h = wxp.
c = cvxopt.matrix(np.sum(B, axis=0))
G = -cvxopt.matrix(B.T.tolist())
h = cvxopt.matrix(wxp)
cvxopt.solvers.options['show_progress'] = False
res = cvxopt.solvers.lp(c,G,h)
if 'optimal' == res['status']:
dwxp = np.dot(B, res['x']).ravel()
wxp = wxp + dwxp
return wxp[p]
def get_weights(x, roots, cond=1):
r"""Return the list of weights `wx` for quadratures of increasing
order with positive weights for the abscissa `x` give `(y, wy) =
roots(n)` being `n` and weights for some other (Gaussian) quadrature.
.. note::
The formulation is based on the assumption that the weights and
abscissa are quadratures for polynomials (not including the
weight). The weight is implicit.
Parameters
----------
x : array
Abscissa at which quadratures weights `wx` are desired.
roots : function
`(y, wy) = roots(n)` should be the abscissa and weights for a
Gaussian quadrature. I.e., the roots of the `n`'th orthogonal
polynomial :math:`P_n(y_i) = 0`. This should be computed to
machine precision.
cond : float
The abscissa are chosen so that :math:`\sum \abs{w_i}/\abs{\sum
w_i}\leq` `cond`. Must have `1 <= cond`.
Examples
--------
>>> x = np.linspace(-1, 1, 102)[1:-1]
>>> wx = get_weights(x, sp.special.orthogonal.p_roots)
>>> print len(wx)
18
>>> np.allclose(np.dot(wx[-1], x**16), 2./17)
True
>>> np.allclose(np.dot(wx[-1], x**17), 0)
True
See Also
--------
Uses :func:`get_weights_y`.
"""
if cond < 1:
raise ValueError("Must have cond >= 1 (got %g)"%(cond,))
n = 0
ws = [get_weights_y(x, *roots(n+1))]
for n in xrange(1, len(x)):
w = get_weights_y(x, *roots(n+1))
if (1 == cond and min(w) < 0
or sum(abs(w))/abs(sum(w)) > cond):
break
ws.append(w)
return ws
def _test_weights(x, y, wy):
"""
Ax = Vx.T
Ax[:k,:]*wx = m[:k]
Ay = Vy.T
Ay[:k,:]*wy = m[:k]
Vx = E*Vy
Ax = Ay*E.T
E.T*wx = wy
E = P*Q*R
P.T wx = Q*solve(R.T, wt)
A = R.T*Q.T*P.T
A = E.T
A[:k,:]*wx = wy[:k]
"""
n = len(x)
m = len(y)
scale = abs(y).max()
i = np.arange(n).reshape((n, 1, 1))
j = np.arange(m).reshape((1, m, 1))
k = j.reshape((1, 1, m))
E = np.prod(np.where(k == j,
1,
np.divide(x[i] - y[k], y[j] - y[k])),
axis=2)
i = i.reshape((n, 1))
j = j.reshape((1, m))
Vx = (x[i]/scale)**j
i = j.reshape((m, 1))
Vy = (y[i]/scale)**j
assert np.allclose(np.dot(E, Vy), Vx)
n = np.arange(2*m, dtype=int).reshape((2*m,1))
my = np.sum((y/scale)**n*wy, axis=1) # Moments
assert np.allclose(np.dot(Vy.T, wy), my[:len(y)])
p, q, r = pqr(E)
d = np.diag(r)
l = np.tril(r.T, -1)/d.reshape(1, m) + np.diag(d)
x = sp.linalg.lu_solve((l, np.arange(len(wy))), wy)
for n in xrange(1, len(wy)+1):
A = q[:,:n]
B = q[:,n:]
wx = np.dot(q[:,:n], x[:n])[p]
assert np.allclose(np.dot(Vx.T, wx), my[:len(y)])
return locals()
def hermite(x, n, f=None):
"""Compute the `n`'th Hermite polynomial using the stable
recurrence relationship.
.. math::
H_{-1}(x) = 0\\
H_{0}(x) = 1\\
H_{n+1}(x) = 2 x H_{n}(x) - 2 n H_{n-1}(x)
    In order to deal with overflow, one can pass an optional `f(x, n)`,
    and instead the function :math:`Q_n(x) = H_n(x)\exp(-f(x,n))` will
    be computed:
.. math::
       Q_{-1}(x) = 0\\
       Q_{0}(x) = \exp(-f(x,0))\\
       Q_{n+1}(x) = 2 x e^{f(x,n)-f(x,n+1)} Q_{n}(x)
       - 2 n e^{f(x,n-1)- f(x,n+1)}Q_{n-1}(x)
"""
# h[j] = H_{j-1}(x)
if f is None:
def f(x, n):
return 0
h = [0*x, 0*x + np.exp(-f(x,0))]
for j in xrange(0, n):
f1 = f(x, j+1)
a = np.exp(f(x, j) - f1)
if j == 0:
b = 0
else:
b = np.exp(f(x, j-1) - f1)
h.append(2*x*a*h[j+1] - 2*j*b*h[j])
return h[-1]
def hermite_ln2(x, n):
"""Return :math:`\ln(H_n^2(x))`, the log of the square of the
`n`'th Hermite polynomial using the stable recurrence
relationship.
.. math::
H_{-1}(x) = 0\\
H_{0}(x) = 1\\
H_{n+1}(x) = 2 x H_{n}(x) - 2 n H_{n-1}(x)\\
P_{n} = H_n(x)/(2x)^n\\
P_{n+1} = P_{n}(x) - n P_{n-1}(x)/(2x^2)
"""
    def f0(x, m):
        return (x**2/2 + 0.5*np.log((2*n+1)/(2*n + 1 - x**2))
                + sp.special.gammaln(m+1)
                - sp.special.gammaln(m/2+1))
def f1(x, n):
return n*np.log((2*x)**2)/2
q0 = hermite(x, n, f=f0)
if n <= 0:
return np.log(abs(q0)) + f0(x,n)
else:
q1 = hermite(x, n, f=f1)
return 2*np.where(x**2 < min(2*n, np.log(np.finfo(float).max)/2),
np.log(abs(q0)) + f0(x,n),
np.log(abs(q1)) + f1(x,n))
def h_roots(n):
r"""Return the abscissa and weights for the Gaussian quadrature of
order `n` in the Hermite polynomials.
Notes
-----
    We use the scipy routines for the roots, but our own custom
    formula for the weights because the scipy routine loses accuracy
    in the weights for large `n`.
"""
y = sp.special.orthogonal.h_roots(n)[0]
#2**(n-1)*factorial(n)*math.sqrt(np.pi)/n**2/(H(y,n-1)[-1])**2
wy = np.exp((n-1)*np.log(2)
+ sp.special.gammaln(n+1)
- hermite_ln2(y,n-1))*math.sqrt(np.pi)/n**2
return y, wy
def _go_hermite(n):
m = 20
n = 25
x = np.random.rand(n)*5-2.5
y = sp.special.orthogonal.h_roots(m)[0]
Vx = np.zeros((n,m), dtype=float)
Vy = np.zeros((m,m), dtype=float)
Q = np.zeros((m,m), dtype=float)
dh = -sp.special.orthogonal.hermite(m+1)(y)
l = math.sqrt(np.pi)*2**(m+1)*sp.misc.factorial(m)/dh**2
for i in xrange(m):
Vy[:,i] = sp.special.orthogonal.hermite(i)(y)
Vx[:,i] = sp.special.orthogonal.hermite(i)(x)
Q[:,i] = sp.special.orthogonal.hermite(i)(y)*np.sqrt(l[i])
# roots
x_ = []
l_ = []
for m in xrange(n):
x_.append(sp.special.orthogonal.h_roots(m)[0])
math.sqrt(np.pi)*2**(m+1)*sp.misc.factorial(m)
_tmp = [0]
def _get_weights(x, moment, pw, n=None, force=False, _tmp=_tmp):
r"""Return a two-dimensional `m` by `len(x)` array `w` the
non-negative weights for quadratures of order 0 up to maximal
degree `m<=n`.
Parameters
----------
x : array
Flat array of abscissa
moment : function
Must return the `n`th moment `m_n = moment(n)`.
pw : function
        Must evaluate the `n`th orthogonal polynomial at the abscissa
        times the weight function: `p_n_x = pw(n, x) = P_n(x)*w(x)`.
.. todo:: Test this!
"""
def w():
"Return the weights"
PW = np.asmatrix(np.vstack(pw_))
M = np.asmatrix(np.vstack(moments))
l = np.linalg.solve(PW*PW.T, M)
w = PW.T*l
_tmp[0] = (PW, M, w)
return np.asarray(w).ravel()
if n is None:
n = len(x)
m = 0
moments = [moment(m)]
pw_ = [pw(m, x)]
w_ = [w()]
while m < min(n, len(x)) and (force or all(0 <= w_[-1])):
m = m + 1
moments.append(moment(m))
pw_.append(pw(m, x))
try:
w_.append(w())
except np.linalg.LinAlgError:
break
if any(w_[-1] < 0):
w_ = w_[:-1]
else:
        while len(w_) < n:
            m = len(w_)
            abs_err = abs(np.dot(w_[-1], pw(m, x)) - moment(m))
            rel_err = abs_err/(abs(moment(m)) + _tiny)
            err = min(abs_err, rel_err)
            if err < 100*_eps:
                w_.append(w_[-1])
            else:
                break
return np.array(w_)
def get_gauss_1(N):
"""Return a list of the Gaussian quadratures for up to `N` points
on the interval [-1,1] with unit weight.
    This uses Newton's method and the fact that the successive
    quadrature points interlace. This suffices for quadratures of up
    to 15 points, beyond which the Jacobian becomes too singular.
"""
def J(x,w):
"""Exact Jacobian."""
n = len(x)
j = np.arange(2*n, dtype=float).reshape((2*n,1))
m = np.arange(n, dtype=float).reshape((1,n))
w = np.asarray(w).reshape((1,n))
x = np.asarray(x).reshape((1,n))
J = np.hstack([j*x**(j-1)*w, x**j])
J[0,:n] = 0
return J
def X(x,n):
"""Return the Vandermond matrix of order `n` for abscissa
`x`."""
x = np.asarray(x).reshape((1,len(x)))
j = np.arange(n, dtype=float).reshape((n,1))
return x**j
def mom(n):
"""Return the first `n` moments."""
j = np.arange(n, dtype=float) + 1
return (1.0 - (-1)**j)/j
def err(x, w):
"""Return the array of quadrature errors over the `2*len(x)`
moments.
"""
n = len(x)
err = np.dot(X(x, 2*n), w).ravel() - mom(2*n)
return err
def weights(x,n=None):
"""Return the weights for abscissa `x` giving exact
quadratures to degree `n`."""
if n is None:
n = len(x)
X_ = X(x, n)
return np.dot(X_.T,
np.linalg.solve(np.dot(X_, X_.T),
mom(n))).ravel()
quad = [(np.array([0]), np.array([2]))]
for n in xrange(2,N+1):
x, w = quad[n-2]
# Extend abscissa and weights
x = np.hstack([[-1], x, [1]])
x = (x[:-1] + x[1:])/2
w = weights(x, 1)
# Newton's method.
while 1e-12 < np.max(abs(err(x, w))):
print n, np.max(abs(err(x, w))), x, w
dxw = -np.linalg.solve(J(x, w), err(x, w))
x = x + dxw[:n]
w = w + dxw[n:]
quad.append((x,w))
return quad