# A fixed order double integral

I have a good example of why copying code straight from Stack Overflow can be a terrible idea. Consider a simple rectangle $[0, 1] \times [0, 2]$ over which we want to compute a double integral. Of course, there’s the classic approach using scipy’s $\texttt{dblquad}$:

In [1]:
from scipy.integrate import dblquad

def f(x, y):
    return x + 2 * y

dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]

Out[1]:
5.0

The above is all well and nice, but $\texttt{dblquad}$ is very accurate, and hence slow. What if we don’t need that level of accuracy? We can try the $\texttt{fixed_quad}$ function in scipy. There’s a Stack Overflow question about this exact problem, with some sample code.
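For reference (my own minimal example, not from the question), $\texttt{fixed_quad}$ performs fixed-order Gauss–Legendre quadrature in one dimension, and it expects a vectorized integrand:

```python
import numpy as np
from scipy.integrate import fixed_quad

# fixed_quad evaluates the integrand once on an array of n Legendre
# nodes, so the integrand must accept NumPy arrays.
value, _ = fixed_quad(np.sin, 0, np.pi, n=5)
print(value)  # close to the exact integral, 2
```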

Let’s copy its code and test it. It seems to work:

In [2]:
from scipy.integrate import fixed_quad

fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]

Out[2]:
5.000000000000001

And of course, it gives a substantial speedup, at least on my laptop:

In [3]:
%timeit fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]

10000 loops, best of 3: 39.3 µs per loop

In [4]:
%timeit dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]

1000 loops, best of 3: 261 µs per loop


Seems to work well, right? Too bad it fails for very simple cases…

In [5]:
def g(x, y):
    return x * y

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])

0.9999999999999999
1.333333333333333


Ah, what’s going wrong? It turns out it has to do with the way $\texttt{fixed_quad}$ handles arrays. The nested call doesn’t evaluate the integrand on the tensor product of the quadrature points, but only along a subset of it: the outer $\texttt{fixed_quad}$ passes all five of its nodes in as one array, so the inner call evaluates $g$ elementwise on pairs of nodes, i.e. along the diagonal of the grid. For an affine integrand like $x + 2y$ the diagonal sums happen to give the right answer, but for $xy$ they don’t.
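We can check this directly with a small probe (my own sketch; the shapes assume scipy’s usual vectorized evaluation in $\texttt{fixed_quad}$). The inner integrand is called once with two length-5 arrays that get multiplied elementwise, so only 5 of the 25 tensor-product points are ever visited:

```python
import numpy as np
from scipy.integrate import fixed_quad

calls = []

def probe(x, y):
    # Record the shapes fixed_quad actually passes in.
    calls.append((np.shape(x), np.shape(y)))
    return x * y

fixed_quad(lambda x: fixed_quad(probe, 0, 1, args=(x,), n=5)[0], 0, 2, n=5)
print(calls)  # [((5,), (5,))]: one call on the diagonal, not a 5 x 5 grid
```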

A better implementation is just to rewrite the code myself (or to use something like quadpy):

In [6]:
from scipy.special import roots_legendre
import numpy as np

roots, weights = roots_legendre(5)

def dbl_integral(func, left: float, right: float, bottom: float, top: float):
    r"""
    Calculates the double integral \int_c^d \int_a^b f(x, y) dx dy.
    """
    # Map the Legendre nodes from [-1, 1] onto each interval.
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom

    total = 0
    for index_i, i in enumerate(x):
        # The ones_like(y) ensures we get a vector even when func
        # returns a scalar (needed for some of my usages).
        total += np.dot(np.ones_like(y) * func(i, y) * weights[index_i], weights)

    return (top - bottom) * (right - left) / 4 * total

In [7]:
print(dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(f, 0, 1, 0, 2))

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(g, 0, 1, 0, 2))

5.0
5.000000000000001
5.0
0.9999999999999999
1.333333333333333
1.0


And we also maintain the speed, assuming we precalculate the roots and weights (not quite as fast as the nested $\texttt{fixed_quad}$, but it’s not hard to rewrite the loop above with $\texttt{einsum}$ or optimize it a bit more):

In [8]:
%timeit dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
%timeit fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
%timeit dbl_integral(g, 0, 1, 0, 2)

1000 loops, best of 3: 246 µs per loop
10000 loops, best of 3: 38.1 µs per loop
10000 loops, best of 3: 74.1 µs per loop
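As an aside, the $\texttt{einsum}$ rewrite mentioned above could look something like this (a sketch; the function name is mine). It evaluates the integrand on the full $5 \times 5$ tensor-product grid via broadcasting and contracts both axes with the weights in a single call:

```python
import numpy as np
from scipy.special import roots_legendre

roots, weights = roots_legendre(5)

def dbl_integral_einsum(func, left, right, bottom, top):
    # Map the Legendre nodes from [-1, 1] onto each interval.
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom
    # Broadcast to evaluate func on the full tensor-product grid,
    # then contract both axes with the weights at once.
    values = func(x[:, None], y[None, :])
    total = np.einsum('i,j,ij->', weights, weights, values)
    return (top - bottom) * (right - left) / 4 * total

print(dbl_integral_einsum(lambda x, y: x * y, 0, 1, 0, 2))  # 1.0 (up to rounding)
```

This assumes the integrand broadcasts over arrays, which both $f$ and $g$ above do.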

