I have a good example of why copying code straight from Stack Overflow can be a terrible idea. Consider a simple rectangle $[0, 1] \times [0, 2]$ over which we want to compute a double integral. Of course, there’s the classic way using scipy’s $\texttt{dblquad}$:

```
from scipy.integrate import dblquad

def f(x, y):
    return x + 2 * y

dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
```

The above is all well and good, but $\texttt{dblquad}$ aims for high accuracy, and is hence slow. What if we don’t need that level of accuracy? We can try the $\texttt{fixed_quad}$ function in scipy. Here’s a Stack Overflow question about exactly this, with some sample code.

Let’s copy its code and test it. It seems to work:

```
from scipy.integrate import fixed_quad
fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
```

Of course, we get a substantial speedup, at least on my laptop:

```
%timeit fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
```

```
%timeit dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
```

Seems to work well, right? Too bad it fails for very simple cases…

```
def g(x, y):
    return x * y

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
```

Ah, what’s going wrong? It turns out it has to do with the way $\texttt{fixed_quad}$ handles arrays. The outer call hands the inner integrand an entire array of quadrature nodes at once, so the function ends up evaluated elementwise on paired nodes rather than on the full tensor product of the quadrature points.
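We can see this directly by logging the shapes the integrand is actually called with (a quick diagnostic; `spy` and `seen` are throwaway names of mine, not part of scipy):

```
import numpy as np
from scipy.integrate import fixed_quad

seen = []

def spy(x, y):
    # record the broadcast shape of each (x, y) evaluation
    seen.append(np.broadcast(x, y).shape)
    return x * y

fixed_quad(lambda x: fixed_quad(spy, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)

# The integrand is called only ONCE, on 5 elementwise-paired points:
# the 5 inner y-nodes zipped with the 5 outer x-nodes, not the 5x5 grid.
print(seen)
```

So with `n=5` the nested construction samples the function at just 5 diagonal pairings instead of the 25 tensor-product points, which is why additive integrands like `f` happen to come out right while products like `g` do not.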

A better approach is just to write the tensor-product rule myself (or to use something like quadpy):

```
from scipy.special import roots_legendre
import numpy as np

roots, weights = roots_legendre(5)

def dbl_integral(func, left: float, right: float, bottom: float, top: float):
    r"""
    Calculates the double integral \int_c^d \int_a^b f(x, y) dx dy.
    """
    # map the Legendre nodes from [-1, 1] onto each interval
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom
    total = 0
    for index_i, i in enumerate(x):
        # The ones_like(y) ensures func(i, y) broadcasts to a vector,
        # since func returns a scalar for some of my usages
        total += np.dot(np.ones_like(y) * func(i, y) * weights[index_i], weights)
    return (top - bottom) * (right - left) / 4 * total
```

```
print(dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(f, 0, 1, 0, 2))
print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(g, 0, 1, 0, 2))
```

And we also maintain the speed (not quite as fast, though it isn’t hard to rewrite the above with einsum or optimize it a bit further), assuming we precalculate the roots and weights:

```
%timeit dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
%timeit fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
%timeit dbl_integral(g, 0, 1, 0, 2)
```
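For reference, here is what that einsum rewrite might look like: the loop over the outer nodes collapses into a single contraction over the full tensor-product grid. This is a sketch under the same precomputed roots and weights; `dbl_integral_einsum` is my name for it, not anything standard:

```
import numpy as np
from scipy.special import roots_legendre

roots, weights = roots_legendre(5)

def dbl_integral_einsum(func, left, right, bottom, top):
    """Gauss-Legendre tensor-product rule, vectorized via einsum."""
    # map the Legendre nodes from [-1, 1] onto each interval
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom
    # evaluate func on the full 5x5 tensor-product grid by broadcasting
    vals = func(x[:, None], y[None, :])
    # contract both weight vectors against the grid of values
    total = np.einsum("i,j,ij->", weights, weights, vals)
    return (top - bottom) * (right - left) / 4 * total
```

This assumes `func` broadcasts over arrays (both `f` and `g` above do); the result matches `dbl_integral` on these examples.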