Saddle Point Property

I felt dumb while trying to derive this, so here it is. Sourced from page 129 of Braess's FEM textbook.

Let $X, M$ be two Hilbert spaces, and $a: X \times X \to \mathbb{R}, \, b: X \times M \to \mathbb{R}$ continuous bilinear forms.
Assume that $a$ is symmetric and that $a(u, u) \ge 0$ for all $u \in X$.
Let $f \in X', g \in M'$ and let
$$\mathcal{L}(v, \mu) = \frac{1}{2}a(v, v) - \langle f, v \rangle + (b(v, \mu) - \langle g, \mu\rangle),$$
which is simply the Lagrangian of a constrained minimization problem.
Assume that $(u, \lambda)$ satisfies
$$\begin{aligned}
a(u, v) + b(v, \lambda) &= \langle f, v \rangle \qquad &&\forall v \in X, \qquad (1)\\
b(u, \mu) &= \langle g, \mu \rangle \qquad &&\forall \mu \in M, \qquad (2)
\end{aligned}$$
then one has the saddle point property
$$\mathcal{L}(u, \mu) \le \mathcal{L}(u, \lambda) \le \mathcal{L}(v, \lambda) \qquad \forall (v, \mu) \in X \times M.$$

The first inequality is actually an equality: by (2), $b(u, \mu) - \langle g, \mu \rangle = 0$ for every $\mu \in M$, so $\mathcal{L}(u, \mu) = \frac{1}{2}a(u, u) - \langle f, u \rangle = \mathcal{L}(u, \lambda)$.
For the other inequality, let $v = u + w$, then
$$\begin{aligned}
\mathcal{L}(u + w, \lambda) &= \frac{1}{2}a(u + w, u + w) - \langle f, u + w \rangle + (b(u + w, \lambda) - \langle g, \lambda\rangle) \\
&= \mathcal{L}(u, \lambda) + \frac{1}{2}a(w, w) - \langle f, w \rangle + a(u, w) + b(w, \lambda) \\
&= \mathcal{L}(u, \lambda) + \frac{1}{2}a(w, w) \ge \mathcal{L}(u, \lambda),
\end{aligned}$$
where the second line uses the symmetry of $a$, and the third uses (1) with $v = w$ together with $a(w, w) \ge 0$.
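In finite dimensions this is easy to sanity-check numerically: stand in for $a$ and $b$ with matrices $A$ (symmetric positive semidefinite) and $B$, solve (1) and (2) as a KKT system, and test both inequalities at random points. A minimal sketch, where all the data is randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2

C = rng.standard_normal((n, n))
A = C @ C.T                       # a(u, v) = v^T A u: symmetric, positive semidefinite
B = rng.standard_normal((m, n))   # b(v, mu) = mu^T B v
f = rng.standard_normal(n)
g = rng.standard_normal(m)

def L(v, mu):
    # Lagrangian: 1/2 a(v, v) - <f, v> + (b(v, mu) - <g, mu>)
    return 0.5 * v @ A @ v - f @ v + (B @ v - g) @ mu

# equations (1) and (2) in matrix form give the KKT system
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, lam = sol[:n], sol[n:]

# saddle point property at random (v, mu)
for _ in range(100):
    v, mu = rng.standard_normal(n), rng.standard_normal(m)
    assert L(u, mu) <= L(u, lam) + 1e-8 <= L(v, lam) + 1e-8
```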

A fixed order double integral

Here's a good example of why copying code straight from Stack Overflow can be a terrible idea. Consider the simple rectangle $[0, 1] \times [0, 2]$ over which we want to compute a double integral. Of course, there's the classic way using scipy's $\texttt{dblquad}$

In [1]:
from scipy.integrate import dblquad

def f(x, y): 
    return x + 2 * y

dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]

This is all well and nice, but $\texttt{dblquad}$ is adaptive and very accurate, and hence slow. What if we don't need that level of accuracy? We can try the $\texttt{fixed_quad}$ function in scipy. There's a Stack Overflow question about exactly this, with some sample code.

Let’s copy its code and test it. It seems to work:

In [2]:
from scipy.integrate import fixed_quad

fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]

Of course, we get a very nice speedup, at least on my laptop

In [3]:
%timeit fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
10000 loops, best of 3: 39.3 µs per loop
In [4]:
%timeit dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
1000 loops, best of 3: 261 µs per loop

Seems to work well, right? Too bad it fails for very simple cases…

In [5]:
def g(x, y): 
    return x * y

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])

Ah, what's going wrong? It turns out to be the way $\texttt{fixed_quad}$ handles arrays. The outer quadrature hands its whole array of nodes to the inner call at once, so the integrand gets evaluated elementwise on paired (inner node, outer node) points, a diagonal slice of the grid, rather than on the full tensor product of the quadrature points. The additive $f$ above only gave the right answer because the Gauss nodes and weights are symmetric about the midpoint of the interval.
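We can see this directly by instrumenting the integrand (the $\texttt{spy}$ wrapper below is just a made-up diagnostic):

```python
import numpy as np
from scipy.integrate import fixed_quad

seen = []
def spy(y, x):
    # record the shapes the integrand actually receives
    seen.append((np.shape(y), np.shape(x)))
    return np.asarray(x) * np.asarray(y)

fixed_quad(lambda x: fixed_quad(spy, 0, 1, args=(x,), n=5)[0], 0, 2, n=5)
print(seen)  # a single call with two length-5 arrays paired elementwise: 5 points, not a 5 x 5 grid
```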

A better implementation is just to rewrite the code myself (or to use something like quadpy):

In [6]:
from scipy.special import roots_legendre
import numpy as np
roots, weights = roots_legendre(5)

def dbl_integral(func, left: float, right: float, bottom: float, top: float):
    r"""Calculates the double integral \int_bottom^top \int_left^right f(x, y) dx dy."""
    # map the Gauss-Legendre nodes from [-1, 1] onto each interval
    x = (right - left) * (roots + 1) / 2.0 + left

    y = (top - bottom) * (roots + 1) / 2.0 + bottom

    total = 0
    for index_i, i in enumerate(x):
        # ones_like(y) ensures func(i, y) is a vector even when the integrand
        # collapses to a scalar, for some of my usages...
        total += np.dot(np.ones_like(y) * func(i, y) * weights[index_i], weights)

    return (top - bottom) * (right - left) / 4 * total
In [7]:
print(dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(f, 0, 1, 0, 2))

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(g, 0, 1, 0, 2))

And we also maintain the speed (though not quite as fast, but it's not hard to rewrite the above with einsum or optimize it a bit more), assuming we precalculate the roots and weights:

In [8]:
%timeit dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
%timeit fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
%timeit dbl_integral(g, 0, 1, 0, 2)
1000 loops, best of 3: 246 µs per loop
10000 loops, best of 3: 38.1 µs per loop
10000 loops, best of 3: 74.1 µs per loop
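For what it's worth, the einsum rewrite alluded to above could look like this; a sketch assuming the integrand broadcasts over numpy arrays, which both $f$ and $g$ do:

```python
import numpy as np
from scipy.special import roots_legendre

roots, weights = roots_legendre(5)

def dbl_integral_einsum(func, left, right, bottom, top):
    """Tensor-product Gauss-Legendre quadrature, contracted in one einsum call."""
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom
    # evaluate on the full 5 x 5 tensor-product grid via broadcasting
    vals = func(x[:, None], y[None, :])
    total = np.einsum('i,j,ij->', weights, weights, vals)
    return (top - bottom) * (right - left) / 4 * total
```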

The “Ahhh” Shower

Rodrigo never intended to have an especially busy day on the 12th. Of course, he had to make a trip to the farmer’s market to get those specialty onions he loved (they were sweeter than the ones in the grocery store), but otherwise the Saturday was completely free. It was a much needed reprieve from the onslaught of work recently with another twelve hour shift on the 13th.

The day started out as planned: waking up with the sun’s rays gently urging his eye lids to make way, with no particular rush. The drive to the market was a smooth twenty minutes. Somehow, traffic was especially clear, and so was the line for the produce. Within an hour, Rodrigo was back home caramelizing the onions for soup and toasting bread. The rest of the day was a blur of mindless entertainment from the multitude of streaming sources while sipping on the French onion soup.

Trish’s Saturday was quite the opposite. Since winter still maintained its grasp on the mornings, she wanted to hike along the foothills in the weak heat of the afternoon. The morning therefore was filled with scrubbing and cleaning. She always loved the sense of accomplishment from finishing chores.

As the hike was finishing up, she got a text from Frank, that cute guy she'd been texting.

“hey, lost our fourth from trivia in 30 @ gecko’s, you in? sorry for short notice!”

The deliberation was short. Whilst her legs were jelly, she didn’t smell too bad since it was still chilly and Frank was, by all accounts, a catch.

“ofc, see you soon :)”

Trish and Frank's team never came close to winning. No one on the team knew any Madonna songs or had read Julius Caesar. She got home more than ten hours after she had left for the trailhead.

Though their days progressed thoroughly differently, Rodrigo and Trish both ended with a hot shower, one where “ahhhh”s were spoken, the true mark of a good day.

Nonlinear Piola Transform


This is an extension of the Linear Piola Transform post, covering the cases where the transformation from the reference element to the physical element is nonlinear, such as for a parametric element. The main tool here comes from the exterior calculus framework, following the FEEC book by Arnold.

Let $\Omega, \hat \Omega$ be the physical element and reference element, with an orientation-preserving, differentiable map $\phi: \hat \Omega \to \Omega$. Suppose $p: \Omega \to \mathbb{R}$ and $v: \Omega \to \mathbb{R}^3$ are functions on the physical element. The quantity $(p, \nabla \cdot v) = \int_\Omega p \nabla \cdot v \, dx$ is the one of importance.

The key observation here is that, letting $\nu = v^1 dx^2 \wedge dx^3 - v^2 dx^1 \wedge dx^3 + v^3 dx^1 \wedge dx^2$, we have $d\nu = (\nabla \cdot v) \, dx^1 \wedge dx^2 \wedge dx^3$, where $d$ is the exterior derivative. A key fact is that the integral of a differential form is preserved under pullback, hence
$$(p, \nabla \cdot v) = \int_\Omega p \nabla \cdot v \, dx = \int_\Omega p \wedge d\nu = \int_{\hat \Omega} \phi^*(p \wedge d\nu),$$
where $\phi^*$ is the pullback of differential forms along $\phi$.

Now, it's just a matter of algebra:
$$\int_{\hat \Omega} \phi^*(p \wedge d\nu) = \int_{\hat \Omega} \phi^*p \wedge \phi^* d\nu = \int_{\hat \Omega} \phi^*p \wedge d(\phi^*\nu),$$
using that the pullback distributes over wedge products and commutes with $d$. It remains to work out what the pullback does to the 0-form $p$ and the 2-form $\nu$. We can look this up or grind through the algebra (which I'll do sooner or later, since I couldn't find a good source online): $\phi^* p = p \circ \phi$, and $\phi^* \nu$ is the 2-form associated to the vector field
$$(\det \phi')(\phi')^{-1} (v \circ \phi): \hat \Omega \to \mathbb{R}^3,$$
which is exactly the Piola transform.
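A quick consistency check: since the pullback commutes with $d$, the transformed field $\hat v = (\det \phi')(\phi')^{-1}(v \circ \phi)$ must satisfy $\nabla \cdot \hat v = (\det \phi') \, (\nabla \cdot v) \circ \phi$. This can be verified symbolically with sympy; the quadratic map and the polynomial vector field below are arbitrary test choices:

```python
import sympy as sp

xh, yh, zh = sp.symbols('xhat yhat zhat')
x, y, z = sp.symbols('x y z')

# an arbitrary nonlinear (quadratic) reference-to-physical map phi
phi = sp.Matrix([xh + xh * yh / 4, yh + zh**2 / 4, zh + xh * zh / 4])
J = phi.jacobian([xh, yh, zh])   # phi'
detJ = J.det()

# an arbitrary polynomial vector field v on the physical element
v = sp.Matrix([x * y, y * z, x + z**2])
div_v = sum(sp.diff(v[i], s) for i, s in enumerate((x, y, z)))

sub = {x: phi[0], y: phi[1], z: phi[2]}
# Piola transform: vhat = (det phi') (phi')^{-1} (v o phi); note adjugate(J) = det(J) J^{-1}
vhat = J.adjugate() * v.subs(sub)
div_vhat = sum(sp.diff(vhat[i], s) for i, s in enumerate((xh, yh, zh)))

# check: div vhat = (det phi') * (div v) o phi
assert sp.expand(div_vhat - detJ * div_v.subs(sub)) == 0
```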

Ne plus ultra

1. the highest point capable of being attained

2. the most profound degree of a quality or state


The last three weeks have been terrible in every sense of the word with the outbreak of war in Europe. Even though I'm more than eight time zones away from the action, my anxiety level is through the roof, and has caused my heart rate to spike. Part of the reason is that I have absolutely no control over the situation. Nothing I do can affect the news.

I think one indication of maturity is how one handles helplessness. As my world has opened up with age, there are more and more things I can't change. The way I fundamentally look, the way mustard tastes, the way someone else feels about you. And now, the way global geopolitics is shaping up for the foreseeable future.

My approach right now is best encapsulated by Vonnegut's "so it goes." It's okay to be frustrated at what's going on. In fact, one should absolutely be aghast at the climate crisis, the war in Ukraine, the pandemic… but alas, what can I do besides my little part?

It's odd how I think about this right now. Perhaps it's a strategy to deal with overwhelming anxiety, but I always believed that I have an internal locus of control. I firmly believe that my future is under my control, but having moved to ABQ, home of a nuclear lab, I fully realize that if nuclear war were to break out, I would be incinerated instantly. Furthermore, there are also issues of health in my family that I can't control. It just sucks that as we age, chaos increasingly dominates our lives and helplessness goes up.

So it goes.