The Last Thing He Told Me

I honestly spent some three months trying to read Donna Tartt’s The Little Friend. It was well written, and I generally like her writing style (I did finish The Goldfinch and The Secret History), but somehow this particular novel didn’t work for me. Out of frustration, I just borrowed the most popular available book on Libby, which happened to be the titular book by Laura Dave.

Oh boy, was the difference in writing stark. The prose was distinctly less sophisticated and the vocabulary more elementary. It made for much faster reading; I finished the novel in less than a week. The constant over-explanation probably helped, too.

Was it worth it? Eh.

I honestly thought the plot was unrealistic and built on hackneyed tropes: a teenager being difficult, the protagonist doing things for “love”, a clear misunderstanding of what technology can do. Still, I finished it.

Why is that? I don’t know.

An Improvisational Note

For the last few weeks, I’ve been taking a beginner improv class as a way to meet people and broaden my horizons. Overall, the experience has been highly positive, and I’ve learned a great deal about not just improv, but comedy as a whole.

The phrase I keep reaching for when telling friends what the classes are like is “adult daycare.” Certainly a lowbrow phrase, but how else can one interpret a class where we try to channel different levels of an animal? Or one where we try to perform a Busby Berkeley dance without any a priori discussion? Another great example of an improv game that I (and I suspect many kids) would enjoy is the equivalent of everyone telling a story together, one word per person at a time.

At some point, silliness from all becomes just straight-up fun. The key phrase there is “from all”: we all paid for the lessons, and thus all had a stake in being whimsical. I’m glad none of my classmates hesitated to do any of the exercises; in fact, most put themselves out there. This was one of the lessons from improv: action with confidence supersedes hesitancy, especially if no one knows what you’re doing.

Joking aside, it’s actually been very informative about what exactly makes good improv, well, good. Some of it can be attributed to the spontaneity of the performers, but there are general frameworks that help add structure. For one, we learned that it’s much easier to grasp a scene if we self-impose a goal and also add some sort of familiar relationship between the characters. People can latch onto the characters much more easily that way, and there are established frameworks to manipulate.

Something I signed up for willy-nilly turned out to be quite the happy accident. I really did enjoy the class and the challenges it gave me.

Saddle Point Property

I felt dumb while trying to derive this, so here it is. Sourced from page 129 of Braess’s FEM textbook.

Let $X, M$ be two Hilbert spaces, and $a: X \times X \to \mathbb{R}, \, b: X \times M \to \mathbb{R}$ continuous bilinear forms.
Assume that $a$ is symmetric and that $a(u, u) \ge 0$ for all $u \in X$.
Let $f \in X', g \in M'$ and let
\begin{align*}
\mathcal{L}(v, \mu) &= \frac{1}{2}a(v, v) - \langle f, v \rangle + (b(v, \mu) - \langle g, \mu\rangle)
\end{align*}
which is simply the Lagrangian of the constrained minimization problem: minimize $\frac{1}{2}a(v, v) - \langle f, v \rangle$ over $v \in X$ subject to $b(v, \mu) = \langle g, \mu \rangle$ for all $\mu \in M$.
Assume that $(u, \lambda)$ satisfies
\begin{align}
a(u, v) + b(v, \lambda) &= \langle f, v \rangle \qquad &\forall v \in X, \\
b(u, \mu) &= \langle g, \mu \rangle \qquad &\forall \mu \in M,
\end{align}
then one has the saddle point property
\begin{align*}
\mathcal{L}(u, \mu) \le \mathcal{L}(u, \lambda) \le \mathcal{L}(v, \lambda) \qquad \forall (v, \mu) \in X \times M.
\end{align*}

The first inequality is actually an equality: by (2), the term in parentheses vanishes for every $\mu \in M$, so $\mathcal{L}(u, \mu) = \frac{1}{2}a(u, u) - \langle f, u \rangle = \mathcal{L}(u, \lambda)$.
For the other inequality, let $v = u + w$, then
\begin{align*}
\mathcal{L}(v, \lambda) = \mathcal{L}(u + w, \lambda) &= \frac{1}{2}a(u + w, u + w) - \langle f, u + w \rangle + (b(u + w, \lambda) - \langle g, \lambda\rangle) \\
&= \mathcal{L}(u, \lambda) + \frac{1}{2}a(w, w) - \langle f, w \rangle + a(u, w) + b(w, \lambda) \\
&= \mathcal{L}(u, \lambda) + \frac{1}{2}a(w, w) \ge \mathcal{L}(u, \lambda)
\end{align*}
where the symmetry of $a$ is used in the expansion, (1) with $v = w$ cancels the terms $-\langle f, w \rangle + a(u, w) + b(w, \lambda)$, and $a(w, w) \ge 0$ gives the final inequality.
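
As a sanity check, here’s a quick finite-dimensional verification in NumPy: $a$ and $b$ become matrices, (1) and (2) become a KKT system, and we test the saddle point property at random points. The matrices and vectors are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3

C = rng.standard_normal((n, n))
A = C.T @ C                          # a(u, v) = u^T A v: symmetric positive semidefinite
B = rng.standard_normal((m, n))      # b(v, mu) = mu^T B v
f = rng.standard_normal(n)
g = rng.standard_normal(m)

def L(v, mu):
    # The Lagrangian L(v, mu) from above, in matrix form.
    return 0.5 * v @ A @ v - f @ v + (B @ v - g) @ mu

# Equations (1) and (2) form a symmetric KKT system; solve for (u, lambda).
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
u, lam = np.split(np.linalg.solve(K, np.concatenate([f, g])), [n])

v, mu = rng.standard_normal(n), rng.standard_normal(m)
print(np.isclose(L(u, mu), L(u, lam)))  # True: the first inequality is an equality
print(L(u, lam) <= L(v, lam))           # True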

A fixed-order double integral

I have a good example of why copying code straight from Stack Overflow can be a terrible idea. Consider a simple rectangle $[0, 1] \times [0, 2]$ over which we want to compute a double integral. Of course, there’s the classic way using scipy’s $\texttt{dblquad}$:

In [1]:
from scipy.integrate import dblquad

def f(x, y): 
    return x + 2 * y

dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
Out[1]:
5.0

The above is all well and nice, but $\texttt{dblquad}$ is very accurate, and hence slow. What if we don’t need that level of accuracy? We can try the $\texttt{fixed_quad}$ function in scipy. Here’s a Stack Overflow question about this exact problem, with some sample code.

Let’s copy its code and test it. It seems to work:

In [2]:
from scipy.integrate import fixed_quad

fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
Out[2]:
5.000000000000001

Of course, we get a very nice speedup, at least on my laptop:

In [3]:
%timeit fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
10000 loops, best of 3: 39.3 µs per loop
In [4]:
%timeit dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
1000 loops, best of 3: 261 µs per loop

Seems to work well, right? Too bad it fails for very simple cases…

In [5]:
def g(x, y): 
    return x * y

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
0.9999999999999999
1.333333333333333

Ah, what’s going wrong? It turns out to be the way $\texttt{fixed_quad}$ handles arrays. The nested call doesn’t evaluate the integrand on the tensor product of the quadrature points; the inner and outer points are combined elementwise, so the integrand is only sampled along a “diagonal” of the product grid. That happens to give the right answer for an additively separable integrand like $f$, but not for a product like $g$.
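
To see this concretely, here’s a small probe of my own (not from the Stack Overflow answer) that prints the shapes the integrand actually receives:

import numpy as np

def spy(x, y):
    # Both arguments arrive as length-5 arrays and are multiplied
    # elementwise: only 5 paired points, not the full 5 x 5 grid.
    print(np.shape(x), np.shape(y))  # prints: (5,) (5,)
    return x * y

fixed_quad(lambda x: fixed_quad(spy, 0, 1, args=(x,), n=5)[0], 0, 2, n=5)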

A better implementation is just to rewrite the code myself (or to use something like quadpy):

In [6]:
from scipy.special import roots_legendre
import numpy as np
roots, weights = roots_legendre(5)

def dbl_integral(func, left: float, right: float, bottom: float, top: float):
    r"""
    Calculates the double integral \int_bottom^top \int_left^right f(x, y) dx dy
    via a tensor-product Gauss-Legendre rule.
    """
    # Map the reference nodes on [-1, 1] to [left, right] and [bottom, top].
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom

    total = 0
    for index_i, i in enumerate(x):
        # ones_like(y) guarantees a vector even when func(i, y) collapses to a
        # scalar (needed for some of my usages).
        total += np.dot(np.ones_like(y) * func(i, y) * weights[index_i], weights)

    return (top - bottom) * (right - left) / 4 * total
In [7]:
print(dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(f, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(f, 0, 1, 0, 2))

print(dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0])
print(fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0])
print(dbl_integral(g, 0, 1, 0, 2))
5.0
5.000000000000001
5.0
0.9999999999999999
1.333333333333333
1.0

And we mostly maintain the speed (not quite as fast, but it’s not hard to rewrite the loop with einsum or optimize it a bit more; see the sketch after the timings), assuming we precalculate the roots and weights:

In [8]:
%timeit dblquad(lambda y, x: g(x, y), 0, 1, lambda x: 0, lambda x: 2)[0]
%timeit fixed_quad(lambda x: fixed_quad(g, 0, 1, args=(x, ), n=5)[0], 0, 2, n=5)[0]
%timeit dbl_integral(g, 0, 1, 0, 2)
1000 loops, best of 3: 246 µs per loop
10000 loops, best of 3: 38.1 µs per loop
10000 loops, best of 3: 74.1 µs per loop
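
For reference, here’s the einsum version alluded to above. It’s a sketch that assumes the integrand broadcasts over NumPy arrays (both $f$ and $g$ here do), and it reuses the precalculated roots and weights:

def dbl_integral_vec(func, left: float, right: float, bottom: float, top: float):
    """Tensor-product Gauss-Legendre rule, vectorized with einsum."""
    x = (right - left) * (roots + 1) / 2.0 + left
    y = (top - bottom) * (roots + 1) / 2.0 + bottom
    # vals[i, j] = func(x_i, y_j): the full tensor product via broadcasting.
    vals = func(x[:, None], y[None, :])
    # Contract both axes against the weights: sum_ij w_i * vals_ij * w_j.
    total = np.einsum('i,ij,j->', weights, vals, weights)
    return (top - bottom) * (right - left) / 4 * total

print(dbl_integral_vec(g, 0, 1, 0, 2))  # ~1.0, agrees with dbl_integral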

The “Ahhh” Shower

Rodrigo never intended to have an especially busy day on the 12th. Of course, he had to make a trip to the farmers’ market to get those specialty onions he loved (they were sweeter than the ones in the grocery store), but otherwise the Saturday was completely free. It was a much-needed reprieve from the recent onslaught of work, with another twelve-hour shift coming on the 13th.

The day started out as planned: waking up with the sun’s rays gently urging his eyelids to make way, with no particular rush. The drive to the market was a smooth twenty minutes. Somehow, traffic was especially clear, and so was the line for the produce. Within an hour, Rodrigo was back home caramelizing onions for soup and toasting bread. The rest of the day was a blur of mindless entertainment from the multitude of streaming sources while he sipped his French onion soup.

Trish’s Saturday was quite the opposite. Since winter still maintained its grasp on the mornings, she wanted to hike along the foothills in the weak heat of the afternoon. The morning therefore was filled with scrubbing and cleaning. She always loved the sense of accomplishment from finishing chores.

As the hike was finishing up, she got a text from Frank, that cute guy she’d been texting.

“hey, lost our fourth from trivia in 30 @ gecko’s, you in? sorry for short notice!”

The deliberation was short. Whilst her legs were jelly, she didn’t smell too bad since it was still chilly, and Frank was, by all accounts, a catch.

“ofc, see you soon :)”

Trish and Frank’s team never came close to winning. No one on the team knew any Madonna songs or had read Julius Caesar. She got home more than ten hours after she had left for the trailhead.

Though their days progressed quite differently, both Rodrigo’s and Trish’s ended with a hot shower, one where “ahhhh”s were spoken aloud, the true mark of a good day.

Nonlinear Piola Transform

This is an extension of the Linear Piola Transform post, where we now discuss the case in which the transformation from the reference element to the physical element is nonlinear, such as for a parametric element. The main tool here comes from the exterior calculus framework, and the presentation follows the FEEC book by Arnold.

Let $\Omega, \hat \Omega$ be the physical element and reference element, with an orientation-preserving, differentiable map $\phi: \hat \Omega \to \Omega$. Suppose $p: \Omega \to \mathbb{R}$ and $v: \Omega \to \mathbb{R}^3$ are functions on the physical element. The quantity of interest is $(p, \nabla \cdot v) = \int_\Omega p \nabla \cdot v \, dx$.

The key observation here is that by letting $\nu = v^1 dx^2 \wedge dx^3 - v^2 dx^1 \wedge dx^3 + v^3 dx^1 \wedge dx^2$, we have $d\nu = (\nabla \cdot v) \, dx^1 \wedge dx^2 \wedge dx^3$, where $d$ is the exterior derivative. A key fact is that the integral of a differential form is preserved under pullback (see the second answer), hence
\begin{align*}
(p, \nabla \cdot v) = \int_\Omega p \nabla \cdot v \, dx = \int_\Omega p \wedge d\nu = \int_{\hat \Omega} \phi^*(p \wedge d\nu)
\end{align*}
where $\phi^*$ is the pullback along the map $\phi$.

Now, it’s just a matter of algebra
\begin{align*}
\int_{\hat \Omega} \phi^*(p \wedge d\nu) = \int_{\hat \Omega} \phi^*p \wedge \phi^* d\nu = \int_{\hat \Omega} \phi^*p \wedge d\phi^*\nu.
\end{align*}
Now, we need to work out what the pullback does to the 0-form $p$ and to the 2-form $\nu$. We can look this up or just do it algebraically (which I’ll write up sooner or later, because I couldn’t find a good source online): we have $\phi^* p = p \circ \phi$ and, identifying the 2-form $\phi^* \nu$ with its vector proxy,
\begin{align*}
\phi^* \nu = (\det \phi')(\phi')^{-1} (v \circ \phi): \hat \Omega \to \mathbb{R}^3
\end{align*}
which is exactly the Piola transform.
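
As a quick sanity check, here’s a symbolic verification in 2D, where the same Piola formula holds: the claim is that $\int_\Omega p \nabla \cdot v \, dx$ equals the reference-element integral of $p \circ \phi$ against the reference divergence of the transformed field. The map $\phi$ and the fields $p, v$ below are made up for illustration.

import sympy as sp

xh, yh = sp.symbols('xhat yhat')   # reference coordinates; reference element = [0, 1]^2

# A made-up nonlinear, orientation-preserving map phi: reference -> physical.
phi = sp.Matrix([xh + xh * yh / 2, yh + xh**2 / 4])
J = phi.jacobian([xh, yh])         # phi'
detJ = J.det()                     # positive on [0, 1]^2

# Made-up p and v on the physical element.
x, y = sp.symbols('x y')
p = x + y**2
v = sp.Matrix([x * y, x - y])
div_v = sp.diff(v[0], x) + sp.diff(v[1], y)
comp = {x: phi[0], y: phi[1]}      # composition with phi

# Left side: int_Omega p (div v) dx, computed on the reference square via the
# usual change of variables.
lhs = sp.integrate((p * div_v).subs(comp) * detJ, (xh, 0, 1), (yh, 0, 1))

# Right side: Piola-transform v, then integrate p o phi against the
# divergence taken in the reference coordinates.
vhat = detJ * J.inv() * v.subs(comp)
div_vhat = sp.diff(vhat[0], xh) + sp.diff(vhat[1], yh)
rhs = sp.integrate(p.subs(comp) * div_vhat, (xh, 0, 1), (yh, 0, 1))

print(sp.simplify(lhs - rhs))      # 0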

Ne plus ultra

1. the highest point capable of being attained

2. the most profound degree of a quality or state