P&B #106

This is a problem from Putnam and Beyond:

Given that $9a^2 + 8ab + 7b^2 \le 6$, show that $7a + 5b + 12ab \le 9$.

My solution is extremely janky; nevertheless, it is still a solution, and one slightly different from the book's.

Multiplying the hypothesis by $3/2$ and rearranging gives $-12ab \ge -9 + \frac{27}{2}a^2 + \frac{21}{2}b^2$. Note that our conclusion is equivalent to $9 - 7a - 5b - 12ab \ge 0$. Hence, we have

\begin{align*}
9 - 7a - 5b - 12ab &\ge \frac{27}{2}a^2 + \frac{21}{2}b^2 - 7a - 5b \stackrel{?}{\ge} 0.
\end{align*}

Now consider the two quadratics $\frac{27}{2}a^2 - 7a$ and $\frac{21}{2}b^2 - 5b$. Each is negative only between $0$ and its other root, namely $a = \frac{14}{27}$ and $b = \frac{10}{21}$ respectively. Since the objective function is strictly increasing in the positive numbers, checking that corner point reveals that the conclusion is indeed satisfied there, as $7a + 5b + 12ab = \frac{1696}{189} < 9$. Again, janky but doable in a reasonable time. (Of course, there are more proper methods with Lagrange multipliers and such...)
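None of this is needed for the proof, but a quick numerical sketch (the sampling range and seed are my own choices) can double-check both the corner value and the original implication:

```python
from fractions import Fraction
import random

# The corner value at a = 14/27, b = 10/21 from the argument above,
# computed exactly.
a, b = Fraction(14, 27), Fraction(10, 21)
corner = 7*a + 5*b + 12*a*b
assert corner == Fraction(1696, 189)
assert corner < 9

# Randomly sample points satisfying 9a^2 + 8ab + 7b^2 <= 6 and check
# that 7a + 5b + 12ab <= 9 on all of them.  The bound is tight
# (a = b = 1/2 gives exactly 9), so allow a little float slack.
random.seed(0)
for _ in range(100_000):
    a, b = random.uniform(-2, 2), random.uniform(-2, 2)
    if 9*a*a + 8*a*b + 7*b*b <= 6:
        assert 7*a + 5*b + 12*a*b <= 9 + 1e-9
```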

A Geometry Problem

Here’s an excellent question from AoPS:

In the figure to the left, circle $B$ is tangent to circle $A$ at $X$, circle $C$ is tangent to circle $A$ at $Y$, and circles $B$ and $C$ are tangent to each other. If $AB = 6, AC = 5,$ and $BC = 9$, what is $AX$?

The solution really requires no geometry beyond the definition of a circle and a little bit of algebra. Let $r_B, r_C$ be the radii of circles $B, C$ respectively; then $r_B + r_C = BC = 9$. Second, $AX = AB + BX = 6 + r_B$ and $AY = AC + CY = 5 + r_C$ are both radii of circle $A$, so they are equal. Solving this system of two equations gives $r_B = 4$ and $r_C = 5$. Hence $AX = 10$.
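As a sanity check (purely illustrative, not part of the solution), the little linear system can be re-solved in a couple of lines:

```python
# Unknowns: r_B, r_C.  Equations: r_B + r_C = 9 and 5 + r_C = 6 + r_B.
# The second equation gives r_C - r_B = 1, so:
r_B = (9 - 1) / 2   # half of (sum - difference)
r_C = (9 + 1) / 2   # half of (sum + difference)
AX = 5 + r_C        # also equals 6 + r_B
assert (r_B, r_C, AX) == (4, 5, 10)
```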

Won’t You Be My Neighbor?

I grew up with basic cable and strict parents. Mr. Rogers' show (or PBS in general) was never my favorite when I had the luxury of choice; Saturday morning cartoons take that award. But as an adult, I wish I had watched his show a bit more as a child.

The more I learned about him, the more I think he was the modern-day equivalent of Jesus. From his heartfelt reaction to meeting a former guest on the show to washing Officer Clemmons's feet, I really could have learned a lot from him. So when Cassady and I spent Thursday night at the dollar theater seeing the 2018 film, I was a bit wary at first.

How do you portray someone with no scandals or blemishes? I didn't want a film that was obsequious, but rather one that showed the more human side of him.

I think the film did a good job.

It's an extremely touching film that showed a lot of the behind-the-scenes of the iconic TV show. More importantly, it showed the humorous side of Fred, along with the pragmatic side. The writers showcased not only his willingness to stand against racism, but also his reluctance on gay rights in an era when that was a dark mark.

I really do wonder what he would think about the current landscape. Would he think he had failed? Or would he just get back to work educating children?

Scaling Arguments

This is a pretty important concept in PDEs and their numerical approximation. Specifically, it shows up in the Bramble-Hilbert lemma and in domain decomposition analysis.
Most of this post was written right after reading Toselli and Widlund's book, so there is a lot of resemblance.

Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ which is 'nice' (say, with Lipschitz boundary) and of radius $h$. Now let $u, v \in H^1(\Omega)$ be such that
\begin{align*}
|v|_{H^1(\Omega)} \le C||u||_{H^1(\Omega)}
\end{align*}
and suppose we wish to determine how $C$ depends on $h$.

What we do is first consider a scaled domain $\hat \Omega$, which is just $\Omega$ scaled to have radius 1, via the change of variables $x = h\hat x$.
If we find the corresponding inequality on $\hat \Omega$, then the constant $C$ will not depend on $h$.
Let $\hat v(\hat x) := v(h\hat x)$; then we note that $\hat \nabla \hat v(\hat x) = h(\nabla v)(h\hat x)$, where $\hat \nabla$ is the gradient with respect to $\hat x$.
Then,
\begin{align*}
|v|^2_{H^1(\Omega)} &= \int_\Omega |\nabla v(x)|^2 \, dx \\
&= \int_{\hat \Omega} |(\nabla v)(h \hat x)|^2 h^n \, d\hat x \\
&= \int_{\hat \Omega} |\hat\nabla \hat v(\hat x)|^2 h^{-2} h^n \, d\hat x = h^{n-2}|\hat v|_{H^1(\hat \Omega)}^2.
\end{align*}

But for the $L^2$ norm, there is no $h^{-2}$ factor from differentiation; hence, with $\hat u(\hat x) := u(h\hat x)$,
\begin{align*}
||u||_{L^2(\Omega)}^2 &= \int_\Omega |u(x)|^2 \, dx \\
&= \int_{\hat \Omega} |u(h \hat x)|^2 h^n \, d\hat x = h^n ||\hat u||_{L^2(\hat \Omega)}^2.
\end{align*}
This mismatch in powers of $h$ is why estimates mixing derivatives of different orders pick up $h$-dependent constants.
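To make the two scaling identities concrete, here is a minimal numerical sketch in one dimension ($n = 1$), using the hypothetical test function $v(x) = \sin(x)$ on $(0, h)$ and a simple trapezoidal rule; the function names and tolerance are my own choices:

```python
import math

# Check, for n = 1:
#   |v|^2_{H^1(0,h)}   = h^(n-2) |vhat|^2_{H^1(0,1)}
#   ||u||^2_{L^2(0,h)} = h^n     ||uhat||^2_{L^2(0,1)}
# where vhat(xhat) := v(h * xhat), so (vhat)'(xhat) = h v'(h xhat).

def integrate(f, a, b, m=20000):
    """Composite trapezoidal rule on [a, b] with m subintervals."""
    step = (b - a) / m
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * step) for i in range(1, m))
    return s * step

h = 0.37
v, dv = math.sin, math.cos                  # v and v'
dvhat = lambda xh: h * math.cos(h * xh)     # (vhat)'

# H^1 seminorm: |v|^2 on (0,h) vs h^(n-2) |vhat|^2 on (0,1), n = 1.
seminorm_sq = integrate(lambda x: dv(x) ** 2, 0.0, h)
seminorm_hat_sq = integrate(lambda xh: dvhat(xh) ** 2, 0.0, 1.0)
assert abs(seminorm_sq - h ** (1 - 2) * seminorm_hat_sq) < 1e-8

# L^2 norm: ||v||^2 on (0,h) vs h^n ||vhat||^2 on (0,1).
norm_sq = integrate(lambda x: v(x) ** 2, 0.0, h)
norm_hat_sq = integrate(lambda xh: v(h * xh) ** 2, 0.0, 1.0)
assert abs(norm_sq - h ** 1 * norm_hat_sq) < 1e-8
```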

Poland Can(not) Into Space

My god; this weekend was utterly wasted for the most part. I spent a good 8 hours in a training session on Saturday, then afterwards started a game of Civ5 as Poland on emperor difficulty.

My starting position was pretty precarious, with two warmongering civilizations, the Songhai AND the Mongols, neighboring me. But luckily, they started going to war against each other, with the Mongols capturing a city-state to my east and, in fact, the Songhai capital.

I got my first three cities running, then just started rolling over the Mongols once I researched and built crossbowmen. Not sure where the Khan's troops were, but a combination of longswordsmen and crossbowmen simply steamrolled the Mongols.

By that time, I had obtained the technology for Winged Hussars and cannons. This duo was practically impossible for the AIs to stop. I conquered the Byzantine capital, then rolled over the Shoshone. And then somehow everyone in the world decided to declare war on me.

And then I conquered all of them by the 1950s in-game-time.

I guess the most fun part of Civ5 is reaching the late game; by then, the juggernaut becomes impossible to stop. The AI is so bad at naval combat that my battleships were shelling every city with no counterplay.

Furi

What a fun little game. I finished it before I traveled to ICOSAHOM and France, and didn't get a chance to write about it then. To preface this: the only reason I bought this game is Dunkey's video.

The soundtrack is actually the best part of this videogame. I had been listening to it before I even picked up the game on Steam, and it… just drives you on. Undoubtedly the highlight of the whole experience.

In terms of the actual game itself, it is quite short (with an alternate ending). The bosses also have varying difficulty levels, which makes some levels incredibly frustrating and others a simple grind once you find the trick. I really hated the level with the poison gas, and didn't appreciate how easy the second-to-last boss was (who apparently trained his whole life for our battle).

Other than that, the storyline has some quirks. The non-skippable cutscenes carry most of the character development, plus some quotes from the battles themselves. TBH, I had to look up the story because I didn't pay attention most of the time.

Overall, a fun game to pick up for a weekend or two.

The one appendage man

It's been a rough few days for my body. My left arm has some ulnar nerve impingement issues and has been extra sensitive.

This past Sunday I fell from my bike and scraped my right knee, and injured my right arm’s wrist.

I have one remaining non-impaired limb left.

Putnam 2003 B2

(Work has been going slow lately… hence all these math posts.)

Let $n$ be a positive integer. Starting with $1, \frac{1}{2}, \cdots, \frac{1}{n}$, form a new sequence of $n-1$ entries $3/4, 5/12, \cdots, (2n-1)/(2n(n-1))$ by taking the averages of two consecutive entries in the first sequence. Repeat the averaging process on the second sequence to obtain a third sequence of $n-2$ entries, and continue until the final sequence consists of only a single number $x_n$. Show that $x_n < 2/n$.

Solution: Given a sequence $a_1, a_2, \ldots, a_n$, one averaging step gives $(a_1 + a_2)/2, \ldots, (a_{n-1} + a_n)/2$. Iterating the averaging procedure, it is not hard to see that binomial coefficients show up. In fact,
\begin{align*}
x_n &= \frac{1}{2^{n-1}}\sum_{k=1}^n \binom{n-1}{k-1} \frac{1}{k} \\
&= \frac{1}{2^{n-1}}\sum_{k=1}^n \frac{(n-1)!}{(k-1)!(n-k)!} \cdot \frac{1}{k} \\
&= \frac{1}{2^{n-1}}\sum_{k=1}^n \frac{1}{n}\binom{n}{k} = \frac{2^n - 1}{n 2^{n-1}} < \frac{2}{n}.
\end{align*}
(Definitely one of the easier problems.)
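The averaging process is easy to simulate with exact rational arithmetic; a quick sketch (the function name `x` is just for illustration) confirms the closed form:

```python
from fractions import Fraction

# Run the averaging process directly and compare against the closed
# form x_n = (2^n - 1) / (n * 2^(n-1)) derived above.
def x(n):
    seq = [Fraction(1, k) for k in range(1, n + 1)]
    while len(seq) > 1:
        # Replace the sequence by the averages of consecutive entries.
        seq = [(seq[i] + seq[i + 1]) / 2 for i in range(len(seq) - 1)]
    return seq[0]

for n in range(2, 12):
    xn = x(n)
    assert xn == Fraction(2**n - 1, n * 2**(n - 1))
    assert xn < Fraction(2, n)
```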

Putnam 2003 A2

Let $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ be non-negative real numbers. Show that $$(a_1\ldots a_n)^{1/n} + (b_1\ldots b_n)^{1/n} \le [(a_1 + b_1) \cdots (a_n + b_n)]^{1/n}.$$

Solution: we will use the generalized Hölder inequality, which states that
$$||f_1\ldots f_n ||_1 \le ||f_1||_{\lambda_1} \cdots ||f_n||_{\lambda_n}$$
for exponents $\lambda_i > 1$ with $\lambda_1^{-1} + \cdots + \lambda_n^{-1} = 1$.

Assuming this is true, let $f_i = (a_i^{1/n}, b_i^{1/n})$ and take the norms to be discrete $\ell^p$ norms. Since everything is non-negative, this gives $||f_1 \ldots f_n||_1 = (a_1\ldots a_n)^{1/n} + (b_1\ldots b_n)^{1/n}$. Taking uniform weights $\lambda_i = n$, each factor on the right-hand side becomes
$$||f_i||_{n} = (a_i + b_i)^{1/n},$$
and we have our inequality.

The sole remaining thing to prove is the generalized Hölder inequality. We take the classical two-function case as the base case. For the inductive step, we have
\begin{align*}
||f_1\cdots f_{n+1}||_1 &\le ||f_1 \cdots f_n||_{\lambda_{n+1}/(\lambda_{n+1} - 1)} ||f_{n+1}||_{\lambda_{n+1}} \\
&= ||(f_1 \cdots f_n)^{\lambda_{n+1}/(\lambda_{n+1} - 1)}||_1^{(\lambda_{n+1} - 1)/\lambda_{n+1}} ||f_{n+1}||_{\lambda_{n+1}}.
\end{align*}
The first inequality is just the two-function case with the conjugate pair $\lambda_{n+1}/(\lambda_{n+1} - 1)$ and $\lambda_{n+1}$. From here, apply the inductive hypothesis with the rescaled weights $\lambda_i (\lambda_{n+1} - 1)/\lambda_{n+1}$, whose reciprocals sum to $1$, and we are done.
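As a numerical spot check of the target inequality (a hypothetical sketch, with my own choice of seed and sample ranges):

```python
import random

# Check (a_1...a_n)^(1/n) + (b_1...b_n)^(1/n)
#       <= ((a_1+b_1)...(a_n+b_n))^(1/n)
# on random non-negative inputs.
def geomean(xs):
    p = 1.0
    for v in xs:
        p *= v
    return p ** (1.0 / len(xs))

random.seed(1)
for _ in range(10_000):
    n = random.randint(1, 6)
    a = [random.uniform(0, 10) for _ in range(n)]
    b = [random.uniform(0, 10) for _ in range(n)]
    lhs = geomean(a) + geomean(b)
    rhs = geomean([ai + bi for ai, bi in zip(a, b)])
    assert lhs <= rhs + 1e-9  # slack for float rounding
```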