Life’s Update

I’ve been working on a few things:

  1. Position-based fluids for my CS5643 class, which is a pain to debug. Normally I have tons of intuition about what the numbers are supposed to be, but my intuition is nil here. I’m unsure whether we’re guessing the correct parameters, or even using the right formulas and procedure. It’s a great class, just the projects are incredibly tedious. For example, my particles right now just seem to float off in the x direction and stay there. Why would they do that?

    Hopefully, what ends up happening is something like the following, except less pretty.

  2. I finally installed Hearthstone for my Elementary OS Linux box. It was incredibly easy following this tutorial.
  3. In terms of research, I’ve been reading up on different ways to generate the Krylov basis for GMRES as described in this paper. I’ll probably write up and refine the old “idiot’s guide” again with this paper’s information (and fix the bad LaTeX in there).
  4. This soundtrack (not allowed to embed).
  5. My card handling has suffered :(.

WISDOM

MY WISDOM TOOTH IS COMING OUT AND IT IS SO ANNOYING. COMBINE THAT WITH CANKER SORES AND A SLIGHT SORE THROAT, AND MY DAMN MOUTH IS SILLY.

Idiot’s Guide to GMRES (written by an idiot)

I’m doing some programming work for Professor Bindel in the CS department, and it’s pretty novel stuff to me. I figured that if I write out what I know about the original method he’s trying to improve on, it’ll help me see the overall picture. Here it is:

The entire premise of GMRES (generalized minimal residual method) is to solve systems of equations Ax = b, where A is a matrix and x, b are vectors of appropriate dimensions. First though, we need some additional machinery before we can cover the main method (don’t worry, the machinery does most of the work).

Overview

First, some definitions and an overview before the nitty-gritty: a Krylov subspace is a magic thing which is quite simple to think about. It’s defined as K_k(A, b) = \text{span}\{b, Ab, A^2b, \dots, A^{k-1}b\}. It’s really nothing but the span of the vectors we get from multiplying b by A lots of times.

What GMRES aims to do is minimize the norm of the residual, \|A\hat{x} - b\|, over \hat{x} \in K_k(A,b).

Arnoldi Iterations

I’m going to assume that you’ve seen the Gram-Schmidt method; if you don’t recall, it’s the algorithm where, given a set of vectors, you return an orthonormal basis for the subspace they span. The Arnoldi iteration is extremely similar to that, except it’s got a few caveats.

Imagine that we already have a bunch of the vectors from the Krylov subspace above, but we really can’t work with them. They might be very poorly conditioned, in that they’re all pointing in nearly the same direction (repeatedly multiplying by A resembles the power iteration, so the vectors drift toward the dominant eigendirection). We basically apply Gram-Schmidt to those Krylov vectors to obtain an orthonormal basis. Our end goal is to obtain an expression of the form AV_k = V_{k+1}\tilde{H}_k, where A is the matrix from our problem and the V matrices have the orthonormalized Krylov vectors as columns. \tilde{H}_k is a (k+1) \times k upper Hessenberg matrix (i.e. an upper triangular matrix with the first subdiagonal filled in).
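To see the problem concretely, here’s a toy check (the matrix, sizes, and seed are all made up for illustration) showing that the raw Krylov vectors become numerically dependent even after normalizing each column:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

# Raw Krylov vectors b, Ab, A^2 b, ..., with each column normalized
# so that the blow-up isn't just a matter of scale
K = np.column_stack([np.linalg.matrix_power(A, i) @ b for i in range(10)])
K /= np.linalg.norm(K, axis=0)
print(np.linalg.cond(K))  # enormous: the columns crowd toward the dominant eigendirections
```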

The algorithm is as follows (shamelessly copied from wiki):

  • Start with an arbitrary vector q_1 with norm 1.
  • Repeat for k = 2, 3, …
  • q_k \leftarrow Aq_{k-1}
  • for j from 1 to k − 1
  • h_{j,k-1} \leftarrow q_j^* q_k
  • q_k \leftarrow q_k - h_{j,k-1} q_j
  • endfor
  • h_{k,k-1} \leftarrow \|q_k\|
  • q_k \leftarrow \frac{q_k}{h_{k,k-1}}

What does it mean? It’s just like (modified) Gram-Schmidt! For each additional Krylov vector, you subtract off its components along the vectors already processed and store those coefficients in the H matrix. Sounds simple but looks rough…
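Here’s what that loop looks like as a minimal NumPy sketch (the function name, array layout, and breakdown tolerance are my own choices, not from any particular library):

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis V of the Krylov subspace K_k(A, b) and the
    (k+1) x k upper Hessenberg matrix H satisfying A V_k = V_{k+1} H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)      # q_1: b normalized to unit length
    for j in range(k):
        w = A @ V[:, j]                  # next Krylov direction
        for i in range(j + 1):           # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w        # coefficient along q_i ...
            w = w - H[i, j] * V[:, i]    # ... recorded in H, then subtracted off
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # "happy breakdown": subspace is invariant
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```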

GMRES

Now we can actually get to the method (which is simple if you’re still reading). We try to minimize the residual \|Ax_k - b\| (in the Euclidean norm, for those wondering) for x_k \in K_k(A, b) (i.e. the kth Krylov subspace). We can rewrite that as x_k = V_ky since, from our discussion of the Arnoldi iterations above, the columns of V_k form an orthonormal basis for the Krylov subspace.

Now, we perform some algebra…

\|Ax_k - b\|^2=\|AV_ky - b\|^2=\|V_{k+1}\tilde{H}_ky-b\|^2=\|\tilde{H}_ky-V_{k+1}^Tb\|^2 + C

The first equality comes from substitution, the second from the Arnoldi relation, and the last step is kind of tricky. C in our equation is the squared norm of the projection of b onto the orthogonal complement of the span of V_{k+1} (i.e. the part of b the basis can’t reach). Think of this as the Pythagorean theorem applied to the part of the residual that lies within the span and the part that lies in its complement; the in-span part keeps its norm when we drop V_{k+1} because V_{k+1} has orthonormal columns.

Finally, the last bit is just the observation that the first column of V_{k+1} is b, normalized to unit length, so V_{k+1}^Tb=\|b\|e_1. And since b lies entirely in the span of V_{k+1}, the constant C is zero, leaving the small least-squares problem of minimizing \|\tilde{H}_ky-\|b\|e_1\| over y.
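Putting the pieces together, here’s a bare-bones, un-restarted GMRES sketch (it reuses the arnoldi function from the snippet above, and a dense least-squares solve stands in for the incremental Givens-rotation update a production implementation would use):

```python
import numpy as np

def gmres_basic(A, b, k):
    """Minimize ||H y - ||b|| e_1|| over y, then map back via x = V_k y."""
    V, H = arnoldi(A, b, k)              # V: n x (k+1), H: (k+1) x k
    rhs = np.zeros(H.shape[0])
    rhs[0] = np.linalg.norm(b)           # V_{k+1}^T b = ||b|| e_1
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return V[:, :H.shape[1]] @ y         # x = V_k y

# Sanity check on a made-up, well-conditioned system
rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = gmres_basic(A, b, 30)
print(np.linalg.norm(A @ x - b))         # should be tiny
```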

Problems with GMRES

If you look at the Arnoldi iterations, you have more and more vectors to orthogonalize against as the Krylov subspace gets larger… and you have to store all those vectors too! This is why people commonly use GMRES with restarts, where they erase all the previous iterations and rebuild the Krylov subspace starting from the residual of the current best solution.

The problem with this is that there are situations where convergence to the solution actually depends on the restart! If you don’t choose a good restart value, convergence might stall and never occur. That’s bad.
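For what it’s worth, SciPy ships a restarted GMRES, so you can play with this trade-off without writing the restart logic yourself (the matrix below is made up just for illustration):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
A = np.eye(100) + 0.05 * rng.standard_normal((100, 100))
b = rng.standard_normal(100)

# restart=20 discards the Krylov basis every 20 inner iterations:
# storage drops from O(n * total_iterations) to O(n * 20),
# at the risk of slower (or stalled) convergence
x, info = gmres(A, b, restart=20, maxiter=500)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means it converged
```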

Another way to deal with this is to demand that the matrix of basis vectors be well conditioned… but I don’t know much about this. And another way is to use Chebyshev polynomials to somehow do it (more on this later, after I read the paper).

The Push

I finally decided to drop algebra, which puts me at 20.5 credit hours this semester. Honestly, it wasn’t a tough decision.

More and more, I see pure math as a mental excursion to go on when one wants to find fascinating sights. But that’s all it is, though: a mental treat, as I don’t see myself becoming a pure mathematician.

Am I smart enough for pure math? Maybe.

Am I motivated enough for pure math? Maybe.

One does not pursue grad school when the answer to those questions is just maybe.

Perfect Synergy

The past summer, I was at the REU at UMBC. The head of the math department preached that the future of mathematics is the intersection of mathematics, statistics, and computing; I agree to a large extent.

What I ended up working on seemed to be computing, with more computing, and very little mathematical analysis. The system of differential equations that described pancreatic cells was far too advanced for me to do any significant analysis. Heck, that’s how I feel about most of pure mathematics: the interesting problems are always out of reach of what I know.

Before this past winter break, I found a cool book called “Risk and Reward” in the math library. It’s all about the surprisingly deep game of casino blackjack, from the strategies to the mathematics behind the analysis. In the analysis portion of the book, 90

If I ever do become a professor of some sort, I want to be able to teach a class based on games like blackjack and poker. For projects, the task will be to calculate the probability of various conditions and to figure out the strategies from the math I provide in class. Maybe for an (optional) final, it’s to actually play the games that we studied in class…

As an exercise over the break, I took an existing program and added the functionality to “practice” the card counting covered in the book. It’s based on this project, which I butchered (honestly, I spent 2 days doing this… trying to understand someone else’s code is tough). To run the program, make sure Python is installed, and run Blackjack.py. Download

Equation to a Modern Family

[Screenshot of the classroom whiteboard from the episode]

On the recent Modern Family episode, Claire was seen giving a lecture to the AP Calculus teacher on how many hours each student is expected to spend each night on schoolwork (totally off, for the record…).

Strangely enough, there was the word “Schrodinger’s” on the board. Never in AP Calculus did I

  1. Do partial differentiation
  2. Learn about Schrodinger’s

To be honest, I’m not entirely sure of the derivation of that equation…

Still, they did get something right! The theorem to the far left seems to be Rolle’s Theorem!

/r/math Problem of the Week 2

Two real numbers x and y are chosen at random in the interval (0, 1) with respect to the uniform distribution. What is the probability that the closest integer to x/y is even? Express your answer in terms of pi.

If one were to draw the region where the closest integer to x/y is even, it’s pretty clear from the get-go that we will have something like the following:

[Figure: the unit square with the regions where the closest integer to x/y is even shaded]

I looked at the problem from the viewpoint of lines (x/y = 1/2, x/y = 3/2, …). This results in the following series from summing the areas of the triangles (don’t forget to divide by 2):

\frac{1}{4} + (\frac{1}{3} - \frac{1}{5} + \frac{1}{7} - \frac{1}{9} + \cdots)

Luckily, the series at the end sums to 1 - \frac{\pi}{4}, from the arctan series expansion \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots. Thus the answer is \frac{1}{4} + 1 - \frac{\pi}{4} = \frac{5 - \pi}{4}.
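A quick Monte Carlo sanity check (sample size arbitrary) agrees with \frac{5 - \pi}{4} \approx 0.4646:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000_000
x, y = rng.random(n), rng.random(n)

even = np.rint(x / y) % 2 == 0        # nearest integer to x/y, then test parity
print(even.mean(), (5 - np.pi) / 4)   # both come out ≈ 0.4646
```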

My Petty Revenge

I was finishing up my workout at Noyes tonight on the mat. Doing stretches and cool down stuff. Well, I got thirsty and decided to go have a drink. Note that my towel, notebook, and workout gloves are still on the mat.

Then, while I was gone for two minutes, this girl decided to come steal my place on the mat. I didn’t want to be rude, since I can do abs back home, but I still wanted to give her a piece of my mind.

Good thing I had a small fart loaded up just for the occasion.

Take that.