During grad school, a tangible sign that I was actually learning things was noting the publication dates of the papers I was reading and realizing I comprehended most of them. From basic analysis, with textbooks written and polished to a shine in the 50s and 60s, to the fundamentals of numerics, written in the 60s and 70s, to finally my advisor's research papers from the 2000s: it just felt nice to see that I was growing as a researcher.
I guess part of the reason PhDs are useful is that you don't have to start over from the basics when learning another topic. Analysis will always be relevant in whatever mathematical field one chooses to dive into, and the intuition developed *should* carry over as well.
I guess my point here is that in the span of 3 days, I have learned how to code up PINNs (circa 2017) using TensorFlow lol. See here. Man, ML is such a new field.