The Bob’s Burgers addiction on Garmin

Bob’s Burgers has been the ambient track to my life for a while now. Its inconsequential (but good-humored) nature makes it comforting background noise that I can tune in to or out of at will. Day to day, I would pull up Hulu and just hit resume… however, this is not ideal. I only have the ad-supported version of Hulu, meaning no offline or minimized playback, and it has ads.

“But Marshall, Bob’s Burgers is only available on Hulu. Where else are you watching it?”

Good question. For a while now, people have been uploading virtually full episodes to YouTube. As this violates copyright, these videos all use various tricks to skirt the initial screening: cropped video, truncated episodes, slightly distorted audio. They still get taken down relatively quickly. For an explanation of why this happens, see posts like this. But I would also use these videos, as the uploads generally aren’t in any particular order, meaning I can be surprised by what comes up next. Since I have Premium, I can also play them very much like a podcast.

Recently, I got a Garmin smartwatch with the ability to store and play music. Looking to curb my phone dependence, I thought about putting Bob’s Burgers on the watch in audio form. However, I had a few constraints:

  1. Since I listen to this sometimes to zone out/nap, as much as I like the introductory jingle, it needs to go.
  2. The audio should also generally be faded out before the credits, as it’s usually a long jingle which is too distracting.
  3. The audio volume cannot be too “jumpy” from episode to episode.

This turned out to be very easy due to the structured nature of the episodes. Supposing one has legally obtained the episodes as “.mkv” files, the following script

for file in *.mkv; do
    echo "Processing $file..."
    filename="${file%.mkv}"
    ffmpeg -i "$file" -map 0:a:0 \
        -ss 00:00:20 -to 00:20:20 \
        -filter:a "afade=t=in:st=20:d=5,afade=t=out:st=1215:d=5" \
        -metadata title="$file" -metadata artist="The Bobs" \
        -id3v2_version 3 "${filename}.mp3"
done

creates mp3 files of the audio, fading in past the intro jingle and fading out before the credits song. The last point about normalizing the audio turned out not to matter much, but adding a line like the following to the loop might help:

  ffmpeg-normalize --progress -c:a libmp3lame "${filename}.mp3" -o "normalized/${filename}.mp3"

By moving these onto the Garmin, I can now shuffle through various episodes!

A Python Riddle

From the book Fluent Python (which you can get from the Humble Bundle right now):

What would the following piece of code do?

t = (1, 2, [30, 40])
t[2] += [50, 60]

Four choices:

    1. t becomes (1, 2, [30, 40, 50, 60]).
    2. A TypeError is raised because tuples do not support item assignment.
    3. Neither.
    4. Both 1. and 2.

The answer is in the link, and it is quite surprising.

It turns out that t[2].extend([50, 60]) doesn’t break Python, and this riddle is really a super esoteric corner case…
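For the record, here is the whole surprise in a few lines: += on the list element both mutates the list in place and raises, because the in-place extend succeeds before the tuple item assignment fails.

```python
t = (1, 2, [30, 40])
try:
    t[2] += [50, 60]        # the in-place list extend happens first...
except TypeError as e:
    print("TypeError:", e)  # ...then the tuple item assignment fails

print(t)  # the list inside the tuple was mutated anyway: (1, 2, [30, 40, 50, 60])
```

So the answer really is “both,” which is why t[2].extend([50, 60]) is the safer way to spell it.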

Vectorization over C

The title is probably misleading, but this is a lesson I needed to talk about.

I wrote out some simple code for the quadrature over the reference triangle last time, which involves a double loop. To my chagrin, my immediate reaction for speeding up the code was to move it into Cython and give it some type declarations.

This did speed up my integrals, but not as much as vectorization. By simply condensing one of the loops into a dot product, and using vector-function evaluation, I sped up my code a substantial amount, especially with higher order integration of “hard” functions.

import numpy as np

def quadtriangle_vector(f, w00, w10, x00, x10):
    total = 0
    for i in range(len(w00)):
        total += w00[i] * np.dot(w10 / 2, f([(1 + x00[i]) * (1 - x10) / 2 - 1, x10]))
    return total

To see what I mean, consider the following function

from scipy.special import eval_jacobi as jac
from scipy.special import roots_jacobi as rj  # assuming rj is the Gauss–Jacobi nodes/weights routine

def f(x):
    return jac(2, 1, 1, np.sin(x[0] - x[1]))

p = 20
x00, w00 = rj(p + 1, 0, 0)
x10, w10 = rj(p + 1, 1, 0)

The speedup I get is staggering.


Also, I tried to fully vectorize by removing the outer loop. This actually slowed the code down a bit. Maybe I did it wrong? But for now, I’m decently happy with the speed.

The (lack of a) Matrix

I think I finally understand why software packages like PETSc have an option to pass an operator when doing something like conjugate gradient. Why isn’t having a matrix good enough for everyone?

Well, it turns out that while every linear operator (on a finite-dimensional space) can be written as a matrix, that may not be the best way to represent it. As an example, consider a basis transformation from Bernstein polynomials to Jacobi polynomials (or vice versa). It’s certainly possible to construct a matrix which does the operation, but it’s ugly.

On the other hand, it’s not that bad to write code which exploits the properties of the polynomials and does the conversion in O(n^2) time. The key is that a Jacobi polynomial is a sum of Bernstein polynomials, and Bernstein polynomials can be degree-raised or degree-lowered at will.

This function outperforms the matrix in several senses. For one, there’s no need to construct the matrix, which takes O(n^2) operations in the first place. Next, applying the matrix is itself an O(n^2) operation with worse constants (and forming it via matrix products can cost O(n^3)), so an optimized conversion routine will beat it. Finally, it’s really less painful to code, because each line of the function serves a visible purpose.
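To make the pattern concrete, here’s a minimal sketch of operator-instead-of-matrix using SciPy’s LinearOperator (PETSc’s shell matrices are the analogous facility). The tridiagonal example operator is mine for illustration, not the Bernstein-to-Jacobi conversion:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200

def apply_A(v):
    # Action of a tridiagonal 1D-Laplacian-like operator in O(n) work,
    # without ever forming the n x n matrix.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=apply_A, dtype=float)
b = np.ones(n)
x, info = cg(A, b)  # conjugate gradient only ever calls matvec
```

CG never needs the matrix entries, only the matrix-vector product, so any function with the right action will do.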

Anyways, I’m sold.

(I’ll eventually publish the code in the summer)

Notes: SSD edition

Some notes from the past week:

  1. It is incredibly easy to be an impostor at a more academic party. First of all, most of the people will already be intoxicated to the point where bullshit science can’t be discerned from actual science. This is good, as I can just say random facts I remember from Popular Science.

     Another acceptable thing to do is to ask question upon question: “What’s your research? … Oh, that’s so cool! Tell me more about it! … So does this connect to insert scientific news here? Wow.” That’ll burn around 5 minutes minimum.

     The main problem comes when you run out of questions in the initial barrage. It also fails when the person is laconic or can’t speak English.

  2. Installing an SSD is extremely easy, but installing operating systems is hard. Right now, I have around 8 entries in my GRUB menu before I migrate everything over to my new distro.

     I followed the mount guide provided here, which seems intuitive enough about where to put mount points. I’ve also learned that
    mount

    and

    df -h

    are my friends. There’s also that good GParted software.

  3. The Lloyd Trefethen numerical linear algebra book is quite good for a quick overview of the subject. It doesn’t get bogged down in the analysis, and generally refers to other books (mainly Golub and Van Loan) throughout.
  4. Holy shit URF mode.
  5. I need to be more brave in a certain subject….

Ray Casting with JOGL

I won’t post the entire code here, because it’s pretty damn ugly. But here’s what I ended up doing:

  1. I used the
    gluUnProject

    call to find the beginning and end points to extrapolate a line from.

  2. Now that I have a line, I use the point-to-line distance formula provided by Wikipedia, in its vector formulation.
  3. Simply loop over the vertices and find the minimum distance.
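Steps 2 and 3 can be sketched roughly as follows (in Python rather than the actual JOGL/Java code, with names I’ve made up; a and b are the two unprojected points defining the ray):

```python
import numpy as np

def closest_vertex(vertices, a, b):
    """Index of the vertex closest to the line through points a and b."""
    n = (b - a) / np.linalg.norm(b - a)  # unit direction of the line
    d = vertices - a                     # vectors from line origin to each vertex
    perp = d - np.outer(d @ n, n)        # strip the component along the line
    dist = np.linalg.norm(perp, axis=1)  # perpendicular distance per vertex
    return np.argmin(dist)

verts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 2.0, 5.0]])
idx = closest_vertex(verts, np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 10.0]))
# the ray shoots down +z from the origin, so vertex 0 is closest
```

The vectorized form also replaces the explicit loop over vertices from step 3.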

Sorry for not posting recently… I got caught up in things… 🙁

VTune Profiler Error: “The data cannot be displayed, there is no viewpoint available for data”

If you’re seeing this in the GUI on Linux, the cause may well be that ptrace_scope is set to 1.

From the Intel forums:

Note: In Ubuntu 11.4, you may need to disable ptrace_scope.

cat /proc/sys/kernel/yama/ptrace_scope

“0” is expected; if it is “1”, then do

$ echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

Took me a while to find… hope this saves someone some time.

Hash

Doing another SPH implementation for parallel computing. It’s amazing what adding a hash table, instead of checking every pair of particles, does for one’s speed: 1429.22 seconds down to only 327 seconds. That’s over four times as fast!
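The gist of the trick, as a minimal 2-D Python sketch (names and details are mine, not the actual assignment code): bin particles into cells of width h, the smoothing radius, so each particle only checks the 3×3 block of neighboring cells instead of all N particles.

```python
from collections import defaultdict

def build_grid(positions, h):
    # Hash each particle index into its (cell_x, cell_y) bucket.
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    # Only particles in the 3x3 neighborhood of cells can be within h.
    x, y = positions[i]
    cx, cy = int(x // h), int(y // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                px, py = positions[j]
                if j != i and (px - x) ** 2 + (py - y) ** 2 < h * h:
                    out.append(j)
    return out

positions = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)]
grid = build_grid(positions, 0.1)
print(neighbors(0, positions, grid, 0.1))  # only particle 1 is within h of particle 0
```

Building the grid is O(N), so the all-pairs O(N^2) neighbor search drops to roughly O(N) per step when particles are spread out.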

Wine IE

  • Installed Wine to play Hearthstone.
  • Made a gif for a project.
  • Double-clicked the gif to see how it turned out.
  • The gif opened in IE under Wine.