Week 2: Baseball Games, Nobel Laureates, and the Rocky Horror Picture Show

Sunday, June 12, 2022

By: Benjamin Johnson

Once again, the past week has been extremely eventful.  Between work and extracurriculars, the only activity I've missed out on is sleep.  No complaints, though; I wouldn't have it any other way.

Throughout the week, I've been developing my understanding of machine learning and the various ways to train a Reservoir Computer (RC) to obtain desired outputs.  Rather than starting with a power spectral series of the Lorenz system, as I detailed last week, I began with a simpler case: the Van der Pol equation.  With only one parameter, mu, this second-order non-linear differential equation is much easier to model than the three coupled Lorenz equations.
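For reference, the Van der Pol equation is x'' - mu*(1 - x^2)*x' + x = 0.  Here's a minimal sketch of how one might set it up in Python for this purpose; the initial conditions, time span, and mu value below are illustrative choices, not my exact setup:

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu):
    # x'' - mu*(1 - x**2)*x' + x = 0, rewritten as a first-order
    # system in (x, v), where v = x'.
    x, v = state
    return [v, mu * (1 - x**2) * v - x]

# Integrate one trajectory to generate training data for the RC.
sol = solve_ivp(van_der_pol, (0, 50), [1.0, 0.0], args=(1.5,),
                t_eval=np.linspace(0, 50, 5000))
trajectory = sol.y.T  # shape (5000, 2): samples of (x, x') over time
```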

I first implemented the equation as a Python function that could be passed into the RC for modeling.  Next, I trained the RC over a set of sample mu values, then attempted to manually predict the trajectory for a new mu value by averaging the output matrices produced by the RC over the training sample.  The output matrices map the state of the RC to output data (a representation of the trajectory of the system), so they're central to how the algorithm works.  As a proof of concept, I started with a simple unweighted average, which worked adequately, albeit somewhat inaccurately.
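In sketch form, the proof-of-concept averaging looked something like the following.  The train_rc helper here is a dummy stand-in for the actual RC training code (which fits a real readout matrix to a Van der Pol trajectory), and the sample mu values are likewise just illustrative:

```python
import numpy as np

def train_rc(mu, shape=(2, 100)):
    # Stand-in for the real training step, which would fit an output
    # (readout) matrix W_out to a trajectory generated at this mu.
    rng = np.random.default_rng(int(mu * 100))
    return rng.normal(size=shape)

# Train one RC per sample mu and collect the resulting output matrices.
train_mus = np.array([0.5, 1.0, 1.5, 2.0])
W_outs = np.stack([train_rc(mu) for mu in train_mus])

# Proof of concept: predict the readout for an unseen mu by averaging
# the trained output matrices with equal weights.
W_pred = W_outs.mean(axis=0)
```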

To improve the predictive capacity of this technique, I then implemented a Gaussian distribution to serve as a radial basis function for the averaging process.  Each trained RC's output matrix was factored into the average in proportion to the value of the Gaussian at its training mu, so output matrices computed at mu values close to the prediction mu were weighted much more heavily than those from more distant mu values.  Incorporating this weighting greatly improved the accuracy of predictions, but it still could not compete with simply letting an unmodified RC compute its own output matrices.  We may decide that there is more work to be done or that this training technique is not entirely viable.
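Schematically, the weighting looks like this (the kernel width sigma is a tuning knob I've picked arbitrarily here, and train_mus and W_outs carry over from the sketch above):

```python
import numpy as np

def gaussian_weights(train_mus, mu_new, sigma=0.3):
    # Radial basis weights: training mus near mu_new contribute far
    # more to the average than distant ones.
    w = np.exp(-((train_mus - mu_new) ** 2) / (2 * sigma**2))
    return w / w.sum()  # normalize so the weights sum to 1

# Weighted average of the output matrices from the previous sketch.
w = gaussian_weights(train_mus, 1.2)
W_pred = np.tensordot(w, W_outs, axes=1)  # sum_i w[i] * W_outs[i]
```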

Outside of work, I had a great time with the other interns.  On Thursday, we traveled to the American Center for Physics, where we had lunch with Nobel laureate John Mather.  We asked him a great many questions, picking his brain for physics opinions and advice.  I had never met a Nobel laureate before, so this was a fascinating experience.

On Friday, SPS took us to a Nationals baseball game, which they won handily (guess we're their good luck charm).  Although I don't watch much (or any) baseball, it was a blast, so many thanks to SPS for the field trip.  We then attended a midnight showing of The Rocky Horror Picture Show, which was every bit as fun as I had anticipated.  I had watched the movie a few times before, but never attended a live show with a shadow cast, so it was a radically different experience.  A couple of gentlemen in the row in front of us were clearly veterans, and they had jokes for just about every line in the film.  I could hardly pay attention to the movie over the burning in my sides from laughing, and I'm definitely looking forward to going again.

On Saturday, a few interns and I attended the March for Our Lives event at the Washington Monument, then journeyed to the Pride Parade at Dupont Circle.  Afterwards, we checked out a nearby block party celebrating Pride Month, then trekked to the Lincoln Memorial to watch a fireworks show over the treetops to the southeast. 

Once again, I had an immense amount of fun this week, and I'm looking forward to the adventures to come.

Be back soon.

Benjamin Johnson