Julian Schmitt

Harvard College

From Snow Droughts

NOAA PSL/GFDL Snow Droughts Internship

Funded by the NOAA Hollings Scholarship Program

During summer 2021, I conducted research on snow droughts in the Western United States under the direction of Mimi Hughes (NOAA PSL Boulder Lab) and Nathaniel Johnson and Kai-Chih Tseng (GFDL/Princeton). Our work explored what sorts of snow conditions we might expect in the coming century, deriving this variability from a large ensemble climate model. I am currently preparing to give a talk at AMS in January and working towards publication. Check the project out on GitHub.

The goal of the project is to use the variability that the large ensemble captures to explore the potential distribution of the onset of snow drought/low-snow conditions in the US West. As part of this model-based project, I developed methodology to classify drought severity based on historical snowfall, finding that climatological shifts are first observed early in the 20th century. We further leverage ensemble variability to highlight both when regions across the West are expected to transition to low-snow climates (based on historical snowfall) and when, due to variability in atmospheric conditions, that transition could happen up to 15 years earlier. I am presenting my findings at AMS 2022 and am currently working towards publication. View a more detailed summary via the button below:
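As a rough, hypothetical sketch of the kind of classification this involves (the actual thresholds, baseline period, and ensemble handling in the project differ and live in the GitHub repo), a percentile-based severity classifier over a historical baseline could look like the following; the variable names, coordinate layout, and cutoffs are assumptions, not the project's.

```python
# Hypothetical sketch: label each year/grid cell's snow water equivalent (SWE)
# relative to a historical baseline distribution. Thresholds, the baseline
# window, and the xarray layout are illustrative placeholders only.
import xarray as xr

def classify_snow_drought(swe_annual: xr.DataArray,
                          baseline=slice(1950, 1999)) -> xr.DataArray:
    """Return 0 (no drought), 1 (moderate), or 2 (severe) per year and grid cell."""
    hist = swe_annual.sel(year=baseline)
    p10 = hist.quantile(0.10, dim="year").drop_vars("quantile")  # severe cutoff
    p30 = hist.quantile(0.30, dim="year").drop_vars("quantile")  # moderate cutoff

    severity = xr.where(swe_annual < p10, 2,            # severe drought
                xr.where(swe_annual < p30, 1, 0))       # moderate / none
    return severity
```

Applied year by year across ensemble members, a classifier of this general shape gives the per-member transition dates whose spread reflects the atmospheric variability discussed above.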

to Earthquakes

Harvard Seismology Group

Undergraduate Research Position

Beginning in spring 2020, I joined the Harvard Seismology group as a research assistant in a term-time position, which ultimately grew into a 15-month position as I followed my PI, Marine A. Denolle, to the University of Washington during spring 2021. Through group meetings, inter-department collaboration, seminars, and presentations at two fall conferences (SCEC and AGU), I've experienced what research in a graduate lab is like.

Both of the projects I worked on utilized Amazon's EC2 and S3 cloud computing and storage services. Leveraging these tools allowed us to overcome challenges associated with storing, moving, and computing on hundreds of terabytes of seismological data. The first project involved roughly 20 TB of nodal data from sensors deployed in the Los Angeles basin. Using pair-wise comparisons (cross-correlations) of the waveforms, we derived estimates for the severity of expected ground motion from a potential future earthquake in the San Andreas Fault area. We collaborated with a team at UC San Diego who were running numerical earthquake simulations to verify these findings. My role was developing the script and cloud architecture to run the empirical cross-correlation calculations, along with post-processing and plotting results, which I presented weekly at the project meeting.
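For a sense of the core computation, here is a minimal, illustrative cross-correlation of two traces in NumPy. The real pipeline adds preprocessing (detrending, filtering, spectral whitening), stacking over many time windows, and the EC2/S3 orchestration; the function below is only a sketch, and its names and normalization are my own choices.

```python
# Minimal sketch of a pairwise cross-correlation between two station traces.
# Real noise cross-correlation workflows add detrending, bandpass filtering,
# whitening, and stacking over many windows before interpreting the result.
import numpy as np

def cross_correlate(trace_a: np.ndarray, trace_b: np.ndarray,
                    max_lag: int) -> np.ndarray:
    """Normalized cross-correlation of two equal-length traces, +/- max_lag samples."""
    a = (trace_a - trace_a.mean()) / (trace_a.std() * len(trace_a))
    b = (trace_b - trace_b.mean()) / trace_b.std()
    full = np.correlate(a, b, mode="full")     # lags from -(N-1) to +(N-1)
    mid = len(full) // 2                       # index of zero lag
    return full[mid - max_lag: mid + max_lag + 1]
```

In the cloud workflow, many such pairwise computations run in parallel across EC2 instances, reading raw waveforms from and writing correlation functions back to S3.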

The second project scaled up the first by an order of magnitude, with a target of computing 20 years of cross-correlations for all of California. I developed a Docker-parallelized script to scrape over 70 TB of raw waveforms from FDSN data centers in under 72 hours, resulting in a 10x improvement in both cost and speed over a similar technique published in MacCarthy et al. (2020). The video above is the preview for the talk I gave at AGU.
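To give a flavor of what one of the parallel download workers might look like, here is a hypothetical sketch using ObsPy's FDSN client and boto3. The data center, station selection, channel codes, time chunking, bucket layout, and Docker orchestration in the actual script differ; everything named below is an illustrative placeholder.

```python
# Illustrative download worker: fetch one day of waveforms for one station
# from an FDSN data center and upload it to S3. Station list, channel codes,
# and the bucket layout are placeholders, not the actual pipeline's.
import boto3
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

def download_day(network: str, station: str, day: UTCDateTime,
                 bucket: str = "my-waveform-bucket") -> None:
    client = Client("IRIS")                                  # FDSN data center
    stream = client.get_waveforms(network, station, "*", "BH?",
                                  day, day + 86400)          # one day of data
    local_path = f"/tmp/{network}.{station}.{day.date}.mseed"
    stream.write(local_path, format="MSEED")

    s3 = boto3.client("s3")                                  # assumes AWS credentials are configured
    s3.upload_file(local_path, bucket,
                   f"raw/{network}/{station}/{day.date}.mseed")
```

Running many containers like this concurrently, each assigned its own station/day chunks, is what lets the full 70 TB transfer finish within the 72-hour window.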