About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
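For reference, a minimal sketch of the relevant line in a Jekyll config.yml (the comment is mine; only the future key is the actual setting):

```yaml
# config.yml (Jekyll site configuration)
# When false, posts with a date in the future are excluded from the build.
future: false
```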
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published as an arXiv preprint, 2020
An investigation into the dynamics of roll solutions at the zigzag boundary of the planar Swift-Hohenberg equation, carried out as part of an REU at Ohio University during the summer of 2019.
Download here
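For context, a standard form of the planar Swift-Hohenberg equation (the cubic nonlinearity and normalization here are the usual textbook choices and may differ from the paper's):

```latex
% Planar Swift-Hohenberg equation, standard cubic form (assumed normalization):
\partial_t u = -(1 + \Delta)^2 u + \mu u - u^3,
\qquad u = u(x, y, t), \quad \Delta = \partial_x^2 + \partial_y^2.
% Roll solutions are stationary, spatially periodic stripe patterns,
% u \approx a \cos(kx) near onset; the zigzag boundary marks the wavenumbers k
% at which rolls destabilize under long-wavelength transverse perturbations.
```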
Published:
Abstract: Given the inability to directly observe the conditions of a fault line, inferring the parameters that describe it has been of practical interest for the past few decades. To resolve this under a linear elasticity forward model, we consider Bayesian inference in the infinite-dimensional setting, given a set of surface displacement measurements, resulting in a posterior distribution that characterizes the initial fault displacement. We employ adjoint-based gradient computation to solve the underlying partial differential equation constrained optimization problem, and we take care to leverage both dimensionality reduction in the parameter space and the low-rank structure of the resulting posterior covariance, owing to the sparse measurement locations, to carry out the computation in a scalable manner.
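As background, the linear Gaussian setting the abstract describes admits a closed-form Gaussian posterior; a sketch in standard (assumed) notation:

```latex
% Linear Gaussian Bayesian inverse problem (standard notation, assumed here):
% data model d = F m + \eta with \eta \sim N(0, \Gamma_{noise}),
% and Gaussian prior m \sim N(m_{pr}, \Gamma_{pr}).
\Gamma_{\mathrm{post}}
  = \left( F^{*} \Gamma_{\mathrm{noise}}^{-1} F
    + \Gamma_{\mathrm{pr}}^{-1} \right)^{-1},
\qquad
m_{\mathrm{post}}
  = \Gamma_{\mathrm{post}} \left( F^{*} \Gamma_{\mathrm{noise}}^{-1} d
    + \Gamma_{\mathrm{pr}}^{-1} m_{\mathrm{pr}} \right).
% With few measurement locations, F^* \Gamma_{noise}^{-1} F is low rank,
% which is what makes low-rank posterior approximations scalable.
```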
Published:
Abstract: We consider sensitivity analysis of Bayesian inverse problems with respect to modeling uncertainties. To this end, we consider sensitivity analysis of the information gain, as measured by the Kullback-Leibler divergence from the posterior to the prior. This choice provides a principled approach that leverages key structures within the Bayesian inverse problem; moreover, the information gain reduces to the Bayesian D-optimal design criterion in the case of linear Gaussian inverse problems. However, the derivatives of the information gain are not simple to compute, and finite differences are not always possible, let alone scalable. To solve half the puzzle, in this talk we present a method for computing eigenvalue sensitivities of implicitly defined linear operators appearing in PDE-constrained optimization problems. Specifically, we consider eigenvalue sensitivities of the so-called data misfit Hessian and its preconditioned counterpart. We start with simple examples and work our way up to the expressions in the information gain. Our approach relies on adjoint-based methods for gradient and Hessian computation. The resulting expressions for the sensitivities are exact and can be computed in a scalable manner.
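The identity underlying such eigenvalue sensitivities is classical first-order perturbation theory; a sketch in my own notation, for a symmetric operator H(θ) with a simple eigenvalue:

```latex
% First-order eigenvalue perturbation (a Hellmann-Feynman type identity)
% for a symmetric H(\theta) with simple eigenpair (\lambda, v), v^T v = 1:
H(\theta)\, v(\theta) = \lambda(\theta)\, v(\theta)
\quad \Longrightarrow \quad
\frac{\partial \lambda}{\partial \theta}
  = v^{T} \, \frac{\partial H}{\partial \theta} \, v .
% When H is a data misfit Hessian defined implicitly through PDE solves,
% the action of \partial H / \partial \theta on v is evaluated with adjoint
% methods rather than by forming H explicitly.
```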
Published:
Abstract: We consider sensitivity analysis of Bayesian linear inverse problems with respect to modeling uncertainties. To this end, we consider sensitivity analysis of the information gain, as measured by the Kullback-Leibler divergence from the posterior to the prior. This choice provides a principled approach that leverages key structures within the Bayesian inverse problem. Moreover, the information gain admits a closed-form expression in the case of linear Gaussian inverse problems. The derivatives of the information gain, however, are extremely challenging to compute. To address this challenge, we present accurate and efficient methods that combine eigenvalue sensitivities with hyper-differential sensitivity analysis, taking advantage of adjoint-based gradient and Hessian computation. This results in a computational approach whose cost, in number of PDE solves, does not grow under mesh refinement. We present these results on an application-driven model problem, using a simplified earthquake model to infer fault slip from surface measurements.
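For reference, the closed form in the linear Gaussian case can be written spectrally; a sketch under standard assumptions (notation mine), with λ_i the eigenvalues of the prior-preconditioned data misfit Hessian:

```latex
% Information gain (KL divergence from posterior to prior) in the linear
% Gaussian case, via the eigenvalues \lambda_i of the prior-preconditioned
% data misfit Hessian
% \tilde{H} = \Gamma_{pr}^{1/2} F^{*} \Gamma_{noise}^{-1} F \Gamma_{pr}^{1/2}:
D_{\mathrm{KL}}\!\left( \mu_{\mathrm{post}} \,\|\, \mu_{\mathrm{pr}} \right)
  = \frac{1}{2} \sum_{i} \left[ \ln\left(1 + \lambda_i\right)
      - \frac{\lambda_i}{1 + \lambda_i} \right]
    + \frac{1}{2} \left\| m_{\mathrm{post}} - m_{\mathrm{pr}}
        \right\|_{\Gamma_{\mathrm{pr}}^{-1}}^{2} .
% Differentiating this expression with respect to model parameters is what
% calls for the eigenvalue sensitivities described above.
```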
Graduate course, New York University, Department of Computer Science, 2017
In the fall semesters of both 2017 and 2018, I was a tutor and TA for the fundamental algorithms course, taught by Alan Siegel at New York University.
Undergraduate course, New York University, Department of Computer Science, 2018
In the spring semesters of both 2018 and 2019, I was a tutor and TA for the basic algorithms course, taught by Alan Siegel at New York University.
Undergraduate course, New York University, Department of Computer Science, 2020
In the spring semester of 2020, I worked as an undergraduate tutor for the numerical computing course, taught by Margaret Wright.
Undergraduate course, North Carolina State University, Department of Mathematics, 2020
In the fall semester of 2020, I taught a recitation section and graded for the Elements of Calculus course.
Undergraduate course, North Carolina State University, Department of Mathematics, 2021
In the spring semester of 2021, I taught a recitation section and graded for Calculus II, taught by Patrick Sprenger.
Undergraduate course, North Carolina State University, Department of Mathematics, 2021
In the first six-week summer session of 2021, I taught all recitation sections and graded for Calculus II, taught by Ashley Tharp.