Oriel Savir

Hey, I'm Oriel

Thanks for visiting

About

Hello! My name is Oriel Savir. I'm a graduate of Johns Hopkins University, where I studied Applied Mathematics & Computer Science. Throughout my time in college, I gained experience in deep learning research, software development, and open-source tech. As a researcher, I am, broadly speaking, motivated by problems that sit in the gap between theory and empirical experimentation: generalizability & robustness, representation learning, probabilistic modeling, reinforcement learning, and alignment & interpretability for safety. I'm also interested in math research (at the moment this is a hobby, although I would love to connect my ML work to it), particularly analysis, topology, number theory, and differential equations. As a programmer, I am currently interested in systems for ML performance and stability (squeezing the most juice out of hardware) and in agentic applications. I'm always eager to learn more, meet passionate people, and make the most out of life!

Interests

Deep Learning, ML Systems/Performance, Representation Learning, Generalizability & Robustness, Alignment & Interpretability, Safe AI

Publications

Learning Affine-Equivariant Proximal Operators

Oriel Savir, Zhenghan Fang, Jeremias Sulam

IEEE ICASSP, 2026

Experience

ML Kernels Engineer

Annapurna Labs (AWS)

Starting Summer 2026

Will work on state-of-the-art ML kernels for large-scale LLM training and inference on next-generation accelerators.

Deep Learning Researcher

Mathematical Institute for Data Science (MINDS)

Mar 2024 – Present

First author of `Learning Affine-Equivariant Proximal Operators`, accepted at ICASSP 2026; invented a novel construction for learning guaranteed proximal operators while preserving affine equivariance. Significantly increased the noise-level robustness of proximal learning for inverse problems in imaging.

Python · PyTorch · Mathematics · CNNs · Deep Learning · Optimization · Neural Networks
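For the curious, a minimal sketch of the objects involved, using only the standard definitions (the construction itself is in the paper): the proximal operator of a function f, and the affine-equivariance property a learned operator P can satisfy.

```latex
% Standard definition of the proximal operator of f:
\[
  \operatorname{prox}_f(v)
    = \operatorname*{arg\,min}_{x}\;
      \tfrac{1}{2}\,\lVert x - v \rVert_2^2 + f(x).
\]
% Equivariance of an operator P under an affine map T x = A x + b
% (A invertible): applying T before P is the same as applying it after.
\[
  P(Tx) = T\,P(x).
\]
```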

Summer Intern

Investment Management Firm

Jul 2025 – Aug 2025

Worked on investment strategies and equity research on technology and consumer companies. Developed models and conducted extensive analysis of fundamentals and technicals.

Member of the Technical Staff Intern

Cockroach Labs

May 2025 – Jul 2025

Built an efficient command to extract full-schema DDL from live CockroachDB nodes. Optimized for latency and reliability, with comprehensive stress testing to mirror the deployment environment.

Go · Distributed Systems · SQL
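For flavor, here is a rough sketch of the kind of statement involved, not the actual internal command (which was written in Go and did far more latency and reliability work). CockroachDB speaks the Postgres wire protocol, so a plain client can pull schema DDL with `SHOW CREATE ALL TABLES`; the connection string below is a placeholder.

```python
# Hedged sketch: dump full-schema DDL from a CockroachDB node.
# The URL is a placeholder, not a real deployment.
import psycopg2

conn = psycopg2.connect(
    "postgresql://root@localhost:26257/defaultdb?sslmode=disable"
)
with conn, conn.cursor() as cur:
    # SHOW CREATE ALL TABLES emits one CREATE statement per row.
    cur.execute("SHOW CREATE ALL TABLES;")
    ddl = "\n\n".join(row[0] for row in cur.fetchall())
print(ddl)
conn.close()
```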

Senior Teaching Assistant, Deep Learning (CS 482/682)

Johns Hopkins University

Jan 2025 – Dec 2025

Served on the teaching staff for Hopkins' graduate-level deep learning course. Helped teach over 400 graduate students.

Teaching · Deep Learning · PyTorch

Software Engineering Intern

Capital One

May 2024 – Aug 2024

Developed Capital One's first in-house feature store for the credit issuance machine learning pipeline. Built v1 of a Python SDK for creating, managing, and querying feature stores at scale.

Python · SDK Development · Distributed Computing · Snowflake · AWS EMR · DynamoDB · DuckDB · Apache Spark · Docker · Polars · Delta Lake
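To illustrate what "creating, managing, and querying" means for a feature store, here is a purely hypothetical in-memory toy; none of these class or method names come from the actual Capital One SDK.

```python
# Purely hypothetical toy feature store; the real system sat on
# Snowflake/DynamoDB-scale infrastructure, not a dict.
from collections import defaultdict

class ToyFeatureStore:
    def __init__(self):
        self._schema = {}               # feature name -> dtype
        self._rows = defaultdict(dict)  # entity id -> {name: value}

    def register(self, name, dtype):
        """Declare a feature and its type before any writes."""
        self._schema[name] = dtype

    def write(self, entity_id, name, value):
        if name not in self._schema:
            raise KeyError(f"unregistered feature: {name}")
        self._rows[entity_id][name] = self._schema[name](value)

    def query(self, entity_id, names):
        """Fetch a feature vector for one entity (e.g. at inference time)."""
        row = self._rows[entity_id]
        return {n: row.get(n) for n in names}

store = ToyFeatureStore()
store.register("credit_utilization", float)
store.write("cust-42", "credit_utilization", "0.37")
print(store.query("cust-42", ["credit_utilization"]))
# -> {'credit_utilization': 0.37}
```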

Software Engineering Intern

XTractor

Jun 2023 – Dec 2023

Led web development of a tool for extracting tabular data from messy, old documents, used by researchers at top universities. Worked with a team of talented upper-level and graduate students to solve a challenging task before it was easy.

TypeScript · Next.js · AWS S3 · AWS SageMaker · PyTorch · Tailwind CSS

Computational Biophysics Research Assistant

JHU Department of Biophysics

Apr 2023 – Dec 2023

Modeled the dynamics of clathrin-mediated endocytosis using stochastic reaction-diffusion simulations in C++. Contributed to HPC reaction-dynamics simulation software.

Python · C++ · NumPy · SciPy · GNU Scientific Library · Matplotlib · Differential Calculus
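As a flavor of the method (not the actual research code, which was in C++ and far more involved), a stochastic reaction-diffusion step can be tau-leaped on a grid roughly like this; all rates and sizes here are made-up placeholders.

```python
# Toy tau-leaping reaction-diffusion on a periodic 1D grid.
# Rates and grid size are made-up placeholders, not from the research.
import numpy as np

rng = np.random.default_rng(0)
n_sites, dt = 100, 0.01
hop_rate, decay_rate = 1.0, 0.5
counts = rng.poisson(20, n_sites)  # molecules per site

def step(counts):
    counts = counts.copy()
    # Diffusion: each molecule hops left or right with prob hop_rate*dt.
    left = rng.binomial(counts, hop_rate * dt)
    right = rng.binomial(counts - left, hop_rate * dt)
    counts += np.roll(left, -1) + np.roll(right, 1) - left - right
    # Reaction: first-order decay, tau-leaped with a binomial draw.
    counts -= rng.binomial(counts, 1.0 - np.exp(-decay_rate * dt))
    return counts

for _ in range(1000):
    counts = step(counts)
print(int(counts.sum()), "molecules remaining")
```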

Software Engineer

Delineo Disease Modeling Group

Jan 2022 – Oct 2022

Led a development team modeling disease-spread mechanisms using data science and Monte Carlo simulations of major U.S. cities.

Python · Next.js · TensorFlow · MongoDB · PyMongo · SynthPops

Extra Homepage Tidbits

Fun stuff

  • I love all animals and have two dogs: Ben (a 14-year-old Shih Tzu, former show diva) and Einstein (a 4-month-old goldendoodle)! As is obligatory on the internet, photos of the two are at the bottom of the page.
  • I am a self-taught guitarist and have been learning for two years. I play an acoustic, which I find versatile enough, but as a huge rock fan I plan to get an electric guitar setup very soon. I have also been noodling around with some music theory and composition/songwriting (though my guitar skills are better than my voice for now!). Eventually, I would love to get to the point of releasing originals.
  • I also love EDM (especially DnB) and have been learning to use Ableton for a couple of months.
  • I have lived on three continents: North America, Asia, and Europe. A goal is to spend at least some time on all seven (including Antarctica)!
  • I occasionally play video games. My all-time favorites are The Witcher, Portal, Soulsborne, and The Last of Us.
  • I have been consistent about going to the gym for the past few months for strength training. I find it's not only healthy but also helps a lot with building dedication; highly recommended.
  • I enjoy reading, particularly dystopian fiction, sci-fi, detective novels, and non-fiction on philosophy, physics, and math.
  • I love to cook new recipes and am generally willing to try most foods at least once (with few exceptions)!

Things I believe (a constantly evolving list):

  • The world is beautiful and complex, and there are many opportunities to ask questions about how things work. This is under-appreciated.
  • Mathematics is extremely powerful and, with effort, can be used to understand everything. Learning math develops priceless cognitive and logical problem-solving skills. Thus, learning more math is a good idea for anyone.
  • The best way to learn is by doing.
  • Pineapple can work on pizza, but there is a time and place for everything. NY pizza is the best slice.
  • The Artificial Intelligence boom and its impact will be the defining force of most of the 21st century.
  • Modern LLMs and training methods are very impressive, but future AI progress will require addressing long-horizon planning and action without dense feedback. The current failure mode is more structural than parametric, and cannot be addressed by scaling alone. Latent dynamics modeling, representation learning, and program synthesis will be expanded upon for better structured world models. Reinforcement learning will make a strong comeback when learned dynamics support long-horizon planning and world understanding.
  • Interpretability and alignment are crucial for the long-term success of AI for the benefit of humanity.

Dogs!

Ben: The Shih Tzu


Einstein: The Goldendoodle
