I was a postdoc in Computer Science at Stanford, working with Tatsu Hashimoto, Percy Liang, and Tengyu Ma. I received my PhD from MIT, where I was fortunate to be advised by Aleksander Madry and Nir Shavit. Before that, I graduated from IIT Bombay with a Bachelor's and Master's in Electrical Engineering. In the past, I have interned at Google Brain and Vicarious.
I am interested in developing machine learning tools that perform reliably in the real world, and in characterizing the consequences when they fail to do so. My research has been supported by a Google PhD Fellowship and an Open Philanthropy early career grant.
Whose Opinions Do Language Models Reflect?
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto
ICML 2023
Is a Caption Worth a Thousand Images? A Controlled Study for Representation Learning
Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang, Tatsunori Hashimoto
ICLR 2023
Editing a classifier by rewriting its prediction rules
Shibani Santurkar*, Dimitris Tsipras*, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry
NeurIPS 2021
[Blog post]
Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong*, Shibani Santurkar*, Aleksander Madry
ICML 2021 (Long Presentation)
[Blog posts: part 1 and part 2]
BREEDS: Benchmarks for Subpopulation Shift
Shibani Santurkar*, Dimitris Tsipras*, Aleksander Madry
ICLR 2021
[Blog post], [Code and Data]
From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
Dimitris Tsipras*, Shibani Santurkar*, Logan Engstrom, Andrew Ilyas, Aleksander Madry
ICML 2020
[Blog post]
Identifying Statistical Bias in Dataset Replication
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
ICML 2020
[Blog post]
Implementation Matters in Deep RL: A Case Study on PPO and TRPO
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
ICLR 2020 (Oral Presentation)
[Blog posts: part 1 and part 2]
A Closer Look at Deep Policy Gradients
Andrew Ilyas*, Logan Engstrom*, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
ICLR 2020 (Oral Presentation)
[Blog post]
Image Synthesis with a Single (Robust) Classifier
Shibani Santurkar*, Dimitris Tsipras*, Brandon Tran*, Andrew Ilyas*, Logan Engstrom*, Aleksander Madry
NeurIPS 2019
[Blog post], [Code], [Demo]
Learning Perceptually-Aligned Representations via Adversarial Robustness
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar*, Dimitris Tsipras*, Brandon Tran*, Aleksander Madry
[Blog post], [Code]
Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas*, Shibani Santurkar*, Dimitris Tsipras*, Logan Engstrom*, Brandon Tran, Aleksander Madry
NeurIPS 2019 (Spotlight Presentation)
[Blog post], [Datasets]
Robustness May Be at Odds with Accuracy
Dimitris Tsipras*, Shibani Santurkar*, Logan Engstrom*, Alexander Turner, Aleksander Madry
ICLR 2019
How Does Batch Normalization Help Optimization?
Shibani Santurkar*, Dimitris Tsipras*, Andrew Ilyas*, Aleksander Madry
NeurIPS 2018 (Oral Presentation)
[Blog post], [Short video]
Adversarially Robust Generalization Requires More Data
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry
NeurIPS 2018 (Spotlight Presentation)
A Classification-Based Study of Covariate Shift in GAN Distributions
Shibani Santurkar, Ludwig Schmidt, Aleksander Madry
ICML 2018
Deep Tensor Convolution on Multicores
David Budden, Alexander Matveev, Shibani Santurkar, Shraman Ray Chaudhuri, Nir Shavit
ICML 2017
Toward Streaming Synapse Detection with Compositional ConvNets
Shibani Santurkar, David Budden, Alexander Matveev, Heather Berlin, Hayk Saribekyan, Yaron Meirovitch, Nir Shavit