Reinterpreting demand estimation

[Latest version]

I translate two models of demand estimation (Berry and Haile, 2014, 2024) into the Neyman–Rubin potential outcomes model and show a Vytlacil (2002)-style equivalence result.

Abstract This paper connects the literature on demand estimation to the literature on causal inference by interpreting nonparametric structural assumptions as restrictions on counterfactual outcomes. It offers nontrivial and equivalent restatements of key demand estimation assumptions in the Neyman–Rubin potential outcomes model, for both settings with market-level data (Berry and Haile, 2014) and settings with demographic-specific market shares (Berry and Haile, 2024). This exercise helps bridge the literatures on structural estimation and on causal inference by separating notational and linguistic differences from substantive ones.

Empirical Bayes shrinkage (mostly) does not correct the measurement error in regression, with Jiaying Gu and Soonwoo Kwon

[Latest version] [arXiv]

Abstract In the value-added literature, it is often claimed that regressing on empirical Bayes shrinkage estimates corrects for the measurement error problem in linear regression. We clarify the conditions needed and argue that they are stronger than those required for the classical measurement error correction, which we advocate instead. Moreover, we show that the classical estimator cannot be improved without stronger assumptions. We extend these results to regressions on nonlinear transformations of the latent attribute and find generically slow minimax estimation rates.
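
To make the distinction concrete, here is a minimal simulation sketch, my own illustration rather than anything from the paper: a hypothetical data-generating process in which the latent attribute is correlated with its standard error, so that prior independence fails. Regressing on grand-mean shrinkage estimates then misses the true slope, while the classical correction recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

s = rng.uniform(0.5, 2.0, n)                 # known standard errors
theta = 2.0 * s + 0.5 * rng.normal(size=n)   # latent attribute, correlated with s
y = theta + s * rng.normal(size=n)           # noisy estimates of theta
z = theta + rng.normal(size=n)               # outcome; true slope on theta is 1

# Grand-mean EB shrinkage with the usual plug-in variance estimate.
var_theta_hat = np.var(y) - np.mean(s**2)
lam = var_theta_hat / (var_theta_hat + s**2)
theta_eb = y.mean() + lam * (y - y.mean())

def ols_slope(x, resp):
    xd = x - x.mean()
    return xd @ (resp - resp.mean()) / (xd @ xd)

print("naive OLS of z on y:     ", ols_slope(y, z))         # attenuated
print("OLS of z on EB estimate: ", ols_slope(theta_eb, z))  # biased when
                                                            # independence fails
print("classical ME correction: ",
      np.cov(y, z)[0, 1] / var_theta_hat)                   # close to 1
```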

Certified Decisions, with Isaiah Andrews

[Latest version] [arXiv]

We connect statistical inference with statistical decisions by viewing inference as providing guarantees for decisions. This turns out to be essentially without loss: certified decisions implicitly conduct inference. Such certified decisions give downstream decision-makers safety guarantees.

Abstract Hypothesis tests and confidence intervals are ubiquitous in empirical research, yet their connection to subsequent decision-making is often unclear. We develop a theory of certified decisions that pairs recommended decisions with inferential guarantees. Specifically, we attach _P-certificates_---upper bounds on loss that hold with probability at least $1-\alpha$---to recommended actions. We show that such certificates allow "safe," risk-controlling adoption decisions for ambiguity-averse downstream decision-makers. We further prove that it is without loss to limit attention to P-certificates arising as minimax decisions over confidence sets, or what Manski (2021) terms "as-if decisions with a set estimate." A parallel argument applies to E-certified decisions obtained from e-values in settings with unbounded loss.
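
As a toy illustration of a P-certificate, not the paper's general construction, consider squared-error loss for a normal mean with known standard deviation. The minimax action over the usual two-sided confidence interval is its midpoint, and the interval's worst-case loss is a valid certificate: it is exceeded only when the interval fails to cover.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, sigma, theta = 0.05, 1.0, 0.3        # theta is unknown to the analyst

def certify(y, sigma, alpha):
    """Minimax action over [y - z*sigma, y + z*sigma] and the worst-case
    squared loss on that interval, which serves as a P-certificate."""
    z = norm.ppf(1 - alpha / 2)
    action = y                               # midpoint of a symmetric interval
    certificate = (z * sigma) ** 2           # worst-case loss on the interval
    return action, certificate

# The guarantee: loss exceeds the certificate only when the underlying
# confidence interval fails to cover theta, which has probability alpha.
y = rng.normal(theta, sigma, 100_000)
action, certificate = certify(y, sigma, alpha)
print("P(loss > certificate) =", ((action - theta) ** 2 > certificate).mean())
```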

Potential weights and implicit causal designs in linear regression

[Latest version] [arXiv]

I introduce a simple and generic diagnostic for design-based causal interpretation of regression estimands.

Abstract When we interpret linear regression estimates as causal effects justified by quasi-experiments, what do we mean? This paper characterizes the necessary implications when researchers ascribe a design-based interpretation to a given regression. To do so, we define a notion of potential weights, which encode how a given regression would weight unobserved potential outcomes. A plausible design-based interpretation of a regression estimand implies linear restrictions on the true distribution of treatment; the coefficients in these linear equations are exactly the potential weights. Solving these linear restrictions yields a set of implicit designs that necessarily includes the true design if the regression admits a causal interpretation. These necessary implications lead to practical diagnostics that add transparency and robustness when a design-based interpretation is invoked for a regression. They also lead to new theoretical insights: They serve as a framework that unifies and extends existing results, and they deliver new results for widely used but less understood specifications.
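
For intuition, here is a minimal sketch, under an assumed data-generating process of my choosing, of the realized weights a regression places on observed outcomes via the Frisch–Waugh–Lovell theorem. The paper's potential weights extend this object to counterfactual treatment assignments; this sketch only computes the realized weights as a simple transparency check.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=(n, 3))                      # controls
d = rng.binomial(1, 0.3 + 0.1 * (x[:, 0] > 0))   # treatment
y = 2.0 * d + x @ np.ones(3) + rng.normal(size=n)

# Residualize D on the controls (plus intercept); the coefficient on D
# in the long regression equals a weighted sum of outcomes, w @ y.
xc = np.column_stack([np.ones(n), x])
d_tilde = d - xc @ np.linalg.lstsq(xc, d, rcond=None)[0]
w = d_tilde / (d_tilde @ d_tilde)                # realized regression weights

beta_from_weights = w @ y
beta_ols = np.linalg.lstsq(np.column_stack([d, xc]), y, rcond=None)[0][0]
print(np.isclose(beta_from_weights, beta_ols))   # True: same coefficient
print("negative weights among treated:", (w[d == 1] < 0).mean())
```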

Empirical Bayes When Estimation Precision Predicts Parameters

[Minor revisions requested by Econometrica] [Latest version] [arXiv] [R package] [Slides]

I introduce a new empirical Bayes shrinkage method for the normal location model $Y_i \mid \theta_i, \sigma_i \sim N(\theta_i, \sigma_i^2)$ that does not assume $\theta_i$ is independent of $\sigma_i$. I find that my proposed method selects tracts with much higher average mobility than the standard empirical Bayes shrinkage procedure does.

Abstract Empirical Bayes methods usually maintain a prior independence assumption: The unknown parameters of interest are independent of the known standard errors of the estimates. This assumption is often theoretically questionable and empirically rejected. This paper instead models the conditional distribution of the parameter given the standard errors as a flexibly parametrized family of distributions, leading to a family of methods that we call CLOSE. This paper establishes that (i) CLOSE is rate-optimal for squared error Bayes regret, (ii) squared error regret control is sufficient for an important class of economic decision problems, and (iii) CLOSE is worst-case robust when our assumption on the conditional distribution is misspecified. Empirically, using CLOSE leads to sizable gains for selecting high-mobility Census tracts. Census tracts selected by CLOSE are substantially more mobile on average than those selected by the standard shrinkage method.
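
A stylized sketch of the idea follows, simplified by me and not the paper's actual implementation (see the R package for that): model $\theta_i \mid \sigma_i$ as a location-scale family with polynomial conditional mean and variance, then shrink toward the conditional mean rather than the grand mean.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
sigma = rng.uniform(0.2, 1.0, n)
theta = 1.5 * sigma + 0.5 * rng.normal(size=n)   # theta depends on sigma
y = theta + sigma * rng.normal(size=n)

# Conditional mean m(sigma): quadratic fit of y on sigma.
A = np.vander(sigma, 3)
m_hat = A @ np.linalg.lstsq(A, y, rcond=None)[0]

# Conditional variance v(sigma): quadratic fit of the excess squared
# residual (y - m_hat)^2 - sigma^2 on sigma, clipped to stay positive.
v_hat = A @ np.linalg.lstsq(A, (y - m_hat) ** 2 - sigma**2, rcond=None)[0]
v_hat = np.clip(v_hat, 1e-6, None)

# Shrink toward the conditional mean with conditional shrinkage factors.
post_close = m_hat + v_hat / (v_hat + sigma**2) * (y - m_hat)

# Benchmark: shrinkage toward the grand mean under prior independence.
v0 = max(np.var(y) - np.mean(sigma**2), 1e-6)
post_indep = y.mean() + v0 / (v0 + sigma**2) * (y - y.mean())

print("MSE, independence-based EB:", np.mean((post_indep - theta) ** 2))
print("MSE, conditional EB sketch:", np.mean((post_close - theta) ** 2))
```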

Optimal Conditional Inference in Adaptive Experiments, with Isaiah Andrews

[arXiv]

We study batched bandit experiments and identify a small free lunch for adaptive inference. For $\varepsilon$-greedy-type experiments, we characterize the optimal conditional inference procedure given the history of bandit assignment probabilities.

Abstract We study batched bandit experiments and consider the problem of inference conditional on the realized stopping time, assignment probabilities, and target parameter, where all of these may be chosen adaptively using information up to the last batch of the experiment. Absent further restrictions on the experiment, we show that inference using only the results of the last batch is optimal. When the adaptive aspects of the experiment are known to be location-invariant, in the sense that they are unchanged when we shift all batch-arm means by a constant, we show that there is additional information in the data, captured by one additional linear function of the batch-arm means. In the more restrictive case where the stopping time, assignment probabilities, and target parameter are known to depend on the data only through a collection of polyhedral events, we derive computationally tractable and optimal conditional inference procedures.
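
The following toy simulation, my own and not drawn from the paper, illustrates why inference from the last batch alone is valid after adaptive assignment: the last batch is independent of the earlier data used to choose the arm, so the winner's-curse bias in the pooled estimate disappears.

```python
import numpy as np

rng = np.random.default_rng(4)
reps, n1, n2 = 100_000, 50, 50
mu = np.array([0.0, 0.0])        # two arms with equal true means

# Batch 1: pull both arms, then greedily pick the better-looking arm.
batch1_means = rng.normal(mu, 1.0, (reps, n1, 2)).mean(axis=1)
pick = batch1_means.argmax(axis=1)

# Batch 2: sample only the picked arm; this data is independent of the pick.
batch2_means = rng.normal(mu[pick], 1.0, (n2, reps)).mean(axis=0)

picked_b1 = batch1_means[np.arange(reps), pick]
pooled = (n1 * picked_b1 + n2 * batch2_means) / (n1 + n2)
print("bias of pooled estimate:    ", pooled.mean())       # > 0: winner's curse
print("bias of last-batch estimate:", batch2_means.mean()) # approximately 0
```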

Nonparametric Treatment Effect Identification in School Choice

[Revision requested by Journal of Econometrics] [arXiv] [Twitter TL;DR]

I characterize the consequences of treatment effect heterogeneity in settings where school choice mechanisms are used to estimate the causal effects of schools.

Abstract This paper studies nonparametric identification and estimation of causal effects in centralized school assignment. In many centralized assignment settings, students are subject to both lottery-driven variation and regression discontinuity (RD)-driven variation. We characterize the full set of identified atomic treatment effects (aTEs), defined as the conditional average treatment effect between a pair of schools, given student characteristics. Atomic treatment effects are the building blocks of more aggregated notions of treatment contrasts, and common approaches to estimating aggregations of aTEs can mask important heterogeneity. In particular, many aggregations of aTEs put zero weight on aTEs driven by RD variation, and estimators of such aggregations put asymptotically vanishing weight on the RD-driven aTEs. We develop a diagnostic tool for empirically assessing the weight put on aTEs driven by RD variation. Lastly, we provide estimators and accompanying asymptotic results for inference on aggregations of RD-driven aTEs.
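
As a stylized illustration of the zero-weight phenomenon, under an assumed variance-style weighting scheme rather than the paper's framework: aggregation weights proportional to $p(1-p)$ vanish for students whose assignment probability $p$ is degenerate on either side of an RD cutoff.

```python
# Conditional assignment probabilities for three stylized student groups.
groups = {
    "lottery tie": 0.5,     # randomized: p strictly between 0 and 1
    "above cutoff": 1.0,    # RD-determined: assigned with certainty
    "below cutoff": 0.0,    # RD-determined: never assigned
}
for name, p in groups.items():
    print(f"{name:>12}: variance weight p(1 - p) = {p * (1 - p):.2f}")
```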

Resting / subsumed papers

Mean-variance constrained priors have finite maximum Bayes risk in the normal location model

[arXiv]

Mostly Harmless Machine Learning: Learning Optimal Instruments in Linear IV Models, with Daniel Chen and Greg Lewis

[Accepted at NeurIPS 2020 Workshop on Machine Learning for Economic Policy] [arXiv] [Twitter TL;DR]

Causal Inference and Matching Markets

Undergraduate thesis advised by Scott Duke Kominers and David C. Parkes. Awarded the Thomas T. Hoopes Prize at Harvard College.

[Simulable Mechanisms] [Cutoff Mechanisms] [Regression discontinuity with endogenous cutoff]