I am a postdoctoral researcher at SIEPR. In July 2025, I will join the Department of Economics at Stanford University as an Assistant Professor. My research interests lie broadly in theoretical and applied econometrics, with a focus on causal inference and statistical decision theory.

I look forward to working with Stanford students and postdocs. Please don’t hesitate to reach out: my email is jiafeng [atsign] stanford dot edu.

I obtained my Ph.D. in Business Economics from Harvard University in 2024, and my A.B. and S.M. degrees summa cum laude in Applied Mathematics from Harvard College in 2019. My Ph.D. research was supervised by Isaiah Andrews, Elie Tamer, Jesse Shapiro, and Edward Glaeser. Previously, I worked at Microsoft Research New England with Greg Lewis and at QuantCo.

I publish under my Chinese name, Jiafeng Chen (pronunciation); I go by Kevin.

Working Papers

Potential weights and implicit causal designs in linear regression

[Latest version] [arXiv]

I introduce a simple and generic diagnostic for design-based causal interpretation of regression estimands.

Abstract When do linear regressions estimate causal effects in quasi-experiments? This paper provides a generic diagnostic that assesses whether a given linear regression specification on a given dataset admits a design-based interpretation. To do so, we define a notion of potential weights, which encode counterfactual decisions a given regression makes to unobserved potential outcomes. If the specification does admit such an interpretation, this diagnostic can find a vector of unit-level treatment assignment probabilities---which we call an implicit design---under which the regression estimates a causal effect. This diagnostic also finds the implicit causal effect estimand. Knowing the implicit design and estimand adds transparency, leads to further sanity checks, and opens the door to design-based statistical inference. When applied to regression specifications studied in the causal inference literature, our framework recovers and extends existing theoretical results. When applied to widely-used specifications not covered by existing causal inference literature, our framework generates new theoretical insights.
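
Below is a minimal numerical illustration of the starting observation, not the paper's diagnostic or its implicit-design construction: an OLS coefficient is a linear combination of unit-level outcomes, with weights obtained by residualizing the treatment on the other regressors (Frisch-Waugh-Lovell). The data-generating process is purely illustrative.

```python
# Minimal illustration (not the paper's full diagnostic): an OLS coefficient is a
# linear combination of unit outcomes, with unit-level weights obtained by
# residualizing the treatment on the other regressors. Inspecting these weights is
# the starting point for asking whether a specification admits an implicit design.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # control
d = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment, correlated with the control
y = 1.0 + 2.0 * d + 0.5 * x + rng.normal(size=n)

# OLS of y on [1, d, x]; we want the coefficient on d.
X = np.column_stack([np.ones(n), d, x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Residualize d on the remaining regressors [1, x].
W = np.column_stack([np.ones(n), x])
d_tilde = d - W @ np.linalg.lstsq(W, d, rcond=None)[0]

# Unit-level weights w_i such that beta_d = sum_i w_i * y_i.
w = d_tilde / (d_tilde @ d_tilde)
print(np.allclose(w @ y, beta[1]))  # True
```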

Empirical Bayes When Estimation Precision Predicts Parameters

[Revision requested by Econometrica] [Latest version] [arXiv] [R package] [Slides]

I introduce a new empirical Bayes shrinkage method for the normal location model $Y_i \mid \theta_i, \sigma_i \sim N(\theta_i, \sigma_i^2)$ that does not assume $\theta_i$ is independent of $\sigma_i$. In an application, my proposed method selects Census tracts that are substantially more mobile on average than those selected by the standard empirical Bayes shrinkage procedure.

Abstract Empirical Bayes methods usually maintain a prior independence assumption: The unknown parameters of interest are independent from the known standard errors of the estimates. This assumption is often theoretically questionable and empirically rejected. This paper instead models the conditional distribution of the parameter given the standard errors as a flexibly parametrized family of distributions, leading to a family of methods that we call CLOSE. This paper establishes that (i) CLOSE is rate-optimal for squared error Bayes regret, (ii) squared error regret control is sufficient for an important class of economic decision problems, and (iii) CLOSE is worst-case robust when our assumption on the conditional distribution is misspecified. Empirically, using CLOSE leads to sizable gains for selecting high-mobility Census tracts. Census tracts selected by CLOSE are substantially more mobile on average than those selected by the standard shrinkage method.
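
The sketch below is a stylized illustration of the underlying idea, not the CLOSE procedure (see the R package for that). In a simulated example where estimation precision predicts the parameter, shrinking toward a prior location modeled as a function of $\sigma_i$ improves on shrinkage that assumes prior independence; all distributional choices below are made up for illustration.

```python
# A stylized sketch (not the CLOSE implementation): contrast standard parametric
# empirical Bayes shrinkage, which assumes theta_i is independent of sigma_i, with
# a version that first models the prior location as a function of sigma_i.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sigma = rng.uniform(0.1, 1.0, size=n)
theta = -1.0 * sigma + 0.3 * rng.normal(size=n)   # precision predicts the parameter
y = theta + sigma * rng.normal(size=n)

def shrink(y, sigma, prior_mean, prior_var):
    """Posterior mean in the normal-normal model with the given prior."""
    lam = prior_var / (prior_var + sigma**2)
    return prior_mean + lam * (y - prior_mean)

# Standard EB: one prior mean and variance for all units.
m0 = y.mean()
v0 = max(y.var() - np.mean(sigma**2), 1e-8)
theta_hat_indep = shrink(y, sigma, m0, v0)

# Sigma-dependent prior location: regress y on sigma to model E[theta | sigma].
coef = np.polyfit(sigma, y, deg=1)
m_sigma = np.polyval(coef, sigma)
v_sigma = max(np.var(y - m_sigma) - np.mean(sigma**2), 1e-8)
theta_hat_cond = shrink(y, sigma, m_sigma, v_sigma)

# Mean squared error of each set of posterior means.
print(np.mean((theta_hat_indep - theta) ** 2), np.mean((theta_hat_cond - theta) ** 2))
```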

Optimal Conditional Inference in Adaptive Experiments, with Isaiah Andrews

[arXiv]

We study batched bandit experiments and identify a small free lunch for adaptive inference. For $\varepsilon$-greedy-type experiments, we characterize the optimal conditional inference procedure given the history of bandit assignment probabilities.

Abstract We study batched bandit experiments and consider the problem of inference conditional on the realized stopping time, assignment probabilities, and target parameter, where all of these may be chosen adaptively using information up to the last batch of the experiment. Absent further restrictions on the experiment, we show that inference using only the results of the last batch is optimal. When the adaptive aspects of the experiment are known to be location-invariant, in the sense that they are unchanged when we shift all batch-arm means by a constant, we show that there is additional information in the data, captured by one additional linear function of the batch-arm means. In the more restrictive case where the stopping time, assignment probabilities, and target parameter are known to depend on the data only through a collection of polyhedral events, we derive computationally tractable and optimal conditional inference procedures.

Mean-variance constrained priors have finite maximum Bayes risk in the normal location model

[arXiv]

I partially answer my own question on StackExchange. I show that the maximum squared error Bayes risk of the posterior mean under a misspecified prior (with correctly specified mean and variance, normalized to zero and one) is at most 535. I believe the sharp constant is 2.

Abstract Consider a normal location model $X \mid \theta \sim N(\theta, \sigma^2)$ with known $\sigma^2$. Suppose $\theta \sim G_0$, where the prior $G_0$ has zero mean and unit variance. Let $G_1$ be a possibly misspecified prior with zero mean and unit variance. We show that the squared error Bayes risk of the posterior mean under $G_1$ is bounded, uniformly over $G_0, G_1, \sigma^2 > 0$.
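
A quick Monte Carlo sanity check of the flavor of this result, not a proof: take the true prior $G_0$ to be the two-point distribution on $\{-1, +1\}$ (mean zero, unit variance) and the misspecified prior $G_1 = N(0, 1)$. The posterior mean under $G_1$ is the linear shrinkage rule $X/(1 + \sigma^2)$, and its Bayes risk under $G_0$ stays bounded as $\sigma^2$ varies.

```python
# Numerical sanity check only: true prior G0 on {-1, +1} (mean 0, variance 1),
# misspecified prior G1 = N(0, 1). The posterior mean under G1 is X / (1 + sigma^2);
# its squared error Bayes risk under G0 remains bounded across sigma.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
for sigma in [0.01, 0.1, 1.0, 10.0, 100.0]:
    theta = rng.choice([-1.0, 1.0], size=n)      # draws from G0
    x = theta + sigma * rng.normal(size=n)
    post_mean_g1 = x / (1.0 + sigma**2)          # posterior mean under G1 = N(0, 1)
    print(sigma, np.mean((post_mean_g1 - theta) ** 2))
```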

Nonparametric Treatment Effect Identification in School Choice

[Revision requested by Journal of Econometrics] [arXiv] [Twitter TL;DR]

I characterize the role of treatment effect heterogeneity in settings where school choice mechanisms are used to estimate the causal effects of schools.

Abstract This paper studies nonparametric identification and estimation of causal effects in centralized school assignment. In many centralized assignment settings, students are subjected to both lottery-driven variation and regression discontinuity (RD) driven variation. We characterize the full set of identified atomic treatment effects (aTEs), defined as the conditional average treatment effect between a pair of schools, given student characteristics. Atomic treatment effects are the building blocks of more aggregated notions of treatment contrasts, and common approaches estimating aggregations of aTEs can mask important heterogeneity. In particular, many aggregations of aTEs put zero weight on aTEs driven by RD variation, and estimators of such aggregations put asymptotically vanishing weight on the RD-driven aTEs. We develop a diagnostic tool for empirically assessing the weight put on aTEs driven by RD variation. Lastly, we provide estimators and accompanying asymptotic results for inference on aggregations of RD-driven aTEs.

Mostly Harmless Machine Learning: Learning Optimal Instruments in Linear IV Models, with Daniel Chen and Greg Lewis

[Accepted at NeurIPS 2020 Workshop on Machine Learning for Economic Policy] [arXiv] [Twitter TL;DR] [Github Gist]

We consider using machine learning to estimate the first stage in linear instrumental variables models.

Abstract We provide some simple theoretical results that justify incorporating machine learning in a standard linear instrumental variable setting, prevalent in empirical research in economics. Machine learning techniques, combined with sample-splitting, extract nonlinear variation in the instrument that may dramatically improve estimation precision and robustness by boosting instrument strength. The analysis is straightforward in the absence of covariates. The presence of linearly included exogenous covariates complicates identification, as the researcher would like to prevent nonlinearities in the covariates from providing the identifying variation. Our procedure can be effectively adapted to account for this complication, based on an argument by Chamberlain (1992). Our method preserves standard intuitions and interpretations of linear instrumental variable methods and provides a simple, user-friendly upgrade to the applied economics toolbox. We illustrate our method with an example in law and criminal justice, examining the causal effect of appellate court reversals on district court sentencing decisions.
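
A minimal sketch of the idea in the just-identified case without covariates; the data-generating process and the choice of learner are illustrative, not the paper's application. We fit a flexible first stage with sample splitting and use the cross-fitted prediction of the treatment as the constructed instrument.

```python
# Illustrative sketch: cross-fitted ML first stage for linear IV (simulated data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(-2, 2, size=n)
u = rng.normal(size=n)                                      # confounder
d = np.sin(2 * z) + 0.5 * u + 0.5 * rng.normal(size=n)      # nonlinear first stage
y = 1.5 * d + u + rng.normal(size=n)                        # true effect of d is 1.5

# Cross-fitted ML prediction of d given z serves as the constructed instrument.
d_hat = cross_val_predict(RandomForestRegressor(n_estimators=200, random_state=0),
                          z.reshape(-1, 1), d, cv=5)

# IV estimate with the constructed instrument (just-identified case).
beta_ml_iv = np.cov(d_hat, y)[0, 1] / np.cov(d_hat, d)[0, 1]
# Compare with using the raw instrument linearly.
beta_linear_iv = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]
print(beta_ml_iv, beta_linear_iv)
```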

Causal Inference and Matching Markets

Undergraduate thesis advised by Scott Duke Kominers and David C. Parkes. Awarded the Thomas T. Hoopes Prize at Harvard College.

[Simulable Mechanisms] [Cutoff Mechanisms] [Regression discontinuity with endogenous cutoff]

Abstract We consider causal inference in two-sided matching markets, particularly in a school choice context, where the researcher is interested in understanding the treatment effect of schools on students. We characterize two classes of mechanisms that can be considered natural experiments, simulable mechanisms and cutoff mechanisms, which are mathematically general and encompass a large set of allocation mechanisms used in practice. We propose estimation and inference procedures for causal effects given each of these mechanisms, and characterize the statistical properties of the resulting causal estimators. Our approach allows us to relax the simplifying large-market assumption made in earlier work (Abdulkadiroglu, Angrist, Narita, and Pathak 2017, 2019), and we show that classical regression discontinuity procedures extend to settings where the discontinuity cutoff is endogenously chosen. Our results provide a rigorous statistical basis for causal inference and program evaluation in a number of settings where treatment assignment is complex.

Publications

Logs with zeros? Some problems and solutions, with Jonathan Roth

[2024, Quarterly Journal of Economics] [Paper (final manuscript)] [arXiv] [Development Impact Blog] [Twitter TL;DR]

It turns out that one can’t define away the $\log(0)$ problem. Fundamentally, there is a trilemma: no parameter can simultaneously be an average of individual-level treatment effects, invariant to the units of the outcome, and point-identified. We discuss a few fixes.

Abstract Many economic settings involve an outcome $Y$ that is weakly positive but can equal zero (e.g. earnings). In such settings, it is common to estimate an average treatment effect (ATE) for a transformation of the outcome that behaves like $\log(Y)$ when $Y$ is large but is defined at zero (e.g. $\log(1+Y)$, $\mathrm{arcsinh}(Y)$). This paper argues that ATEs for such log-like transformations should not be interpreted as approximating a percentage effect, since unlike a percentage, they depend arbitrarily on the units of the outcome when the treatment affects the extensive margin. Intuitively, this dependence arises because an individual-level percentage effect is not well-defined for individuals whose outcome changes from zero to non-zero when receiving treatment, and the units of the outcome implicitly determine how much weight the ATE places on such extensive margin changes. We further establish that when the outcome can equal zero, there is no treatment effect parameter that is an average of individual-level treatment effects, unit-invariant, and point-identified. We discuss a variety of alternative approaches that may be sensible in settings with an intensive and extensive margin, including (i) expressing the ATE in levels as a percentage (e.g. using Poisson regression), (ii) explicitly calibrating the value placed on the intensive and extensive margins, and (iii) estimating separate effects for the two margins (e.g. using Lee bounds). We illustrate these approaches in three empirical applications.
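
A small simulated illustration of the unit-dependence point (the numbers are made up, not from the paper): when treatment moves some outcomes from zero to positive, the average effect on $\mathrm{arcsinh}(Y)$ changes with the units in which $Y$ is measured.

```python
# Illustrative simulation: the ATE of arcsinh(Y) is not unit-invariant when the
# treatment operates on the extensive margin.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Control outcomes: zero for half the units, positive for the rest.
y0 = np.where(rng.uniform(size=n) < 0.5, 0.0, rng.lognormal(3.0, 1.0, size=n))
# Treatment: zeros become positive (extensive margin); positives grow 10% (intensive).
y1 = np.where(y0 == 0.0, rng.lognormal(3.0, 1.0, size=n), y0 * 1.10)

for scale in [1, 100]:   # e.g. measuring the outcome in dollars vs. cents
    ate = np.mean(np.arcsinh(scale * y1) - np.arcsinh(scale * y0))
    print(scale, round(ate, 3))
# The two "log point" effects differ substantially, even though rescaling the
# outcome should not change a genuine percentage effect.
```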

Semiparametric Estimation of Long-Term Treatment Effects, with David M. Ritzwoller

[2023, Journal of Econometrics] [arXiv] [Twitter TL;DR] [Code]

We compute the semiparametric efficiency bounds for two models of long-term treatment effects and introduce the accompanying double/debiased machine learning estimators as well as sieve two-step estimators. Simulation evidence shows that our estimation strategies have favorable bias and variance properties.

Abstract Long-term outcomes of experimental evaluations are necessarily observed after long delays. We develop semiparametric methods for combining the short-term outcomes of an experimental evaluation with observational measurements of the joint distribution of short-term and long-term outcomes to estimate long-term treatment effects. We characterize semiparametric efficiency bounds for estimation of the average effect of a treatment on a long-term outcome in several instances of this problem. These calculations facilitate the construction of semiparametrically efficient estimators. The finite-sample performance of these estimators is analyzed with a simulation calibrated to a randomized evaluation of the long-term effects of a poverty alleviation program.

Synthetic Control As Online Linear Regression

[2023, Econometrica] [arXiv] [Twitter TL;DR] [NBER SI 2022 Labor Studies Method Session]

It turns out that synthetic control has a connection with online convex optimization, which I use to derive novel guarantees.

Abstract This paper notes a simple connection between synthetic control and online learning. Specifically, we recognize synthetic control as an instance of Follow‐The‐Leader (FTL). Standard results in online convex optimization then imply that, even when outcomes are chosen by an adversary, synthetic control predictions of counterfactual outcomes for the treated unit perform almost as well as an oracle weighted average of control units' outcomes. Synthetic control on differenced data performs almost as well as oracle weighted difference‐in‐differences, potentially making it an attractive choice in practice. We argue that this observation further supports the use of synthetic control estimators in comparative case studies.
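
A minimal sketch of the Follow-The-Leader view on simulated data, not the paper's code: in each period, choose simplex weights on the control units that best fit the treated unit's past outcomes, then use those weights to predict the current period.

```python
# Illustrative FTL sketch of synthetic control: refit simplex weights each period
# on all data seen so far and predict the treated unit's next outcome.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, J = 60, 8
controls = rng.normal(size=(T, J)).cumsum(axis=0)          # control-unit outcomes
true_w = np.full(J, 1.0 / J)
treated = controls @ true_w + 0.5 * rng.normal(size=T)     # treated-unit outcomes

def ftl_weights(past_controls, past_treated):
    """Simplex-constrained least squares on the data observed so far."""
    k = past_controls.shape[1]
    res = minimize(lambda w: np.sum((past_controls @ w - past_treated) ** 2),
                   x0=np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Online predictions: at time t, fit on periods 0..t-1 and predict period t.
preds = [controls[t] @ ftl_weights(controls[:t], treated[:t]) for t in range(1, T)]
print(np.mean((np.array(preds) - treated[1:]) ** 2))        # average prediction error
```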

Efficient estimation of average derivatives in NPIV models: Simulation comparisons of neural network estimators, with Xiaohong Chen and Elie Tamer

[2023, Journal of Econometrics] [arXiv]

We conduct a large Monte Carlo study on using neural networks to estimate nonparametric instrumental variables (NPIV) models.

Abstract Artificial Neural Networks (ANNs) can be viewed as nonlinear sieves that can approximate complex functions of high dimensional variables more effectively than linear sieves. We investigate the performance of various ANNs in nonparametric instrumental variables (NPIV) models of moderately high dimensional covariates that are relevant to empirical economics. We present two efficient procedures for estimation and inference on a weighted average derivative (WAD): an orthogonalized plug-in with optimally-weighted sieve minimum distance (OP-OSMD) procedure and a sieve efficient score (ES) procedure. Both estimators for WAD use ANN sieves to approximate the unknown NPIV function and are $\sqrt{n}$-asymptotically normal and first-order equivalent. We provide a detailed practitioner’s recipe for implementing both efficient procedures. We compare their finite-sample performances in various simulation designs that involve smooth NPIV function of up to 13 continuous covariates, different nonlinearities and covariate correlations. Some Monte Carlo findings include: (1) tuning and optimization are more delicate in ANN estimation; (2) given proper tuning, both ANN estimators with various architectures can perform well; (3) easier to tune ANN OP-OSMD estimators than ANN ES estimators; (4) stable inferences are more difficult to achieve with ANN (than spline) estimators; (5) there are gaps between current implementations and approximation theories. Finally, we apply ANN NPIV to estimate average partial derivatives in two empirical demand examples with multivariate covariates.

JUE Insight: The (Non-) Effect of Opportunity Zones on Housing Prices, with Edward L. Glaeser and David Wessel

[2022, Journal of Urban Economics] [NBER Working Paper] [Replication files] [Updated Working Paper (updated data)] [Bloomberg] [Brookings]

We rule out large immediate price effects on residential real estate from the Opportunity Zone program.

Abstract Will the Opportunity Zones (OZ) program, America’s largest new place-based policy in decades, generate neighborhood change? We compare single-family housing price growth in OZs with price growth in areas that were eligible but not included in the program. We also compare OZs to their nearest geographic neighbors. Our most credible estimates rule out price impacts greater than 0.5 percentage points with 95% confidence, suggesting that, so far, home buyers don’t believe that this subsidy will generate major neighborhood change. OZ status reduces prices in areas with little employment, perhaps because buyers think that subsidizing new investment will increase housing supply. Mixed evidence suggests that OZs may have increased residential permitting.

Auctioneers Sometimes Prefer Entry Fees to Extra Bidders, with Scott Duke Kominers

[2021, International Journal of Industrial Organization (EARIE Special Issue)]

Auctioneers can profit from charging entry fees, even though doing so thins the market.

Abstract We investigate a market thickness–market power tradeoff in an auction setting with endogenous entry. We find that charging admission fees can sometimes dominate the benefit of recruiting additional bidders, even though the fees themselves implicitly reduce competition at the auction stage. We also highlight that admission fees and reserve prices are different instruments in a setting with uncertainty over entry costs, and that optimal mechanisms in such settings may be more complex than simply setting a reserve price. Our results provide a counterpoint to the broad intuition of Bulow and Klemperer (1996) that market thickness often takes precedence over market power in auction design.

A Semantic Approach to Financial Fundamentals, with Suproteem K. Sarkar

[ACL 2020, Proceedings of the Second Workshop on Financial Technology and NLP (FinNLP)]

We explore using embeddings from transformer language models (BERT) for financial applications.

Abstract The structure and evolution of firms’ operations are essential components of modern financial analyses. Traditional text-based approaches have often used standard statistical learning methods to analyze news and other text relating to firm characteristics, which may shroud key semantic information about firm activity. In this paper, we present the Semantically-Informed Financial Index (SIFI), an approach to modeling firm characteristics and dynamics using embeddings from transformer models. As opposed to previous work that uses similar techniques on news sentiment, our methods directly study the business operations that firms report in filings, which are legally required to be accurate. We develop text-based firm classifications that are more informative about fundamentals per level of granularity than established metrics, and use them to study the interactions between firms and industries. We also characterize a basic model of business operation evolution. Our work aims to contribute to the broader study of how text can provide insight into economic behavior.