Danya Shocair Reda, Producing Procedural Inequality Through the Empirical Turn, 94 U. Colo. L. Rev. __ (forthcoming, 2023), available at SSRN.

Data is all the rage, not just in the legal academy and other academic disciplines, but in our daily lives. We have been glued to COVID-19 statistics to make decisions about whether to wear masks, send our kids to school, or take that rescheduled trip. While these graphs and statistics have been helpful, they have not been without controversy. The pandemic has brought into full relief how data can be manipulated, misunderstood, and even misleading.

Danya Reda’s Producing Procedural Inequality Through the Empirical Turn questions and critiques how data is gathered and used in another important context—the federal civil rulemaking process. Reda’s prior work has shaped how we think about the civil justice system and the rules that govern it. She has shown that elite lawyers and judges constructed and marshaled a cost-and-delay narrative that influences the civil rulemaking process. She has interrogated the effect of casting the rulemaking process as political. In this article, Reda takes her critique a step further, arguing that the rulemakers’ attempts at neutrality—and their attempts to keep the process “neutral” using data—distort the rulemaking process and deepen systemic inequality.

Reda takes aim at the concept of neutrality in the civil justice system. The problem with anchoring the rulemaking process in neutrality is twofold. First, the Federal Rules of Civil Procedure intervene in a system already structured unequally by law. The parties’ positionality, the law governing the dispute, and the decisionmakers—both judge and jury—are far from neutral. Second, decisions about what rules should govern are contested; different views persist about how the rules should function. That contestation, in and of itself, demonstrates that the rules can never be neutral. Any decision about the rules is necessarily normative, informed by value judgments and insights.

I and others have questioned how the rulemaking committee members’ ideologies, identities, and litigation experience affect the committee’s procedural reforms. Because those reforms are often cast as out of step with the realities of litigation and skewed in favor of organizational defendants’ interests, this scholarship can create the impression that committee members’ actions are deeply purposeful, maybe even nefarious. Reda’s article takes a broader, more balanced view of why committee members make the decisions they do. Her explanation—while not minimizing the effects of identity, ideology, and litigation experience on member behavior—asks us to think about the ways in which the “who” of the committee member intersects with the “how.” That “how” requires us to examine the rulemaking process itself and the committee’s fidelity to empirical data.

Reda does not argue that empirical data is useless; rather, the way the committee uses data is problematic. The committee’s definition of “empirical” is unclear and often sweeps in information that should not be categorized as such. Because of this “category error,” the committee relies on information that is not empirical, at least not for the committee’s stated purpose. Even when the committee has genuinely empirical information, it misapplies it; because the committee does not understand what the information means, its work is not truly data-informed. Finally, because the committee believes it is using empirical data properly, it fails to seek out other potentially useful and productive empirical information that could inform and improve its decisionmaking.

To demonstrate how misusing empirical information has hampered the committee’s work, Reda pored over rulemaking records, including those from the 2010 Duke Civil Litigation Conference, at which the committee endeavored to anchor its work in an “empirical foundation.” The Duke Conference led to the controversial 2015 discovery amendments, which the committee celebrated as data-driven. Yet Reda shows that category error, misinterpretation, and missed opportunity plagued both the conference and the ensuing rulemaking process.

From the outset, the committee failed to identify useful empirical information. It called for data, but what it mostly received were general attorney-perception survey results. The lone exception was a Federal Judicial Center closed-case study, which tied each attorney’s survey responses to the particular case that attorney had litigated. Despite the differences in the quality of this information, the committee lumped it all together as “empirical.”

Compounding this category error, the committee misinterpreted the data. Reda shows that the committee read the survey data and the closed-case data as conflicting: the surveys revealed that attorneys believed discovery was too expensive, while the closed-case study showed that in most cases discovery was not outsized. The committee reconciled the two by concluding that discovery worked well in most cases but failed in a subset. This interpretive error had significant consequences for discovery reform. As experts pointed out during the conference, and as Reda reinforces in her article, the general perception surveys reflect impressions shaped by cognitive biases; survey respondents naturally focused on the most extreme cases they could recall. The closed-case study, by contrast, offered a more accurate picture of how discovery worked on the ground.

Finally, in the name of neutrality, the committee missed useful empirical data. Committee members resisted hearing practitioners explain how a rule change might benefit particular clients. They treated generalized, impressionistic survey data as impartial but specific, contextual attorney feedback as biased. Reda explains that the opposite is true: the generalized data is less useful and quite biased. Had the committee sought out practitioners and asked them to identify how the proposed discovery proportionality rule might operate in particular cases, it could have received a more complete picture of what the rule change would do.

In other words, accepting that ours is an adversarial system, Reda argues that the committee should hear from advocates about how a rule operates for their clients. Yet commitments to neutrality prevent committee members from gathering the very information that would better inform their work. And because committee members do not grapple with the value judgments and biases they impose on the rules, the rulemaking process further entrenches existing inequalities. Producing Procedural Inequality deftly shows that the neutrality of the rules is a myth and that the rulemaking process should acknowledge as much. Reda does not blame committee members; she argues that they need better resources and assistance in understanding the information before them. Committee members necessarily design rules that will affect the outcomes of cases. That is a heady task for sure, but hardly a neutral one.

Cite as: Brooke D. Coleman, Data-Driven Procedural Inequality, JOTWELL (May 13, 2022) (reviewing Danya Shocair Reda, Producing Procedural Inequality Through the Empirical Turn, 94 U. Colo. L. Rev. __ (forthcoming, 2023), available at SSRN), https://courtslaw.jotwell.com/data-driven-procedural-inequality/.