Legal education is ripe for properly designed experiments. It’s time to get started.
In my last post, Legal Education is a Data Desert (096), I described the deficiencies in the data available for, and mobilized on behalf of, clear-eyed assessment of legal education outcomes. While noting some conspicuous exceptions, I said that there is simply not enough attention to the critical data necessary to evaluate legal education and, with it, to undertake and assess reform.
This post (121) continues this discussion by noting some of the questions that greater data would help illuminate through the development and use of experiments. In a later post, I’ll take a deeper dive into this data imperative, drawing some attention to data collection and analysis efforts that are currently underway.
To begin with some preliminaries, evidence-based legal education reform requires a commitment among stakeholders to the following principles:
- The performance of law schools must be assessed on the basis of a set of delineated educational outcomes. These outcomes need not be universal, as law schools can and ought to develop objectives tied to their own strategic plans. Moreover, there are enormous benefits from experiments born of the expertise of particular legal educators working on behalf of their institutions and tied to imaginative, focused goals forged in collective conversations among faculties and administrators at these law schools.
- Although these objectives can be based on local knowledge, each law school should commit to setting out its objectives clearly, and ideally transparently, and to developing metrics of assessment. Inescapably, some of these metrics will involve qualitative criteria. Consider, for example, the complicated matter of scholarly impact: citation analysis, sure, but impact can also account for more qualitative factors. And the vital question of student well-being entails qualitative measures as well. That all said, we meander away from the core of an evidence-based approach to performance and reform when we deny the many ways in which performance can and should be measured by quantitative analysis.
- Quantitative and qualitative analysis must be evidence-based. An art more than a science? Hardly. We have great potential to employ the scientific method to assess performance, so long as we can responsibly collect and collate data. We do not serve our many constituencies well by glibly proclaiming that, as to evaluating the work of law schools (the good, the bad, and the ugly), we more or less know it when we see it.
These principles should guide the effort to assess the performance of law schools against established objectives. And, beyond the objectives forged internally, these principles should help us in the broad goal of making meaningful, measurable improvements.
Open to experiments
There are many dimensions to this plea for more data and more sophisticated empirical work. At the very least, we ought to be open to experiments. And we should be prepared to use the best available methods of scientific analysis to evaluate performance and reform. Among techniques in this vein, randomized controlled trials (RCTs) are the gold standard. See, e.g., Hariton & Locascio, “Randomised controlled trials—the gold standard for effectiveness research,” BJOG, Dec. 1, 2018 (noting that an RCT “provides a rigorous tool to examine cause-effect relationships between an intervention and outcome”). A number of leading scholars (whom I won’t name here, as I would risk leaving folks out) are fruitfully using RCTs in their impactful empirical work. Less common – much less common – is the use of RCTs to examine legal education. So, this is nearly uncharted territory.
The incentives for engaging these questions remain weak. Law schools are hardly hiring tenure-track scholars to do research on legal education. Those who do pursue it are more often than not doing so with tenure already in hand, as a side focus rather than a principal vocation. I can speculate on the reasons for this, but I will leave it as an observation about our modern legal academy. A more serious impediment is the fact that law schools are deeply wary of experimenting. Why? The constraints embedded in law school accreditation are one reason; so, too, is the baleful influence of the U.S. News rankings. But perhaps the most serious constraint is the internal political difficulty of creating tranches of students, necessary to establish a treatment group sufficient to meet the strict standards of empirical work in the causal inference tradition.
We ought to think creatively about ways of overcoming these obstacles. Accreditation need not be a roadblock so long as the basic requirements of a satisfactory legal education, organized (for better or worse) around the input-flavored structure of contemporary ABA regulation, are met. More ambitiously, the ABA Section on Legal Education should be pushed hard to create regulatory “sandboxes,” not unlike what Utah has undertaken with respect to legal services delivery. See “Narrowing the Access-to-Justice Gap by Reimagining Regulation,” Report and Recommendations of the Utah Work Group on Regulatory Reform (Aug. 2019); see also Post 112 (discussing Utah initiative). There is some hope that the ABA will open a door in the short run, even if slightly, to law school experimentation, including experimentation within law schools.
The U.S. News rankings remain a ubiquitous constraint, but it is possible that the incentives could be flipped. For example, what if serious data-driven experimentation had the effect of enhancing law school performance and, with it, producing meaningful changes in schools’ competitive position? Moreover, what are law schools currently doing to make game-changing decisions? The playbook is well defined, and even a medium-close survey of movement in the law school rankings over the past decade or so suggests that there is little by way of a magic elixir enabling schools to make massive, stable moves. Perhaps data will help even the most U.S. News-obsessed deans and provosts realize their goals of climbing this peculiarly meaningful ladder.
As to the internal optics of student-centered or faculty-centered experimentation, the case must be cogently made and the commitment to transparent, empirically rigorous analysis must be unwavering. It is not impossible to imagine a path forward.
There are a plethora of questions in the legal education space that we can at least partially answer through carefully designed experiments. To illustrate the general point, let me mention four examples:
1. The first-year curriculum
While marginal differences persist, the overall structure of the first year remains largely intact. 1L students are presented with a curriculum that is required at its core (save for one or two electives). How does this curriculum affect student outcomes? Hard to know. Of course, we cannot compare student performance even along the thin dimension of upper-division grades insofar as every student takes the same core curriculum. And there has been precious little examination, at least of which I am aware, of comparative success on the bar, much less of other achievements post-graduation. So how do we know whether and to what extent the 1L core curriculum is efficacious in any way we can measure?
An RCT involving students in the first year would provide some potentially valuable data. The design of the experiment is fairly easy to manage, as it would entail assigning students randomly to a section (or two) in which a modified curriculum is provided. The challenge, of course, is how best to modify this curriculum.
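To make the design concrete, here is a minimal sketch, in Python, of the randomization step. Everything in it is hypothetical: the student identifiers, the class size, and the one-quarter split are illustrative only, and a real design would likely stratify on entering credentials before assignment.

```python
# Hypothetical sketch of randomly assigning an entering 1L class between the
# standard curriculum and a modified "experimental" section. All names and
# numbers are illustrative, not drawn from any actual registrar data.
import random

def assign_sections(student_ids, n_experimental, seed=2024):
    """Randomly place n_experimental students in the modified section; the rest get the standard curriculum."""
    rng = random.Random(seed)   # fixed seed keeps the assignment reproducible and auditable
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    experimental = set(shuffled[:n_experimental])
    return {sid: ("experimental" if sid in experimental else "standard")
            for sid in student_ids}

# Example: an entering class of 200, with one experimental cohort of 50
# (roughly the one-quarter share in the HLS anecdote below).
assignments = assign_sections([f"S{i:03d}" for i in range(200)], n_experimental=50)
print(sum(1 for section in assignments.values() if section == "experimental"))  # -> 50
```

Once outcomes – upper-division grades, bar results, early-career measures – are later attached to these assignments, a straightforward comparison of the two groups carries causal weight precisely because the assignment was random.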
By way of anecdote, let me offer my own experience, more than three decades ago (gasp!), at Harvard Law School. HLS randomized a quarter of the class, putting us into what was called “the experimental section,” albeit the initiative was not a radical one. See Robert J. Cudha, Jr., “The Great Experiment,” Harv. Crimson, Mar. 6, 1985. As part of what we called “the Great Experiment,” we were subject to a more collaborative teaching endeavor, with all first-year instructors working together to establish lesson plans that were much more synthetic (so, for example, we covered issues of “consent” at exactly the same time in Contracts, Torts, and Criminal Law), interspersed with “bridge” periods in which we took a week or more out of the regular cadence of the term to study an interdisciplinary subject, such as law & economics or law & society.
How did it work out? To be sure, some of us grumbled that we were being treated as guinea pigs; and others, not in the experimental section, were relieved that they were getting the standard HLS 1L fare. But this was at least a crude version of an RCT. I could not tell you whether there was a serious effort to collect data and assess post-first-year performance, in law school or in practice, but I seriously doubt it. Yet this effort could take on a 21st-century cast. We could indeed randomize our first-year students into sections with meaningful differences and in a way that might yield enormously valuable data to help assess outcomes.
2. Advanced writing requirement
A majority of law schools impose a writing requirement of some sort as a condition for graduation. See this fairly typical example. While there is considerable variation in exactly how this requirement can be satisfied, my experience with law school accreditation and membership review by the AALS suggests that most law schools require some version of a “scholarly” paper as a graduation condition. And this requirement typically applies uniformly to all students.
Questions persist about the value of this requirement to students who look to develop practical legal skills – skills that will equip them to thrive in an environment of dynamic change, one in which the development of robust management, leadership, and technological skills is essential. If we are intentional about developing a T-shaped curriculum, see, e.g., Post 078 (discussing the IFLP curriculum), we need to be strategic about the curriculum required of our students.
Pressure mounts to give students the option of satisfying a different kind of writing requirement. However, so long as student choices are not random, we will not know what we need to know about the impact of this different writing requirement (or, more ambitiously, of a different skills requirement entirely). Experimentation would be valuable here as well.
3. Comparing clinic experiences
Students who aspire to a clinical experience during their second or third years typically have the opportunity to pursue what interests them. This is consistent with the elective space of the upper division and is a tradition at perhaps every law school in the land.
But we might imagine an experiment in which students are randomly assigned to a clinic, supposing that the clinics are of sufficient scale and diversity to accommodate all students interested in pursuing this experience. Ideally, there would be many advanced clinical experiences, so that a student would still be able to find work of sincere and substantial interest.
In a similar vein, we might think of the core clinical courses as valuable to every student who seeks this experiential option. Especially if a clinical course were required (as many have proposed), randomizing students across the clinical core would provide data to help us assess the comparative efficacy of different kinds of clinical legal education.
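A minimal sketch, on invented data, of the payoff from such a design: once assignment is randomized, simple group-level summaries of a common outcome measure can be read as estimates of comparative efficacy rather than as artifacts of self-selection. The clinic names, outcome measure, and scores below are hypothetical.

```python
# Hypothetical post-clinic outcomes (say, a standardized skills assessment),
# keyed by the clinic each student was randomly assigned to. Data are invented.
from statistics import mean, stdev

outcomes = {
    "housing":       [78, 82, 74, 90, 85, 79],
    "criminal":      [81, 77, 88, 84, 80, 83],
    "transactional": [72, 86, 79, 75, 81, 77],
}

# With random assignment, differences in these group means estimate the
# comparative effect of each clinic (subject, of course, to sample size).
for clinic, scores in outcomes.items():
    print(f"{clinic:>14}: n={len(scores)}  mean={mean(scores):.1f}  sd={stdev(scores):.1f}")
```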
4. Admissions testing
The recent rise of the Graduate Record Exam (GRE) as an alternative to the Law School Admission Test (LSAT) for law school admissions, see, e.g., Rubino, “More Law Schools Join The GRE Party,” Above the Law, Sept. 12, 2019, presents us with an opportunity to assess the merits and demerits of these two prevailing tests.
A law school committed to admitting a certain number of students to the entering class through the GRE would have a basis to assess, at a minimum, the correlation between each test and first-year performance. Of course, as more happens after the first year, noise increases. But it would still be interesting, and maybe even revealing, to track students in cohorts admitted (partially, to be sure, given other admissions criteria) through these two tests. Such an evidence-based evaluation will prove important, given the central, and perhaps over-sold, role of the LSAT in law school admissions.
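As a rough sketch of what that tracking might look like, the snippet below computes, on invented data, the within-cohort correlation between each admissions test score and first-year GPA. The scores and GPAs are hypothetical, and a serious analysis would adjust for the other admissions criteria noted above.

```python
# Hypothetical within-cohort check: how strongly does each admissions test
# correlate with first-year GPA among the students admitted through it?
# All scores and GPAs are invented for illustration.
from statistics import correlation  # Pearson's r; requires Python 3.10+

lsat_cohort = {
    "score": [172, 165, 158, 169, 161, 174, 163],
    "gpa":   [3.7, 3.4, 3.1, 3.5, 3.2, 3.8, 3.3],
}
gre_cohort = {
    "score": [330, 318, 325, 312, 322, 335, 315],
    "gpa":   [3.6, 3.2, 3.5, 3.0, 3.3, 3.7, 3.1],
}

print("LSAT cohort, r(score, 1L GPA):", round(correlation(lsat_cohort["score"], lsat_cohort["gpa"]), 2))
print("GRE cohort,  r(score, 1L GPA):", round(correlation(gre_cohort["score"], gre_cohort["gpa"]), 2))
```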
* * *
There are many, many other potential objects of examination, and maybe the examples above are not the best ones. But separate from these particular educational questions is my larger point: experiments constructed around rigorous techniques can yield data that would be extraordinarily helpful to law school decision makers as they develop educational policies and, in turn, assess how effective those policies are in meeting the law school’s stated objectives.
Moreover, these experiments are just an opening wedge into what we really want to accomplish: a comparative analysis of different law school approaches in order to identify best practices and the most promising paths forward. Shared data, made available in a form that law school stakeholders, and neutral experts, can examine and analyze, would be enormously valuable in our quest for better educational models.
Better for what?
This is a complex question, and what we have learned through the relentless efforts of an ever-widening group of scholars, practitioners, and other interested folks – illustrated well by the authors on Legal Evolution – is that we are only beginning to establish criteria that illuminate, in convincing ways, the connection between our systems and schemes of legal education and success in the profession. The commitment to evidence-based assessments of legal education’s performance and its reform is a promising element of this grand effort.