Equating, Scaling, and Civil Procedure

April 16th, 2015

Still wondering about the February bar results? I continue that discussion here. As explained in my previous post, NCBE premiered its new Multistate Bar Exam (MBE) in February. That exam covers seven subjects, rather than the six tested on the MBE for more than four decades. Given the type of knowledge tested by the MBE, there is little doubt that the new exam is harder than the old one.

If you have any doubt about that fact, try this experiment: Tell any group of third-year students that the bar examiners have decided to offer them a choice. They may study for and take a version of the MBE covering the original six subjects, or they may choose a version that covers those subjects plus Civil Procedure. Which version do they choose?

After the students have eagerly indicated their preference for the six-subject test, you will have to apologize profusely to them. The examiners are not giving them a choice; they must take the harder seven-subject test.

But can you at least reassure the students that NCBE will account for this increased difficulty when it scales scores? After all, NCBE uses a process of equating and scaling scores that is designed to produce scores with a constant meaning over time. A scaled score of 136 in 2015 is supposed to represent the same level of achievement as a scaled score of 136 in 2012. Is that still true, despite the increased difficulty of the test?

Unfortunately, no. Equating works only for two versions of the same exam. As the word “equating” suggests, the process assumes that the exam drafters attempted to test the same knowledge on both versions of the exam. Equating can account for inadvertent fluctuations in difficulty that arise from constructing new questions that test the same knowledge. It cannot, however, account for changes in the content or scope of an exam.

This distinction is widely recognized in the testing literature–I cite numerous sources at the end of this post. It appears, however, that NCBE has attempted to “equate” the scores of the new MBE (with seven subjects) to older versions of the exam (with just six subjects). This treated the February 2015 examinees unfairly, leading to lower scores and pass rates.

To understand the problem, let’s first review the process of equating and scaling.

Equating

First, remember why NCBE equates exams. To avoid security breaches, NCBE must produce a different version of the MBE every February and July. Testing experts call these different versions “forms” of the test. For each of the MBE forms, the designers attempt to create questions that impose the same range of difficulty. Inevitably, however, some forms are harder than others. It would be unfair for examinees one year to get lower scores than examinees the next year, simply because they took a harder form of the test. Equating addresses this problem.

The process of equating begins with a set of “control” questions or “common items.” These are questions that appear on two forms of the same exam. The February 2015 MBE, for example, included a subset of questions that had also appeared on some earlier exam. For this discussion, let’s assume that there were 30 of these common items and 160 new questions that counted toward each examinee’s score. (Each MBE also includes 10 experimental questions that do not count toward the test-taker’s score but that help NCBE assess items for future use.)

When NCBE receives answer sheets from each version of the MBE, it is able to assess the examinees’ performance on the common items and new items. Let’s suppose that, on average, earlier examinees got 25 of the 30 common items correct. If the February 2015 test-takers averaged only 20 correct answers to those common items, NCBE would know that those test-takers were less able than previous examinees. That information would then help NCBE evaluate the February test-takers’ performance on the new test items. If the February examinees also performed poorly on those items, NCBE could conclude that the low scores were due to the test-takers’ abilities rather than to a particularly hard version of the test.

Conversely, if the February test-takers did very well on the new items–while faring poorly on the common ones–NCBE would conclude that the new items were easier than questions on earlier tests. The February examinees racked up points on those questions, not because they were better prepared than earlier test-takers, but because the questions were too easy.

The actual equating process is more complicated than this. NCBE, for example, can account for the difficulty of individual questions rather than just the overall difficulty of the common and new items. The heart of equating, however, lies in this use of “common items” to compare performance over time.
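To make that logic concrete, here is a minimal sketch in Python of the common-item comparison. The numbers mirror the hypothetical above (30 common items; earlier examinees averaging 25 correct, the current group 20), while the new-item means are placeholders I invented for illustration. NCBE’s actual procedure works item by item with far more statistical machinery.

```python
# Toy illustration of common-item equating, using the hypothetical numbers
# above (30 common items, 160 new scored items). The new-item means are
# placeholders; NCBE's real procedure is item-level and far more elaborate.

def compare_groups(prior_common_mean, current_common_mean,
                   prior_new_mean, current_new_mean):
    """Simplified verdict: did raw scores change because of the examinees
    or because of the questions?"""
    ability_gap = current_common_mean - prior_common_mean  # same questions
    raw_gap = current_new_mean - prior_new_mean            # different questions

    if ability_gap < 0 and raw_gap < 0:
        return "Lower raw scores look like lower examinee ability."
    if ability_gap < 0 and raw_gap >= 0:
        return "The new items were probably easier than the old ones."
    if ability_gap >= 0 and raw_gap < 0:
        return "The new items were probably harder than the old ones."
    return "No adjustment suggested."

# Earlier examinees: 25/30 on the common items; current group: 20/30.
# New-item means (out of 160 scored questions) are placeholders.
print(compare_groups(prior_common_mean=25, current_common_mean=20,
                     prior_new_mean=108, current_new_mean=103))
```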

Scaling

Once NCBE has compared the most recent batch of exam-takers with earlier examinees, it converts the current raw scores to scaled ones. Think of the scaled scores as a rigid yardstick; these scores have the same meaning over time. 18 inches this year is the same as 18 inches last year. In the same way, a scaled score of 136 has the same meaning this year as last year.

How does NCBE translate raw points to scaled scores? The translation depends upon the results of equating. If a group of test-takers performs well on the common items, but not so well on the new questions, the equating process suggests that the new questions were harder than the ones on previous versions of the test. NCBE will “scale up” the raw scores for this group of exam takers to make them comparable to scores earned on earlier versions of the test.

Conversely, if examinees perform well on new questions but poorly on the common items, the equating process will suggest that the new questions were easier than ones on previous versions of the test. NCBE will then scale down the raw scores for this group of examinees. In the end, the scaled scores will account for small differences in test difficulty across otherwise similar forms.
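Here is an equally simplified sketch of that scale-up/scale-down step. This is not NCBE’s formula (which rests on item response theory and a separate reporting scale); it only shows the direction of the adjustment: whatever part of the raw-score change the common items cannot explain is attributed to the difficulty of the new form and added back to (or subtracted from) each raw score.

```python
# Simplified sketch of scaling, not NCBE's actual method.

def difficulty_adjustment(ability_gap, raw_gap):
    """Estimate how much of the raw-score change reflects the test form
    rather than the examinees.

    ability_gap: change in mean performance on the common items
                 (current group minus earlier group).
    raw_gap:     change in mean performance on the new items.
    """
    # The portion of the raw-score change NOT explained by the common
    # items is attributed to the difficulty of the new form.
    return raw_gap - ability_gap

def scale(raw_score, ability_gap, raw_gap):
    """Adjust a raw score for the inferred difficulty of the form."""
    form_effect = difficulty_adjustment(ability_gap, raw_gap)
    # A harder form (negative form_effect) scales scores up;
    # an easier form (positive form_effect) scales them down.
    return raw_score - form_effect

# Hypothetical: common-item performance unchanged (ability_gap = 0), but
# scores on the new items fell by 3 points -> the new form was harder,
# so 3 points are added back.
print(scale(raw_score=125, ability_gap=0, raw_gap=-3))  # 128
```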

Changing the Test

Equating and scaling work well for test forms that are designed to be as similar as possible. The processes break down, however, when test content changes. You can see this by thinking about the data that NCBE had available for equating the February 2015 bar exam. It had a set of common items drawn from earlier tests; these would have covered the six original subjects. It also had answers to the new items (160 in our running example); these would have included both the original subjects and the new one (Civil Procedure).

With these data, NCBE could make two comparisons:

1. It could compare performance on the common items. It undoubtedly found that the February 2015 test-takers performed less well than previous test-takers on these items. That’s a predictable result of having a seventh subject to study. This year’s examinees spread their preparation among seven subjects rather than six. Their mastery of each subject was somewhat lower, and they would have performed less well on the common items testing those subjects.

2. NCBE could also compare performance on the new Civil Procedure items with performance on old and new items in other subjects. NCBE won’t release those comparisons, because it no longer discloses raw scores for subject areas. I predict, however, that performance on the Civil Procedure items was roughly the same as performance on Evidence, Property, and the other subjects. Why? Because Civil Procedure is not intrinsically harder than these other subjects, and the examinees studied all seven subjects.

Neither of these comparisons, however, would address the key change in the MBE: Examinees had to prepare seven subjects rather than six. As my previous post suggested, this isn’t just a matter of taking all seven subjects in law school and remembering key concepts for the MBE. Because the MBE is a closed-book exam that requires recall of detailed rules, examinees devote 10 weeks of intense study to this exam. They don’t have more than 10 weeks, because they’re occupied with law school classes, extracurricular activities, and part-time jobs before mid-May or mid-December.

There’s only so much material you can cram into memory during ten weeks. If you try to memorize rules from seven subjects, rather than just six, some rules from each subject will fall by the wayside.

When Equating Doesn’t Work

Equating is not possible for a test like the new MBE, which has changed significantly in content and scope. The test places new demands on examinees, and equating cannot account for those demands. The testing literature is clear that, under these circumstances, equating produces misleading results. As Robert L. Brennan, a distinguished testing expert, wrote in a prominent guide: “When substantial changes in test specifications occur, either scores should be reported on a new scale or a clear statement should be provided to alert users that the scores are not directly comparable with those on earlier versions of the test.” (See p. 174 of Linking and Aligning Scores and Scales, cited more fully below.)

“Substantial changes” is one of those phrases that lawyers love to debate. The hypothetical described at the beginning of this post, however, seems like a common-sense way to identify a “substantial change.” If the vast majority of test-takers would prefer one version of a test over a second one, there is a substantial difference between the two.

As Brennan acknowledges in the chapter I quote above, test administrators dislike re-scaling an exam. Re-scaling is both costly and time-consuming. It can also discomfort test-takers and others who use those scores, because they are uncertain how to compare new scores to old ones. But when a test changes, as the MBE did, re-scaling should take the place of equating.

The second best option, as Brennan also notes, is to provide a “clear statement” to “alert users that the scores are not directly comparable with those on earlier versions of the test.” This is what NCBE should do. By claiming that it has equated the February 2015 results to earlier test results, and that the resulting scaled scores represent a uniform level of achievement, NCBE is failing to give test-takers, bar examiners, and the public the information they need to interpret these scores.

The February 2015 MBE was not the same as previous versions of the test, it cannot be properly equated to those tests, and the resulting scaled scores represent a different level of achievement. The lower scaled scores on the February 2015 MBE reflect, at least in part, a harder test. To the extent that the test-takers also differed from previous examinees, it is impossible to separate that variation from the difference in the tests themselves.

Conclusion

Equating was designed to detect small, unintended differences in test difficulty. It is not appropriate for comparing a revised test to previous versions of that test. In my next post on this issue, I will discuss further ramifications of the recent change in the MBE. Meanwhile, here is an annotated list of sources related to equating:

Michael T. Kane & Andrew Mroch, Equating the MBE, The Bar Examiner, Aug. 2005, at 22. This article, published in NCBE’s magazine, offers an overview of equating and scaling for the MBE.

Neil J. Dorans, et al., Linking and Aligning Scores and Scales (2007). This is one of the classic works on equating and scaling. Chapters 7-9 deal specifically with the problem of test changes. Although I’ve linked to the Amazon page, most university libraries should have this book. My library has the book in electronic form so that it can be read online.

Michael J. Kolen & Robert L. Brennan, Test Equating, Scaling, and Linking: Methods and Practices (3d ed. 2014). This is another standard reference work in the field. Once again, my library has a copy online; check for a similar ebook at your institution.

CCSSO, A Practitioner’s Introduction to Equating. This guide was prepared by the Council of Chief State School Officers to help teachers, principals, and superintendents understand the equating of high-stakes exams. It is written for educated lay people, rather than experts, so it offers a good introduction. The source is publicly available at the link.


The February 2015 Bar Exam

April 12th, 2015

States have started to release results of the February 2015 bar exam, and Derek Muller has helpfully compiled the reports to date. Muller also uncovered the national mean scaled score for this February’s MBE, which was just 136.2. That’s a notable drop from last February’s mean of 138.0. It’s also lower than all but one of the means reported during the last decade; Muller has a nice graph of the scores.

The latest drop in MBE scores, unfortunately, was completely predictable–and not primarily because of a change in the test takers. I hope that Jerry Organ will provide further analysis of the latter possibility soon. Meanwhile, the expected drop in the February MBE scores can be summed up in five words: seven subjects instead of six. I don’t know how much the test-takers changed in February, but the test itself did.

MBE Subjects

For reasons I’ve explained in a previous post, the MBE is the central component of the bar exam. In addition to contributing a substantial amount to each test-taker’s score, the MBE is used to scale answers to both essay questions and the Multistate Performance Test (MPT). The scaling process amplifies any drop in MBE scores, leading to substantial drops in pass rates.

In February 2015, the MBE changed. For more than four decades, that test has covered six subjects: Contracts, Torts, Criminal Law and Procedure, Constitutional Law, Property, and Evidence. Starting with the February 2015 exam, the National Conference of Bar Examiners (NCBE) added a seventh subject, Civil Procedure.

Testing examinees’ knowledge of Civil Procedure is not itself problematic; law students study that subject along with the others tested on the exam. In fact, I suspect more students take a course in Civil Procedure than in Criminal Procedure. The difficulty is that it’s harder to memorize rules drawn from seven subjects than to learn the rules for six. For those who like math, that’s an increase of 16.7% in the body of knowledge tested.

Despite occasional claims to the contrary, the MBE requires lots of memorization. It’s not solely a test of memorization; the exam also tests issue spotting, application of law to fact, and other facets of legal reasoning. Test-takers, however, can’t display those reasoning abilities unless they remember the applicable rules: the MBE is a closed-book test.

There is no other context, in school or practice, where we expect lawyers to remember so many legal principles without reference to codes, cases, and other legal materials. Some law school exams are closed-book, but they cover a single subject that has just been studied for a semester. The “closed book” moments in practice are far fewer than many observers assume. I don’t know any trial lawyers who enter the courtroom without a copy of the rules of evidence and a personalized crib sheet reminding them of common objections and responses.

This critique of the bar exam is well known. I repeat it here only to stress the impact of expanding the MBE’s scope. February’s test takers answered the same number of multiple-choice questions (190 that counted, plus 10 experimental ones), but they had to remember principles from seven fields of law rather than six.

There’s only so much that the brain can hold in memory–especially when the knowledge is abstract, rather than gained from years of real-client experience. I’ve watched many graduates prepare for the bar over the last decade: they sit in our law library or clinic, poring constantly over flash cards and subject outlines. Since states raised passing scores in the 1990s and early 2000s, examinees have had to memorize many more rules in order to answer enough questions correctly. From my observation, their memory banks were already full to overflowing.

Six to Seven Subjects

What happens, then, when the bar examiners add a seventh subject to an already challenging test? Correct answers will decline, not just in the new subject, but across all subjects. The February 2015 test-takers, I’m sure, studied just as hard as previous examinees. Indeed, they probably studied harder, because they knew that they would have to answer questions drawn from seven bodies of legal knowledge rather than six. But their memories could hold only so much information. Memorized rules of Civil Procedure took the place of some rules of Torts, Contracts, or Property.

Remember that the MBE tests only a fraction of the material that test-takers must learn. It’s not a matter of learning 190 legal principles to answer 190 questions. The universe of testable material is enormous. For Evidence, a subject that I teach, the subject matter outline lists 64 distinct topics. On average, I estimate that each of those topics requires knowledge of three distinct rules to answer questions correctly on the MBE–and that’s my most conservative estimate.

It’s not enough, for example, to know that there’s a hearsay exemption for some prior statements by a witness, and that the exemption allows the fact-finder to use a witness’s out-of-court statements for substantive purposes, rather than merely impeachment. That’s the type of general understanding I would expect a new lawyer to have about Evidence, permitting her to research an issue further if it arose in a case. The MBE, however, requires the test-taker to remember that a grand jury session counts as a “proceeding” for purposes of this exemption (see Q 19). That’s a sub-rule fairly far down the chain. In fact, I confess that I had to check my own book to refresh my recollection.

In any event, if Evidence requires mastering roughly 200 sub-principles at this level of detail, and the same is true of the other five traditional MBE subjects, that was 1,200 very specific rules to memorize and keep in memory–all while trying to apply those rules to new fact patterns. Adding a seventh subject upped the ante to 1,400 or more detailed rules. How many things can one test-taker remember without checking a written source? There’s a reason why humanity invented writing, printing, and computers.

But They Already Studied Civil Procedure

Even before February, all jurisdictions (to my knowledge) tested Civil Procedure on their essay exams. So wouldn’t examinees have already studied those Civ Pro principles? No, not in the same manner. Detailed, comprehensive memorization is more necessary for the MBE than for traditional essays.

An essay allows room to display issue spotting and legal reasoning, even if you get one of the sub-rules wrong. In the Evidence example given above, an examinee could display considerable knowledge by identifying the issue, noting the relevant hearsay exemption, and explaining the impact of admissibility (substantive use rather than simply impeachment). If the examinee didn’t remember the correct status of grand jury proceedings under this particular rule, she would lose some points. She wouldn’t, however, get the whole question wrong–as she would on a multiple-choice question.

Adding a new subject to the MBE hit test-takers where they were already hurting: the need to memorize a large number of rules and sub-rules. By expanding the universe of rules to be memorized, NCBE made the exam considerably harder.

Looking Ahead

In upcoming posts, I will explain why NCBE’s equating/scaling process couldn’t account for the increased difficulty of this exam. Indeed, equating and scaling may have made the impact worse. I’ll also explore what this means for the ExamSoft discussion and what (if anything) legal educators might do about the increased difficulty of the MBE. To start the discussion, however, it’s essential to recognize that enhanced level of difficulty.


ExamSoft and NCBE

April 6th, 2015

I recently found a letter that Erica Moeser, President of the National Conference of Bar Examiners (NCBE), wrote to law school deans in mid-December. The letter responds to a formal request, signed by 79 law school deans, that NCBE “facilitate a thorough investigation of the administration and scoring of the July 2014 bar exam.” That exam suffered from the notorious ExamSoft debacle.

Moeser’s letter makes an interesting distinction. She assures the deans that NCBE has “reviewed and re-reviewed” its scoring, equating, and scaling of the July 2014 MBE. Those reviews, Moeser attests, revealed no flaw in NCBE’s process. She then adds that, to the extent the deans are concerned about “administration” of the exam, they should “note that NCBE does not administer the examination; jurisdictions do.”

Moeser doesn’t mention ExamSoft by name, but her message seems clear: If ExamSoft’s massive failure affected examinees’ performance, that’s not our problem. We take the bubble sheets as they come to us, grade them, equate the scores, scale those scores, and return the numbers to the states. It’s all the same to us whether examinees miss points because they failed to study, because law schools taught them poorly, or because they were groggy and stressed from struggling to upload their essay exams. We only score exams; we don’t administer them.

But is the line between administration and scoring so clear?

The Purpose of Equating

In an earlier post, I described the process of equating and scaling that NCBE uses to produce final MBE scores. The elaborate transformation of raw scores has one purpose: “to ensure consistency and fairness across the different MBE forms given on different test dates.”

NCBE thinks of this consistency with respect to its own test questions; it wants to ensure that some test-takers aren’t burdened with an overly difficult set of questions–or conversely, that other examinees don’t benefit from unduly easy questions. But substantial changes in exam conditions, like the ExamSoft crash, can also make an exam more difficult. If they do, NCBE’s equating and scaling process actually amplifies that unfairness.

To remain faithful to its mission, it seems that NCBE should at least explore the possible effects of major blunders in exam administration. This is especially true when a problem affects multiple jurisdictions, rather than a single state. If an incident affects a single jurisdiction, the examining authorities in that state can decide whether to adjust scores for that exam. When the problem is more diffuse, as with the ExamSoft failure, individual states may not have the information necessary to assess the extent of the impact. That’s an even greater concern when nationwide equating will spread the problem to states that did not even contract with ExamSoft.

What Should NCBE Have Done?

NCBE did not cause ExamSoft’s upload problems, but it almost certainly knew about them. Experts in exam scoring also understand that defects in exam administration can interfere with performance. With knowledge of the ExamSoft problem, NCBE had the ability to examine raw scores for the extent of the ExamSoft effect. Exploration would have been most effective with cooperation from ExamSoft itself, revealing which states suffered major upload problems and which ones experienced more minor interference. But even without that information, NCBE could have explored the raw scores for indications of whether test takers were “less able” in ExamSoft states.

If NCBE had found a problem, there would have been time to consult with bar examiners about possible solutions. At the very least, NCBE probably should have adjusted its scaling to reflect the fact that some of the decrease in raw scores stemmed from the software crash rather than from other changes in test-taker ability. With enough data, NCBE might have been able to quantify those effects fairly precisely.

Maybe NCBE did, in fact, do those things. Its public pronouncements, however, have not suggested any such process. On the contrary, Moeser seems to studiously avoid mentioning ExamSoft. This reveals an even deeper problem: we have a high-stakes exam for which responsibility is badly fragmented.

Who Do You Call?

Imagine yourself as a test-taker on July 29, 2014. You’ve been trying for several hours to upload your essay exam, without success. You’ve tried calling ExamSoft’s customer service line, but can’t get through. You’re worried that you’ll fail the exam if you don’t upload the essays on time, and you’re also worried that you won’t be sufficiently rested for the next day’s MBE. Who do you call?

You can’t call the state bar examiners; they don’t have an after-hours call line. If they did, they probably would reassure you on the first question, telling you that they would extend the deadline for submitting essay answers. (This is, in fact, what many affected states did.) But they wouldn’t have much to offer on the second question, about getting back on track for the next day’s MBE. Some state examiners don’t fully understand NCBE’s equating and scaling process; those examiners might even erroneously tell you “not to worry because everyone is in the same boat.”

NCBE wouldn’t be any more help. They, as Moeser pointed out, don’t actually administer exams; they just create and score them.

Many distressed examinees called law school staff members who had helped them prepare for the bar. Those staff members, in turn, called their deans–who contacted NCBE and state bar examiners. As Moeser’s letters indicate, however, bar examiners view deans with some suspicion. The deans, they believe, are too quick to advocate for their graduates and too worried about their own bar pass rates.

As NCBE and state bar examiners refused to respond, or shifted responsibility to each other, we reached a standoff: no one was willing to take responsibility for flaws in a very high-stakes test administered to more than 50,000 examinees. That is a failure as great as the ExamSoft crash itself.


ExamSoft: By the Numbers

March 26th, 2015

Earlier this week I explained why the ExamSoft fiasco could have lowered bar passage rates in most states, including some states that did not use the software. But did it happen that way? Only ExamSoft and the National Conference of Bar Examiners have the data that will tell us for sure. But here’s a strong piece of supporting evidence:

Among states that did not experience the ExamSoft crisis, the average bar passage rate for first-time takers from ABA-accredited law schools fell from 81% in July 2013 to 78% in July 2014. That’s a drop of 3 percentage points.

Among the states that were exposed to the ExamSoft problems, the average bar passage rate for the same group fell from 83% in July 2013 to 78% in July 2014. That’s a 5-point drop, two percentage points more than the drop in the “unaffected” states.

Derek Muller did the important work of distinguishing these two groups of states. Like him, I count a state as an “ExamSoft” one if it used that software company and its exam takers wrote their essays on July 29 (the day of the upload crisis). There are 40 states in that group. The unaffected states are the other 10 plus the District of Columbia; these jurisdictions either did not contract with ExamSoft or their examinees wrote essays on a different day.

The comparison between these two groups is powerful. What, other than the ExamSoft debacle, could account for the difference between the two? A 2-point difference is very unlikely to arise by chance in groups of this size. I checked: the probability of this happening by chance (that is, by separating the states randomly into two groups of this size and observing a gap this large) is so small that it registered as 0.00 on my probability calculator.
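For readers who want to run this kind of check themselves, here is a rough sketch of a permutation test in Python. The per-jurisdiction pass-rate drops below are placeholders, not the actual July 2013 to July 2014 figures; substitute the real drops for the 40 ExamSoft jurisdictions and the 11 unaffected ones to reproduce the calculation.

```python
# Rough sketch of a permutation test. The drops below are PLACEHOLDERS;
# plug in the actual per-jurisdiction changes in pass rates.
import random

examsoft_drops = [5.0] * 40      # placeholder: 40 affected jurisdictions
unaffected_drops = [3.0] * 11    # placeholder: 11 unaffected jurisdictions

observed_diff = (sum(examsoft_drops) / len(examsoft_drops)
                 - sum(unaffected_drops) / len(unaffected_drops))

all_drops = examsoft_drops + unaffected_drops
n_affected = len(examsoft_drops)

extreme = 0
trials = 100_000
for _ in range(trials):
    random.shuffle(all_drops)
    diff = (sum(all_drops[:n_affected]) / n_affected
            - sum(all_drops[n_affected:]) / (len(all_drops) - n_affected))
    if diff >= observed_diff:
        extreme += 1

print(f"Observed difference: {observed_diff:.1f} percentage points")
print(f"Chance of a difference at least that large under random grouping: "
      f"{extreme / trials:.4f}")
```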

It’s also hard to imagine another factor that would explain the difference. What do Arizona, DC, Kentucky, Louisiana, Maine, Massachusetts, Nebraska, New Jersey, Virginia, Wisconsin, and Wyoming have in common other than that their test takers were not directly affected by ExamSoft’s malfunction? Large states and small states; Eastern states and Western states; red states and blue states.

Of course, as I explained in my previous post, examinees in 10 of those 11 jurisdictions ultimately suffered from the glitch; that effect came through the equating and scaling process. The only jurisdiction that escaped completely was Louisiana, which used neither ExamSoft nor the MBE. That state, by the way, enjoyed a large increase in its bar passage rate between July 2013 and July 2014.

This is scary on at least four levels:

1. The ExamSoft breakdown affected performance sufficiently that states using the software suffered an average drop in bar passage 2 percentage points greater than the drop in unaffected states.

2. The equating and scaling process amplified the drop in raw scores. These processes lowered pass rates by as much as three additional percentage points across the nation. In states where raw scores were affected, pass rates fell an average of 5 percentage points. In other states, the pass rate fell an average of 3 percentage points. (I say “as much as” here because it is possible that other factors account for some of this drop; my comparison can’t control for that possibility. It seems clear, however, that equating and scaling amplified the raw-score drop and accounted for some–perhaps all–of this drop.)

3. A large number of test takers–probably more than 1,500 nationwide–failed the bar exam when they should have passed.

4. ExamSoft and NCBE have been completely unresponsive to this problem, despite the fact that these data have been available to them.

One final note: the comparisons in this post are a conservative test of the ExamSoft hypothesis, because I created a simple dichotomy between states exposed directly to the upload failure and those with no direct exposure. It is quite likely that states in the first group differed in the extent to which their examinees suffered. In some states, most test takers may have successfully uploaded their essays on the first try; in others, a large percentage of examinees may have struggled for hours. Those differences could account for variations within the “ExamSoft” states.

ExamSoft and NCBE could make those more nuanced distinctions. From the available data, however, there seems little doubt that the ExamSoft wreck seriously affected results of the July 2014 bar exam.

* I am grateful to Amy Otto, a former student who is wise in the way of statistics and who helped me think through these analyses.


ExamSoft After All?

March 24th, 2015

Why did so many people fail the July 2014 bar exam? Among graduates of ABA-accredited law schools who took the exam for the first time last summer, just 78% passed. A year earlier, in July 2013, 82% passed. What explains a four-point drop in a single year?

The ExamSoft debacle looked like an obvious culprit. Time wasted, increased anxiety, and loss of sleep could have affected the performance of some test takers. For those examinees, even a few points might have spelled the difference between success and failure.

Thoughtful analyses, however, pointed out that pass rates fell even in states that did not use ExamSoft. What, then, explains such a large performance drop across so many states? After looking closely at the way in which NCBE and states grade the bar exam, I’ve concluded that ExamSoft probably was the major culprit. Let me explain why–including the impact on test takers in states that didn’t use ExamSoft–by walking you through the process step by step. Here’s how it could have happened:

Tuesday, July 29, 2014

Bar exam takers in about forty states finished the essay portion of the exam and attempted to upload their answers through ExamSoft. But for some number of them, the essays wouldn’t upload. We don’t know the exact number of affected exam takers, but it seems to have been quite large. ExamSoft admitted to a “six-hour backlog” and at least sixteen states ultimately extended their submission deadlines.

Meanwhile, these exam takers were trying to upload their exams, calling customer service, and worrying about the issue (wouldn’t you, if failure to upload meant bar failure?) instead of eating dinner, reviewing their notes for the next day’s MBE, and getting to sleep.

Wednesday, July 30, 2014

Test takers in every state but Louisiana took the multiple choice MBE. In some states, no one had been affected by the upload problem. In others, lots of people were. They were tired, stressed, and had spent less time reviewing. Let’s suppose that, due to these issues, the ExamSoft victims performed somewhat less well than they would have performed under normal conditions. Instead of answering 129 questions correctly (a typical raw score for the July MBE), they answered just 125 questions correctly.

August 2014: Equating

The National Conference of Bar Examiners (NCBE) received all of the MBE answers and began to process them. The raw scores for ExamSoft victims were lower than those for typical July examinees, and those scores affected the mean for the entire pool. Most important, mean scores were lower for both the “control questions” and other questions. “Control questions” is my own shorthand for a key group of questions; these are questions that have appeared on previous bar exams, as well as the most current one. By analyzing scores for the control questions (both past and present) and new questions, NCBE can tell whether one group of exam takers is more or less able than an earlier group. For a more detailed explanation of the process, see this article.

These control questions serve an important function; they allow NCBE to “equate” exam difficulty over time. What if the Evidence questions one year are harder than those for the previous year? Pass rates would fall because of an unfairly hard exam, not because of any difference in the exam takers’ ability. By analyzing responses to the control questions (compared to previous years) and the new questions, NCBE can detect changes in exam difficulty and adjust raw scores to account for them.

Conversely, these analyses can confirm that lower scores on an exam stem from the examinees’ lower ability rather than any change in the exam difficulty. Weak performance on control questions will signal that the examinees are “less able” than previous groups of examinees.

But here’s the rub: NCBE can’t tell from this general analysis why a group of examinees is less able than an earlier group. Most of the time, we would assume that “less able” means less innately talented, less well prepared, or less motivated. But “less able” can also mean distracted, stressed, and tired because of a massive software crash the night before. Anything that affects performance of a large number of test takers, even if the individual impact is relatively small, will make the group appear “less able” in the equating process that NCBE performs.

That’s step one of my theory: struggling with ExamSoft made a large number of July 2014 examinees perform somewhat below their real ability level. Those lower scores, in turn, lowered the overall performance level of the group–especially when compared, through the control questions, to earlier groups of examinees. If thousands of examinees went out partying the night before the July 2014 MBE, no one would be surprised if the group as a whole produced a lower mean score. That’s what happened here–except that the examinees were frantically trying to upload essay questions rather than partying.
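A back-of-the-envelope calculation shows how quickly those individual hits add up. The share of affected examinees and the size of the per-person effect below are assumptions chosen purely for illustration, not measured values.

```python
# Illustrative only: both inputs are assumptions, not measured values.
share_affected = 0.6   # assumed fraction of examinees hurt by the upload failure
per_person_hit = 4     # assumed raw points each of them lost (129 -> 125)
typical_mean = 129     # typical July raw mean, per the example above

mean_shift = share_affected * per_person_hit
print(f"National mean raw score: roughly {typical_mean - mean_shift:.1f} "
      f"instead of {typical_mean}")
# With these assumptions, the whole cohort looks about 2.4 raw points
# "less able," even though no one's underlying ability changed.
```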

August 2014: Scaling

Once NCBE determines the ability level of a group of examinees, as well as the relative difficulty of the test, it adjusts the raw scores to account for these factors. The adjustment process is called “scaling,” and it consists of adding points to the examinees’ raw scores. In a year with an easy test or “less able” examinees, the scaling process adds just a few points to each examinee’s raw score. Groups who faced a harder test, or who were “more able,” get more points. [Note that the process is a little more complicated than this; each examinee doesn’t get exactly the same point addition. The general process, however, works in this way–and affects the score of every single examinee. See this article for more.]

This is the point at which the ExamSoft crisis started to affect all examinees. NCBE doesn’t scale scores just for test takers who seem less able than others; it scales scores for the entire group. The mean scaled score for the July 2014 MBE was 141.5, almost three points lower than the mean scaled score in July 2013 (which was 144.3). This was also the lowest scaled score in ten years. See this report (p. 35) for a table reporting those scores.

It’s essential to remember that the scaling process affects every examinee in every state that uses the MBE. Test takers in states unaffected by ExamSoft got raw scores that reflected their ability, but they got a smaller scaling increment than they would have received without ExamSoft depressing outcomes in other states. The direct ExamSoft victims, of course, suffered a double whammy: they obtained a lower raw score than they might have otherwise achieved, plus a lower scaling boost to that score.

Fall 2014: Essay Scoring

After NCBE finished calculating and scaling MBE scores, the action moved to the states. States (except for Louisiana, which doesn’t use the MBE) incorporated the artificially depressed MBE scores into their bar score formulas. Remember that those MBE scores were lower for every exam taker than they would have been without the ExamSoft effect.

The damage, though, didn’t stop there. Many (perhaps most) states scale the raw scores from their essay exams to MBE scores. Here’s an article that explains the process in fairly simple terms, and I’ll attempt to sum it up here.

Scaling takes raw essay scores and arranges them on a skeleton provided by that state’s scaled MBE results. When the process is done, the mean essay score will be the same as the mean scaled MBE score for that state. The standard deviations for both will also be the same.

What does that mean in everyday English? It means that your state’s scaled MBE scores determine the grading curve for the essays. If test takers in your state bombed the MBE, they will all get lower scores on the essays as well. If they aced the MBE, they’ll get higher essay scores.

Note that this scaling process is a group-wide one, not an individual one. An individual who bombed the MBE won’t necessarily flunk the essays as well. Scaling uses indicators of group performance to adjust essay scores for the group as a whole. The exam taker who wrote the best set of essays in a state will still get the highest essay score in that state; her scaled score just won’t be as high as it would have been if her fellow test takers had done better on the MBE.

Scaling raw essay scores, like scaling the raw MBE scores, produces good results in most years. If one year’s graders have a snit and give everyone low scores on the essay part of the exam, the scaling process will say, “wait a minute, the MBE scores show that this group of test takers is just as good as last year’s. We need to pull up the essay scores to mirror performance on the MBE.” Conversely, if the graders are too generous (or the essay questions were too easy), the scaling process will say “Uh-oh. The MBE scores show us that this year’s group is no better than last year’s. We need to pull down your scores to keep them in line with what previous graders have done.”
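For the technically inclined, the transformation described above can be sketched in a few lines of Python: raw essay scores are converted to z-scores and re-expressed on the state’s scaled-MBE yardstick, so that the essay mean and standard deviation match the MBE’s. The scores below are hypothetical, and states differ in the details.

```python
# Sketch of scaling essay scores to the MBE: match the mean and standard
# deviation of the state's scaled MBE scores. Hypothetical numbers only.
from statistics import mean, stdev

def scale_essays(raw_essay_scores, scaled_mbe_scores):
    e_mean, e_sd = mean(raw_essay_scores), stdev(raw_essay_scores)
    m_mean, m_sd = mean(scaled_mbe_scores), stdev(scaled_mbe_scores)
    # Convert each raw essay score to a z-score, then re-express it on the
    # MBE yardstick. Rank order is preserved; only the yardstick changes.
    return [m_mean + m_sd * (e - e_mean) / e_sd for e in raw_essay_scores]

raw_essays = [55, 60, 65, 70, 75]
# If the state's scaled MBE scores fall, every scaled essay score falls
# with them, even though the raw essays are unchanged.
print(scale_essays(raw_essays, scaled_mbe_scores=[135, 138, 141, 144, 147]))
print(scale_essays(raw_essays, scaled_mbe_scores=[132, 135, 138, 141, 144]))
```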

The scaled MBE scores in July 2014 told the states: “Your test takers weren’t as good this year as last year. Pull down those essay scores.” Once again, this scaling process affected everyone who took the bar exam in a state that uses the MBE and scales essays to the MBE. I don’t know how many states are in the latter camp, but NCBE strongly encourages states to scale their essay scores.

Fall 2014: MPT Scoring

You guessed it. States also scale MPT scores to the MBE. Once again, MBE scores told them that this group of exam takers was “less able” than earlier groups so they should scale down MPT scores. That would have happened in every state that uses both the MBE and MPT, and scales the latter scores to the former.

Conclusion

So there you have it: this is how poor performance by ExamSoft victims could have depressed scores for exam takers nationwide. For every exam taker (except those in Louisiana) there was at least a single hit: a lower scaled MBE score. For many exam takers there were three hits: lower scaled MBE score, lower scaled essay score, and lower scaled MPT score. For some direct victims of the ExamSoft crisis, there was yet a fourth hit: a lower raw score on the MBE. But, as I hope I’ve shown here, those raw scores were also pebbles that set off much larger ripples in the pond of bar results. If you throw enough pebbles into a pond all at once, you trigger a pretty big wave.

Erica Moeser, the NCBE President, has defended the July 2014 exam results on the ground that test takers were “less able” than earlier groups of test takers. She’s correct in the limited sense that the national group of test takers performed less well, on average, on the MBE than the national group did in previous years. But, unless NCBE has done more sophisticated analyses of state-by-state raw scores, that doesn’t tell us why the exam takers performed less “ably.”

Law deans like Brooklyn’s Nick Allard are clearly right that we need a more thorough investigation of the July 2014 bar results. It’s too late to make whole the 2,300 or so test takers who may have unfairly failed the exam. They’ve already grappled with a profound sense of failure, lost jobs, studied for and taken the February exam, or given up on a career practicing law. There may, though, be some way to offer them redress–at least the knowledge that they were subject to an unfair process. We need to unravel the mystery of July 2014, both to make any possible amends and to protect law graduates in the future.

I plan to post some more thoughts on this, including some suggestions about how NCBE (or a neutral outsider) could further examine the July 2014 results. Meanwhile, please let me know if you have thoughts on my analysis. I’m not a bar exam insider, although I studied some of these issues once before. This is complicated stuff, and I welcome any comments or corrections.

Updated on September 21, 2015, to correct reported pass rates.


What About the Bar Exam?

November 25th, 2013

This post continues my earlier discussion of an educational framework that would shift the first 1-1/2 years of law school to the undergraduate curriculum. A two-year JD program, incorporating clinical education and advanced doctrinal work, would complement the undergraduate degree. As I wrote in my previous post, the undergraduate degree would not qualify students to practice law; as in our current system, only JDs would be eligible for bar admission.

In this framework, what would happen to the bar exam? The bar currently focuses on subjects covered during the first half of law school. If students completed that work in college, delaying the exam until after completion of a JD would be unsatisfactory. What’s the answer?

I can see several ways to address this issue, and I welcome suggestions from others. But here’s the answer that currently appeals to me: Divide the bar exam into two portions. The first portion would replace the LSAT as an entrance exam for JD programs. Rather than demonstrate their proficiency in logic games, future lawyers would show their competency in legal reasoning and basic legal doctrine. The second part of the exam, administered to JDs, would focus on more advanced problem solving, counseling, and other practice skills. Both parts of the exam would include appropriate testing on professional responsibility.

A New Entry Exam

Law schools use the LSAT as an entry exam because we don’t have much else to rely upon. College grades offer one measure of potential success, but grading systems (and grade inflation) vary across colleges. Law schools set no prerequisites, so we can’t measure applicants’ mastery of prerequisite fields. In theory, we could review applicants’ research and writing skills directly, but no one has much appetite for that. (And what would US News do to sell subscriptions if we abandoned a quantifiable admissions test?)

The MCAT, which informs medical school admissions, is quite different from the LSAT. The exam includes a section that probes reading comprehension and general reasoning, but two-thirds of the exam “tests for mastery of basic concepts in biology, general chemistry, organic chemistry, and physics.”

If students completed the first 1-1/2 years of legal study in college, we could devise a JD entrance exam that looked more like the MCAT. This exam would draw upon the current Multistate Bar Exam (MBE), Multistate Performance Test (MPT), and Multistate Professional Responsibility Exam (MPRE). A slimmed-down MBE would test basic knowledge and reasoning in foundational fields like Torts, Criminal Law, Property, and Constitutional Law. The MPT would examine analysis, reasoning, and writing as applied to basic legal issues. And an adapted version of the MPRE would cover basic principles of professional responsibility.

This type of exam would assure JD programs that applicants had mastered key principles during their undergraduate study. The exam would also demonstrate that proficiency to licensing bodies. Equally important, the exam could replace our obsession with LSAT scores. If we’re not willing to give up quantitative rankings and merit-based scholarships, we could at least reward new JD students for studying hard in college and mastering basic legal subjects.

Teaching to the Test

Would this new entrance exam require professors of undergraduate law courses to teach to the test? Would an undergraduate version of Property, for example, stress memorization rather than exploration of analysis and policy? That hasn’t happened in college science courses, despite the foreboding presence of the MCAT. Science professors, like law professors, realize that students can’t learn basic principles without also understanding how to apply them.

Undergraduate law courses, like our current 1L ones, would teach students basic doctrinal elements. They would also teach students how to apply those principles and reason with them. Students probably would reinforce their learning by taking review courses, just as they currently take LSAT prep courses and MCAT review ones. I’d rather see aspiring JD students pay for courses that review essential professional material than shell out money to learn tricks for beating the LSAT.

The entrance exam I envision, furthermore, would stress more basic principles than the current MBE. Currently, the MBE and MPT require 1-1/2 days of testing; the MPRE adds another 2 hours. I would create an entrance exam consuming six hours at most. The exam could cover all of the subjects currently included on the MBE (including Civil Procedure, scheduled to debut in February 2015), but coverage within each subject would be more limited. I would test only fundamental principles that students need to know by heart, not details that a reasonable lawyer would research. Similarly, I would include only some MPRE material on this exam, deferring other material to the post-JD test.

Post-JD Testing

So far I’ve proposed that aspiring lawyers would (a) complete a college major in law, encompassing the material we currently cover in the first 1-1/2 years of law school; (b) pass a modified version of the MBE-MPT-MPRE to enter law school; and (c) complete a 2-year JD program. Before gaining bar admission, these individuals would leap two more hurdles: They would (d) establish their good character, much as bar applicants do today, and (e) demonstrate their proficiency in legal analysis, reasoning, professional responsibility, and lawyering skills.

JD students would study some advanced doctrinal subjects in the 2-year program I envision, but I would not test those subjects for bar admission. Licensing focuses on minimum competency; we don’t test advanced doctrinal areas today, and we don’t need to add that testing to my proposed framework.

Instead, measures of post-JD proficiency could focus on legal analysis, reasoning, and professional responsibility–along with a healthy dose of lawyering skills. Adding a half year to legal study, as my four-plus-two program does, would allow better preparation in practice skills.

Licensing authorities could test those skills in one of two ways, or through a combination of the two. First, they could create their own tests of mastery. A written exam might include more sophisticated versions of the files used for the MPT. State supreme courts could also require students to complete live tests of client counseling, negotiation, and other skills. The medical profession has started using simulations to test graduates on their clinical skills; Step 2 CS of the licensing exam assesses performance through twelve simulated patient encounters. The legal profession could develop similar tests.

Alternatively, state supreme courts could require bar applicants to complete designated courses in these lawyering skills. Students would qualify for bar admission by successfully completing the required courses during their JD program. Courts and bar associations could assure quality in these courses by visiting and certifying them every few years.

Conclusion

It would be relatively easy to adapt the bar examination to a four-plus-two framework for legal education. Adaptation, in fact, could improve our licensing system by forcing us to reflect on the knowledge and skills that demonstrate basic competence in the legal profession. Step one of the bar, tested after college, would focus on basic legal doctrines, legal reasoning, analysis, and professional responsibility. Step two, assessed during or after the JD program, would further assess analytic skills while also examining key competencies for client representation.


Bar Passage and Accreditation

July 4th, 2013

The Standards Review Committee of the ABA’s Section of Legal Education has been considering a change to the accreditation standard governing graduates’ success on the bar examination. The heart of the current standard requires schools to demonstrate that 75% of graduates who attempt the bar exam eventually pass that exam. New Standard 315 would require schools to show that 80% of their graduates (of those who take the bar) pass the exam by “the end of the second calendar year following their graduation.”

I support the new standard, and I urge other academics to do the same. The rule doesn’t penalize schools for graduates who decide to use their legal education for purposes other than practicing law; the 80% rate applies only to graduates who take the bar exam. The rule then gives those graduates more than two years to pass the exam. Because the rule measures time by calendar year, May graduates would have five opportunities to pass the bar before their failure would count against accreditation. As a consumer protection provision, this is a very lax rule. A school that can’t meet this standard is not serving its students well: It is either admitting students with too little chance of passing the bar or doing a poor job of teaching the students that it admits.

The proposal takes on added force given the plunge in law school applications. As schools attempt to maintain class sizes and revenue, there is a significant danger that they will admit students with little chance of passing the bar exam. Charging those students three years of professional-school tuition, when they have little chance of joining the profession, harms the students, the taxpayers who support their loans, and the economy as a whole. Accreditation standards properly restrain schools from overlooking costs like those.

Critics of the proposal rightly point out that a tougher standard may discourage schools from admitting minority students, who pass the bar at lower rates than white students. This is a serious concern: Our profession is still far too white. On the other hand, we won’t help diversity by setting minority students up to fail. Students who borrow heavily to attend law school, but then repeatedly fail the bar exam, suffer devastating financial and psychological blows.

How can we maintain access for minority students while protecting all students from schools with low bar-passage rates? I discuss three ideas below.

The $30,000 Exception

When I first thought about this problem, I considered suggesting a “$30,000” exception to proposed Standard 315. Under this exception, a school could exclude from the accreditation measure any student who failed the bar exam but paid less than $10,000 per year ($30,000 total) in law school tuition and fees.

An exception like this would encourage schools to give real opportunities to minority students whose credentials suggest a risk of bar failure. Those opportunities would consist of a reasonably priced chance to attend law school, achieve success, and qualify for the bar. Law schools can’t claim good karma for admitting at-risk students who pay high tuition for the opportunity to prove themselves. That opportunity benefits law schools as much as, or more than, the at-risk students. If law schools want to support diversification of our profession–and we should–then we should be willing to invest our own dollars in that goal.

A $30,000 exception would allow schools to make a genuine commitment to diversity, without worrying about an accreditation penalty. The at-risk students would also benefit by attending school at a more reasonable cost. Even if those students failed the bar, they could more easily pay off their modest loans with JD Advantage work. A $30,000 exception could be a win-win for both at-risk students and schools that honestly want to create professional access.

I hesitate to make this proposal, however, because I’m not sure how many schools genuinely care about minority access–rather than about preserving their own profitability. A $30,000 exception could be an invitation to admit a large number of at-risk students and then invest very little in those students. Especially with declining applicant pools, schools might conclude that thirty students paying $10,000 apiece is better than thirty empty seats. Since those students would not count against a school’s accreditation, no matter how many of them failed the bar exam, schools might not invest the educational resources needed to assist at-risk students.

If schools do care about minority access, then a $30,000 exception to proposed Standard 315 might give us just the leeway we need to admit and nurture at-risk students. If schools care more about their profitability, then an exception like that would be an invitation to take advantage of at-risk students. Which spirit motivates law schools today? That’s a question for schools to reflect upon.

Adjust Bar Passing Scores

One of the shameful secrets of our profession is that we raised bar-exam passing scores during the last three decades, just as a significant number of minority students were graduating from law school. More than a dozen states raised the score required to pass their bar exam during the 1990s. Other states took that path in more recent years: New York raised its passing score in 2005; Montana has increased the score for this month’s exam takers; and Illinois has announced an increase that will take effect in July 2015.

These increases mean that it’s harder to pass the bar exam today than it was ten, twenty, or thirty years ago. In most states, grading techniques assure that scores signal the same level of competence over time. This happens, first, because the National Conference of Bar Examiners (NCBE) “equates” the scores on the Multistate Bar Exam (MBE) from year to year. That technique, which I explain further in this paper, assures that MBE scores reflect the same level of performance each year. An equated score of 134 on the February 2013 MBE reflects the same performance as a score of 134 did in 1985.

Most states, meanwhile, grade their essay questions in a way that similarly guards against shifting standards. These states scale essay scores to the MBE scores achieved by examinees during the same test administration. This means that the MBE (which is equated over time) sets the distribution of scores available for the essay portion of the exam. If the July 2013 examinees in Ohio average higher MBE scores than the 2012 test-takers, the bar examiners will allot them correspondingly higher essay scores. Conversely, if the 2013 examinees score poorly on the MBE (compared to earlier testing groups in Ohio), they will receive lower essay scores as well. You can read more about this process in the same paper cited above.

These two techniques mean that scores neither inflate nor deflate over time; the measuring stick within each state remains constant. A score of 264 on the July 2013 Illinois bar exam will represent the same level of proficiency as a score of 264 did in 2003 or 1993.

When a state raises its passing score, therefore, it literally sets a higher hurdle for new applicants. Beginning in 2015, Illinois will no longer admit test-takers who score 264 on the exam; instead, it will require applicants to score 272–eight points more than applicants have had to score for at least the last twenty years.

Why should that be? Why do today’s racially diverse applicants have to achieve higher scores than the largely white applicants of the 1970s? Law practice may be harder today than it was in the 1970s, but the bar exam doesn’t test the aspects of practice that have become more difficult. The bar exam doesn’t measure applicants on their mastery of the latest statutes, their ability to interact with clients and lawyers from many cultures, or their adeptness with new technologies. The bar exam tests basic doctrinal principles and legal analysis. Why is the minimum level of proficiency on those skills higher today than it was thirty or forty years ago?

If we want to diversify the profession, we have to stop raising the bar as the applicant pool diversifies. I do not believe that states acted with racial animus when increasing their passing scores; instead, the moves seem more broadly protectionist, occurring during recessions in the legal market and as the number of law school graduates increased. Those motives, however, deserve no credit. The bottom line is that today’s graduates have to meet a higher standard than leaders of the profession (those of us in our fifties and sixties) had to satisfy when we took the bar.

Some states have pointed to the low quality of bar exam essays when voting to raise their passing score. As I have explained elsewhere, these concerns are usually misplaced. Committees convened to review a state’s passing score often harbor unrealistic expectations about how well any lawyer–even a seasoned one–can read, analyze, and write about a new problem in 30 minutes. Bad statistical techniques have also tainted these attempts to recalibrate minimum passing scores.

Let’s roll back passing scores to where they stood in the 1970s. Taking that step would diversify the profession by allowing today’s diverse graduates to qualify for practice on the same terms as their less-diverse elders. Preserving accreditation of schools that produce a significant percentage of bar failures, in contrast, will do little to promote diversity.

Work Harder to Support Students’ Success

Teaching matters. During my time in legal education, I have seen professors improve skills and test scores among students who initially struggled with law school exams or bar preparation. These professors, notably, usually were not tenure-track faculty who taught Socratic classes or research seminars. More often, they were non-tenure-track instructors who were willing to break out of the law school box, to embrace teaching methods that work in other fields, to give their students more feedback, and to learn from their own mistakes. If one teaching method didn’t work, they would try another one.

If we want to improve minority access to the legal profession, then more of us should be willing to commit time to innovative teaching. Tenure-track faculty are quick to defend their traditional teaching methods, but slow to pursue rigorous tests of those methods. How do we know that the case method and Socratic questioning are the best ways to educate students? Usually we “know” this because (a) it worked for us, (b) it feels rigorous and engaging when we stand at the front of the classroom, (c) we’ve produced plenty of good lawyers over the last hundred years, and (d) we don’t know what else to do anyway. But if our methods leave one in five graduates unable to pass the bar (the threshold set by proposed Standard 315), then maybe there’s something wrong with those methods. Maybe we should change our methods rather than demand weak accreditation standards?

Some faculty will object that we shouldn’t have to “teach to the bar exam,” that schools must focus on skills and knowledge that the bar doesn’t test. Three years, however, is a long time. We should be able to prepare students effectively to pass the bar exam, as well as build a foundation in other essential skills and knowledge. The sad truth is that these “other” subjects and skills are more fun to teach, so we focus on them rather than on solid bar preparation.

It is disingenuous for law schools to disdain rigorous bar preparation, because the bar exam’s very existence supports our tuition. Students do not pay premium tuition for law school because we teach more content than our colleagues who teach graduate courses in history, classics, mathematics, chemistry, or dozens of other subjects. Nor do we give more feedback than those professors, supervise more research among our graduate students, or conduct more research of our own. Students pay more for a law school education than for graduate training in most other fields because they need our diploma to sit for the bar exam. As long as lawyers limit entry to the profession, and as long as law schools serve as the initial gatekeeper, we will be able to charge premium prices for our classes. How can we eschew bar preparation when the bar stimulates our enrollments and revenue?

If we want to diversify the legal profession, then we should commit to better teaching and more rigorous bar preparation. We shouldn’t simply give schools a pass if more than a fifth of their graduates repeatedly fail the bar. If the educational deficit is too great to overcome in three years, then we should devote our energy to good pipeline programs.

Tough Standards

Some accreditation standards create unnecessary costs; they benefit faculty, librarians, or other educational insiders at the expense of students. Comments submitted to the ABA Task Force on the Future of Legal Education properly question many of those standards. The Standards Review Committee likewise has questioned onerous standards of that type.

Proposed Standard 315, however, is tough in a different way. That standard holds schools accountable in order to protect students, lenders, and the public. Private law schools today charge an average of $120,000 for a JD. At those prices, schools should be able to assure that at least 80% of graduates who choose to take the bar exam will pass that exam within two calendar years. If schools can’t meet that standard, then they shouldn’t bear the mark of ABA accreditation.


The Third Year

April 1st, 2013 / By

Paul D. Carrington, Professor and former Dean of Duke Law School, has given us permission to post this thoughtful essay about the third year of law school. As a long-time member of the Texas bar, he responds to a recent “President’s Opinion” in the Texas Bar Journal:

In the March issue of the Texas Bar Journal, President Files expressed opposition to the proposal presently being advanced in New York to allow students to take that state’s bar exam and enter practice after two years in law school. President Files mistakenly supposes that the third year is indispensable to professional competence.

Making law study a three-year deal was not an idea advanced as a means of improving the quality of legal services delivered to prospective clients. The three-year degree was fashioned at Harvard in 1870 to impress other citizens with the social status of those holding Harvard Law degrees. Many of the students at Harvard at that time looked at the curriculum and left without a degree. Who needs a year-long, six-credit course on Bills and Notes?

Harvard itself understood that great lawyering does not require prolonged formal education. It awarded an honorary Ph.D. to Thomas Cooley to celebrate his great career in the law. Cooley never took a single class in law school, or even in college. He had a year of elementary school and a year in a law office before he moved to Michigan at the age of nineteen and hung out his shingle. He soon moved on to be the clerk to the Michigan Supreme Court, then to be its Chief Justice, then the founding dean of the University of Michigan Law School, then the author of the leading works in the nation on constitutional law and on torts, the president of the American Bar Association, and the designer and founding chair of the Interstate Commerce Commission regulating the nation’s railroads. It was possible for young Cooley to “read the law” and become perhaps the best lawyer in America.

It was still an option to read the law when I entered the profession in Texas in 1955. The applicant who scored the highest grade on the bar exam that I took that year had never attended law school. He had spent some years in a law office. And in three days he wrote coherent legal opinions on twenty-seven diverse problem cases. But he had not paid law school tuition. Had he chosen to attend the University of Texas Law School in 1952-1955, it would have cost him fifty dollars a year for tuition. I went to Harvard and paid six hundred dollars a year. That was enough to pay the modest salaries of the small band of law professors numerous enough to conduct big classes for three years.

In the 20th century, the organized bar first took up the cause of requiring three years of study. The motivating concern was not the competence of the lawyers providing legal services. The aim was to elevate, or at least protect, the status of the legal profession: if medical students were all required to stay for four years, lawyers seeking elevated status needed to stay for three. Benjamin Cardozo and Henry Stimson, two of the wisest and best 20th-century lawyers, looked at what their third-year schoolmates were doing, sneered at the waste of time, and went on to take the New York Bar Examination and become famous for their good professional judgment. Many, and perhaps a majority, of other early 20th-century American lawyers attended two-year programs of law study in the numerous night schools.

The requirement of three years of formal study became common among the fifty states in the second half of the 20th century. But it is not universal. Thus, many California lawyers are graduates of two-year programs provided by the many night schools still functioning in that state. Reliance is placed on a very rigorous licensing examination to assure a reasonable measure of professional competence. There is no evidence that California lawyers are less competent or provide poorer professional service than Texas lawyers.

Requiring three years of formal study made more sense in 1963 than it does in 2013. The difference is the drastically elevated price of higher education and the resulting indebtedness borne by many students who aspire to be good lawyers. The price of all higher education in the United States increased mightily as a secondary consequence of the 1965 federal law guaranteeing the repayment of loans to students. In real dollars, taking account of inflation, the price of higher education is now about five times what it was when that law was enacted. The money is spent on elevated academic salaries, extended administrative services, and reduced ratios of students to teachers at all levels. “Higher” education keeps getting higher and higher in price.

As a result of this elevation of the real price of legal education, the requirement of three years is increasingly discriminatory. It is the offspring of working-class families who often leave law school with substantial debts that they cannot repay from their earnings as rookie lawyers. For many, their prospective careers are ruined.

If the Texas Bar Association wishes to remain open to members who come from impecunious families, it, too, must face the reality that the third year of law school is unnecessary to assure the professional competence of its members. Likewise, if the Association wishes to assure impecunious clients of access to competent legal services, it needs to relax the requirement of prolonged formal education. I urge the Bar and the Supreme Court to address the issues promptly.

