Compliance at the University of St. Thomas

May 24th, 2015 / By

Joel Nichols, the Associate Dean for Academic Affairs at the University of St. Thomas School of Law, sent me some information about his school’s program in Organizational Ethics and Compliance. The program is still heavily centered in the law school, but it includes key collaboration with the university’s Opus College of Business. The program also offers several options to students, including a JD certificate, JD/LLM in Ethics and Compliance (which can be completed in seven semesters), MSL (for students without a JD), and LLM (for those who already hold the JD).

Perhaps most noteworthy, the program has a substantial advisory board of compliance professionals from outside the university. Creating an advisory board of this nature is an excellent idea. In addition to helping schools identify appropriate coursework, an expert board can advise schools on employment prospects, career pathways, and the relationship between formal education and workplace experience in this area.

Interesting next steps for the St. Thomas program might be (a) creation of an undergraduate major, (b) addition of more courses related to organizational psychology and social psychology, and (c) development of more coursework focused on health-care compliance. UST does not have a medical school, but that circumstance might lead to particularly innovative offerings in this area. Perhaps a member of the advisory board could create a course that includes significant hands-on work or shadowing?

Please feel free to send me information about other notable programs in the compliance area.

Campbell on Compliance

May 20th, 2015 / By

Compliance is one of the “hot” alternative jobs that law schools are promoting for their graduates. Much of this discussion, unfortunately, pays little heed to the nature of compliance jobs and whether legal education really prepares students to do this work well. On the surface, the two seem to fit. After all, compliance is all about obeying the law, and JDs know a lot of law. The equation, though, isn’t that simple.

Law and Compliance

Ray Worthy Campbell explores these issues as one part of a rewarding new paper, The End of Law Schools. Although the title is provocative, and Campbell warns law schools of continued upheaval in the profession, the paper’s thesis is forward-looking and upbeat. Campbell urges law schools to reinvent themselves as “schools of the legal professions.”

As part of that analysis, Campbell offers the best discussion I’ve seen of the difference between compliance and traditional law practice. His insights parallel those I’ve heard from contemporary general counsels, which is not surprising since Campbell has extensive practice experience. Educators who are contemplating the addition of compliance courses to the law school curriculum, or who just want to understand this area, should read Campbell’s exposition carefully.

Lawyers, as Campbell explains, tend to assume that compliance requires simply “explaining what the law require[s], and leaving it up [to] the client to hew to the law.” P. 48. But today’s compliance officers are more about the “hewing” than the “explaining.” Naturally, a compliance officer has to understand the legal requirements affecting a company. Legal education can help with that foundation although, as Campbell points out, law schools pay more attention to broad legal principles than to “chapter and verse” of tedious regulations.

More important, understanding the law is just the starting point for an effective compliance officer. Big corporate scandals don’t arise from misreading the law; they often stem from behavior that all participants know full well is illegal. Did Walmart executives mistakenly think it was legal to bribe foreign government officials–or to cover up the evidence of those acts? See p. 49. No one needed a law degree to figure that one out.

Compliance Essentials

Instead, effective compliance officers need a host of knowledge and skills that law schools don’t touch. Necessary background includes “an understanding of how individuals work within a corporate culture, how leaders in an organization can inspire compliance, and [how to] identify[] those points in a business process most likely to lead to risks.” P. 49. “[T]racking, documenting and motivating employee behavior” are also essential. Id.

In addition to these basics, which infuse all compliance work, a compliance officer needs to understand her company’s business. It’s hard to achieve environmental compliance if your last science class was in high school. Ditto for privacy without some knowledge of computer programming. Almost all of the compliance fields require good accounting and math skills. Law students with STEM-phobia are not good candidates for most compliance positions.

Thinking Like a Compliance Officer

Compliance officers thus need education in fields outside the legal mainstream. Too many traditional law classes, meanwhile, may create the wrong mindset for compliance. Law schools hubristically assume that “thinking like a lawyer” is the best mental tool for any task. Traditionally educated lawyers, however, take a surprisingly narrow approach to problems.

Faced with a regulation, a lawyer’s first instinct is to find loopholes–ways for the client to avoid any unnecessary burdens. If there are no loopholes, then the lawyer will consider challenging the regulation in court. Did the agency follow proper procedures when adopting the rule? Did Congress give the agency sufficient authority in this area? Does the regulation raise constitutional issues under the nondelegation doctrine?

These lawyerly questions are appropriate under some circumstances. Indeed, any company faced with a burdensome regulation might ask its lawyers to explore these possibilities. But that’s lawyer work, not compliance.

Compliance requires a very different mindset: Now that we’ve established the validity and scope of these regulations, how do we go about obeying them? A lay person might be surprised to learn how rarely we view the law from that perspective in law school. Yet, as Campbell’s discussion reveals, the omission makes sense. Effective compliance requires close reading of regulations and (sometimes) cases, but many college graduates can accomplish that task. Once one knows what the law requires, compliance demands very little manipulation of legal principles.

Educating Compliance Officers

Given the differences between law and compliance, Campbell predicts that law schools will not dominate compliance work simply by graduating traditional JDs. Some JDs will find work (and satisfaction) in that field, but the conventional path is both expensive and unsuited for compliance work. Instead, other programs are emerging that focus specifically on compliance careers.

Some of these programs are in law schools, some are in other departments. Some offer degrees, others provide certificates. Some encompass a year or more of work, others span only a few days. Some are online, others are face-to-face. As compliance continues to spawn job opportunities, preparatory programs will blossom. To what extent should law schools participate in that growth?

Campbell notes that law schools cannot educate effective compliance officers by simply packaging part of the current curriculum. Creating meaningful compliance education will require schools to add new fields of study while reshaping conventional ones. That process, Campbell suggests, could form part of the rebirth and expansion of law schools into “schools of the legal professions.” He urges schools to follow that path.

I wholeheartedly agree with Campbell that law schools need new faculty, fields of study, and pedagogic approaches to teach compliance effectively. Excellent education in that field will not be cheap. It will also stray from the single-minded focus that law schools have maintained for generations: the study of appellate opinions as a way of preparing graduates to handle legal disputes.

Broadening the focus of law school would be healthy for many reasons. In addition to allowing schools to enter the compliance field, it would expand our notion of lawyering to encompass the many types of work our graduates already do. Campbell’s vision of a school of the legal professions is very appealing.

Independence or Collaboration?

On the other hand, refashioning law schools as Campbell suggests will be a daunting task. Rather than attempt to create these programs within existing colleges of law, perhaps we should forge truly collaborative degrees with other units on campus.

Academia has long depended upon silos. Degrees belong to particular units, which jealously guard both the stature and revenue generated by those degrees. Interdisciplinary work is painful, as deans are reluctant to share their faculty’s teaching and scholarly capital with others. Despite their rhetoric, provosts and presidents often structure the university’s budget to reward just this type of turf protection.

Recently, however, I’ve seen signs that the old ways may be relaxing. In areas like environmental protection, neuroscience, and data analytics, universities seem to be willing to create truly cross-college programs. Committees of faculty drawn from all participating units govern these programs, which seem more genuinely devoted to meeting student needs than to engaging in the horse trades that marked earlier interdisciplinary efforts.

I haven’t participated personally in any of these ventures, so I don’t know how optimistic to be. Despite my recent pessimism about aspects of legal education and the profession, I have an innate tendency toward optimism. (Really. My son calls me Miss Enthusiasm.) Perhaps this type of academic collaboration is illusory. But the stories I’ve heard suggest that there may be a new attitude emerging on campuses.

If so, then a cross-campus collaboration could be the perfect way to create a highly regarded program in compliance. With participation by law, business, organizational psychology, medicine, sciences, and other units, universities might already have the capacity to create stellar programs in this area. No unit would reap as much revenue as it might from an in-house program, but no unit would bear all the costs of building and maintaining such a program.

Maybe it’s time for creative destruction, not just in legal education, but in university structure.

On the Bar Exam, My Graduates Are Your Graduates

May 12th, 2015 / By

It’s no secret that the qualifications of law students have declined since 2010. As applications fell, schools started dipping further into their applicant pools. LSAT scores offer one measure of this trend. Jerry Organ has summarized changes in those scores for the entering classes of 2010 through 2014. Based on Organ’s data, average LSAT scores for accredited law schools fell:

* 2.3 points at the 75th percentile
* 2.7 points at the median
* 3.4 points at the 25th percentile

Among other problems, this trend raises significant concerns about bar passage rates. Indeed, the President of the National Conference of Bar Examiners (NCBE) blamed the July 2014 drop in MBE scores on the fact that the Class of 2014 (which entered law school in 2011) was “less able” than earlier classes. I have suggested that the ExamSoft debacle contributed substantially to the score decline, but here I focus on the future. What will the drop in student quality mean for the bar exam?

Falling Bar Passage Rates

Most observers agree that bar passage rates are likely to fall over the coming years. Indeed, they may have already started that decline with the July 2014 and February 2015 exam administrations. I believe that the ExamSoft crisis and MBE content changes account for much of those slumps, but there is little doubt that bar passage rates will remain depressed and continue to fall.

A substantial part of the decline will stem from examinees with very low LSAT scores. Prior studies suggest that students with low scores (especially those with scores below 145) are at high risk of failing the bar. As the number of low-LSAT students increases at law schools, the number (and percentage) of bar failures probably will mount as well.

The impact, however, will not be limited just to those students. As I explained in a previous post, NCBE’s process of equating and scaling the MBE can drag down scores for all examinees when the group as a whole performs poorly. This occurs because the lower overall performance prompts NCBE to “scale down” MBE scores for all test-takers. Think of this as a kind of “reverse halo” effect, although it’s one that depends on mathematical formulas rather than subjective impressions.

State bar examiners, unfortunately, amplify the reverse-halo effect by the way in which they scale essay and MPT answers to MBE scores. I explained this process in a previous post. In brief, the MBE performance of each state’s examinees sets the curve for scoring other portions of the bar exam within that state. If Ohio’s 2015 examinees perform less well on the MBE than the 2013 group did, then the 2015 examinees will get lower essay and MPT scores as well.
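
To make that scaling step concrete, here is a minimal sketch in Python, assuming the common approach of giving a state’s written (essay and MPT) raw scores the same mean and standard deviation as that state’s MBE scaled scores. The function and every number below are hypothetical illustrations, not any jurisdiction’s actual formula.

```python
import statistics

def scale_written_to_mbe(written_raw, mbe_scaled):
    """Illustrative sketch: linearly rescale raw written (essay/MPT) scores
    so their mean and standard deviation match the state's MBE scaled
    scores. Not any board's published procedure; hypothetical numbers."""
    w_mean, w_sd = statistics.mean(written_raw), statistics.pstdev(written_raw)
    m_mean, m_sd = statistics.mean(mbe_scaled), statistics.pstdev(mbe_scaled)
    return [round(m_mean + (w - w_mean) * (m_sd / w_sd), 1) for w in written_raw]

# The same essay performance maps to lower scaled scores once the cohort's
# MBE mean slips from 141 to 136: the MBE sets the curve for everything else.
essays = [55, 60, 65, 70, 75]
print(scale_written_to_mbe(essays, mbe_scaled=[131, 136, 141, 146, 151]))
print(scale_written_to_mbe(essays, mbe_scaled=[126, 131, 136, 141, 146]))
```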

The law schools that have admitted high-risk students, in sum, are not the only schools that will suffer lower bar passage rates. The processes of equating and scaling will depress scores for other examinees in the pool. The reductions may be small, but they will be enough to shift examinees near the passing score from one side to the other. Test-takers who might have passed the bar in 2013 will not pass in 2015. In addition to taking a harder exam (i.e., a seven-subject MBE), these unfortunate examinees will suffer from the reverse-halo effect described above.

On the bar exam, the performance of my graduates affects outcomes for your graduates. If my graduates perform less well than in previous years, fewer of your graduates will pass: my graduates are your graduates in this sense. The growing number of low-LSAT students attending Thomas Cooley and other schools will also affect the fate of our graduates. On the bar exam, Cooley’s graduates are our graduates.

Won’t NCBE Fix This?

NCBE should address this problem, but it has shown no signs of doing so. The equating/scaling process used by NCBE assumes that test-takers retain roughly the same proficiency from year to year. That assumption undergirds the equating process. Psychometricians recognize that, as abilities shift, equating becomes less reliable.* The recent decline in LSAT scores suggests that the proficiency of bar examinees will change markedly over the next few years. Under those circumstances, NCBE should not attempt to equate and scale raw scores; doing so risks the type of reverse-halo effect I have described.

The problem is particularly acute with the bar exam because scaling occurs at several points in the process. As proficiency declines, equating and scaling of MBE performance will inappropriately depress those scores. Those scores, in turn, will lower scores on the essay and MPT portions of the exam. The combined effect of these missteps is likely to produce noticeable–and undeserved–declines in scores for examinees who are just as qualified as those who passed the bar in previous years.

Remember that I’m not referring here to graduates who perform well below the passing score. If you believe that the bar exam is a proper measure of entry-level competence, then those test-takers deserve to fail. The problem is that an increased number of unqualified examinees will drag down scores for more able test-takers. Some of those scores will drop enough to push qualified examinees below the passing line.

Looking Ahead

NCBE, unfortunately, has not been responsive on issues related to its equating and scaling processes. It seems unlikely that the organization will address the problem described here. There is no doubt, meanwhile, that entry-level qualifications of law students have declined. If bar passage rates fall, as they almost surely will, it will be easy to blame all of the decline on less able graduates.

This leaves three avenues for concerned educators and policymakers:

1. Continue to press for more transparency and oversight of NCBE. Testing requires confidentiality, but safeguards are essential to protect individual examinees and public trust in the process.

2. Take a tougher stand against law schools with low bar passage rates. As professionals, we already have an obligation to protect aspirants to our ranks. Self-interest adds a potent kick to that duty. As you view the qualifications of students matriculating at schools with low bar passage rates, remember: those matriculants will affect your school’s bar passage rate.

3. Push for alternative ways to measure attorney competence. New lawyers need to know basic doctrinal principles, and law schools should teach those principles. A closed-book, multiple-choice exam covering seven broad subject areas, however, is not a good measure of doctrinal knowledge. It is even worse when performance on that exam sets the curve for scores on other, more useful parts of the bar exam (such as the performance tests). And the situation is worse still when a single organization, with little oversight, controls scoring of that crucial multiple-choice exam.

I have some suggestions for how we might restructure the bar exam, but those ideas must wait for another post. For now, remember: On the bar exam, all graduates are your graduates.

* For a recent review of the literature on changing proficiencies, see Sonya Powers & Michael J. Kolen, Evaluating Equating Accuracy and Assumptions for Groups That Differ in Performance, 51 J. Educ. Measurement 39 (2014). A more reader-friendly overview is available in this online chapter (note particularly the statements on p. 274).

More on Grade and Scholarship Quotas

May 5th, 2015 / By

In a response to this post, Michael Simkovic wonders if I believe “it is inherently immoral to limit ‘A’ grades to students whose academic performance is superior to most of their peers, since an ‘A’ is simply a data point and can be replicated and distributed to everyone at zero marginal cost.”

Not at all. I believe in matching grades to performance, and I don’t hesitate to do that–even when the performance is a failing one. Ironically, however, the mandatory grading curve produces results that are quite troubling for those of us who want grades to reflect performance. Constrained by that type of grading system, I have given A’s to students who performed worse than their peers. Let’s consider that problem and then return to the subject of conditional scholarships.

A Tale of Two Tort Classes

To accommodate institutional needs, I once taught two sections of the first-year Torts class. I used the same book and same lecture notes in both classes. We covered the same material in each class, and I drafted a single exam for the group. Following my practice at that time, I gave a 4-hour essay exam with several questions.

I graded the exams as a single batch, without separating them into the two sections. Again following my usual practice, I used grading rubrics for each essay. I also rotated batches of essays so that no exam would always suffer (or benefit) from being in the first or last group graded. After I was done, I plotted all of the scores.

I discovered that, if I applied a single curve to both sections, all of the A grades would fall in one section. Our grading rules, however, required me to apply separate curves to each section. So some students in the “smart” section got B’s instead of the A’s they deserved. Some students in the other section got A’s instead of the B’s they deserved. When I discussed my problem with the Associate Dean, he did allow me to use the highest possible curve for the first section, and the lowest possible one for the other section; that ameliorated the problem to some extent. In the end, however, the letter grades did not match performance.
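
For readers who like to see the arithmetic, here is a toy sketch of the problem with invented scores. Applied section by section, the curve awards A’s in the weaker section to students who scored below the B students in the stronger section; a single curve over both sections would have put every A in one room.

```python
def assign_grades(scores, a_fraction=0.25):
    """Toy stand-in for a mandatory curve: the top quarter of each section
    gets an A, everyone else a B. All scores are hypothetical."""
    cutoff_index = max(1, round(len(scores) * a_fraction))
    cutoff = sorted(scores, reverse=True)[cutoff_index - 1]
    return {score: ("A" if score >= cutoff else "B") for score in scores}

smart_section = [92, 90, 88, 87, 86, 85, 84, 83]
other_section = [80, 78, 76, 75, 74, 73, 72, 71]

# Separate curves: an 80 earns an A in the weaker section while an 86
# earns a B in the stronger one, even under blind grading.
print(assign_grades(smart_section))
print(assign_grades(other_section))

# One curve across both sections: every A lands in the stronger section.
print(assign_grades(smart_section + other_section))
```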

Several other professors have recounted similar experiences to me. It doesn’t happen often, because it is uncommon for a professor to teach two sections of a first-year class. But it does happen. In fact, when professors teach multiple sections of the same course, section differences seem common. If these differences occur when we can readily detect them (by teaching two sections), they probably occur under other circumstances as well.

I don’t think this drawback to mandatory curves rises to the level of immorality. Students understand the system and benefit from some of its facets. The curve forces professors to award similar grades across courses and sections, moderating both curmudgeons and sycophants. As Professor Simkovic notes, the system also restrains creeping grade inflation. A mandatory curve, finally, offers guidance to professors who lack an independent sense of what an A, B, or C exam looks like in their subject.

I tell this story to make clear that a mandatory curve does not necessarily reward achievement. On the contrary, a mandatory curve can give B’s to students “whose academic performance is superior to most of their peers” as measured through blind grading. I know it can happen–I’ve done it.

Competition

It feels silly to say this, given my position on deregulating the legal profession, but I do not believe (as Professor Simkovic suggests) that “competition for scarce and valuable resources is inherently immoral.” Competition within an open market usually leads to beneficial results. Competition within a tournament guild, on the other hand, leads to inefficiencies and other harms.

Back to Conditional Scholarships

Returning to our original point of disagreement, I think Professor Simkovic misconstrues college grading patterns–especially in STEM courses. Those courses are not, to my knowledge, graded on a mandatory curve. Instead, the grades correspond to the students’ demonstrated knowledge. The college woman I mention in the primary post was a STEM major; she was no stranger to tough grading. She, however, was accustomed to a field in which her efforts would be rewarded when measured against a rigorous external standard–not one in which only seven students would get an A even if eight performed at that level.

Once again, law school mandatory curves are not “inherently immoral.” They do, however, differ from those that are “routinely used by other educational institutions and state government programs.” Our particular grading practices change the operation of conditional scholarships in law school. At college, a student with a conditional scholarship competes against an external standard. If she reaches that goal, it doesn’t matter how many other students succeed along with her.

In law school, a student’s success depends as much on the efforts of other students as on her own work. If conditional scholarships were in effect when I taught those two sections of Torts, it is quite possible that a student from the “smart section,” who objectively outperformed a student from the “other section,” would have lost her scholarship–while the less able student from the “other section” would have kept her award. I do not think college students understand that perverse relationship between our grading system and conditional scholarships–and neither Professor Simkovic nor Professor Telman has cited any evidence that they do.

Let the Market Rule

As I stated in my previous post, the ABA’s rule has cured two of the ills previously associated with high-forfeiture conditional scholarships. Schools may continue to offer them, subject to that rule. It appears that schools differ widely in the operation of these programs. Some offer only a few conditional scholarships, with rare forfeitures. Others offer a large number, with many forfeitures. Still others lie in between.

The market will soon tell us which of these paths enhance student enrollment. Now that prospective students know more about how conditional scholarships work at law schools, will they continue to enroll at schools with high forfeiture rates? Time will tell.

Equating, Scaling, and Civil Procedure

April 16th, 2015 / By

Still wondering about the February bar results? I continue that discussion here. As explained in my previous post, NCBE premiered its new Multistate Bar Exam (MBE) in February. That exam covers seven subjects, rather than the six tested on the MBE for more than four decades. Given the type of knowledge tested by the MBE, there is little doubt that the new exam is harder than the old one.

If you have any doubt about that fact, try this experiment: Tell any group of third-year students that the bar examiners have decided to offer them a choice. They may study for and take a version of the MBE covering the original six subjects, or they may choose a version that covers those subjects plus Civil Procedure. Which version do they choose?

After the students have eagerly indicated their preference for the six-subject test, you will have to apologize profusely to them. The examiners are not giving them a choice; they must take the harder seven-subject test.

But can you at least reassure the students that NCBE will account for this increased difficulty when it scales scores? After all, NCBE uses a process of equating and scaling scores that is designed to produce scores with a constant meaning over time. A scaled score of 136 in 2015 is supposed to represent the same level of achievement as a scaled score of 136 in 2012. Is that still true, despite the increased difficulty of the test?

Unfortunately, no. Equating works only for two versions of the same exam. As the word “equating” suggests, the process assumes that the exam drafters attempted to test the same knowledge on both versions of the exam. Equating can account for inadvertent fluctuations in difficulty that arise from constructing new questions that test the same knowledge. It cannot, however, account for changes in the content or scope of an exam.

This distinction is widely recognized in the testing literature–I cite numerous sources at the end of this post. It appears, however, that NCBE has attempted to “equate” the scores of the new MBE (with seven subjects) to older versions of the exam (with just six subjects). This treated the February 2015 examinees unfairly, leading to lower scores and pass rates.

To understand the problem, let’s first review the process of equating and scaling.

Equating

First, remember why NCBE equates exams. To avoid security breaches, NCBE must produce a different version of the MBE every February and July. Testing experts call these different versions “forms” of the test. For each of the MBE forms, the designers attempt to create questions that impose the same range of difficulty. Inevitably, however, some forms are harder than others. It would be unfair for examinees one year to get lower scores than examinees the next year, simply because they took a harder form of the test. Equating addresses this problem.

The process of equating begins with a set of “control” questions or “common items.” These are questions that appear on two forms of the same exam. The February 2015 MBE, for example, included a subset of questions that had also appeared on some earlier exam. For this discussion, let’s assume that there were 30 of these common items and 160 new questions that counted toward each examinee’s score. (Each MBE also includes 10 experimental questions that do not count toward the test-taker’s score but that help NCBE assess items for future use.)

When NCBE receives answer sheets from each version of the MBE, it is able to assess the examinees’ performance on the common items and new items. Let’s suppose that, on average, earlier examinees got 25 of the 30 common items correct. If the February 2015 test-takers averaged only 20 correct answers to those common items, NCBE would know that those test-takers were less able than previous examinees. That information would then help NCBE evaluate the February test-takers’ performance on the new test items. If the February examinees also performed poorly on those items, NCBE could conclude that the low scores were due to the test-takers’ abilities rather than to a particularly hard version of the test.

Conversely, if the February test-takers did very well on the new items–while faring poorly on the common ones–NCBE would conclude that the new items were easier than questions on earlier tests. The February examinees racked up points on those questions, not because they were better prepared than earlier test-takers, but because the questions were too easy.

The actual equating process is more complicated than this. NCBE, for example, can account for the difficulty of individual questions rather than just the overall difficulty of the common and new items. The heart of equating, however, lies in this use of “common items” to compare performance over time.
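
For readers who want the logic laid out, here is a toy sketch of that common-item comparison, using the hypothetical 30 common and 160 new scored items from above. Real MBE equating relies on item response theory and item-level statistics; this shows only the direction of the inference, and every number is invented.

```python
def equating_sketch(prior_common_correct, new_common_correct, new_items_correct,
                    n_common=30, n_new=160):
    """Toy version of common-item equating logic. Compares cohorts on the
    anchor (common) items, then asks whether performance on the new items
    is better or worse than that ability difference predicts."""
    prior_p = prior_common_correct / n_common   # prior cohort, anchor items
    new_p = new_common_correct / n_common       # new cohort, anchor items
    ability_shift = new_p - prior_p             # negative: weaker cohort
    # If the new items matched the anchors in difficulty, the new cohort
    # should answer about new_p of them correctly.
    form_effect = new_items_correct - new_p * n_new  # positive: easier form
    return round(ability_shift, 3), round(form_effect, 1)

# 25/30 vs. 20/30 on the anchors, and the new cohort also struggles on the
# new items: the low scores are attributed to the examinees, not the form.
print(equating_sketch(25, 20, new_items_correct=105))

# Same anchors, but strong performance on the new items: the new questions
# look easier, so raw scores would be scaled down.
print(equating_sketch(25, 20, new_items_correct=130))
```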

Scaling

Once NCBE has compared the most recent batch of exam-takers with earlier examinees, it converts the current raw scores to scaled ones. Think of the scaled scores as a rigid yardstick; these scores have the same meaning over time. 18 inches this year is the same as 18 inches last year. In the same way, a scaled score of 136 has the same meaning this year as last year.

How does NCBE translate raw points to scaled scores? The translation depends upon the results of equating. If a group of test-takers performs well on the common items, but not so well on the new questions, the equating process suggests that the new questions were harder than the ones on previous versions of the test. NCBE will “scale up” the raw scores for this group of exam takers to make them comparable to scores earned on earlier versions of the test.

Conversely, if examinees perform well on new questions but poorly on the common items, the equating process will suggest that the new questions were easier than ones on previous versions of the test. NCBE will then scale down the raw scores for this group of examinees. In the end, the scaled scores will account for small differences in test difficulty across otherwise similar forms.
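
A bare-bones illustration of that conversion appears below. The slope and intercept are invented stand-ins for the line that ties a form to the reporting scale, and the adjustment term stands in for the equating result; none of these constants comes from NCBE.

```python
def raw_to_scaled(raw, slope=0.95, intercept=15.0, form_adjustment=0.0):
    """Sketch of a linear raw-to-scaled conversion with made-up constants.
    form_adjustment is positive when equating finds the form harder (scores
    scaled up) and negative when it finds the form easier (scaled down)."""
    return round(intercept + slope * raw + form_adjustment, 1)

# The same raw score of 132 reports higher on a harder form and lower on an
# easier one, so a scaled score keeps the same meaning from year to year.
print(raw_to_scaled(132))                        # baseline form
print(raw_to_scaled(132, form_adjustment=+2.5))  # harder form: scaled up
print(raw_to_scaled(132, form_adjustment=-2.5))  # easier form: scaled down
```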

Changing the Test

Equating and scaling work well for test forms that are designed to be as similar as possible. The processes break down, however, when test content changes. You can see this by thinking about the data that NCBE had available for equating the February 2015 bar exam. It had a set of common items drawn from earlier tests; these would have covered the six original subjects. It also had answers to the new items (160 in the example above); these would have included both the original subjects and the new one (Civil Procedure).

With these data, NCBE could make two comparisons:

1. It could compare performance on the common items. It undoubtedly found that the February 2015 test-takers performed less well than previous test-takers on these items. That’s a predictable result of having a seventh subject to study. This year’s examinees spread their preparation among seven subjects rather than six. Their mastery of each subject was somewhat lower, and they would have performed less well on the common items testing those subjects.

2. NCBE could also compare performance on the new Civil Procedure items with performance on old and new items in other subjects. NCBE won’t release those comparisons, because it no longer discloses raw scores for subject areas. I predict, however, that performance on Civil Procedure items was about the same as performance on Evidence, Property, and the other subjects. Why? Because Civil Procedure is not intrinsically harder than these other subjects, and the examinees studied all seven subjects.

Neither of these comparisons, however, would address the key change in the MBE: Examinees had to prepare seven subjects rather than six. As my previous post suggested, this isn’t just a matter of taking all seven subjects in law school and remembering key concepts for the MBE. Because the MBE is a closed-book exam that requires recall of detailed rules, examinees devote 10 weeks of intense study to this exam. They don’t have more than 10 weeks, because they’re occupied with law school classes, extracurricular activities, and part-time jobs before mid-May or mid-December.

There’s only so much material you can cram into memory during ten weeks. If you try to memorize rules from seven subjects, rather than just six, some rules from each subject will fall by the wayside.

When Equating Doesn’t Work

Equating is not possible for a test like the new MBE, which has changed significantly in content and scope. The test places new demands on examinees, and equating cannot account for those demands. The testing literature is clear that, under these circumstances, equating produces misleading results. As Robert L. Brennan, a distinguished testing expert, wrote in a prominent guide: “When substantial changes in test specifications occur, either scores should be reported on a new scale or a clear statement should be provided to alert users that the scores are not directly comparable with those on earlier versions of the test.” (See p. 174 of Linking and Aligning Scores and Scales, cited more fully below.)

“Substantial changes” is one of those phrases that lawyers love to debate. The hypothetical described at the beginning of this post, however, seems like a common-sense way to identify a “substantial change.” If the vast majority of test-takers would prefer one version of a test over a second one, there is a substantial difference between the two.

As Brennan acknowledges in the chapter I quote above, test administrators dislike re-scaling an exam. Re-scaling is both costly and time-consuming. It can also discomfort test-takers and others who use those scores, because they are uncertain how to compare new scores to old ones. But when a test changes, as the MBE did, re-scaling should take the place of equating.

The second best option, as Brennan also notes, is to provide a “clear statement” to “alert users that the scores are not directly comparable with those on earlier versions of the test.” This is what NCBE should do. By claiming that it has equated the February 2015 results to earlier test results, and that the resulting scaled scores represent a uniform level of achievement, NCBE is failing to give test-takers, bar examiners, and the public the information they need to interpret these scores.

The February 2015 MBE was not the same as previous versions of the test, it cannot be properly equated to those tests, and the resulting scaled scores represent a different level of achievement. The lower scaled scores on the February 2015 MBE reflect, at least in part, a harder test. To the extent that the test-takers also differed from previous examinees, it is impossible to separate that variation from the difference in the tests themselves.

Conclusion

Equating was designed to detect small, unintended differences in test difficulty. It is not appropriate for comparing a revised test to previous versions of that test. In my next post on this issue, I will discuss further ramifications of the recent change in the MBE. Meanwhile, here is an annotated list of sources related to equating:

Michael T. Kane & Andrew Mroch, Equating the MBE, The Bar Examiner, Aug. 2005, at 22. This article, published in NCBE’s magazine, offers an overview of equating and scaling for the MBE.

Neil J. Dorans, et al., Linking and Aligning Scores and Scales (2007). This is one of the classic works on equating and scaling. Chapters 7-9 deal specifically with the problem of test changes. Although I’ve linked to the Amazon page, most university libraries should have this book. My library has the book in electronic form so that it can be read online.

Michael J. Kolen & Robert L. Brennan, Test Equating, Scaling, and Linking: Methods and Practices (3d ed. 2014). This is another standard reference work in the field. Once again, my library has a copy online; check for a similar ebook at your institution.

CCSSO, A Practitioner’s Introduction to Equating. This guide was prepared by the Council of Chief State School Officers to help teachers, principals, and superintendents understand the equating of high-stakes exams. It is written for educated lay people, rather than experts, so it offers a good introduction. The source is publicly available at the link.

The February 2015 Bar Exam

April 12th, 2015 / By

States have started to release results of the February 2015 bar exam, and Derek Muller has helpfully compiled the reports to date. Muller also uncovered the national mean scaled score for this February’s MBE, which was just 136.2. That’s a notable drop from last February’s mean of 138.0. It’s also lower than all but one of the means reported during the last decade; Muller has a nice graph of the scores.

The latest drop in MBE scores, unfortunately, was completely predictable–and not primarily because of a change in the test takers. I hope that Jerry Organ will provide further analysis of the latter possibility soon. Meanwhile, the expected drop in the February MBE scores can be summed up in five words: seven subjects instead of six. I don’t know how much the test-takers changed in February, but the test itself did.

MBE Subjects

For reasons I’ve explained in a previous post, the MBE is the central component of the bar exam. In addition to contributing a substantial amount to each test-taker’s score, the MBE is used to scale answers to both essay questions and the Multistate Performance Test (MPT). The scaling process amplifies any drop in MBE scores, leading to substantial drops in pass rates.

In February 2015, the MBE changed. For more than four decades, that test has covered six subjects: Contracts, Torts, Criminal Law and Procedure, Constitutional Law, Property, and Evidence. Starting with the February 2015 exam, the National Conference of Bar Examiners (NCBE) added a seventh subject, Civil Procedure.

Testing examinees’ knowledge of Civil Procedure is not itself problematic; law students study that subject along with the others tested on the exam. In fact, I suspect more students take a course in Civil Procedure than in Criminal Procedure. The difficulty is that it’s harder to memorize rules drawn from seven subjects than to learn the rules for six. For those who like math, that’s an increase of 16.7% in the body of knowledge tested.

Despite occasional claims to the contrary, the MBE requires lots of memorization. It’s not solely a test of memorization; the exam also tests issue spotting, application of law to fact, and other facets of legal reasoning. Test-takers, however, can’t display those reasoning abilities unless they remember the applicable rules: the MBE is a closed-book test.

There is no other context, in school or practice, where we expect lawyers to remember so many legal principles without reference to codes, cases, and other legal materials. Some law school exams are closed-book, but they cover a single subject that has just been studied for a semester. Truly “closed book” moments in practice are far rarer than many observers assume. I don’t know any trial lawyers who enter the courtroom without a copy of the rules of evidence and a personalized cribsheet reminding them of common objections and responses.

This critique of the bar exam is well known. I repeat it here only to stress the impact of expanding the MBE’s scope. February’s test takers answered the same number of multiple choice questions (190 that counted, plus 10 experimental ones) but they had to remember principles from seven fields of law rather than six.

There’s only so much that the brain can hold in memory–especially when the knowledge is abstract, rather than gained from years of real-client experience. I’ve watched many graduates prepare for the bar over the last decade: they sit in our law library or clinic, poring constantly over flash cards and subject outlines. Since states raised passing scores in the 1990s and early 2000s, examinees have had to memorize many more rules in order to answer enough questions correctly. From my observation, their memory banks were already full to overflowing.

Six to Seven Subjects

What happens, then, when the bar examiners add a seventh subject to an already challenging test? Correct answers will decline, not just in the new subject, but across all subjects. The February 2015 test-takers, I’m sure, studied just as hard as previous examinees. Indeed, they probably studied harder, because they knew that they would have to answer questions drawn from seven bodies of legal knowledge rather than six. But their memories could hold only so much information. Memorized rules of Civil Procedure took the place of some rules of Torts, Contracts, or Property.

Remember that the MBE tests only a fraction of the material that test-takers must learn. It’s not a matter of learning 190 legal principles to answer 190 questions. The universe of testable material is enormous. For Evidence, a subject that I teach, the subject matter outline lists 64 distinct topics. On average, I estimate that each of those topics requires knowledge of three distinct rules to answer questions correctly on the MBE–and that’s my most conservative estimate.

It’s not enough, for example, to know that there’s a hearsay exemption for some prior statements by a witness, and that the exemption allows the fact-finder to use a witness’s out-of-court statements for substantive purposes, rather than merely impeachment. That’s the type of general understanding I would expect a new lawyer to have about Evidence, permitting her to research an issue further if it arose in a case. The MBE, however, requires the test-taker to remember that a grand jury session counts as a “proceeding” for purposes of this exemption (see Q 19). That’s a sub-rule fairly far down the chain. In fact, I confess that I had to check my own book to refresh my recollection.

In any event, if Evidence requires mastering 200 sub-principles of this detail, and the same is true of the other five traditional MBE subjects, that was 1200 very specific rules to memorize and keep in memory–all while trying to apply those rules to new fact patterns. Adding a seventh subject upped the ante to 1400 or more detailed rules. How many things can one test-taker remember without checking a written source? There’s a reason why humanity invented writing, printing, and computers.

But They Already Studied Civil Procedure

Even before February, all jurisdictions (to my knowledge) tested Civil Procedure on their essay exams. So wouldn’t examinees have already studied those Civ Pro principles? No, not in the same manner. Detailed, comprehensive memorization is more necessary for the MBE than for traditional essays.

An essay allows room to display issue spotting and legal reasoning, even if you get one of the sub-rules wrong. In the Evidence example given above, an examinee could display considerable knowledge by identifying the issue, noting the relevant hearsay exemption, and explaining the impact of admissibility (substantive use rather than simply impeachment). If the examinee didn’t remember the correct status of grand jury proceedings under this particular rule, she would lose some points. She wouldn’t, however, get the whole question wrong–as she would on a multiple-choice question.

Adding a new subject to the MBE hit test-takers where they were already hurting: the need to memorize a large number of rules and sub-rules. By expanding the universe of rules to be memorized, NCBE made the exam considerably harder.

Looking Ahead

In upcoming posts, I will explain why NCBE’s equating/scaling process couldn’t account for the increased difficulty of this exam. Indeed, equating and scaling may have made the impact worse. I’ll also explore what this means for the ExamSoft discussion and what (if anything) legal educators might do about the increased difficulty of the MBE. To start the discussion, however, it’s essential to recognize that enhanced level of difficulty.

Needing Law Schools

March 22nd, 2015 / By

I agree entirely with Noah Feldman that society needs law schools. He couldn’t have said it better. This, however, is exactly why law schools need to fix their financial model. Most schools lack the big endowments of Harvard and other elite schools. Students, meanwhile, are increasingly unwilling to pay so much more tuition than Feldman did in the 1990s or I did in the 1970s. We need to keep asking: Why does it cost so much more today to learn what the law “can be”?

I learned a lot about what the law can be from Ruth Bader Ginsburg, my constitutional law professor at Columbia in 1979. I also learned from Herbert Wechsler, author of the much-cited article on “neutral principles” in constitutional law; William Cary, a former chairman of the SEC; E. Allan Farnsworth, Reporter for the Restatement (Second) of Contracts; Maurice Rosenberg, one of the earliest legal scholars to apply social science research to legal problems; and many others. Why were all of these luminaries able to teach me and my classmates for so much less tuition than Columbia and other schools demand today?

In part, they earned less. I know that, because I am the daughter of yet another Columbia professor from that era: William K. (“Ken”) Jones. Our family did just fine financially, but we didn’t have the affluence that law professors enjoy today. Another explanation rests on the enormous number of staff members that law schools now need to operate. Communications staff, admissions staff, development staff, student services staff . . . . Each seems indispensable in the modern law school, but how many contribute to our mission of teaching students and others what the law can be?

I doubt that it’s possible to unwind the contemporary law school, to dismiss all of the staff, and go back to an earlier, simpler world. It’s a charming notion, though, isn’t it? We could simply post our lower tuition, admit students who apply (without spending time marketing to them), teach them, and send them into the world knowing something about both what the law is and what it can be. Meanwhile, we would publish and engage in law reform efforts–as Ginsburg, Wechsler, Cary, and the others did–while teaching four courses a year.

I know that’s unlikely to happen, so we’ve got to find other ways to fix the financial model. Shifting the first year of law school to the undergraduate curriculum makes sense to me. Let’s teach more people about both the power of law and what it can be. Meanwhile, let’s lower tuition for those who will actually practice law. We, as professors, can teach people what the law can be–but our graduates are essential to make those changes happen.

Scholarship: Cost and Value

March 12th, 2015 / By

Critics of legal education raise two key questions about our scholarship: (1) How much value does it offer? And, (2) do law schools have to spend so much money to produce that value?

The answer to the second question is easy: No. We used to produce plenty of superb scholarship with typewriters and four-course teaching loads. Now that we have laptops, tablets, high-powered statistical software, and 24/7 online libraries, our productivity has leaped. Law schools could easily restore teaching loads to four courses a year while still facilitating plenty of good research. The resulting reduction in faculty size could help fund scholarships and reduce tuition.

The answer to the value question is harder. Do we mean immediate payoff or long-term influence? Do we care about value to judges, legislators, practicing attorneys, clients, teachers, students, or some other group? Does each article have to demonstrate value? Or do we recognize that trial and error is part of scholarship, just as it is of other endeavors?

Those are difficult questions, and they deserve a series of posts. For now, I’ll limit my discussion to a recent paper by Jeffrey Harrison and Amy Mashburn, which has already provoked considerable commentary. I agree with some of Harrison and Mashburn’s observations, but the empirical part of their paper goes badly astray. Without a better method, their conclusions can’t stand. In fact, as I note below, some of their findings seem at odds with their recommendations.

Measuring Citation Strength

Harrison and Mashburn decided to measure the strength of citations to scholarly work, rather than simply count the number of citations. That was an excellent idea; scholars in other fields have done this for decades. There’s a good review of that earlier work in Bornmann & Daniel, What Do Citation Counts Measure? A Review of Studies on Citing Behavior, 64 Journal of Documentation 45 (2008). (By the way, isn’t that an amazing name for a journal?)

If Harrison and Mashburn had consulted this literature, they would have found some good guideposts for their own approach. Instead, the paper’s method will make any social scientist cringe. There’s a “control group” that is nothing of the sort, and the method used for choosing articles in that group is badly flawed.* There is little explanation of how they developed or applied their typology (written protocol? inter-rater agreement? training periods?). Harrison and Mashburn tell us only that the distinctions were “highly subjective,” the lines were “difficult to draw,” and “even a second analysis by the current researchers could result in a different count.” Ouch.

Is it possible to make qualitative decisions about citation strength in a thoughtful, documented way? Absolutely. Here’s an example of a recent study of citation types that articulates a rigorous method: Stefan Stremersch, et al., Unraveling Scientific Impact: Citation Types in Marketing Journals, 32 Int’l Journal of Research in Marketing 64 (2015). Harrison and Mashburn might choose a different design than previous scholars, but they need to develop their parameters, articulate them to others, and apply them in a controlled way.
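
As one concrete example of what a controlled application looks like, two coders working from a written protocol can check their consistency with a chance-corrected agreement statistic such as Cohen’s kappa. The sketch below uses invented citation categories and ratings; it is meant only to show how little machinery such a check requires.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two coders assigning categories
    to the same citations. Labels and ratings below are hypothetical."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

coder_1 = ["substantive", "descriptive", "descriptive", "string-cite", "substantive"]
coder_2 = ["substantive", "descriptive", "string-cite", "string-cite", "substantive"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.71 on these toy ratings
```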

Influence and Usefulness

Harrison and Mashburn conclude that most legal scholarship “is not regarded as useful.” Even when a judge or scholar cites an article, they find, most of the cited articles “serve no useful function in helping the citing author advance or articulate a new idea, theory or insight.” Application of this standard, however, leads to some troubling results.

The authors point, for example, to an article by John Blume, Ten Years of Payne: Victim Impact Evidence in Capital Cases, 88 Cornell L. Rev. 257 (2003). A court cited this article for the seemingly banal fact that “the federal government, the military, and thirty-three of the thirty-eight states with the death penalty have authorized the use of victim impact evidence in capital sentencing.” Harrison and Mashburn dismiss this citation as “solely to the descriptive elements of the article.”

That’s true in a way, but this particular “description” didn’t exist until Blume researched all of that state and federal law to create it. The court wanted to know the state of the law, and Blume provided the answer. This answer may not have “advance[d] . . . a new idea, theory or insight,” but most cases don’t require that level of theory. Disputes do require information about the existing state of the law, and Blume assembled information that helped advance resolution of this dispute. Why isn’t that a worthwhile type of influence?

I suspect that judges and practitioners appreciate the type of survey that Blume provided; analyzing the law of 40 jurisdictions requires both time and professional judgment. Blume, of course, did more than just survey the law: he also pointed out crevices and problems in the existing law. But dismissing a citation to the survey portion of his article seems contrary to the authors’ desire to create scholarship that will be more useful.

A reworked method might well distinguish citations to descriptive/survey research from those that adopt a scholar’s new theory. Asking scholars to limit their work to the latter, however, seems counterproductive. A lot of people need to know what the law is, not just what it might be.

Judges and Scholars

One statistic in the Harrison and Mashburn article blew me away. On page 25, they note that 73 out of 198 articles from their “top 100” group of journals were cited by courts. That’s more than a third (36.9%) of the articles! I find that a phenomenally high citation rate. I know from personal experience that judges do pay attention to law review articles. When I clerked for Justice O’Connor, for example, she asked us to give her a shelf of law review articles for each of the bench memos we wrote. She didn’t want just our summaries of the articles–she wanted the articles themselves.

But I never would have guessed that the judicial citation rate was as high as 36.9% for professional articles, even for journals from the top 100 schools. At least in judicial circles, there’s a big drop-off between learning from an article and citing the article. Most judges try to keep their opinions lean, and there’s no cultural pressure to cite scholarly works.

I’m not sure how to mesh the judicial citation statistic with the tone of Harrison and Mashburn’s article. More than a third sounds like a high citation rate to me–as does the one-quarter figure for journals in the 15-100 group.

Ongoing Discussion

Harrison and Mashburn urge critical debate over the value and funding of legal scholarship, and I back them all the way on that. I wrote this post in that spirit. As I note above, I don’t think law schools need to spend as much money as they do to produce strong levels of excellent scholarship. I also applaud efforts to replace citation counting with more nuanced measures of scholarly value. But we need much stronger empirical work to examine claims like the ones advanced in this paper. Are Harrison and Mashburn right that most legal scholarship “is not regarded as useful”? I don’t know, but I was put off by strong statements with weak empirical evidence.
__________________________
* Harrison and Mashburn chose the first article from each volume. That’s a textbook example of non-random selection: the first article in a volume almost certainly differs, on average, from other articles.

I Am the Law

January 28th, 2015 / By

My colleague Kyle McEntee has a new project that you’ll want to check out. “I Am the Law” is a series of podcasts exploring a wide range of law practice jobs. These aren’t typical attorney interviews: the lawyers offer more detail about their practices than I’ve heard on other broadcasts or career panels.

The podcasts are rich in detail, but free for all listeners. Law students will find a wealth of information on practice areas, work settings, and the paths that individual attorneys followed in their careers. I hope that career services offices will recommend the podcasts to their students.

Prospective law students will also appreciate these podcasts. The discussions can take them well beyond the media stereotypes of BigLaw associates and aggressive courtroom lawyers. What’s it like to practice as a family or patent law attorney? How about real estate, immigration, nonprofit management, and transactional work? Can you believe that there is still room for a “writs attorney” in the twenty-first century? These 20-30 minute podcasts are perfect for listening while working out, riding the bus, or walking across campus.

I’m intrigued, finally, about the possibility of using these podcasts to complement doctrinal courses. I wish that when I taught first-year Torts, I could have asked my students to listen to the podcast with personal injury attorney Tricia Dennis. (Disclosure: I’m the host who interviewed Tricia, and I serve as an ongoing host for I Am the Law.) We forget how much of our law school curriculum focuses on appellate lawyering. Even when we ask students to imagine how they would apply a rule to a client’s problem, it’s hard for them to see the world through a practitioner’s eyes.

We should do much more in law school to help students understand their future roles as problem solvers for real people and organizations. But as we explore those avenues, the I Am the Law podcasts are an easy, cost-free way to give students a small taste of law practice related to the subject areas you teach.

Have a listen. I think you’ll be impressed, as I was, by the thoughtfulness of these lawyers in explaining both their current work and their personal paths in the law.

RT, MT, and HT

March 1st, 2014 / By

Student writers sometimes struggle with attribution. They know to use quotation marks, and to cite the source, when they take language directly from another author. But when should they credit that other author with an idea? Or with paraphrased language? Social media now give us a way to explain these key practices. The “RT-MT-HT” culture also illustrates the positive role that attribution plays.

Lessons from Tweeters and Bloggers

I’m still polishing my skills as a blogger, while starting to learn Twitter. I recently summoned the courage to ask a 20-something what “RT” and “MT” mean on Twitter. He kindly explained that “RT” is a “retweet.” A tweeter uses that abbreviation when passing along another user’s tweet word-for-word. “MT” is a “modified tweet.” In this case, the tweeter transmits the gist of a previous tweet but modifies some of the language.

Easy–and a direct parallel to quotation and paraphrase. From now on I’ll tell my students: If you take language directly from another source, that’s a “retweet.” You need to use quotation marks and credit the source. If you take the gist of an idea from another writer, that’s a “modified tweet.” Give credit to the original source just as you would on Twitter.

HT or H/T, meanwhile, is blogger-speak for “hat tip.” That’s how we credit another source who has provided information or inspiration for a post, although our posts may depart considerably from the original source. Writers in other media, including student papers, should learn to “HT” sources offering that type of information or inspiration.

The RT/MT/HT typology is easy for students to understand. I’m also intrigued by the fact that the attribution process in 140-character tweets parallels what we do in scholarly papers. That fact made me think more about why we attribute–and why students often resist the process.

Why Attribute?

The primary reason for attribution is to give credit where credit is due. If you have devised an innovative argument, dug up original data, or spun a creative phrase, I shouldn’t claim those words or ideas as my own. Attribution acknowledges the work of others.

That’s one reason, I think, that students resist attribution. They feel great pressure to produce original ideas and language in products like seminar papers. They worry about a poor grade if they attribute too much of the paper to others.

Student papers, of course, should reflect the student’s own language, as well as some degree of personal insight on substance. But maybe we need to be more pragmatic about just how “original” a student paper can be. In large part, we want students to manipulate the ideas of others; that’s part of the learning process. It’s also unrealistic to expect original work from students before they’ve had a chance to work with those other ideas over time.

Luckily, attribution has another role: it demonstrates the author’s familiarity with related work and her growing connections with the field. That’s one reason we see so many RT’s, MT’s, and HT’s in social media. These authors aren’t shy about building on the work of others, and they want to create networks through their connections. We sometimes forget the importance of attribution in articulating networks.

In the future, I hope to stress this positive aspect of attribution when supervising student papers. Attributions aren’t admissions against interest, conceding that an idea originated with someone else. Instead, these attributions are positive signals that a paper is part of a larger network of ideas. A student who knows how to attribute is one who has engaged with a knowledge network, staked out a spot for herself within that web, and started to cultivate her own voice. Just as tweeters develop a following by building on the ideas of others, writers can enjoy the same success.

About Law School Cafe

Cafe Manager & Co-Moderator
Deborah J. Merritt

Cafe Designer & Co-Moderator
Kyle McEntee

ABA Journal Blawg 100 Honoree

Law School Cafe is a resource for anyone interested in changes in legal education and the legal profession.

Participate

Have something you think our audience would like to hear about? Interested in writing one or more guest posts? Send an email to the cafe manager at merritt52@gmail.com. We are interested in publishing posts from practitioners, students, faculty, and industry professionals.
