
Can We Close the Racial Grade Gap?

July 25th, 2015 / By Deborah J. Merritt

In response to last week’s post about the racial gap in law school grades, several professors sent me articles discussing ways to ameliorate this gap. Here are two articles that readers may find useful:

1. Sean Darling-Hammond (a Berkeley Law graduate) and Kristen Holmquist (Director of Berkeley’s Academic Support Program), Creating Wise Classrooms to Empower Diverse Law Students.

2. Edwin S. Fruehwald, How to Help Students from Disadvantaged Backgrounds Succeed in Law School.

Another excellent choice is Claude Steele’s popular book, Whistling Vivaldi. Steele, who is currently Executive Vice Chancellor and Provost at UC Berkeley, is a leading psychology researcher. He coined the term “stereotype threat,” which names a key cognitive mechanism behind the reduced performance of minority students in higher education. In his book, Steele offers highly accessible explanations of this mechanism.

Even better, the book describes some experimentally tested approaches for reducing stereotype threat and improving performance of minority students. The psychologists have not found a magic tonic, but they are pursuing some promising ideas.

How Hard Will It Be?

Many of the ideas offered by Steele, Darling-Hammond, Holmquist, and Fruehwald rest on principles of good teaching. We should, for example, teach all of our students how to read cases and analyze statutes, rather than let them flounder to learn on their own. The analytical skills of “thinking like a lawyer” can be taught and learned; they are not simply talents that arise mysteriously in some students.

Similarly, we should cover the basics in our courses, explaining the legal system rather than brushing over those introductory chapters as “something you can read if you need to review.” The latter approach is likely to increase stereotype threat, because it suggests “you don’t belong here and you’re behind already” to students who lack that information. Besides, you’d be surprised how many law students don’t understand the concept of a grand jury–even when they take my second-year Evidence course.

Positive feedback and formative assessment are also important tools; these techniques, like the ones described above, can benefit all students. They may be especially important for minority students, however, who are likely to suffer from both social capital deficits (e.g., lack of knowledge about how to study for law school exams) and culturally imposed self-doubts. By giving students opportunities to try out their law school wings, and then offering constructive feedback, we can loosen some of the handicaps that restrain performance.

Harder Than That

These approaches, as well as others mentioned in the articles at the beginning of this post, are worth trying in the classroom. I think, though, that it will be much harder than most white professors imagine to remove the clouds of stereotype threat.

In law schools, we like to imagine that racial bias happens somewhere else. We acknowledge that it occurred in the past and that some of our students still suffer inherited deficits. We also know that it persists in communities outside our walls, where bad things of all types occur. We may also concede that bias arises in earlier stages of education, if only because many minority students attend low-performing schools.

We assume, however, that racial bias stops at our doors. Law schools, after all, are bastions of reason. Just as we refine “minds full of mush” into sharp analytic instruments, we surely wipe out any traces of bias in ourselves and our students.

This is a dangerously false belief. Race is a pervasive, deeply ingrained category in our psyches. The category may be cultural, rather than biological, but both science and everyday experience demonstrate its grip on us.

Humans, moreover, are exquisitely expressive and acutely sensitive. Microexpressions and body language convey biases we don’t consciously acknowledge. Other people receive those signals even more readily than they hear our spoken words. Reading the psychology literature on implicit bias is both humbling and eye-opening. When designing cures for the racial grade gap, we need to grapple with our own unconscious behaviors–as well as with the fact that those of us who are white rarely know what it feels like (deep down, every day) to be a person of color in America.

For Example

Here’s one example of how difficult it may be to overcome the racial gap in law school grades. One useful technique, as mentioned above, is to give students supportive feedback on their work. To help minority students overcome stereotype threat, however, the feedback has to take a particular form.

On p. 162 of his book, Steele describes an experiment in which researchers offered different forms of feedback to Stanford undergraduates who had written an essay. After receiving the feedback, conveyed through extensive written comments, students indicated how much they trusted the feedback and how motivated they were to revise their essays. Importantly, students participating in the study all believed that the reader was white; they also knew that the reader would know their race because of photographs attached to the essays. (The experimental set-up made these conditions seem natural.)

White students showed little variation in how they responded to three types of feedback: (1) “unbuffered” feedback in which they received mostly critical comments and corrections on their essays; (2) “positive” feedback in which these comments were prefaced by a paragraph of the “overall nice job” kind; and (3) “wise” feedback in which the professor noted that he had applied a particularly high standard to the essay but believed the student could meet that standard through revision. All three of these feedback forms provided similar motivation to white students.

For Black students, however, the type of feedback generated significantly different results. The unbuffered feedback produced mistrust and little motivation; the Black students believed that the reader had stereotyped them as poor performers. Feedback prefaced by a positive comment was better; Black students were more likely to trust the feedback and feel motivated to improve. The wise feedback, however, was best of all. When students felt that a professor recognized their individual talent, and was willing to help them develop that talent, they responded enthusiastically.

Some researchers refer to this as the “Stand and Deliver” phenomenon, named for the story of a high school teacher who inspired his underprivileged Mexican-American students to learn calculus. Professors who set high standards, while conveying sincere signals that minority students can meet those standards, can close enormous achievement gaps.

Sincerity

The key word in the previous paragraph is “sincere.” To overcome stereotype threat and other forces restraining our minority students, it’s not enough to offer general messages of encouragement to a class. That worked for Jaime Escalante, the teacher who taught his disadvantaged students calculus, because he was talking to students who all suffered from disadvantage. Delivering the same message to a law school class in which most students are white won’t have much impact on the minority students. The minority students will assume that the professor is speaking primarily to the white students; if anything, this will increase stereotype threat.

Nor will individualized messages work if they follow our usual “overall nice job” format. I cringed when I read those words in the study described by Steele. How often have I written those very words on a paper that needed lots of improvement?

Instead, we have to find ways to convey individually to minority students that we believe they can meet very high standards. That’s a tough challenge because many of us (especially white professors) suffer from implicit biases telling us otherwise. Even if we use the right words, will our tone of voice, microexpressions, and body language signal those unconscious doubts?

Moving Forward

Some readers may dismiss my worry about unconscious bias; they may be certain that they view students of all races equally. Others may be discouraged by my concern, feeling that it is impossible to overcome these biases. Indeed, Steele and others have documented a phenomenon in which whites avoid close interactions with minorities because they fear that they will display their unconscious bias.

A third group of readers may whisper to themselves, “she’s overlooking the elephant in the room. Because of affirmative action, minority students at most law schools are less capable than their white peers.” That potential reaction is so important that I’ll address it in a separate post.

For now, I want to offer this thought to all readers: This will be hard. If we want to close the racial grade gap and help all students excel, we need to examine both our individual and institutional practices very closely. Some of that may be painful. If we can succeed, however, we will achieve a paramount goal–making our promises of racial equality tangible. Our success will affect not only the careers of individual students, but also the quality of the legal profession and the trust that citizens place in the legal system.

I will continue blogging about this issue, offering information about other cognitive science studies in the field. For those of you who would like to look at the study involving written feedback (rather than just read the summary in Steele’s book), it is: Geoffrey L. Cohen, Claude M. Steele & Lee D. Ross, The Mentor’s Dilemma: Providing Critical Feedback Across the Racial Divide, 25 Personality & Social Psychology Bulletin 1302 (1999).

If you want to explore the field on your own, use the database PsycINFO and search for “stereotype threat” as a phrase. Most universities have subscriptions to PsycINFO; if you are a faculty member, staff member, or student, you will be able to read full-text articles for no charge.


More on Paid Clinical Externships

July 17th, 2015 / By Deborah J. Merritt

I’ve posted before about my support for a proposed change in Interpretation 305-2 of the ABA’s accreditation standards for law schools. The proposal would allow law schools to offer externship credit for paid positions. Today I sent an admittedly tardy letter to the Council, expressing the reasons for that support. For those who are interested, I reproduce the text below:

Dear Council Members:

I apologize for this late submission in response to your request for comments on the proposed change to Interpretation 305-2. I strongly support the proposed change, which would allow law schools to choose whether to offer externships with paid employers.

I have been a law professor for thirty years, teaching doctrinal, legal writing, and clinical courses. I also have a research interest in legal education and have published several articles in that field. My current interest lies in learning how lawyers develop professional expertise and in designing educational programs that will promote that development.

From my personal experience, as well as reviews of the cognitive science literature, I have no doubt that externships are a key feature of this development. Externships alone are not sufficient: In-house clinics provide pedagogic advantages (such as the opportunity for close mentoring and regular reflection) that externships are less likely to offer. A program of in-house clinics complemented by externships, simulations, and other classroom experiences, however, can offer students an excellent foundation in professional expertise.

When designing an educationally effective externship, the employer’s status (for-profit, non-profit, government) and the student’s financial arrangement (paid or unpaid) are not relevant. This is because the educational institution controls the externship requirements. If an employer offering a paid externship balks at the school’s educational requirements, the school can (and should) refuse to include that employer in its program.

The key to educationally sound externships is close control by the academic institution. I suspect that some law schools (like other academic institutions) do not devote as much attention to externships as they should. The greater the school’s collaboration with the employer, the better the externship experience will be. This problem, however, applies to both paid and unpaid externships. The educational potential of externships does not depend upon the amount of pay; it depends upon the school’s willingness to supervise the externship closely–and to reject employers that do not create suitable learning experiences.

Employers who pay law students may decide that they don’t want to participate in externship programs; they may find compliance with the program’s requirements and paperwork too onerous. This is not a reason to reject paid externships; it is an assurance that they will work properly. If an employer is willing to pay a student and comply with the pedagogic requirements of a good externship program, we should rejoice: This is an employer eager to satisfy the profession’s obligation to mentor new members.

This brings me to the major reason I support the proposal: Permitting paid externships will allow innovative partnerships between law schools and the practicing bar. As members of a profession, lawyers have a duty to educate new colleagues. Our Rules of Professional Conduct, sadly, do not explicitly recognize this duty. The obligation, however, lies at the heart of what it means to be a profession. See, e.g., Howard Gardner & Lee S. Shulman, The Professions in America Today: Crucial But Fragile, DAEDALUS, Summer 2005, at 13.

Our profession lags behind others in developing models that allow practitioners to fulfill their educational duty while still earning a profit and paying their junior members. Law school clinics and externship supervisors possess a wealth of experience that could help practitioners achieve those goals. Working together to supervise paid externships would be an excellent way to transfer these models, improve them, and serve clients.

I deliberately close by stressing clients. Many of our debates about educational practices focus on the interests of law schools, law students, and employers. For members of a profession, however, client needs are supreme. We know that an extraordinary number of ordinary Americans lack affordable legal services. We also know that businesses are increasingly turning to non-JDs to fill their legal needs as compliance officers, human resources directors, and other staff. If we want to create a world in which individuals and businesses benefit from the insights of law graduates, then we have to design educational models in which new lawyers become professionals while they and their mentors make a living.

Thank you for your attention. Please let me know if I can provide any further information.

Deborah J. Merritt
John Deaver Drinko/Baker & Hostetler Chair in Law
Moritz College of Law, The Ohio State University


The White Bias in Legal Education

July 16th, 2015 / By Deborah J. Merritt

Alexia Brunet Marks and Scott Moss have just published an article that analyzes empirical data to determine which admissions characteristics best predict law student grades. Their study, based on four recent classes matriculating at their law school (the University of Colorado) or at Case Western Reserve University’s School of Law, is careful and thoughtful. Educators will find many useful insights.

The most stunning finding, however, relates to minority students. Even after controlling for LSAT score, undergraduate GPA, college quality, college major, work experience, and other factors, minority students secured significantly lower grades than white students. The disparity appeared both in first-year GPA and in cumulative GPA. The impact, moreover, was similar for African American, Latino/a, Asian, and Native American students.

Marks and Moss caution that the number of Native American students in their database (15) was small, and that the number of Latino/a students (45) was also modest. These numbers may be too small to support definitive findings. Still, the findings for these groups were statistically significant–and consistent with those for the larger groups of African American and Asian American students.
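For readers who want to see what “controlling for” means in practice, here is a minimal sketch of the kind of regression the study describes. The data file and every column name below are hypothetical placeholders, not Marks and Moss’s actual variables; the point is simply that the coefficient on the minority indicator is estimated after the listed controls absorb their share of the variation in grades.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per matriculant. The file name and column
# names are illustrative, not the variables Marks & Moss actually used.
df = pd.read_csv("matriculants.csv")

# First-year GPA regressed on a minority indicator plus controls like those
# the study describes. A negative, statistically significant coefficient on
# `minority` would mirror the disparity the authors report.
model = smf.ols(
    "gpa_1l ~ minority + lsat + ugpa + college_quality + C(major) + work_years",
    data=df,
).fit()
print(model.summary())
```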

What accounts for this disturbing difference? Why do students of color receive lower law school grades than white students with similar backgrounds?

“Something . . . About Legal Education Itself”

Marks and Moss are unable to probe this racial disparity in depth; their paper reports a wide range of empirical findings, with limited space to discuss each one. They observe, however, that their extensive controls for student characteristics suggest that the “racial disparity reflects something not merely about the students, but about legal education itself.” What is that something?

One possibility, as Marks and Moss note, is unconscious bias in grading. Most law school courses are graded anonymously, but others are not. Legal writing, seminars, clinics, and other skills courses often require identified grading. Even in large lecture courses, some professors give credit for class participation–a practice that destroys anonymity for that portion of the grade.

No one suspects that professors discriminate overtly against minority students. Implicit bias, however, is pervasive in our society. How do we as faculty know that we value the words of a minority student as highly as those offered by white students? Unless we keep very careful records, how do we know that we remember the minority student’s comments as often as the white student’s? These are questions that all educators should be willing to ask.

Another explanation lies in the psychological phenomenon of stereotype threat. When placed in situations in which a group stereotype suggests they will perform poorly, people often do just that. Scientists have demonstrated this phenomenon with people of all races and both genders. Math performance among White men, for example, declines if they take a test after hearing information about the superiority of Asian math students.

Legal education itself, finally, may embody practices that favor white students. Are there ways in which our culture silently nurtures white students better than students of color? I’d like to think not, but it’s hard to judge a matter like that from within the culture. Cultures are like gravity; they affect us constantly but invisibly.

Other Influences

I can think of three forces originating outside of law schools that might depress the performance of minority students. First, minorities may enter law school with fewer financial resources than their white peers. Marks and Moss were unable to control for economic background, and the minority students in their study may have come from financially poorer families than the white students. Students from economically disadvantaged backgrounds may spend more time working for pay, live in less convenient housing, and lack money for goods and services that ease academic study.

Second, minority students may have less social capital than white students. Students who have family members in the legal profession, or who know other law graduates, can commiserate with them about the challenges of law school. These students can also discuss study approaches and legal principles with their outside network. Even knowing other people who have succeeded in law school may give a student confidence to succeed. Minority students, on average, may have fewer of these supports.

In fact, minority students may suffer more than white students from negative social capital. If a student is the first in the family (or neighborhood) to attend law school, the student’s social network may tacitly suggest that she is unlikely to succeed. Minority students may also be more likely than white students to face family demands on their time; families may rely economically and emotionally on a student who has achieved such unusual success.

Finally, minority students bear emotional burdens of racism that white students simply don’t encounter. Some of those burdens are personal: the white people who cross the street to avoid a minority male, the shopkeeper who seems to hover especially close. Others are societal. We were all upset by the church massacre in Charleston, South Carolina, but the tragedy was much more personal–and threatening–for African Americans. How hard it must be to continue studying the rule against perpetuities in the face of such lawlessness and racial hatred.

What Should Law Schools Do?

I don’t know the causes of the racial disparity in law student grades. One or more of the above factors may account for the problem; other influences may be at work. Whatever the causes, the data cry out for a response. Even if the discrepancy stems from the outside forces I’ve identified, law schools can’t ignore the impact of those forces. If we’re serious about racial diversity in the legal profession, we need to identify the source of the racial grade gap and remedy it.

Law schools face many challenges today, but this one is as important as any I’ve heard about. It’s time to talk about the burdens on minority students, the ways in which our culture may aggravate those burdens, and the steps we can take to open the legal profession more fully to all.


ExamSoft: New Evidence from NCBE

July 14th, 2015 / By Deborah J. Merritt

Almost a year has passed since the ill-fated July 2014 bar exam. As we approach that anniversary, the National Conference of Bar Examiners (NCBE) has offered a welcome update.

Mark Albanese, the organization’s Director of Testing and Research, recently acknowledged that: “The software used by many jurisdictions to allow their examinees to complete the written portion of the bar examination by computer experienced a glitch that could have stressed and panicked some examinees on the night before the MBE was administered.” This “glitch,” Albanese concedes, “cannot be ruled out as a contributing factor” to the decline in MBE scores and pass rates.

More important, Albanese offers compelling new evidence that ExamSoft played a major role in depressing July 2014 exam scores. He resists that conclusion, but I think the evidence speaks for itself. Let’s take a look at the new evidence, along with why this still matters.

LSAT Scores and MBE Scores

Albanese obtained the national mean LSAT score for law students who entered law school each year from 2000 through 2011. He then plotted those means against the average MBE scores earned by the same students three years later. The graph (Figure 10 on p. 43 of his article) looks like this:

[Figure 10: national mean LSAT scores for classes entering 2000–2011, plotted against the same classes’ mean MBE scores three years later]

As the black dots show, there is a strong linear relationship between scores on the LSAT and those for the MBE. Entering law school classes with high LSAT scores produce high MBE scores after graduation. For the classes that began law school from 2000 through 2010, the correlation is 0.89–a very high value.

Now look at the triangle toward the lower right-hand side of the graph. That symbol represents the relationship between mean LSAT score and mean MBE score for the class that entered law school in fall 2011 and took the bar exam in July 2014. As Albanese admits, this point falls well off the line: “it shows a mean MBE score that is much lower than that of other points with similar mean LSAT scores.”

Based on the historical relationship between LSAT and MBE scores, Albanese calculates that the Class of 2014 should have achieved a mean MBE score of 144.0. Instead, the mean was just 141.4, producing elevated bar failure rates across the country. As Albanese acknowledges, there was a clear “disruption in the relationship between the matriculant LSAT scores and MBE scores with the July 2014 examination.”
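For those who want to replicate the logic of Albanese’s figure, here is a minimal sketch. The per-class means below are invented for illustration (his article reports the actual values); the method is simply to fit a least-squares line to the 2000–2010 cohorts and compare the 2011 cohort’s actual MBE mean with the value the line predicts.

```python
import numpy as np

# Invented per-cohort means, for illustration only; Albanese's Figure 10
# reports the actual values for the eleven classes entering 2000-2010.
lsat_means = np.array([155.3, 155.6, 155.9, 155.7, 155.8,
                       155.9, 156.1, 156.2, 156.0, 155.8, 155.6])
mbe_means = np.array([143.1, 143.5, 144.0, 143.7, 143.9,
                      144.1, 144.4, 144.6, 144.3, 143.9, 143.6])

# Correlation between entering-class LSAT means and the same classes'
# MBE means three years later (Albanese reports r = 0.89).
r = np.corrcoef(lsat_means, mbe_means)[0, 1]

# Least-squares line: predicted MBE mean as a function of LSAT mean.
slope, intercept = np.polyfit(lsat_means, mbe_means, 1)

lsat_2011 = 155.7  # hypothetical mean for the class entering in fall 2011
predicted = slope * lsat_2011 + intercept  # Albanese computes 144.0
actual = 141.4                             # reported July 2014 MBE mean
print(f"r = {r:.2f}, predicted {predicted:.1f}, actual {actual:.1f}")
```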

Professors Jerry Organ and Derek Muller made similar points last fall, but they were handicapped by their lack of access to LSAT means. The ABA releases only median scores, and those numbers are harder to compile into the type of persuasive graph that Albanese produced. Organ and Muller made an excellent case with their data–one that NCBE should have heeded–but they couldn’t be as precise as Albanese.

But now we have NCBE’s Director of Testing and Research admitting that “something happened” with the Class of 2014 “that disrupted the previous relationship between MBE scores and LSAT scores.” What could it have been?

Apprehending a Suspect

Albanese suggests a single culprit for the significant disruption shown in his graph: He states that the Law School Admission Council (LSAC) changed the manner in which it reported scores for students who take the LSAT more than once. Starting with the class that entered in fall 2011, Albanese writes, LSAC used the high score for each of those test takers; before then, it used the average scores.

At first blush, this seems like a possible explanation. On average, students who retake the LSAT improve their scores. Counting only high scores for these test takers, therefore, would increase the mean score for the entering class. National averages calculated using high scores for repeaters aren’t directly comparable to those computed with average scores.
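A toy example (all scores invented) shows why the reporting change would matter: counting only a retaker’s high score pushes the class mean up relative to the old averaging method.

```python
# Toy illustration of the reporting change Albanese describes; scores invented.
retaker_sittings = [150, 156]    # one student who took the LSAT twice
single_takers = [152, 158, 149]  # three classmates who tested once
n_students = len(single_takers) + 1

# Old method: the retaker counts at the average of her sittings (153).
old_mean = (sum(single_takers) + sum(retaker_sittings) / 2) / n_students
# New method: the retaker counts at her high score (156).
new_mean = (sum(single_takers) + max(retaker_sittings)) / n_students

print(old_mean, new_mean)  # 153.0 153.75
```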

But there is a problem with Albanese’s rationale: He is wrong about when LSAC switched its method for calculating national means. That occurred for the class that matriculated in fall 2010, not the one that entered in fall 2011. LSAC’s National Decision Profiles, which report these national means, state that quite clearly.

Albanese’s suspect, in other words, has an alibi. The change in LSAT reporting methods occurred a year earlier; it doesn’t explain the aberrational results on the July 2014 MBE. If we accept LSAT scores as a measure of ability, as NCBE has urged throughout this discussion, then the Class of 2014 should have received higher scores on the MBE. Why was their mean score so much lower than their LSAT scores predicted?

NCBE has vigorously asserted that the test was not to blame: the organization prepared, vetted, and scored the July 2014 MBE using the same professional methods employed in the past. I believe that claim. Neither the test content nor the scoring algorithms are at fault. But we can’t ignore the evidence of Albanese’s graph: something untoward happened to the Class of 2014’s MBE scores.

The Villain

The villain almost certainly is the suspect who appeared at the very beginning of the story: ExamSoft. Anyone who has sat through the bar exam, who has talked to test-takers during those days, or who has watched students struggle to upload a single law school exam knows this.

I still remember the stress of the bar exam, although 35 years have passed. I’m pretty good at legal writing and analysis, but the exam wore me out. Few other experiences have taxed me as much mentally and physically as the bar exam.

For a majority of July 2014 test-takers, the ExamSoft “glitch” imposed hours of stress and sleeplessness in the middle of an already exhausting process. The disruption, moreover, occurred during the one period when examinees could recoup their energy and review material for the next day’s exam. It’s hard for me to imagine that ExamSoft’s failure didn’t reduce test-taker performance.

The numbers back up that claim. As I showed in a previous post, bar passage rates dropped significantly more in states affected directly by the software crash than in other states. The difference was large enough that the probability of it occurring by chance is less than 0.001. If we combine that fact with Albanese’s graph, what more evidence do we need?
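For readers curious about that probability, a standard way to test a difference like this is a two-proportion z-test. The pass and taker counts below are placeholders, not the actual state-level figures from my earlier post; that post also compared year-over-year drops rather than raw pass rates, but the logic of the significance test is the same.

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder counts; the actual state-level figures appear in the earlier post.
passes = [6100, 7400]  # [ExamSoft states, other states]
takers = [8000, 9000]

# Two-proportion z-test: is the gap in pass rates larger than chance explains?
stat, pvalue = proportions_ztest(passes, takers)
print(f"z = {stat:.2f}, p = {pvalue:.4g}")
```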

Aiding and Abetting

ExamSoft was the original culprit, but NCBE aided and abetted the harm. The testing literature is clear that exams can be equated only if both the content and the test conditions are comparable. The testing conditions on July 29-30, 2014, were not the same as in previous years. The test-takers were stressed, overtired, and under-prepared because of ExamSoft’s disruption of the testing procedure.

NCBE was not responsible for the disruption, but it should have refrained from equating results produced under the 2014 conditions with those from previous years. Instead, it should have flagged this issue for state bar examiners and consulted with them about how to use scores that significantly understated the ability of test takers. The information was especially important for states that had not used ExamSoft, but whose examinees suffered repercussions through NCBE’s scaling process.

Given the strong relationship between LSAT scores and MBE performance, NCBE might even have used that correlation to generate a second set of scaled scores correcting for the ExamSoft disruption. States could have chosen which set of scores to use–or could have decided to make a one-time adjustment in the cut score. However states decided to respond, they would have understood the likely effect of the ExamSoft crisis on their examinees.
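As a crude sketch of what such a correction might have looked like, assuming a simple additive shift (real equating is far more sophisticated): move each scaled score by the gap between the LSAT-predicted cohort mean (144.0) and the observed mean (141.4).

```python
# Crude illustration of a one-time adjustment; actual equating and scaling
# involve far more than an additive shift.
PREDICTED_MEAN = 144.0  # cohort mean implied by the LSAT-MBE relationship
OBSERVED_MEAN = 141.4   # reported July 2014 MBE mean

def adjusted_mbe(scaled_score: float) -> float:
    """Add back the cohort-level shortfall attributed to the disruption."""
    return scaled_score + (PREDICTED_MEAN - OBSERVED_MEAN)

print(adjusted_mbe(135.0))  # 137.6
```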

Instead, we have endured a year of obfuscation–and of blaming the Class of 2014 for being “less able” than previous classes. Albanese’s graph shows conclusively that diminished ability doesn’t explain the abnormal dip in July 2014 MBE scores. Our best predictor of that ability, scores earned on the LSAT, refutes that claim.

Lessons for the Future

It’s time to put the ExamSoft debacle to rest–although I hope we can do so with an even more candid acknowledgement from NCBE that the software crash was the primary culprit in this story. The test-takers deserve that affirmation.

At the same time, we need to reflect on what we can learn from this experience. In particular, why didn’t NCBE take the ExamSoft crash more seriously? Why didn’t NCBE and state bar examiners proactively address the impact of a serious flaw in exam administration? The equating and scaling process is designed to assure that exam takers do not suffer by taking one exam administration rather than another. The July 2014 examinees clearly did suffer by taking the exam during the ExamSoft disruption. Why didn’t NCBE and the bar examiners work to address that imbalance, rather than extend it?

I see three reasons. First, NCBE staff seem removed from the experience of bar exam takers. The psychometricians design and assess tests, but they are not lawyers. The president is a lawyer, but she was admitted through Wisconsin’s diploma privilege. NCBE staff may have tested bar questions and formats, but they lack firsthand knowledge of the test-taking experience. This may have affected their ability to grasp the impact of ExamSoft’s disruption.

Second, NCBE and law schools have competing interests. Law schools have economic and reputational interests in seeing their graduates pass the bar; NCBE has economic and reputational interests in disclaiming any disruption in the testing process. The bar examiners who work with NCBE have their own economic and reputational interests: reducing competition from new lawyers. Self-interest is nothing to be ashamed of in a market economy; nor is self-interest incompatible with working for the public good.

The problem with the bar exam, however, is that these parties (NCBE and bar examiners on one side, law schools on the other) tend to talk past one another. Rather than gain insights from each other, the parties often communicate after decisions are made. Each seems to believe that it protects the public interest, while the other is driven purely by self interest.

This stand-off hurts law school graduates, who get lost in the middle. NCBE and law schools need to start listening to one another; both sides have valid points to make. The ExamSoft crisis should have prompted immediate conversations between the groups. Law schools knew how the crash had affected their examinees; the cries of distress were loud and clear. NCBE knew, as Albanese’s graph shows, that MBE scores were far below outcomes predicted by the class’s LSAT scores. Discussion might have generated wisdom.

Finally, the ExamSoft debacle demonstrates that we need better coordination–and accountability–in the administration and scoring of bar exams. When law schools questioned the July 2014 results, NCBE’s president disclaimed any responsibility for exam administration. That’s technically true, but exam administration affects equating and scaling. Bar examiners, meanwhile, accepted NCBE’s results without question; they assumed that NCBE had taken all proper factors (including any effect from a flawed administration) into account.

We can’t rewind administration of the July 2014 bar exam; nor can we redo the scoring. But we can create a better system for exam administration going forward, one that includes more input from law schools (who have valid perspectives that NCBE and state bar examiners lack) as well as more coordination between NCBE and bar examiners on administration issues.


About Law School Cafe

Cafe Manager & Co-Moderator
Deborah J. Merritt

Cafe Designer & Co-Moderator
Kyle McEntee

Law School Cafe, an ABA Journal Blawg 100 honoree, is a resource for anyone interested in changes in legal education and the legal profession.


Participate

Have something you think our audience would like to hear about? Interested in writing one or more guest posts? Send an email to the cafe manager at merritt52@gmail.com. We are interested in publishing posts from practitioners, students, faculty, and industry professionals.
