
The ABA Council and LSAT Scores

January 10th, 2016

As Law School Transparency documented last fall, LSAT scores have plunged at numerous law schools. The low scores, combined with previous research, suggest that some schools are admitting students at high risk of failing the bar exam. If true, the schools are violating ABA Standard 501(b).

Two leaders of the ABA’s Section of Legal Education and Admissions to the Bar recently offered thoughts on this issue. Justice Rebecca White Berch, Chair of the Section’s Council, and Barry Currier, Managing Director of Accreditation and Legal Education, each addressed the topic in the Section’s winter newsletter.

Taking Accreditation Seriously

Berch and Currier both affirm the importance of enforcing the Council’s standards; they also indicate that the Council is already considering school admissions practices. Justice Berch reminds readers that the Council enforces its standards largely through review of each school’s responses to the annual questionnaire. This year, more than half of approved schools are replying to inquiries based on their questionnaire responses–although Berch does not indicate how many of those inquiries relate to admissions standards.

Currier, similarly, endorses the Council’s process and promises that: “If the evidence shows that a law school’s admissions process is being driven by the need to fill seats and generate revenue without taking appropriate steps to determine that students who enroll have a reasonable chance to succeed in school and on the bar examination, as ABA Standard 501(b) requires, then that school should be, and I am confident will be, held accountable.”

It is good news that the Council is investigating this troubling issue. If we want to maintain legal education's status, we have to be serious about our accreditation standards. But two points in the columns by Justice Berch and Managing Director Currier trouble me.

The Significance of LSAT Scores

Both Justice Berch and Currier stress that LSAT scores reveal only a small part of an individual’s potential for law study or practice. As Justice Berch notes, “an LSAT score does not purport to tell the whole story of a person.” This is undoubtedly true. Many law schools place far too much emphasis on LSAT scores when admitting students and awarding financial aid. Applicants’ work history, writing ability, prior educational achievements, and leadership experience should play a far greater role in admissions and scholarships. Rather than targeting high LSAT scores for admission and scholarships, schools should be more aggressive in rewarding other indicia of promise.

At the other end of the scale, I don’t think anyone would endorse an absolute LSAT threshold that every law school applicant must meet for admission–although we do, of course, require all applicants to take the test. There are too many variables that affect an admissions decision: a particular applicant with a very low LSAT may have other characteristics signaling a special potential for success.

LSAT scores, however, possess a different meaning when reported for a group, like a law school’s entering class. A law school may find one or two applicants with very low LSAT scores who display other indicia of success. That type of individualized decisionmaking, however, should have little impact on a school’s median or 25th percentile scores.

When a law school’s 25th percentile score plunges 10 points to reach a low of 138, that drop belies the type of individualized decisionmaking that responsible educators pursue. This is particularly true when the drop occurs during a period of diminished applications and financial stress.

The Charlotte School of Law displayed just that decline in entering credentials between 2010 and 2014. Nor was Charlotte alone. The Ave Maria School of Law dropped its 25th percentile LSAT score from 147 to 139. Arizona Summit fell from 148 to 140. You can see these and other drops in the detailed database compiled by Law School Transparency here.

We shouldn’t confuse the meaning of LSAT scores for an individual with the significance of those scores for a group. As I have suggested before, the score drops at some law schools are red flags that demand immediate attention.

Limited Resources

Justice Berch reminds readers that the Council’s accreditation process is “volunteer-driven” and that those volunteers already “give thousands of hours of their time each year.” More, she suggests, “should not be asked of them.” Even making the best use of those volunteers’ hours, she warns, careful review of the LSAT issue will take time.

This caution sounds the wrong tone. As professionals, we owe duties to both our students and their future clients. If law schools are violating the professional commitments they made through the accreditation process, then our accrediting body should act promptly to investigate, remedy, and–if necessary–sanction the violations.

Of course schools deserve “an opportunity to justify the admissions choices they have made before sanctions may be imposed.” But students also deserve fair treatment. If schools are admitting students who cannot pass the bar exam, that conduct should stop now–not a year or two from now, after more students have been placed into the same precarious position.

The LSAT drops cited above occurred between 2010 and 2014. More than a year has passed since schools reported those 2014 LSAT scores to the ABA. Isn’t that enough time to investigate schools’ admissions processes? What has the Council done during the last year, while more students were admitted with weak scores–and more graduates failed the bar?

Accreditation signals to students that schools and their accrediting body are watching out for their interests. If schools need to contribute more money or volunteer time to provide prompt review of red flags like these LSAT scores, we should ante up. Maintaining an accreditation process that fails to act promptly smacks of protectionism rather than professional responsibility.


ATL Rankings: The Bad and the Maybe

June 5th, 2015

I’ve already discussed the positive aspects of Above the Law (ATL)’s law school rankings. Here I address the poorly constructed parts of the ranking scheme. Once again, I use ATL to provoke further thought about all law school rankings.

Quality Jobs Score

ATL complements its overall employment score, which is one of the scheme’s positive features, with a “quality jobs score.” The latter counts only “placement with the country’s largest and best-paying law firms (using the National Law Journal’s “NLJ 250”) and the percentage of graduates embarking on federal judicial clerkships.”

I agree with ATL’s decision to give extra weight to some jobs; even among jobs requiring bar admission, some are more rewarding to graduates than others. This category, however, is unnecessarily narrow–and too slanted towards private practice.

Using ATL’s own justification for the category’s definition (counting careers that best support repayment of law school debt), it would be easy to make this a more useful category. Government and public interest jobs, which grant full loan forgiveness after ten years, also enable repayment of student loans. Given the short tenure of many BigLaw associates, the government/public interest route may be more reliable than the BigLaw one.

I would expand this category to include all government and public interest jobs that qualify graduates for loan forgiveness at the ten-year mark, excluding only those that are school financed. Although ATL properly excludes JD-advantage jobs from its general employment score, I would include them here–as long as the jobs qualify for public-service loan forgiveness. A government job requiring bar admission, in other words, would count toward both employment measures, while a JD-advantage government position would count just once.

Making this change would reduce this factor’s bias toward private practice, while incorporating information that matters to a wider range of prospective students.

SCOTUS Clerks and Federal Judges

Many observers have criticized this component, which counts “a school’s graduates as a percentage of (1) all U.S. Supreme Court clerks (since 2010) and (2) currently sitting Article III judges.” For both of these, ATL adjusts the score for the size of the school. What’s up with that?

ATL defends the criterion as useful for students “who want to be [federal] judges and academics.” But that’s just silly. These jobs constitute such a small slice of the job market that they shouldn’t appear in a ranking designed to be useful for a large group of users. If ATL really embraces the latter goal, there’s an appropriate way to modify this factor.

First, get rid of the SCOTUS clerk count. That specialized information is available elsewhere (including on ATL) for prospective students who think that's relevant to their choice of law school. Second, expand the count of sitting Article III judges to include counts of (a) current members of Congress; (b) the President and Cabinet members; and (c) CEOs and General Counsel at all Fortune 500 companies. Finally, don't adjust the counts for school size.

These changes would produce a measure of national influence in four key areas: the judiciary, executive branch, legislature, and corporate world. Only a small percentage of graduates will ever hold these very prestigious jobs, but the jobholders improve their school’s standing and influence. That’s why I wouldn’t adjust the counts for school size. If you’re measuring the power that a school exerts through alumni in these positions, the absolute number matters more than the percentage.

Leaders in private law firms, state governments, and public interest organizations also enhance a school’s alumni network–and one could imagine adding those to this component. Those organizations, however, already receive recognition in the two factors that measure immediate graduate employment. It seems more important to add legislative, executive, and corporate influence to the rankings. As a first step, therefore, I would try to modify this component as I’ve outlined here.

Component Sorting

A major flaw in ATL’s scheme is that it doesn’t allow users to sort schools by component scores. The editors have published the top five schools in most categories, but that falls far short of full sorting. Focused-purpose rankings are most useful if readers can sort schools based on each component. One reader may value alumni ratings above all other factors, while another reader cares about quality jobs. Adding a full-sort feature to the ranking would be an important step.

Why Rank?

Like many educators, I dislike rankings. The negative incentives created by US News far outweigh the limited value it offers prospective students. Rankings can also mislead students into making decisions based solely on those schemes, rather than using rank as one tool in a broader decisionmaking process. Even if modified in the ways I suggest here, both of these drawbacks may affect the ATL rankings.

As Law School Transparency has shown, it is possible to give prospective students useful information about law schools without adding the baggage of rankings. Above the Law could perform a greater public service by publishing its data as an information set rather than as an integrated ranking.

But rankings draw attention and generate revenue; they are unlikely to disappear. If we’re going to have rankings, then it’s good to have more than one. Comparing schemes may help us see the flaws in all ranking systems; perhaps eventually we’ll reject rankings in favor of other ways to organize information.


ATL Rankings: The Good, the Bad, and the Maybe

June 4th, 2015

In my last post I used Above the Law (ATL)’s law school rankings to explore three types of ranking schemes. Now it’s time to assess the good, bad, and maybe of ATL’s system. In this column I explore the good; posts on the bad and maybe will follow shortly. ATL’s metrics are worth considering both to assess that system and to reflect on all ranking schemes.

Employment Score

ATL’s ranking gives substantial weight to employment outcomes, a factor that clearly matters to students. I agree with ATL that “full-time, long-term jobs requiring bar passage (excluding solos and school-funded positions)” offer the best measure for an employment score. Surveys show that these are the jobs that most graduates want immediately after law school. Equally important, these are the jobs that allow law schools to charge a tuition premium for entry to a restricted profession. Since schools reap the premium, they should be measured on their ability to deliver the outcome.

For a focused-purpose ranking, finally, simple metrics make the most sense. Prospective law students who don’t want to practice can ignore or adjust the ATL rankings (which assume practice as a desired outcome). A student admitted to Northwestern’s JD-MBA program, for example, will care more about that program’s attributes than about the ATL rank. For most students, ATL’s employment score offers a useful starting point.

Alumni Rating

This metric, like the previous one, gives useful information to prospective students. If alumni like an institution’s program, culture, and outcomes, prospective students may feel the same. Happy alumni also provide stronger networks for career support. The alumni rating, finally, may provide a bulwark against schools gaming other parts of the scheme. If a school mischaracterizes jobs, for example, alumni may respond negatively.

It’s notable that ATL surveys alumni, while US News derives reputation scores from a general pool of academics, lawyers, and judges. The former offers particularly useful information to prospective students, while the latter focuses more directly on prestige.

Debt Per Job

This is a nice way of incorporating two elements (cost and employment) that matter to students. The measure may also suggest how closely the institution focuses on student welfare. A school that keeps student costs low, while providing good outcomes, is one that probably cares about students. Even a wealthy student might prefer that institution over one with a worse ratio of debt to jobs.

The best part of this metric is that it gives law schools an incentive to award need-based scholarships. Sure, schools could try to improve this measure by admitting lots of wealthy students–but there just aren’t that many of those students to go around. Most schools have already invested in improving employment outcomes, so the best way to further improve the “debt per job” measure is for the school to award scholarships to students who would otherwise borrow the most.

Over the last twenty years, US News has pushed schools from need-based scholarships to LSAT-based ones. What a refreshing change if a ranking scheme led us back to need-based aid.

Education Cost

Cost is another key factor for 0Ls considering law schools and, under the current state of the market, I support ATL’s decision to use list-price tuition for this measure. Many students negotiate discounts from list price, but schools don’t publish their net tuition levels. The whole negotiation system, meanwhile, is repugnant. Why are schools forcing young adults to test their bargaining skills in a high-stakes negotiation that will affect their financial status for up to a quarter century?

We know that in other contexts, race and gender affect negotiation outcomes. (These are just two of many possible citations.) How sure are we that these factors don’t affect negotiations for tuition discounts? Most of the biases that taint negotiations are unconscious rather than conscious. And even if law school administrators act with scrupulous fairness, these biases affect the students seeking aid: Race and gender influence a student’s willingness to ask for more.

In addition to these biases, it seems likely that students from disadvantaged backgrounds know less about tuition negotiation than students who have well-educated helicopter parents. It's no answer to say that economically disadvantaged students get some tuition discounts; the question is whether they would have gotten bigger discounts if they were armed with more information and better negotiating skills.

Negotiation over tuition is one of the most unsavory parts of our current academic world. I favor any component of a ranking scheme that pushes schools away from that practice. If schools don’t want to be ranked based on an inflated list-price tuition, then they can lower that tuition (and stop negotiating) or publish their average net tuition. My co-moderator made the same point last year, and it’s just as valid today.

The Bad and Maybe

Those are four strengths of the ATL rankings. Next up, the weaknesses.


More on Rankings: Three Purposes

June 1st, 2015

I want to continue my discussion of the law school rankings published by Above the Law (ATL). But before I do, let’s think more generally about the purpose of law school rankings. Who uses these rankings, and for what reason? Rankings may serve one or more of three purposes:

1. Focused-Purpose Rankings

Rankings in this first category help users make a specific decision. A government agency, for example, might rate academic institutions based on their research productivity; this ranking could then guide the award of research dollars. A private foundation aiming to reward innovative teaching might develop a ranking scheme more focused on teaching prowess.

US News and Above the Law advertise their rankings as focused-purpose ones: Both are designed to help prospective students choose a law school. One way to assess these rankings, accordingly, is to consider how well they perform this function.

Note that focused-purpose rankings can be simple or complex. Some students might choose a law school based solely on the percentage of graduates who secure jobs with the largest law firms. For those students, NLJ’s annual list of go-to law schools is the only ranking they need.

Most prospective students, however, consider a wider range of factors when choosing a law school. The same is true of people who use other types of focused-purpose rankings. The key function of these rankings is that they combine relevant information in a way that helps a user sort that information. Without assistance, a user could focus on only a few bits of information at a time. Focused-purpose rankings overcome that limit by aggregating some of the relevant data.

This doesn’t mean that users should (or will) make decisions based solely on a ranking scheme. Although a good scheme combines lots of relevant data, the scheme is unlikely to align precisely with each user’s preferences. Most people who look at rankings use them as a starting point. The individual adds relevant information omitted by the ranking scheme, or adjusts the weight given to particular components, before making a final decision.

A good ranking scheme in the “focused purpose” category supports this process through four features. The scheme (a) incorporates factors that matter to most users; (b) omits other, irrelevant data; (c) uses unambiguous metrics as components; and (d) allows users to disaggregate the components.
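To illustrate what aggregation and disaggregation look like in practice, here is a toy sketch with made-up schools, component scores, and weights; it is not US News's or ATL's actual methodology, only the mechanism: a user can accept the published weights, re-weight the same components, or sort on a single component.

```python
# Hypothetical component scores (0-100, higher is better; the "cost" score is
# higher for cheaper schools). None of these numbers reflect any real scheme.
schools = {
    "School A": {"employment": 88, "cost": 60, "alumni": 75},
    "School B": {"employment": 72, "cost": 85, "alumni": 80},
    "School C": {"employment": 98, "cost": 45, "alumni": 80},
}

def composite(scores, weights):
    """Aggregate component scores into one number using user-chosen weights."""
    return sum(scores[c] * w for c, w in weights.items())

def rank(weights):
    """Order schools by composite score, highest first."""
    return sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)

# The published ranking bakes in one set of weights...
print(rank({"employment": 0.5, "cost": 0.3, "alumni": 0.2}))

# ...but a user who cares mostly about cost can re-weight the same components
# (the order of schools can flip),
print(rank({"employment": 0.2, "cost": 0.6, "alumni": 0.2}))

# or disaggregate entirely and sort on a single component.
print(sorted(schools, key=lambda s: schools[s]["cost"], reverse=True))
```

The same underlying data supports all three uses, which is why feature (d), letting users pull the components apart, matters so much.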

2. Prestige Rankings

Some rankings explicitly measure prestige. Others implicitly offer that information, although they claim another purpose. In either case, the need for “prestige” rankings is somewhat curious. Prestige does not inhere in institutions; it stems from the esteem that others confer upon the institution. Why do we need a ranking system to tell us what we already believe?

One reason is that our nation is very large. People from the West Coast may not know the prestige accorded Midwestern institutions. Newcomers to a profession may also seek information about institutional prestige. Some college students know very little about the prestige of different law schools.

For reasons like these, prestige rankings persist. It is important to recognize, however, that prestige rankings differ from the focused-purpose schemes discussed above. Prestige often relates to one of those focused purposes: A law school’s prestige, for example, almost certainly affects the employability of its graduates. A ranking of schools based on prestige, however, is different than a ranking that incorporates factors that prospective students find important in selecting a school.

Prestige rankings are more nebulous than focused-purpose ones. The ranking may depend simply on a survey of the relevant audience. Alternatively, the scheme may incorporate factors that traditionally reflect an institution’s prestige. For academic institutions, these include the selectivity of its admissions, the qualifications of its entering class, and the institution’s wealth.

3. Competition Rankings

Competition rankings have a single purpose: to confer honor. A competition ranking awards gold, silver, bronze, and other medals according to specific criteria. These rankings differ from the previous categories because the honor of winning is the whole point; the ranking is not meant to guide any further decision.

Many athletic honors fall into this category. We honor Olympic gold medalists because they were the best at their event on a particular day, even if their prowess diminishes thereafter.

Competition rankings are most common in athletics and the arts, although they occasionally occur in academia. More commonly, as I discuss below, people misinterpret focused-purpose rankings as if they were competition ones.

US News

As noted above, US News promotes its law school ranking for a focused purpose: to help prospective students choose among law schools. Over time, however, the ranking has acquired aspects of both a prestige scheme and a competition one. These characteristics diminish the rankings’ use for potential students; they also contribute to much of the mischief surrounding the rankings.

Many professors, academic administrators, and alumni view their school’s US News rank as a general measure of prestige, not simply as a tool for prospective students to use when comparing law schools. Some of the US News metrics contribute to this perception. Academic reputation, for example, conveys relatively little useful information to potential students. It is much more relevant to measuring an institution’s overall prestige.

Even more troublesome, some of these audiences have started to treat the US News rankings as a competition score. Like Olympic athletes, schools claim honor simply for achieving a particular rank. Breaking into the top fourteen, top twenty, or top fifty becomes cause for excessive celebration.

If the US News rankings existed simply to aid students in selecting a law school, they would cause much less grief. Imagine, for example, if deans could reassure anxious alumni by saying something like: “Look, these rankings are just a tool for students to use when comparing law schools. And they’re not the only information that these prospective students use. We supplement the rankings by pointing to special features of our program that the rankings don’t capture. We have plenty of students who choose our school over ones ranked somewhat above us because they value X, Y, and Z.”

Deans can’t offer that particular reassurance, and listeners won’t accept it, because we have all given the US News rankings the status of prestige or competition scores. It may not matter much if a school is number 40 or 45 on a yardstick that 0Ls use as one reference in choosing a law school. Losing 5 prestige points, on the other hand, ruins everyone’s day.

Above the Law

I’ll offer a more detailed analysis of the ATL rankings in a future post. But to give you a preview: One advantage of these rankings over US News is that they focus very closely on the particular purpose of aiding prospective students. That focus makes the rankings more useful for their intended audience; it also avoids the prestige and competition auras that permeate the US News product.


ATL Law School Rankings

May 29th, 2015

Above the Law (ATL) has released the third edition of its law school rankings. Writing about rankings is a little like talking about intestinal complaints: We’d rather they didn’t exist, and it’s best not to mention such things in polite company. Rankings, however, are here to stay–and we already devote an inordinate amount of time to talking about them. In that context, there are several points to make about Above the Law‘s ranking scheme.

In this post, I address an initial question: Who cares about the ATL rankings? Will anyone read them or follow them? In my next post, I’ll explore the metrics that ATL uses and the incentives they create. In a final post, I’ll make some suggestions to improve ATL’s rankings.

So who cares? And who doesn’t?

Prospective Students

I think potential law students are already paying attention to the ATL rankings. Top-Law-Schools.com, a source used by many 0Ls, displays the Above the Law rankings alongside the US News (USN) list. Prospective students refer to both ranking systems in the site’s discussion forum. If prospective students don’t already know about ATL and its rankings, they will soon.

If I were a prospective student, I would pay at least as much attention to the ATL rankings as to the USN ones. Above the Law, after all, incorporates measures that affect students deeply (cost, job outcomes, and alumni satisfaction). US News includes factors that seem more esoteric to a potential student.

Also, let’s face it: Above the Law is much more fun to read than US News. Does anyone read US News for any purpose other than rankings? 0Ls read Above the Law for gossip about law schools and the profession. If you like a source and read it regularly, you’re likely to pay attention to its recommendations–including recommendations in the form of rankings.

Alumni

Deans report that their alumni care deeply about the school’s US News rank. Changes in that number may affect the value of a graduate’s degree. School rank also creates bragging rights among other lawyers. We don’t have football or basketball teams at law schools, so what other scores can we brag about?

I predict that alumni will start to pay a lot of attention to Above the Law‘s ranking scheme. Sure, ATL is the site we all love to hate: Alumni, like legal educators, cringe at the prospect of reading about their mistakes on the ever-vigilant ATL. But the important thing is that they do read the site–a lot. They laugh at the foibles of others, nod in agreement with some reports, and keep coming back for more. This builds a lot of good will for Above the Law.

Equally important, whenever Above the Law mentions a law school in a story, it appends information about the school’s ATL rank. For an example, see this recent story about Harvard Law School. (I purposely picked a positive story, so don’t get too excited about following the link.)

Whenever alumni read about their law school–or any law school–in Above the Law, they will see information about ATL’s ranking. This is true even for the 150 schools that are “not ranked” by Above the Law. For them, a box appears reporting that fact along with information about student credentials and graduate employment.

This is an ingenious (and perfectly appropriate) marketing scheme. Alumni who read Above the Law will constantly see references to ATL’s ranking scheme. Many will care about their school’s rank and will pester the school’s dean for improvement. At first, they may not want to admit publicly that they care about an ATL ranking, but that reservation will quickly disappear. US News is a failed magazine; Above the Law is a very successful website. Which one do you think will win in the end?

US News, moreover, has no way to combat this marketing strategy. We’ve already established that no one reads US News for any reason other than the rankings. So US News has no way to keep its rankings fresh in the public’s mind. Readers return to Above the Law week after week.

Law Professors

Law professors will not welcome the ATL rankings. We don’t like any rankings, because they remind us that we’re no longer first in the class. And we certainly don’t like Above the Law, which chronicles our peccadilloes.

Worst of all, ATL rankings don’t fit with our academic culture. We like to think of ourselves as serious-minded people, pursuing serious matters with great seriousness. How could we respect rankings published by a site that makes fun of us and all of our seriousness? Please, be serious.

Except…professors spent a long time ignoring the US News rankings. We finally had to pay attention when everyone else started putting so much weight on them. Law faculty are not leaders when it comes to rankings; we are followers. If students and alumni care about ATL’s rankings, we eventually will pay attention.

University Administrators

People outside academia may not realize how much credence university presidents, provosts, and trustees give the US News rankings. The Board of Trustees at my university has a scorecard for academic initiatives that includes these two factors: (1) rank among public colleges, as determined by USN, and (2) number of graduate or professional programs in the USN top 25. On the first, we aim to improve our rank from 18 to 10. On the second, we hope to increase the number of highly ranked departments from 49 to 65.

These rank-related goals are no longer implicit; they are quite explicit at universities. And, although academic leaders once eschewed US News as a ranking source, they now embrace the system.

Presidents and provosts are likely to laugh themselves silly if law schools clamor to be judged by Above the Law rather than US News. At least for the immediate future, this will restrain ATL’s power within academia.

On the other hand, I remember a time (in the late 1990s) when presidents and provosts laughed at law schools for attempting to rely upon their US News rank. “Real” academic departments had fancier ranking schemes, like those developed by the National Research Council. But US News was the kudzu of academic rankings: It took over faster than anyone anticipated.

Who’s to say that the Above the Law rankings won’t have their day, at least within legal education?

Meanwhile

Even if US News retains its primary hold on academic rankings, Above the Law may have some immediate impact within law schools. High US News rank, after all, depends upon enrolling talented students. If prospective students pay attention to Above the Law–as I predict they will–then law schools will have to do the same. To maintain class size and student quality, we need to know what students want. For that, Above the Law offers essential information.


On the Bar Exam, My Graduates Are Your Graduates

May 12th, 2015

It’s no secret that the qualifications of law students have declined since 2010. As applications fell, schools started dipping further into their applicant pools. LSAT scores offer one measure of this trend. Jerry Organ has summarized changes in those scores for the entering classes of 2010 through 2014. Based on Organ’s data, average LSAT scores for accredited law schools fell:

* 2.3 points at the 75th percentile
* 2.7 points at the median
* 3.4 points at the 25th percentile

Among other problems, this trend raises significant concerns about bar passage rates. Indeed, the President of the National Conference of Bar Examiners (NCBE) blamed the July 2014 drop in MBE scores on the fact that the Class of 2014 (which entered law school in 2011) was “less able” than earlier classes. I have suggested that the ExamSoft debacle contributed substantially to the score decline, but here I focus on the future. What will the drop in student quality mean for the bar exam?

Falling Bar Passage Rates

Most observers agree that bar passage rates are likely to fall over the coming years. Indeed, they may have already started that decline with the July 2014 and February 2015 exam administrations. I believe that the ExamSoft crisis and MBE content changes account for much of those slumps, but there is little doubt that bar passage rates will remain depressed and continue to fall.

A substantial part of the decline will stem from examinees with very low LSAT scores. Prior studies suggest that students with low scores (especially those with scores below 145) are at high risk of failing the bar. As the number of low-LSAT students increases at law schools, the number (and percentage) of bar failures probably will mount as well.

The impact, however, will not be limited just to those students. As I explained in a previous post, NCBE’s process of equating and scaling the MBE can drag down scores for all examinees when the group as a whole performs poorly. This occurs because the lower overall performance prompts NCBE to “scale down” MBE scores for all test-takers. Think of this as a kind of “reverse halo” effect, although it’s one that depends on mathematical formulas rather than subjective impressions.

State bar examiners, unfortunately, amplify the reverse-halo effect by the way in which they scale essay and MPT answers to MBE scores. I explain this process in a previous post. In brief, the MBE performance of each state’s examinees sets the curve for scoring other portions of the bar exam within that state. If Ohio’s 2015 examinees perform less well on the MBE than the 2013 group did, then the 2015 examinees will get lower essay and MPT scores as well.
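To make the arithmetic concrete, here is a minimal sketch of that within-state scaling, assuming the common approach of giving the essay scores the same mean and spread as the group's scaled MBE scores; the function and all numbers are hypothetical, not NCBE's or any state's actual formula.

```python
import statistics

def scale_essays_to_mbe(raw_essays, scaled_mbe):
    """Linearly rescale raw essay scores to match the mean and standard
    deviation of the group's scaled MBE scores (a simplified stand-in for
    the scaling a state might use)."""
    e_mean, e_sd = statistics.mean(raw_essays), statistics.stdev(raw_essays)
    m_mean, m_sd = statistics.mean(scaled_mbe), statistics.stdev(scaled_mbe)
    return [m_mean + m_sd * (e - e_mean) / e_sd for e in raw_essays]

# Hypothetical cohorts: identical essay performance, weaker MBE results in 2015.
essays = [58, 62, 66, 70, 74]
mbe_2013 = [136, 141, 146, 151, 156]   # stronger MBE cohort
mbe_2015 = [131, 136, 141, 146, 151]   # same spread, mean five points lower

print(scale_essays_to_mbe(essays, mbe_2013))  # middle essay scales to 146
print(scale_essays_to_mbe(essays, mbe_2015))  # the same essay now scales to 141
```

Identical essay answers thus earn lower scaled scores whenever the surrounding cohort's MBE performance falls.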

The law schools that have admitted high-risk students, in sum, are not the only schools that will suffer lower bar passage rates. The processes of equating and scaling will depress scores for other examinees in the pool. The reductions may be small, but they will be enough to shift examinees near the passing score from one side of the line to the other. Test-takers who might have passed the bar in 2013 will not pass in 2015. In addition to taking a harder exam (i.e., a seven-subject MBE), these unfortunate examinees will suffer from the reverse-halo effect described above.

On the bar exam, the performance of my graduates affects outcomes for your graduates. If my graduates perform less well than in previous years, fewer of your graduates will pass: my graduates are your graduates in this sense. The growing number of low-LSAT students attending Thomas Cooley and other schools will also affect the fate of our graduates. On the bar exam, Cooley’s graduates are our graduates.

Won’t NCBE Fix This?

NCBE should address this problem, but they have shown no signs of doing so. The equating/scaling process used by NCBE assumes that test-takers retain roughly the same proficiency from year to year. That assumption undergirds the equating process. Psychometricians recognize that, as abilities shift, equating becomes less reliable.* The recent decline in LSAT scores suggests that the proficiency of bar examinees will change markedly over the next few years. Under those circumstances, NCBE should not attempt to equate and scale raw scores; doing so risks the type of reverse-halo effect I have described.

The problem is particularly acute with the bar exam because scaling occurs at several points in the process. As proficiency declines, equating and scaling of MBE performance will inappropriately depress those scores. Those scores, in turn, will lower scores on the essay and MPT portions of the exam. The combined effect of these missteps is likely to produce noticeable–and undeserved–declines in scores for examinees who are just as qualified as those who passed the bar in previous years.

Remember that I’m not referring here to graduates who perform well below the passing score. If you believe that the bar exam is a proper measure of entry-level competence, then those test-takers deserve to fail. The problem is that an increased number of unqualified examinees will drag down scores for more able test-takers. Some of those scores will drop enough to push qualified examinees below the passing line.

Looking Ahead

NCBE, unfortunately, has not been responsive on issues related to its equating and scaling processes. It seems unlikely that the organization will address the problem described here. There is no doubt, meanwhile, that entry-level qualifications of law students have declined. If bar passage rates fall, as they almost surely will, it will be easy to blame all of the decline on less able graduates.

This leaves three avenues for concerned educators and policymakers:

1. Continue to press for more transparency and oversight of NCBE. Testing requires confidentiality, but safeguards are essential to protect individual examinees and public trust in the process.

2. Take a tougher stand against law schools with low bar passage rates. As professionals, we already have an obligation to protect aspirants to our ranks. Self-interest adds a potent kick to that duty. As you view the qualifications of students matriculating at schools with low bar passage rates, remember: those matriculants will affect your school’s bar passage rate.

3. Push for alternative ways to measure attorney competence. New lawyers need to know basic doctrinal principles, and law schools should teach those principles. A closed-book, multiple-choice exam covering seven broad subject areas, however, is not a good measure of doctrinal knowledge. It is even worse when performance on that exam sets the curve for scores on other, more useful parts of the bar exam (such as the performance tests). And the situation is worse still when a single organization, with little oversight, controls scoring of that crucial multiple-choice exam.

I have some suggestions for how we might restructure the bar exam, but those ideas must wait for another post. For now, remember: On the bar exam, all graduates are your graduates.

* For a recent review of the literature on changing proficiencies, see Sonya Powers & Michael J. Kolen, Evaluating Equating Accuracy and Assumptions for Groups That Differ in Performance, 51 J. Educ. Measurement 39 (2014). A more reader-friendly overview is available in this online chapter (note particularly the statements on p. 274).


More on Grade and Scholarship Quotas

May 5th, 2015

In a response to this post, Michael Simkovic wonders if I believe “it is inherently immoral to limit ‘A’ grades to students whose academic performance is superior to most of their peers, since an ‘A’ is simply a data point and can be replicated and distributed to everyone at zero marginal cost.”

Not at all. I believe in matching grades to performance, and I don’t hesitate to do that–even when the performance is a failing one. Ironically, however, the mandatory grading curve produces results that are quite troubling for those of us who want grades to reflect performance. Constrained by that type of grading system, I have given A’s to students who performed worse than their peers. Let’s consider that problem and then return to the subject of conditional scholarships.

A Tale of Two Tort Classes

To accommodate institutional needs, I once taught two sections of the first-year Torts class. I used the same book and same lecture notes in both classes. We covered the same material in each class, and I drafted a single exam for the group. Following my practice at that time, it was a 4-hour essay exam with several questions.

I graded the exams as a single batch, without separating them into the two sections. Again following my usual practice, I used grading rubrics for each essay. I also rotated batches of essays so that no exam would always suffer (or benefit) from being in the first or last group graded. After I was done, I plotted all of the scores.

I discovered that, if I applied a single curve to both sections, all of the A grades would fall in one section. Our grading rules, however, required me to apply separate curves to each section. So some students in the “smart” section got B’s instead of the A’s they deserved. Some students in the other section got A’s instead of the B’s they deserved. When I discussed my problem with the Associate Dean, he did allow me to use the highest possible curve for the first section, and the lowest possible one for the other section; that ameliorated the problem to some extent. In the end, however, the letter grades did not match performance.
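A small worked example may make the distortion clearer. The sketch below uses made-up scores and a crude two-grade quota as a stand-in for a mandatory curve; it illustrates the mechanism only, not our actual grading rules.

```python
def curve(scores, a_slots):
    """Award an 'A' to the top a_slots exams and a 'B' to the rest --
    a deliberately crude stand-in for a mandatory grading curve."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {name: ("A" if rank < a_slots else "B") for rank, name in enumerate(ranked)}

# Hypothetical blind-graded exam scores from the two Torts sections.
smart_section = {"S1": 95, "S2": 92, "S3": 90, "S4": 88}
other_section = {"O1": 85, "O2": 82, "O3": 78, "O4": 75}

# One curve across both sections: every A lands in the first section.
one_curve = curve({**smart_section, **other_section}, a_slots=4)

# Separate curves with two A's per section: O1 (85) and O2 (82) earn A's
# even though S3 (90) and S4 (88) outscored them and receive B's.
two_curves = {**curve(smart_section, 2), **curve(other_section, 2)}

print(one_curve)
print(two_curves)
```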

Several other professors have recounted similar experiences to me. It doesn’t happen often, because it is uncommon for a professor to teach two sections of a first-year class. But it does happen. In fact, when professors teach multiple sections of the same course, section differences seem common. If these differences occur when we can readily detect them (by teaching two sections), they probably occur under other circumstances as well.

I don’t think this drawback to mandatory curves rises to the level of immorality. Students understand the system and benefit from some of its facets. The curve forces professors to award similar grades across courses and sections, moderating both curmudgeons and sycophants. As Professor Simkovic notes, the system also restrains creeping grade inflation. A mandatory curve, finally, offers guidance to professors who lack an independent sense of what an A, B, or C exam looks like in their subject.

I tell this story to make clear that a mandatory curve does not necessarily reward achievement. On the contrary, a mandatory curve can give B’s to students “whose academic performance is superior to most of their peers” as measured through blind grading. I know it can happen–I’ve done it.

Competition

It feels silly to say this, given my position on deregulating the legal profession, but I do not believe (as Professor Simkovic suggests) that “competition for scarce and valuable resources is inherently immoral.” Competition within an open market usually leads to beneficial results. Competition within a tournament guild, on the other hand, leads to inefficiencies and other harms.

Back to Conditional Scholarships

Returning to our original point of disagreement, I think Professor Simkovic misconstrues college grading patterns–especially in STEM courses. Those courses are not, to my knowledge, graded on a mandatory curve. Instead, the grades correspond to the students’ demonstrated knowledge. The college woman I mention in the primary post was a STEM major; she was no stranger to tough grading. She, however, was accustomed to a field in which her efforts would be rewarded when measured against a rigorous external standard–not one in which only seven students would get an A even if eight performed at that level.

Once again, law school mandatory curves are not “inherently immoral.” They do, however, differ from those that are “routinely used by other educational institutions and state government programs.” Our particular grading practices change the operation of conditional scholarships in law school. At college, a student with a conditional scholarship competes against an external standard. If she reaches that goal, it doesn’t matter how many other students succeed along with her.

In law school, a student’s success depends as much on the efforts of other students as on her own work. If conditional scholarships were in effect when I taught those two sections of Torts, it is quite possible that a student from the “smart section,” who objectively outperformed a student from the “other section,” would have lost her scholarship–while the less able student from the “other section” would have kept her award. I do not think college students understand that perverse relationship between our grading system and conditional scholarships–and neither Professor Simkovic nor Professor Telman has cited any evidence that they do.

Let the Market Rule

As I stated in my previous post, the ABA’s rule has cured two of the ills previously associated with high-forfeiture conditional scholarships. Schools may continue to offer them, subject to that rule. It appears that schools differ widely in the operation of these programs. Some offer only a few conditional scholarships, with rare forfeitures. Others offer a large number, with many forfeitures. Still others lie in between.

The market will soon tell us which of these paths enhance student enrollment. Now that prospective students know more about how conditional scholarships work at law schools, will they continue to enroll at schools with high forfeiture rates? Time will tell.


Comparisons

April 29th, 2015

Michael Simkovic has posted some comments on my study of recent law graduates in Ohio. I had already benefited from his private comments, made some changes to my paper, and thanked him privately. When he reads the revised paper, posted several weeks ago, he’ll discover that I also thank him in the acknowledgement footnote–with the disclaimer that he and my other readers “sometimes disagree with one another, as well as with me, which made their comments even more helpful.”

For those who are interested, I note here my responses to the critiques that Professor Simkovic offers in his blogpost. Beyond these comments, I think readers can judge for themselves how much my study helps them understand the market for law school graduates in their part of the world. Some will find it relevant; others will not. As I’ve already noted, I hope that others will collect additional data to complement these findings.

Here and There

Professor Simkovic’s primary criticism is that the Ohio legal market is not representative. I discussed that issue in a previous post, so will add just a few thoughts. It is true that the wages for Ohio lawyers fall below the national average (both mean and median), but Ohio’s cost of living is also below average. Our index is 94.1 compared to 128.7 in California, 133.3 in New York, and 141.6 in the District of Columbia. On balance, I don’t see any reason to dismiss Ohio as a representative state for this reason.

Lawyers constitute a smaller percentage of the Ohio workforce than of the national one, but that is not a particularly meaningful indicator. Oklahoma, with 4.48 lawyers per 1,000 jobs, comes very close to the national average of 4.465, but that would not make Oklahoma the best choice for a study of new lawyers’ employment patterns.

Ohio has a disproportionate number of schools that rank low in the US News rankings: We have one school in the top 50, two in the second tier, three in the third tier, and four among the unranked schools. I discuss the implications of this in my study and show how law school rank affects employment patterns within the state. Like other studies, I find strong associations between law school rank and job type.

It is hard to know how this issue affects my overall findings. Professor Simkovic suggests that low-ranked schools create a sub-par job market with depressed outcomes for graduates. Just the opposite, however, could be true. Ohio individuals and businesses have the same legal needs as those in other states and, as noted above, we do not have as many lawyers per worker as some states. It is possible, therefore, that graduates of low-ranked schools have better employment opportunities than graduates of similar schools in other states. Similarly, the graduates of our first- and second-tier schools may fare better than graduates of similar schools in states with local T14 competitors.

The results of my Ohio study undoubtedly generalize better to some markets than to others. Similarly, the results may interest educators at some schools but not others. I doubt that my study will influence decisions at top-twenty law schools. At other schools, however, I think professors and deans should at least reflect upon the findings. Most of us graduated from elite law schools 10, 20, 30, or even 40 years ago. Our impressions of the employment market were molded by those experiences, and it is very hard to overcome that anchoring bias. I hope that my results at least provoke thought and further research.

Now and Then

Professor Simkovic and others also criticize my attempt to compare the 2014 Ohio data with national data gathered by NALP and the After the JD (AJD) study. I agree that those are far from perfect comparisons, and I note the limits in the paper. Unfortunately, we don’t have perfect data about employment patterns in the legal profession. In fact, we have surprisingly little data given the importance of our profession.

Some of the data we do have is out-of-date or badly skewed. Professor Simkovic and others, for example, cite results from the AJD study. That study tracks the Class of 2000, a group of graduates with experiences that almost certainly differ from those of more recent graduates. The Class of 2000’s history of debt repayment, for example, almost certainly will differ from that of the Class of 2010. In 2000, the average resident tuition at public law schools was $7,790–or $9,864 in 2010 dollars. By 2010, tuition had more than doubled to $20,238.

Rather than rely on outdated information, my study begins the process of providing more current data. (I don’t study tuition in particular; I note that example because Professor Simkovic uses AJD for that purpose in his post.) In providing that information, I also make comparisons to the baseline data we have. Although the prior data stem from different populations and use somewhat different methods, some of the differences are so large that they seem likely to reflect real changes rather than methodological artifacts.

AJD, for example, found that 62.1% of the class of 2000 worked in law firms three years after graduation. At a similar point (4.5 years post graduation), just 40.5% of my population held positions in firms. Some of that difference could stem from method. AJD relied upon survey responses, and the responses showed some bias toward graduates of highly ranked schools. AJD also examined a national sample of lawyers, while I looked only at Ohio. A national sample, however, is not a New York or California sample. AJD included lawyers from Tennessee, Oklahoma, Utah, and Oregon, as well as some from the larger markets. Ohio will not precisely mirror those averages, but I doubt the difference is large enough to account for the 20-point drop in law firm employment.

Assumptions About Non-Respondents

In my study, I tracked employment outcomes for all 1,214 new lawyers who were admitted to the Ohio bar after passing one of the 2010 exams. Using internet sources, I was able to confirm a current (as of December 2014) job for 93.7% of the population members. For another 1.6%, I found affirmative indications that the population member was not working: the person had noted online that s/he was job-seeking or had decided to leave the workforce to care for a family member.

That left 4.7% of the population for which I lacked information. For the reasons discussed on pp. 15-17 of the paper, I elected to treat this group as “not working.” There are some licensed lawyers who hold jobs without leaving any internet trace, but it’s a difficult task. For starters, Ohio’s Supreme Court requires all bar members to notify the court of their current office address and phone; the court then publishes that information online.

In addition, most working lawyers want to be found on the internet. With employer websites, LinkedIn, and Google searches, I found most of the population members very easily. The ones I couldn’t find became intriguing challenges; I returned to them repeatedly to see if I could find any traces of employment. The lack of any such evidence, combined with the factors cited in my paper, suggested that these individuals were not working.

It is quite possible, of course, that some of these individuals held jobs. Any bias toward understating employment outcomes, however, was likely outweighed by countervailing biases: (1) Some online references to jobs persist after an employee has left the position and is seeking other work. (2) My data collection could not distinguish part-time and full-time work, so I gave all jobs the same weight. (3) Some job titles may be polite masks for unemployment. A “solo practitioner,” for example, may not be actively handling cases or seeking clients. (4) My study included only law graduates who were admitted to the bar; it does not include the 10-12% of graduates who never take or pass the bar.

As I acknowledge in the paper, all of these biases could lead to overstating employment outcomes.

Salaries Within Job Categories

Professor Simkovic notes that my study does not account for salary increases within job categories. As I note in the paper, I gathered no data about salaries. I certainly hope that 2010 graduates received salary increases during the last five years! That, however, is a different question from whether employment patterns have shifted among new attorneys. Within the population I studied, I observed several features that differ from employment patterns reported in earlier studies of lawyers. These include the emergence of staff attorneys at BigLaw firms, a notable percentage of solo practitioners, a surprisingly low percentage of lawyers employed at law firms, and a substantial percentage of recently licensed lawyers working in jobs that do not require bar admission.

Selection Bias

Professor Simkovic suggests that my study suffers from selection bias because the most talented Ohio graduates may have moved to other states to accept BigLaw offers. This would be a concern if I were trying to describe employment opportunities for a particular law school, but I am not doing that. Instead, I analyze the employment opportunities within a defined market. One can debate, as we have, how well Ohio represents outcomes in other markets. The study, however, is relatively free of selection bias within its defined population. Unlike AJD and many other studies, it does not depend upon subjects’ willingness to answer a lengthy survey.

For the record I’ll note that, although some of my school’s graduates move to other states for BigLaw jobs, the number is small. Like most law schools outside the top-ranked group, we place relatively few graduates at firms with more than 500 (or even more than 250) lawyers. My relatively informed, yet still anecdotal, impression is that our students who move out of state show a similar job distribution to those who remain in Ohio.

What Do We Know?

From my study, we know some things about the jobs held by lawyers who passed the Ohio bar exam in 2010. We don’t know about lawyers who passed the Ohio bar in other years, or about law graduates living in Ohio who have not been admitted to the bar. Nor do we know anything with certitude about lawyers in other states or at different times. But do the facts we know about one set of lawyers at one time provide insights into the experiences of other lawyers? Much social science research assumes that such insights are possible. The reach of those insights depends on the nature of the study.

Here, I think we gain some insight into employment patterns for recent graduates from many schools–at least for the 90% of schools ranked outside the US News top twenty. Some schools and some markets are very distinctive, but most of us are not as different as we first believe. Our first-hand impressions of our graduates’ job outcomes, meanwhile, are very skewed. After just a few years of teaching, we all have lots of former students. The ones we hear from or see at reunions almost certainly differ from those who drop out of sight. Research about Ohio won’t tell you everything you want to know about another market, but it may tell you more than you think.

Can we also gain insights about whether the job market for new lawyers has changed? That is a central claim of my study, buttressed by comparisons to previous data as well as information about why outcomes may have changed. Once again, I think the comparisons add to our knowledge. Personally, I don’t find the fact of change surprising. The legal employment market was different in the 1980s than in the 1950s, and both of those markets were different from the 1920s or 1890s. Why would we in 2015 be exempt from change?

The fact that change has occurred doesn’t mean that the demand for lawyers has evaporated; Richard Susskind’s provocative book title (The End of Lawyers?) has skewed discussions about change by creating a straw man. In the end, even Susskind doesn’t believe that lawyers are doomed to extinction. I think it’s important to know, however, that changes are occurring in the nature of legal employment. Staff attorneys, contract workers, and legal process outsourcers play a larger role today than they did ten years ago; an increasing number of new lawyers seem to establish solo practices; and junior positions in law firms seem to be declining. These and other changes are the ones I discuss in my paper. I hope that others will continue the exploration.


Note to Law Schools: Show Your Work on JD Advantage Jobs

April 23rd, 2015 / By

In a column in this week’s New York Law Journal, Jill Backer, assistant dean of career and professional development at Pace Law School, says it’s artificial to distinguish between jobs that require a law license and jobs where the JD confers an advantage. Backer contends that doing so through the ABA’s standardized employment reporting regime reinforces a perception that JD Advantage jobs are “less than” the Bar Passage Required jobs.

As it turns out, there’s strong evidence that many of these jobs are “less than.” But Backer does not address overwhelming evidence that JD Advantage jobs pay substantially less on average and leave graduates looking for new jobs shortly after starting them.

Additional information showing otherwise would be helpful, but law schools do not provide it. Instead, schools hope we'll trust their word that JD Advantage jobs are not only desirable, but worth pursuing through the JD rather than a shorter, less painfully expensive degree.

I will be the first to admit that there are many great JD Advantage jobs. For instance, my job as executive director of Law School Transparency would count as JD Advantage because my JD provides a “demonstrable advantage in . . . performing the job,” even if my job “does not itself require bar passage or an active law license or involve practicing law.” The same applies to the editors of Above the Law, the founders of Hire an Esquire, and federal agents.

The problem is that the JD Advantage category is so broad that it loses meaning. Schools infuse the term with meaning through the occasional, sexy anecdote–just like the ones in my previous paragraph. Look no further than Backer's column lede to see how these anecdotes are operationalized. Backer frames readers' understanding of JD Advantage jobs by pointing out that the President of the United States holds a job for which the JD is an advantage.

But the definition does not even require the employer to care about the JD—the education merely needs to be helpful. A colorable argument can be made that a legal education helps with just about any job a law graduate would consider. How many jobs would you take that don’t require some measure of critical thinking or understanding of our legal system?

The category is so flimsy that paralegals and graduates in administrative positions at law firms count as JD Advantage. For example, at Backer's own school, 14 class of 2013 graduates (or 13% of all graduates in firm jobs) were paralegals or administrators. Of the 14, five were classified as "professional" jobs and nine as "JD Advantage." Nobody pays $45,000 per year in law school tuition to become a paralegal. But nearly a quarter of Pace's graduates in JD Advantage jobs were paralegals or administrators at law firms.

According to NALP, 41% of all class of 2013 graduates in JD Advantage jobs were still seeking another job nine months after graduation. Graduates in Bar Passage Required jobs were only one-third as likely to report the same. According to data from law school graduates, then, JD Advantage jobs are not nearly as desirable as Backer would have readers (and prospective students) believe. Further, NALP reports that the average JD Advantage salary is 25% less than the average salary for graduates in bar-required jobs.

There are certainly people who attend law school with other aims, and they may find desirable work outside of the practice of law. (Note that there's good reason to believe that non-legal employers are after people with legal experience, rather than a legal education.) Though I don't speak for others, LST does not include non-legal jobs in the LST Employment Score for straightforward reasons. For people interested specifically in a non-legal career, including these jobs in the LST Employment Score would not make the score more meaningful. Such a mixed score would be determined primarily by legal job placements, so it would not really tell prospective students much about alternative job placement.

For people interested only in a legal career, adding non-legal jobs greatly dilutes the value of the score by including a number of jobs they are not interested in. The only group that a mixed score would serve well is one that would be okay with pretty much any job upon graduation. While there are third-year students and recent graduates scrambling for any job they can obtain, few people have that attitude before entering law school.

If a school prides and sells itself on its ability to produce graduates primed for JD Advantage jobs, it ought to find another way to prove its graduates are different from the 41% of graduates in JD Advantage jobs looking for a different job just a few months after starting. I'd like to think schools in this category would want to do this. Regardless, the onus is on law schools to prove that their JD Advantage outcomes are desirable and worth pursuing a JD to obtain.

Northwestern University School of Law, for example, makes a persuasive attempt to do just that. On a page titled "JD Advantage Employment," Northwestern actively distinguishes itself from other schools through data and context. Upon request for this column, the school's dean supplemented that information. Northwestern graduates from 2013 and 2014 are substantially less likely than the national average to be seeking another job while holding one–an estimated 10% of JD Advantage job holders and 3% of Bar Passage Required job holders.

Law schools are in a position where they need to become more attractive to prospective students, especially the highest-achieving ones. A school could legitimately position itself as a force for the new economy. Northwestern does this as well as anyone. If other schools want to show how they're different, they need to do more than throw together a new program to sell to applicants and alumni or claim that their JD is the best path to these new-economy jobs. It requires more than bold claims and fact-free editorials. To schools like Pace hoping to carve out a new niche: show us your work in a meaningful way. The applicant market is listening.


Overpromising

April 18th, 2015 / By

Earlier this week, I wrote about the progress that law schools have made in reporting helpful employment statistics. The National Association for Law Placement (NALP), unfortunately, has not made that type of progress. On Wednesday, NALP issued a press release that will confuse most readers; mislead many; and ultimately hurt law schools, prospective students, and the profession. It’s the muddled, the false, and the damaging.

The Muddled

Much of the press release discusses the status of $160,000 salaries for new lawyers. This discussion vacillates between good news (for the minority of graduates who might get these salaries) and bad news. On the one hand, the $160,000 starting salary still exists. On the other hand, the rate hasn’t increased since 2007, producing a decline of 11.7% in real dollars (although NALP doesn’t spell that out).
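For readers who want to see how that real-dollar figure works out, here is a minimal sketch of the arithmetic. The cumulative inflation rate used below (roughly 13% from 2007 to 2015) is an illustrative assumption rather than a figure taken from NALP or official CPI tables; the point is only that a salary frozen in nominal terms since 2007 has lost real value.

```python
# Minimal sketch: real-dollar erosion of a salary frozen in nominal terms.
# The cumulative inflation figure is an illustrative assumption (~13% from
# 2007 to 2015), not a number drawn from NALP's press release or CPI tables.
nominal_salary = 160_000
cumulative_inflation = 0.13  # assumed price growth, 2007-2015

real_value_in_2007_dollars = nominal_salary / (1 + cumulative_inflation)
real_decline = 1 - real_value_in_2007_dollars / nominal_salary

print(f"Real value in 2007 dollars: ${real_value_in_2007_dollars:,.0f}")
print(f"Real decline: {real_decline:.1%}")  # roughly 11-12% under this assumption
```

With an assumption in that range, the calculated decline lands in the same neighborhood as the 11.7% noted above.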

On the bright side, the percentage of large firm offices paying this salary has increased from 27% in 2014 to 39% this year. On the down side, that percentage still doesn’t approach the two-thirds of large-firm offices that paid $160,000 in 2009. It also looks like the percentage of offices offering $160,000 to this fall’s associates (“just over one-third”) will be slightly lower than the current percentage.

None of this discussion tells us very much. This NALP survey focused on law firms, not individuals, and it tabulated results by office rather than firm. The fact that 39% of offices associated with the largest law firms are paying $160,000 doesn’t tell us how many individuals are earning that salary (let alone what percentage of law school graduates are doing so). And, since NALP has changed its definition of the largest firms since 2009, it’s hard to know what to make of comparisons with previous years.

In the end, all we know is that some new lawyers are earning $160,000–a fact that has been true since 2007. We also know that this salary must be very, very important because NALP repeats the figure (“$160,000”) thirty-two times in a single press release.

The False

In a bolded heading, NALP tells us that its “Data Represent Broad-Based Reporting.” This is so far off the mark that it’s not even “misleading.” It’s downright false. As the press release notes, only 5% of the firms responding to the survey employed 50 lawyers or fewer. (The accompanying table suggests that the true percentage was just 3.5%, but I won’t quibble over that.)

That’s a laughable representation of small law firms, and NALP knows it. Last year, NALP reported that 57.5% of graduates who took jobs with law firms went to firms of 50 lawyers or less. Smaller firms tend to hire fewer associates than large ones, and they don’t hire at all in some years. The percentage of “small” firms (those with 50 or fewer lawyers) in the United States undoubtedly is greater than 57.5%–and not anywhere near 5%.

NALP’s false statements go beyond a single heading. The press release specifically assures readers that “The report thus sheds light on the breadth of salary differentials among law firms of varying sizes and in a wide range of geographic areas nationwide, from the largest metropolitan areas to much smaller cities.” I don’t know how anyone can make that claim with a straight face, given the lack of response from law firms that make up the majority of firms nationwide.

This would be simply absurd, except NALP also tells readers that “the overall national median first-year salary at firms of all sizes was $135,000,” and that the median for the smallest firms (those with 50 or fewer lawyers) was $121,500. There is some fuzzy language about the median moving up during the last year because of “relatively fewer responses from smaller firms,” but that refers simply to the incremental change. Last year’s survey was almost as distorted as this year’s, with just 9.8% of responses coming from firms with 50 or fewer lawyers.

More worrisome, there’s no caveat at all attached to the representation that the median starting salary in the smallest law firms is $121,500. If you think that the 16 responding firms in this category magically represented salaries of all firms with 50 or fewer lawyers, see below. Presentation of the data in this press release as “broad-based” and “shed[ding] light on the breadth of salary differentials” is just breathtakingly false.
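To make the distortion concrete, here is a minimal sketch with made-up numbers; the salaries and response counts below are hypothetical illustrations, not NALP data. It shows how a median computed from a sample that barely includes small firms can sit far above the median that a graduate-weighted sample would produce.

```python
import statistics

# Hypothetical illustration of how under-sampling small firms inflates a median.
# All salaries and counts are invented for demonstration; they are not NALP data.
small_firm_salary = 50_000    # assumed typical small-firm starting salary
large_firm_salary = 160_000   # assumed typical large-firm starting salary

# Survey-like sample: small firms make up only ~5% of responses.
survey_sample = [small_firm_salary] * 5 + [large_firm_salary] * 95

# Graduate-weighted sample: most firm jobs are at small firms (~57%).
graduate_weighted = [small_firm_salary] * 57 + [large_firm_salary] * 43

print(statistics.median(survey_sample))      # 160000 -- driven by large firms
print(statistics.median(graduate_weighted))  # 50000  -- driven by small firms
```

That is the structural problem the press release glosses over: with so few small-firm responses, the reported medians are anchored by large-firm pay, no matter how the results are labeled.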

The Damaging

NALP’s false statements damage almost everyone related to the legal profession. The media have reported some of the figures from the press release, and the public response is withering. Clients assume that firms must be bilking them; otherwise, how could so many law firms pay new lawyers so much? Remember that this survey claims a median starting salary of $121,500 even at the smallest firms. Would you approach a law firm to draft your will or handle your divorce if you thought your fees would have to support that type of salary for a brand-new lawyer?

Prospective students will also be hurt if they act on NALP’s misrepresentations. Why shouldn’t they believe an organization called the “National Association for Law Placement,” especially when the organization represents its data as “broad-based”?

Ironically, though, law schools may suffer the most. What happens when prospective students compare NALP’s pumped-up figures with the ones on most of our websites? Nationwide, the median salary for 2013 graduates working in firms of 2-10 lawyers was just $50,000. So far, reports about the Class of 2014 look comparable. (As I’ve explained before, the medians that NALP reports for small firms are probably overstated. But let’s go with the reported median for now.)

When prospective students look at most law school websites, they’re going to see that $50,000 median (or one close to it) for small firms. They’re also going to see that a lot of our graduates work in those small firms of 2-10 lawyers. Nationwide, 8,087 members of the Class of 2013 took a job with one of those firms. That’s twice as many small firm jobs as ones at firms employing 500+ lawyers (which hired 3,980 members of the Class of 2013).

How do we explain the fact that so many of our graduates work at small firms, when NALP claims that these firms represent such a small percentage of practice? And how do we explain that our graduates average only $50,000 in these small-firm jobs, while NALP reports a median of $121,500? And then how do we explain the small number of our graduates who earn this widely discussed salary of $160,000?

With figures like $160,000 and $121,500 dancing in their heads, prospective students will conclude that most law schools are losers. By “most” I mean the 90% of us who fall outside the top twenty schools. Why would a student attend a school that offers outcomes so inferior to ones reported by NALP?

Even if these prospective students have read scholarly analyses showing the historic value of a law degree, they’re going to worry about getting stuck with a lemon school. And compared to the “broad-based” salaries reported by NALP, most of us look pretty sour.

Law schools need to do two things. First, we need to stop NALP from making false statements–or even just badly skewed ones. Each of our institutions pays almost $1,000 per year for this type of reporting. We shouldn’t support an organization that engages in such deceptive statements.

Second, we really do need to stop talking about BigLaw and $160,000 salaries. If Michael Simkovic and Frank McIntyre are correct about the lifetime value of a law degree, then we should be able to illustrate that value with real careers and real salaries. What do prosecutors earn compared to other government workers, both entry-level and after 20 years of experience? How much of a premium do businesses pay for a compliance officer with a JD? We should be able to generate answers to those questions. If the answers are positive, and we can place students in the appropriate jobs, we’ll have no trouble recruiting applicants.

If the answers are negative, we need to know that as well. We need to figure out the value of our degree, for our students. Let’s get real. Stop NALP from disseminating falsehoods, stop talking about $16*,*** salaries, and start talking about outcomes we can deliver.


About Law School Cafe

Cafe Manager & Co-Moderator
Deborah J. Merritt

Cafe Designer & Co-Moderator
Kyle McEntee

ABA Journal Blawg 100 Honoree

Law School Cafe is a resource for anyone interested in changes in legal education and the legal profession.

Participate

Have something you think our audience would like to hear about? Interested in writing one or more guest posts? Send an email to the cafe manager at merritt52@gmail.com. We are interested in publishing posts from practitioners, students, faculty, and industry professionals.
