
Comparisons

April 29th, 2015

Michael Simkovic has posted some comments on my study of recent law graduates in Ohio. I had already benefited from his private comments, made some changes to my paper, and thanked him privately. When he reads the revised paper, posted several weeks ago, he’ll discover that I also thank him in the acknowledgement footnote–with the disclaimer that he and my other readers “sometimes disagree with one another, as well as with me, which made their comments even more helpful.”

For those who are interested, I note here my responses to the critiques that Professor Simkovic offers in his blogpost. Beyond these comments, I think readers can judge for themselves how much my study helps them understand the market for law school graduates in their part of the world. Some will find it relevant; others will not. As I’ve already noted, I hope that others will collect additional data to complement these findings.

Here and There

Professor Simkovic’s primary criticism is that the Ohio legal market is not representative. I discussed that issue in a previous post, so will add just a few thoughts. It is true that the wages for Ohio lawyers fall below the national average (both mean and median), but Ohio’s cost of living is also below average. Our index is 94.1 compared to 128.7 in California, 133.3 in New York, and 141.6 in the District of Columbia. On balance, I don’t see any reason to dismiss Ohio as a representative state for this reason.
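
For readers who want to see the adjustment in concrete terms, here is a minimal sketch. The index values come from the paragraph above; the $70,000 salary is a placeholder of my own, not a figure from the study.

```python
# Cost-of-living comparison using the index values cited above.
# The $70,000 salary is an illustrative placeholder, not data from the study.
COL_INDEX = {
    "Ohio": 94.1,
    "California": 128.7,
    "New York": 133.3,
    "District of Columbia": 141.6,
}

def index_adjusted(salary, jurisdiction):
    """Express a nominal salary in index-base (100) dollars of purchasing power."""
    return salary * 100.0 / COL_INDEX[jurisdiction]

nominal = 70_000
for jurisdiction in COL_INDEX:
    print(f"{jurisdiction:>20}: ${index_adjusted(nominal, jurisdiction):,.0f} adjusted")
```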

Lawyers constitute a smaller percentage of the Ohio workforce than of the national one, but that is not a particularly meaningful indicator. Oklahoma, with 4.48 lawyers per 1,000 jobs, comes very close to the national average of 4.465, but that would not make Oklahoma the best choice for a study of new lawyers’ employment patterns.

Ohio has a disproportionate number of schools that rank low in the US News rankings: We have one school in the top 50, two in the second tier, three in the third tier, and four among the unranked schools. I discuss the implications of this in my study and show how law school rank affects employment patterns within the state. Like other studies, I find strong associations between law school rank and job type.

It is hard to know how this issue affects my overall findings. Professor Simkovic suggests that low-ranked schools create a sub-par job market with depressed outcomes for graduates. Just the opposite, however, could be true. Ohio individuals and businesses have the same legal needs as those in other states and, as noted above, we do not have as many lawyers per worker as some states. It is possible, therefore, that graduates of low-ranked schools have better employment opportunities than graduates of similar schools in other states. Similarly, the graduates of our first- and second-tier schools may fare better than graduates of similar schools in states with local T14 competitors.

The results of my Ohio study undoubtedly generalize better to some markets than to others. Similarly, the results may interest educators at some schools but not others. I doubt that my study will influence decisions at top-twenty law schools. At other schools, however, I think professors and deans should at least reflect upon the findings. Most of us graduated from elite law schools 10, 20, 30, or even 40 years ago. Our impressions of the employment market were molded by those experiences, and it is very hard to overcome that anchoring bias. I hope that my results at least provoke thought and further research.

Now and Then

Professor Simkovic and others also criticize my attempt to compare the 2014 Ohio data with national data gathered by NALP and the After the JD (AJD) study. I agree that those are far from perfect comparisons, and I note the limits in the paper. Unfortunately, we don’t have perfect data about employment patterns in the legal profession. In fact, we have surprisingly little data given the importance of our profession.

Some of the data we do have is out-of-date or badly skewed. Professor Simkovic and others, for example, cite results from the AJD study. That study tracks the Class of 2000, a group of graduates with experiences that almost certainly differ from those of more recent graduates. The Class of 2000’s history of debt repayment, for example, almost certainly will differ from that of the Class of 2010. In 2000, the average resident tuition at public law schools was $7,790–or $9,864 in 2010 dollars. By 2010, tuition had more than doubled to $20,238.

Rather than rely on outdated information, my study begins the process of providing more current data. (I don’t study tuition in particular; I note that example because Professor Simkovic uses AJD for that purpose in his post.) In providing that information, I also make comparisons to the baseline data we have. Although the prior data stem from different populations and use somewhat different methods, some of the differences are so large that they seem likely to reflect real changes rather than methodological artifacts.

AJD, for example, found that 62.1% of the class of 2000 worked in law firms three years after graduation. At a similar point (4.5 years post graduation), just 40.5% of my population held positions in firms. Some of that difference could stem from method. AJD relied upon survey responses, and the responses showed some bias toward graduates of highly ranked schools. AJD also examined a national sample of lawyers, while I looked only at Ohio. A national sample, however, is not a New York or California sample. AJD included lawyers from Tennessee, Oklahoma, Utah, and Oregon, as well as some from the larger markets. Ohio will not precisely mirror those averages, but I doubt the difference is large enough to account for the 20-point drop in law firm employment.

Assumptions About Non-Respondents

In my study, I tracked employment outcomes for all 1,214 new lawyers who were admitted to the Ohio bar after passing one of the 2010 exams. Using internet sources, I was able to confirm a current (as of December 2014) job for 93.7% of the population members. For another 1.6%, I found affirmative indications that the population member was not working: the person had noted online, for example, that s/he was seeking a job or had decided to leave the workforce to care for a family member.

That left 4.7% of the population for which I lacked information. For the reasons discussed on pp. 15-17 of the paper, I elected to treat this group as “not working.” A licensed lawyer can hold a job without leaving any internet trace, but doing so is difficult. For starters, Ohio’s Supreme Court requires all bar members to notify the court of their current office address and phone; the court then publishes that information online.
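
To see how much this classification choice can matter, here is a small sketch that bounds the employment rate under the two extreme assumptions about the 4.7% who left no trace; the percentages and population size are the ones reported above.

```python
# Bounding the employment rate under different treatments of the 4.7%
# of the population for whom no information could be found.
# Percentages and the population size come from the study described above.
population = 1214
confirmed_working = 0.937
confirmed_not_working = 0.016
unknown = 0.047

lower_bound = confirmed_working            # the paper's choice: unknowns not working
upper_bound = confirmed_working + unknown  # opposite extreme: unknowns all working

print(f"Employment rate, unknowns treated as not working: {lower_bound:.1%}")
print(f"Employment rate, unknowns treated as working:      {upper_bound:.1%}")
print(f"Maximum effect of the assumption: {unknown:.1%} "
      f"(about {round(unknown * population)} lawyers)")
```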

In addition, most working lawyers want to be found on the internet. With employer websites, LinkedIn, and Google searches, I found most of the population members very easily. The ones I couldn’t find became intriguing challenges; I returned to them repeatedly to see if I could find any traces of employment. The lack of any such evidence, combined with the factors cited in my paper, suggested that these individuals were not working.

It is quite possible, of course, that some of these individuals held jobs. Any bias toward understating employment outcomes, however, was likely outweighed by countervailing biases: (1) Some online references to jobs persist after an employee has left the position and is seeking other work. (2) My data collection could not distinguish part-time and full-time work, so I gave all jobs the same weight. (3) Some job titles may be polite masks for unemployment. A “solo practitioner,” for example, may not be actively handling cases or seeking clients. (4) My study included only law graduates who were admitted to the bar; it does not include the 10-12% of graduates who never take or pass the bar.

As I acknowledge in the paper, all of these biases could lead to overstating employment outcomes.

Salaries Within Job Categories

Professor Simkovic notes that my study does not account for salary increases within job categories. As I note in the paper, I gathered no data about salaries. I certainly hope that 2010 graduates received salary increases during the last five years! That, however, is a different question from whether employment patterns have shifted among new attorneys. Within the population I studied, I observed several features that differ from employment patterns reported in earlier studies of lawyers. These include the emergence of staff attorneys at BigLaw firms, a notable percentage of solo practitioners, a surprisingly low percentage of lawyers employed at law firms, and a substantial percentage of recently licensed lawyers working in jobs that do not require bar admission.

Selection Bias

Professor Simkovic suggests that my study suffers from selection bias because the most talented Ohio graduates may have moved to other states to accept BigLaw offers. This would be a concern if I were trying to describe employment opportunities for a particular law school, but I am not doing that. Instead, I analyze the employment opportunities within a defined market. One can debate, as we have, how well Ohio represents outcomes in other markets. The study, however, is relatively free of selection bias within its defined population. Unlike AJD and many other studies, it does not depend upon subjects’ willingness to answer a lengthy survey.

For the record, I’ll note that, although some of my school’s graduates move to other states for BigLaw jobs, the number is small. Like most law schools outside the top-ranked group, we place relatively few graduates at firms with more than 500 (or even more than 250) lawyers. My relatively informed, yet still anecdotal, impression is that our students who move out of state show a similar job distribution to those who remain in Ohio.

What Do We Know?

From my study, we know some things about the jobs held by lawyers who passed the Ohio bar exam in 2010. We don’t know about lawyers who passed the Ohio bar in other years, or about law graduates living in Ohio who have not been admitted to the bar. Nor do we know anything with certitude about lawyers in other states or at different times. But do the facts we know about one set of lawyers at one time provide insights into the experiences of other lawyers? Much social science research assumes that such insights are possible. The reach of those insights depends on the nature of the study.

Here, I think we gain some insight into employment patterns for recent graduates from many schools–at least for the 90% of schools ranked outside the US News top twenty. Some schools and some markets are very distinctive, but most of us are not as different as we first believe. Our first-hand impressions of our graduates’ job outcomes, meanwhile, are very skewed. After just a few years of teaching, we all have lots of former students. The ones we hear from or see at reunions almost certainly differ from those who drop out of sight. Research about Ohio won’t tell you everything you want to know about another market, but it may tell you more than you think.

Can we also gain insights about whether the job market for new lawyers has changed? That is a central claim of my study, buttressed by comparisons to previous data as well as information about why outcomes may have changed. Once again, I think the comparisons add to our knowledge. Personally, I don’t find the fact of change surprising. The legal employment market was different in the 1980s than in the 1950s, and both of those markets were different from the 1920s or 1890s. Why would we in 2015 be exempt from change?

The fact that change has occurred doesn’t mean that the demand for lawyers has evaporated; Richard Susskind’s provocative book title (The End of Lawyers?) has skewed discussions about change by creating a straw man. In the end, even Susskind doesn’t believe that lawyers are doomed to extinction. I think it’s important to know, however, that changes are occurring in the nature of legal employment. Staff attorneys, contract workers, and legal process outsourcers play a larger role today than they did ten years ago; an increasing number of new lawyers seem to establish solo practices; and junior positions in law firms seem to be declining. These and other changes are the ones I discuss in my paper. I hope that others will continue the exploration.


It’s Just Ohio

April 27th, 2015

Today’s NY Times has an article that mentions my recent study of employment outcomes for the Class of 2010. Using official bar records, employer web sites, LinkedIn, and other internet sources, I tracked current employment outcomes for the 1,214 new lawyers who passed the Ohio bar in 2010. I found job information for 93.7% of the population.

The findings, as I explain in the paper, suggest that the Class of 2010 continues to face challenges in the job market–even almost five years after graduation. Although all members of the group I studied were admitted to the bar, only three-quarters hold a job that requires a law license. One-tenth of these recent graduates have gone into solo practice. The percentage working in law firms is just 40.4%–and a third of those lawyers work in firms with just 1-4 others.

These and other findings, of course, represent outcomes for newly admitted lawyers licensed in Ohio. Brian Galle at Prawfsblawg has questioned whether Ohio’s results represent outcomes in other parts of the nation. It’s a question that others undoubtedly will raise, so I offer some thoughts on that here.

Which Legal Profession?

When legal educators talk about the legal profession, discussion drifts toward BigLaw. This seems to happen even when we don’t realize it. Professor Galle, for example, states in a follow-up comment to his post that “the U.S. law market is concentrated in a few states.” That may be true for some types of corporate practice, but it’s not true for all of the other types of law that attorneys pursue.

Small and medium-sized businesses account for more than 99% of all business employers in the United States. These businesses, which populate every state, generate legal needs of all kinds: incorporation and partnership agreements, contracts with suppliers, tax disputes, employment suits, real estate deals, regulatory compliance, and tort claims. These clients do not hire BigLaw firms for their work.

Individuals in every state, meanwhile, need lawyers to handle divorces, criminal charges, real estate transactions, employment claims, immigration concerns, trusts and estates, civil lawsuits, and government disputes. Speaking of the latter, many more lawyers work for state and local governments than for the federal government. Whether you want to be a prosecutor, public defender, or agency lawyer, you’re more likely to work for a state, town, or county than for the feds.

If we want to think about employment outcomes for law graduates, we have to evaluate all parts of the legal profession–not just the BigLaw firms or government offices located inside the beltway. There’s a lot of law all over this land.

Why Look at a Single State?

If we agree that the legal profession is quite diverse, then how can we explore employment outcomes in that profession? National studies, like the After the JD project, offer one option. Averages taken across a diverse group, however, can offer a misleading picture. As statisticians have noted wryly, the average person has one testicle and one ovary.

Studying a specific city or state, on the other hand, imposes different limits. No two cities or states look exactly the same. Geographically targeted studies, however, can be quite informative. Two of the leading studies of our profession, Chicago Lawyers and its sequel Urban Lawyers, focus exclusively on lawyers working within Chicago’s city limits.

I concluded that, given existing data on the legal profession (which is both fragmented and sparse), it would be most illuminating to develop a study of recent graduates licensed to practice in a large, but not dominant, legal market. In a private comment, one of my readers characterized Ohio as a “second tier legal market,” and I accept that label. That’s exactly what I was looking for: a market that would reflect the experiences of a wide range of law graduates, rather than those of an elite minority.

But Why Ohio?

In the paper, I offer considerable detail about why Ohio serves my purpose as a state that represents outcomes for a large band of new lawyers. Ohio is relatively large: it ranks ninth among all states for both the size of its licensed bar and the number of jobs provided to recent law graduates. Two Ohio cities (Columbus and Cleveland) rank among the top 20 cities providing jobs to those graduates.

And yes, Ohio does have BigLaw firms: Jones Day, Baker & Hostetler, Squire Patton Boggs, and several others. It also has a client base that generates BigLaw issues: the three just-mentioned firms originated in Ohio and then spread globally.

NALP’s employment reports on 9-month outcomes for the Class of 2010 suggest that Ohio’s legal market includes a representative mix of employers for entry-level lawyers. Other large states skew strongly toward private practice jobs (e.g., New York and California) or government positions (Washington DC). When I examined 9-month employment patterns for the ten largest states, only Ohio and Pennsylvania offered a representative mix.

Ohio, finally, has a fairly robust economy. In 2010, the state’s overall unemployment rate was worse than the national average but better than several states (California, Florida, and Illinois) that employ more lawyers. Equally important for measuring current employment outcomes, Ohio benefited from a strong recovery. In 2014, Ohio’s overall unemployment rate beat the national average and was considerably better than in legal powerhouse states like New York, California, Illinois, Florida, and Washington, D.C. See p. 13 of the paper.

Summing Up

No single study can capture a picture of employment outcomes that is true for all members of the Class of 2010–or of any other recent class. That’s partly because we have so few baseline studies to build on, and partly because the outcomes are so diverse. My study is incomplete in several ways. In addition to the geographic focus, I included only law graduates who were successfully admitted to the bar. The study tells us relatively little about careers of law school graduates who never take or pass the bar. That group, which comprises about 12% of all graduates (see page 40), would have different job outcomes than the ones I traced.

But we have to start somewhere. I chose to examine new lawyers in a state that, I believe, represents the type of employment outcomes achieved by a very large number of law graduates nationwide. As legal educators, we need to focus more on those outcomes–not just on the salaries and lifestyle at the largest law firms.

In making this start, I also developed a method that is easy to replicate. Ohio happens to have a particularly user-friendly bar directory, but most states have searchable directories online. Graduates, bar licensees, and other populations are easy to track through those directories, employer websites, LinkedIn, and other sources. If you’d like to study a different set of lawyers, feel free to contact me. I’d be happy to share all of my tips, including the best ways to track graduates who change their names. (OK, I’ll offer that one without even requiring an email. If the state bar directory doesn’t allow searching by first and middle names, type the lawyer’s name plus the word “wedding” into Google. You’ll most likely obtain a wedding announcement, gift registry site, or other leads.)
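
For anyone who wants to script part of that tracking, here is a minimal sketch of how the searches might be assembled. The Google URL pattern is standard; the name is, of course, a made-up example, and each state’s bar directory would need its own handling.

```python
from urllib.parse import quote_plus

def google_search_url(*terms):
    """Build a Google search URL for the given terms."""
    return "https://www.google.com/search?q=" + quote_plus(" ".join(terms))

def tracking_searches(full_name):
    """Searches useful for locating a licensed graduate online."""
    return {
        "employment": google_search_url(full_name, "attorney", "LinkedIn"),
        # The "wedding" trick described above, for graduates who change names.
        "name change": google_search_url(full_name, "wedding"),
    }

for label, url in tracking_searches("Jane Q. Example").items():
    print(f"{label}: {url}")
```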


Note to Law Schools: Show Your Work on JD Advantage Jobs

April 23rd, 2015

In a column in this week’s New York Law Journal, Jill Backer, assistant dean of career and professional development at Pace Law School, says it’s artificial to distinguish between jobs that require a law license and jobs where the JD confers an advantage. Backer contends that doing so through the ABA’s standardized employment reporting regime reinforces a perception that JD Advantage jobs are “less than” the Bar Passage Required jobs.

As it turns out, there’s strong evidence that many of these jobs are indeed “less than”: JD Advantage jobs pay substantially less on average and leave graduates looking for new jobs shortly after starting them. Backer does not address that evidence.

Additional information showing otherwise would be helpful, but law schools do not provide it. Instead, schools hope we’ll take their word that JD Advantage jobs are not only desirable, but worth pursuing through a JD rather than a shorter, less painfully expensive degree.

I will be the first to admit that there are many great JD Advantage jobs. For instance, my job as executive director of Law School Transparency would count as JD Advantage because my JD provides a “demonstrable advantage in . . . performing the job,” even if my job “does not itself require bar passage or an active law license or involve practicing law.” The same applies to the editors of Above the Law, the founders of Hire an Esquire, and federal agents.

The problem is that the JD Advantage category is so broad that it loses meaning. Schools infuse the term with meaning through the occasional, sexy anecdote, just like the ones in my previous paragraph. Look no further than Backer’s column lede to see how these anecdotes are operationalized. Backer frames readers’ understanding of JD Advantage jobs by pointing out that the President of the United States holds a job for which the JD is an advantage.

But the definition does not even require the employer to care about the JD—the education merely needs to be helpful. A colorable argument can be made that a legal education helps with just about any job a law graduate would consider. How many jobs would you take that don’t require some measure of critical thinking or understanding of our legal system?

The category is so flimsy that paralegals and graduates in administrative positions at law firms count as JD Advantage. For example, at Backer’s own school, 14 class of 2013 graduates (or 13% of all graduates in firm jobs) were paralegals or administrators. Of the 14, five were classified as “professional” jobs and nine as “JD Advantage.” Nobody pays $45,000 per year in law school tuition to become a paralegal. But nearly a quarter of Pace’s graduates in JD Advantage jobs were paralegals or administrators at law firms.

According to NALP, 41% of all class of 2013 graduates in JD Advantage jobs were still seeking another job nine months after graduation. Graduates in Bar Passage Required jobs were one-third as likely to indicate the same. According to data from law school graduates, JD Advantage jobs are not nearly as desirable as Backer would have readers (and prospective students) believe. Further, NALP reports that the average JD Advantage salary is 25% less than the average salary for graduates in bar-required jobs.

There are certainly people who attend law school with other aims, and they may find desirable work outside of the practice of law. (Note that there’s good reason to believe that non-legal employers are after people with legal experience, rather than a legal education.) Though I don’t speak for others, LST does not include non-legal jobs in the LST Employment Score for straightforward reasons. For people interested specifically in a non-legal career, including these jobs in the LST Employment Score would not make the score more meaningful. Such a mixed score would be determined primarily by legal job placements. A mixed legal/non-legal score does not really tell prospective students about alternative job placement.

For people interested in only a legal career, the addition of non-legal jobs greatly depreciates the value of the score by including a number of jobs they are not interested in. The only group that would be well served by a mixed score is a group who would be okay with pretty much any job upon graduation. While there are third-year students and recent graduates scrambling for any job they can obtain, few people have such an attitude before entering law school.

If a school prides and sells itself on its ability to produce graduates primed for JD Advantage jobs, it ought to find another way to prove its graduates are different from the 41% of graduates in JD Advantage jobs who were looking for a different job just a few months after starting. I’d like to think schools in this category would want to do this. Regardless, the onus is on law schools to prove that their JD Advantage outcomes are desirable and worth pursuing a JD to obtain.

Northwestern University School of Law, for example, makes a persuasive attempt to do just that. On a page titled “JD Advantage Employment,” Northwestern actively distinguishes itself from other schools through data and context. At my request for this column, the school’s dean supplemented that information. Northwestern graduates from 2013 and 2014 are substantially less likely than the national average to be seeking another job while holding one: an estimated 10% of JD Advantage job holders and 3% of Bar Passage Required job holders.

Law schools are in a position where they need to become more attractive to prospective students, especially the highest-achieving ones. A school could legitimately position itself as a force for the new economy. Northwestern does this as well as anyone. If other schools want to show how they’re different, they need to do more than throw together a new program to sell to applicants and alumni or claim that their JD is the best path to these new-economy jobs. Doing so requires more than bold claims and fact-free editorials. To schools like Pace hoping to carve out a new niche: show us your work in a meaningful way. The applicant market is listening.


ExamSoft Update

April 21st, 2015

In a series of posts (here, here, and here) I’ve explained why I believe that ExamSoft’s massive computer glitch lowered performance on the July 2014 Multistate Bar Exam (MBE). I’ve also explained how NCBE’s equating and scaling process amplified the damage to produce a 5-point drop in the national bar passage rate.

We now have a final piece of evidence suggesting that something untoward happened on the July 2014 bar exam: The February 2015 MBE did not produce the same type of score drop. This February’s MBE was harder than any version of the test given over the last four decades; it covered seven subjects instead of six. Confronted with that challenge, the February scores declined somewhat from the previous year’s mark. The mean scaled score on the February 2015 MBE was 136.2, 1.8 points lower than the February 2014 mean scaled score of 138.0.

The contested July 2014 MBE, however, produced a drop of 2.8 points compared to the July 2013 test. That drop was roughly 56% larger than the February drop (put differently, the February drop was about 36% smaller than the July one). The July 2014 shift was also larger than any other year-to-year change (positive or negative) recorded during the last ten years. (I treat the February and July exams as separate categories, as NCBE and others do.)
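
The arithmetic behind that comparison is simple enough to show directly; the sketch below just restates the drops reported above.

```python
# Year-over-year changes in mean scaled MBE scores, using the figures above.
feb_2014_mean, feb_2015_mean = 138.0, 136.2
july_drop = 2.8                            # July 2013 -> July 2014, as reported
feb_drop = feb_2014_mean - feb_2015_mean   # 1.8 points

print(f"February drop: {feb_drop:.1f} points")
print(f"July drop:     {july_drop:.1f} points")
print(f"The July drop exceeds the February drop by {july_drop / feb_drop - 1:.0%}")
print(f"(equivalently, the February drop is {1 - feb_drop / july_drop:.0%} smaller)")
```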

The shift in February 2015 scores, on the other hand, is similar in magnitude to five other changes that occurred during the last decade. Scores dropped, but not nearly as much as in July–and that’s despite taking a harder version of the MBE. Why did the July 2014 examinees perform so poorly?

It can’t be a change in the quality of test takers, as NCBE’s president, Erica Moeser, has suggested in a series of communications to law deans and the profession. The February 2015 examinees started law school at about the same time as the July 2014 ones. As others have shown, law student credentials (as measured by LSAT scores) declined only modestly for students who entered law school in 2011.

We’re left with the conclusion that something very unusual happened in July 2014, and it’s not hard to find that unusual event: a software problem that occupied test-takers’ time, aggravated their stress, and interfered with their sleep.

On its own, my comparison of score drops does not show that the ExamSoft crisis caused the fall in July 2014 test performance. The other evidence I have already discussed is more persuasive. I offer this supplemental analysis for two reasons.

First, I want to forestall arguments that February’s performance proves that the July test-takers must have been less qualified than previous examinees. February’s mean scaled score did drop, compared to the previous February, but the drop was considerably less than the sharp July decline. The latter drop remains the largest score change during the last ten years. It clearly is an outlier that requires more explanation. (And this, of course, is without considering the increased difficulty of the February exam.)

Second, when combined with other evidence about the ExamSoft debacle, this comparison adds to the concerns. Why did scores fall so precipitously in July 2014? The answer seems to be ExamSoft, and we owe that answer to test-takers who failed the July 2014 bar exam.

One final note: Although I remain very concerned about both the handling of the ExamSoft problem and the equating of the new MBE to the old one, I am equally concerned about law schools that admit students who will struggle to pass a fairly administered bar exam. NCBE, state bar examiners, and law schools together stand as gatekeepers to the profession and we all owe a duty of fairness to those who seek to join the profession. More about that soon.


Overpromising

April 18th, 2015

Earlier this week, I wrote about the progress that law schools have made in reporting helpful employment statistics. The National Association for Law Placement (NALP), unfortunately, has not made that type of progress. On Wednesday, NALP issued a press release that will confuse most readers; mislead many; and ultimately hurt law schools, prospective students, and the profession. It’s the muddled, the false, and the damaging.

The Muddled

Much of the press release discusses the status of $160,000 salaries for new lawyers. This discussion vacillates between good news (for the minority of graduates who might get these salaries) and bad news. On the one hand, the $160,000 starting salary still exists. On the other hand, the rate hasn’t increased since 2007, producing a decline of 11.7% in real dollars (although NALP doesn’t spell that out).
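
For readers curious where a figure like 11.7% comes from, here is a back-of-the-envelope sketch. The CPI values are rough public approximations that I am supplying for illustration; they are not figures from the NALP release, so the result lands near, rather than exactly on, 11.7%.

```python
# Rough real-dollar calculation for a starting salary frozen at $160,000.
# The CPI-U annual averages below are approximate values supplied for
# illustration only; they are assumptions, not numbers from NALP or this post.
CPI_U = {2007: 207.3, 2014: 236.7}

nominal = 160_000
real_in_2007_dollars = nominal * CPI_U[2007] / CPI_U[2014]
decline = 1 - real_in_2007_dollars / nominal

print(f"$160,000 today buys what ${real_in_2007_dollars:,.0f} bought in 2007")
print(f"Approximate real decline since 2007: {decline:.1%}")
```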

On the bright side, the percentage of large firm offices paying this salary has increased from 27% in 2014 to 39% this year. On the down side, that percentage still doesn’t approach the two-thirds of large-firm offices that paid $160,000 in 2009. It also looks like the percentage of offices offering $160,000 to this fall’s associates (“just over one-third”) will be slightly lower than the current percentage.

None of this discussion tells us very much. This NALP survey focused on law firms, not individuals, and it tabulated results by office rather than firm. The fact that 39% of offices associated with the largest law firms are paying $160,000 doesn’t tell us how many individuals are earning that salary (let alone what percentage of law school graduates are doing so). And, since NALP has changed its definition of the largest firms since 2009, it’s hard to know what to make of comparisons with previous years.

In the end, all we know is that some new lawyers are earning $160,000–a fact that has been true since 2007. We also know that this salary must be very, very important because NALP repeats the figure (“$160,000”) thirty-two times in a single press release.

The False

In a bolded heading, NALP tells us that its “Data Represent Broad-Based Reporting.” This is so far off the mark that it’s not even “misleading.” It’s downright false. As the press release notes, only 5% of the firms responding to the survey employed 50 lawyers or fewer. (The accompanying table suggests that the true percentage was just 3.5%, but I won’t quibble over that.)

That’s a laughable representation of small law firms, and NALP knows it. Last year, NALP reported that 57.5% of graduates who took jobs with law firms went to firms of 50 lawyers or fewer. Smaller firms tend to hire fewer associates than large ones, and they don’t hire at all in some years. The percentage of “small” firms (those with 50 or fewer lawyers) in the United States undoubtedly is greater than 57.5%–and not anywhere near 5%.
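
A toy calculation shows why the response mix matters so much. The salary levels below are illustrative, borrowed from figures cited later in this post ($50,000 at very small firms, $160,000 at the largest); the point is how the weighting, not the exact numbers, moves the “overall” figure.

```python
# How a survey's response mix can skew an "overall" salary figure.
# Salary levels are illustrative (they echo figures cited elsewhere in this
# post); the firm-size shares are simplified assumptions for the illustration.
small_firm_salary, large_firm_salary = 50_000, 160_000

def blended_average(share_small):
    """Average starting salary when `share_small` of the weight is small firms."""
    return share_small * small_firm_salary + (1 - share_small) * large_firm_salary

survey_mix = blended_average(0.05)     # roughly the survey's response pool
graduate_mix = blended_average(0.575)  # roughly where graduates actually work

print(f"Figure implied by the survey's response mix:  ${survey_mix:,.0f}")
print(f"Figure weighted by where graduates take jobs: ${graduate_mix:,.0f}")
```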

NALP’s false statements go beyond a single heading. The press release specifically assures readers that “The report thus sheds light on the breadth of salary differentials among law firms of varying sizes and in a wide range of geographic areas nationwide, from the largest metropolitan areas to much smaller cities.” I don’t know how anyone can make that claim with a straight face, given the lack of response from law firms that make up the majority of firms nationwide.

This would be simply absurd, except NALP also tells readers that “the overall national median first-year salary at firms of all sizes was $135,000,” and that the median for the smallest firms (those with 50 or fewer lawyers) was $121,500. There is some fuzzy language about the median moving up during the last year because of “relatively fewer responses from smaller firms,” but that refers simply to the incremental change. Last year’s survey was almost as distorted as this year’s, with just 9.8% of responses coming from firms with 50 or fewer lawyers.

More worrisome, there’s no caveat at all attached to the representation that the median starting salary in the smallest law firms is $121,500. If you think that the 16 responding firms in this category magically represented salaries of all firms with 50 or fewer lawyers, see below. Presentation of the data in this press release as “broad-based” and “shed[ding] light on the breadth of salary differentials” is just breathtakingly false.

The Damaging

NALP’s false statements damage almost everyone related to the legal profession. The media have reported some of the figures from the press release, and the public response is withering. Clients assume that firms must be bilking them; otherwise, how could so many law firms pay new lawyers so much? Remember that this survey claims a median starting salary of $121,500 even at the smallest firms. Would you approach a law firm to draft your will or handle your divorce if you thought your fees would have to support that type of salary for a brand-new lawyer?

Prospective students will also be hurt if they act on NALP’s misrepresentations. Why shouldn’t they believe an organization called the “National Association for Law Placement,” especially when the organization represents its data as “broad-based”?

Ironically, though, law schools may suffer the most. What happens when prospective students compare NALP’s pumped-up figures with the ones on most of our websites? Nationwide, the median salary for 2013 graduates working in firms of 2-10 lawyers was just $50,000. So far, reports about the Class of 2014 look comparable. (As I’ve explained before, the medians that NALP reports for small firms are probably overstated. But let’s go with the reported median for now.)

When prospective students look at most law school websites, they’re going to see that $50,000 median (or one close to it) for small firms. They’re also going to see that a lot of our graduates work in those small firms of 2-10 lawyers. Nationwide, 8,087 members of the Class of 2013 took a job with one of those firms. That’s twice as many small firm jobs as ones at firms employing 500+ lawyers (which hired 3,980 members of the Class of 2013).

How do we explain the fact that so many of our graduates work at small firms, when NALP claims that these firms represent such a small percentage of practice? And how do we explain that our graduates average only $50,000 in these small-firm jobs, while NALP reports a median of $121,500? And then how do we explain the small number of our graduates who earn this widely discussed salary of $160,000?

With figures like $160,000 and $121,500 dancing in their heads, prospective students will conclude that most law schools are losers. By “most” I mean the 90% of us who fall outside the top twenty schools. Why would a student attend a school that offers outcomes so inferior to ones reported by NALP?

Even if these prospective students have read scholarly analyses showing the historic value of a law degree, they’re going to worry about getting stuck with a lemon school. And compared to the “broad-based” salaries reported by NALP, most of us look pretty sour.

Law schools need to do two things. First, we need to stop NALP from making false statements–or even just badly skewed ones. Each of our institutions pays almost $1,000 per year for this type of reporting. We shouldn’t support an organization that engages in such deceptive statements.

Second, we really do need to stop talking about BigLaw and $160,000 salaries. If Michael Simkovic and Frank McIntyre are correct about the lifetime value of a law degree, then we should be able to illustrate that value with real careers and real salaries. What do prosecutors earn compared to other government workers, both entry-level and after 20 years of experience? How much of a premium do businesses pay for a compliance officer with a JD? We should be able to generate answers to those questions. If the answers are positive, and we can place students in the appropriate jobs, we’ll have no trouble recruiting applicants.

If the answers are negative, we need to know that as well. We need to figure out the value of our degree, for our students. Let’s get real. Stop NALP from disseminating falsehoods, stop talking about $16*,*** salaries, and start talking about outcomes we can deliver.


Equating, Scaling, and Civil Procedure

April 16th, 2015

Still wondering about the February bar results? I continue that discussion here. As explained in my previous post, NCBE premiered its new Multistate Bar Exam (MBE) in February. That exam covers seven subjects, rather than the six tested on the MBE for more than four decades. Given the type of knowledge tested by the MBE, there is little doubt that the new exam is harder than the old one.

If you have any doubt about that fact, try this experiment: Tell any group of third-year students that the bar examiners have decided to offer them a choice. They may study for and take a version of the MBE covering the original six subjects, or they may choose a version that covers those subjects plus Civil Procedure. Which version do they choose?

After the students have eagerly indicated their preference for the six-subject test, you will have to apologize profusely to them. The examiners are not giving them a choice; they must take the harder seven-subject test.

But can you at least reassure the students that NCBE will account for this increased difficulty when it scales scores? After all, NCBE uses a process of equating and scaling scores that is designed to produce scores with a constant meaning over time. A scaled score of 136 in 2015 is supposed to represent the same level of achievement as a scaled score of 136 in 2012. Is that still true, despite the increased difficulty of the test?

Unfortunately, no. Equating works only for two versions of the same exam. As the word “equating” suggests, the process assumes that the exam drafters attempted to test the same knowledge on both versions of the exam. Equating can account for inadvertent fluctuations in difficulty that arise from constructing new questions that test the same knowledge. It cannot, however, account for changes in the content or scope of an exam.

This distinction is widely recognized in the testing literature–I cite numerous sources at the end of this post. It appears, however, that NCBE has attempted to “equate” the scores of the new MBE (with seven subjects) to older versions of the exam (with just six subjects). This treated the February 2015 examinees unfairly, leading to lower scores and pass rates.

To understand the problem, let’s first review the process of equating and scaling.

Equating

First, remember why NCBE equates exams. To avoid security breaches, NCBE must produce a different version of the MBE every February and July. Testing experts call these different versions “forms” of the test. For each of the MBE forms, the designers attempt to create questions that impose the same range of difficulty. Inevitably, however, some forms are harder than others. It would be unfair for examinees one year to get lower scores than examinees the next year, simply because they took a harder form of the test. Equating addresses this problem.

The process of equating begins with a set of “control” questions or “common items.” These are questions that appear on two forms of the same exam. The February 2015 MBE, for example, included a subset of questions that had also appeared on some earlier exam. For this discussion, let’s assume that there were 30 of these common items and 160 new questions that counted toward each examinee’s score. (Each MBE also includes 10 experimental questions that do not count toward the test-taker’s score but that help NCBE assess items for future use.)

When NCBE receives answer sheets from each version of the MBE, it is able to assess the examinees’ performance on the common items and new items. Let’s suppose that, on average, earlier examinees got 25 of the 30 common items correct. If the February 2015 test-takers averaged only 20 correct answers to those common items, NCBE would know that those test-takers were less able than previous examinees. That information would then help NCBE evaluate the February test-takers’ performance on the new test items. If the February examinees also performed poorly on those items, NCBE could conclude that the low scores were due to the test-takers’ abilities rather than to a particularly hard version of the test.

Conversely, if the February test-takers did very well on the new items–while faring poorly on the common ones–NCBE would conclude that the new items were easier than questions on earlier tests. The February examinees racked up points on those questions, not because they were better prepared than earlier test-takers, but because the questions were too easy.

The actual equating process is more complicated than this. NCBE, for example, can account for the difficulty of individual questions rather than just the overall difficulty of the common and new items. The heart of equating, however, lies in this use of “common items” to compare performance over time.
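
For readers who want a concrete picture, here is a deliberately naive sketch of the common-item logic, using the hypothetical numbers above (30 common items, a drop from 25 to 20 correct). NCBE’s actual procedure relies on item response theory rather than this simple mean comparison, and the new-item percentages below are placeholders, so treat the code purely as an intuition pump.

```python
# A deliberately naive, mean-based sketch of common-item equating.
# NCBE's real procedure uses item response theory; this only captures the core
# idea of separating cohort ability from form difficulty.

def disentangle(ref_common_pct, cur_common_pct, ref_new_pct, cur_new_pct):
    """
    ref_common_pct / cur_common_pct: share of the COMMON items answered correctly
        by the reference (earlier) cohort and by the current cohort.
    ref_new_pct / cur_new_pct: share of the non-common, scored items answered
        correctly on the earlier form and on the current form.
    Returns (ability_shift, form_difficulty_shift) in percentage points.
    """
    # The common items are identical questions, so any gap reflects cohort ability.
    ability_shift = cur_common_pct - ref_common_pct
    # Whatever gap on the new items is NOT explained by ability is attributed
    # to the difficulty of the current form's new questions.
    form_difficulty_shift = (cur_new_pct - ref_new_pct) - ability_shift
    return ability_shift, form_difficulty_shift

# Hypothetical numbers from the discussion above: 30 common items, with the
# earlier cohort averaging 25 correct and the current cohort 20.  The new-item
# percentages are made-up placeholders.
ability, difficulty = disentangle(
    ref_common_pct=25 / 30, cur_common_pct=20 / 30,
    ref_new_pct=0.72, cur_new_pct=0.56,
)
print(f"Ability shift (from common items): {ability:+.1%}")
print(f"Difficulty shift of the new items: {difficulty:+.1%}")
```

With these placeholder numbers, the drop on the new items is almost entirely explained by the weaker cohort, which is the first scenario described above; different inputs would instead point to a harder form.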

Scaling

Once NCBE has compared the most recent batch of exam-takers with earlier examinees, it converts the current raw scores to scaled ones. Think of the scaled scores as a rigid yardstick; these scores have the same meaning over time. 18 inches this year is the same as 18 inches last year. In the same way, a scaled score of 136 has the same meaning this year as last year.

How does NCBE translate raw points to scaled scores? The translation depends upon the results of equating. If a group of test-takers performs well on the common items, but not so well on the new questions, the equating process suggests that the new questions were harder than the ones on previous versions of the test. NCBE will “scale up” the raw scores for this group of exam takers to make them comparable to scores earned on earlier versions of the test.

Conversely, if examinees perform well on new questions but poorly on the common items, the equating process will suggest that the new questions were easier than ones on previous versions of the test. NCBE will then scale down the raw scores for this group of examinees. In the end, the scaled scores will account for small differences in test difficulty across otherwise similar forms.
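
And here is an equally simplified sketch of the scaling step: a linear mapping onto the fixed reference yardstick, applied after the equating adjustment for form difficulty. NCBE’s actual transformation is more sophisticated, and every number below is a placeholder rather than an NCBE value.

```python
# A simplified linear scaling step.  Raw scores are first adjusted by the
# form-difficulty estimate from equating, then mapped onto a fixed reference
# scale.  All numbers are illustrative placeholders, not NCBE values.

def scale_score(raw_score, difficulty_adjustment, ref_raw_mean, ref_raw_sd,
                scale_mean=140.0, scale_sd=15.0):
    """Map a raw MBE score onto the fixed scaled-score yardstick."""
    # A positive adjustment means the form was harder than usual, so scores
    # are scaled up; a negative adjustment scales them down.
    adjusted = raw_score + difficulty_adjustment
    z = (adjusted - ref_raw_mean) / ref_raw_sd
    return scale_mean + scale_sd * z

# The same raw score on a form of typical difficulty and on a harder form.
typical_form = scale_score(120, difficulty_adjustment=0.0, ref_raw_mean=125, ref_raw_sd=16)
harder_form = scale_score(120, difficulty_adjustment=3.0, ref_raw_mean=125, ref_raw_sd=16)

print(f"Scaled score on a typical form: {typical_form:.1f}")
print(f"Scaled score on a harder form:  {harder_form:.1f}")
```

The same raw performance earns a higher scaled score on the harder form, which is exactly the compensation described above.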

Changing the Test

Equating and scaling work well for test forms that are designed to be as similar as possible. The processes break down, however, when test content changes. You can see this by thinking about the data that NCBE had available for equating the February 2015 bar exam. It had a set of common items drawn from earlier tests; these would have covered the six original subjects. It also had answers to the new items that filled out the 190 scored questions; these would have included both the original subjects and the new one (Civil Procedure).

With these data, NCBE could make two comparisons:

1. It could compare performance on the common items. It undoubtedly found that the February 2015 test-takers performed less well than previous test-takers on these items. That’s a predictable result of having a seventh subject to study. This year’s examinees spread their preparation among seven subjects rather than six. Their mastery of each subject was somewhat lower, and they would have performed less well on the common items testing those subjects.

2. NCBE could also compare performance on the new Civil Procedure items with performance on old and new items in other subjects. NCBE won’t release those comparisons, because it no longer discloses raw scores for subject areas. I predict, however, that performance on Civil Procedure items was the same as on Evidence, Property, or other subjects. Why? Because Civil Procedure is not intrinsically harder than these other subjects, and the examinees studied all seven subjects.

Neither of these comparisons, however, would address the key change in the MBE: Examinees had to prepare seven subjects rather than six. As my previous post suggested, this isn’t just a matter of taking all seven subjects in law school and remembering key concepts for the MBE. Because the MBE is a closed-book exam that requires recall of detailed rules, examinees devote 10 weeks of intense study to this exam. They don’t have more than 10 weeks, because they’re occupied with law school classes, extracurricular activities, and part-time jobs before mid-May or mid-December.

There’s only so much material you can cram into memory during ten weeks. If you try to memorize rules from seven subjects, rather than just six, some rules from each subject will fall by the wayside.

When Equating Doesn’t Work

Equating is not possible for a test like the new MBE, which has changed significantly in content and scope. The test places new demands on examinees, and equating cannot account for those demands. The testing literature is clear that, under these circumstances, equating produces misleading results. As Robert L. Brennan, a distinguished testing expert, wrote in a prominent guide: “When substantial changes in test specifications occur, either scores should be reported on a new scale or a clear statement should be provided to alert users that the scores are not directly comparable with those on earlier versions of the test.” (See p. 174 of Linking and Aligning Scores and Scales, cited more fully below.)

“Substantial changes” is one of those phrases that lawyers love to debate. The hypothetical described at the beginning of this post, however, seems like a common-sense way to identify a “substantial change.” If the vast majority of test-takers would prefer one version of a test over a second one, there is a substantial difference between the two.

As Brennan acknowledges in the chapter I quote above, test administrators dislike re-scaling an exam. Re-scaling is both costly and time-consuming. It can also discomfort test-takers and others who use those scores, because they are uncertain how to compare new scores to old ones. But when a test changes, as the MBE did, re-scaling should take the place of equating.

The second best option, as Brennan also notes, is to provide a “clear statement” to “alert users that the scores are not directly comparable with those on earlier versions of the test.” This is what NCBE should do. By claiming that it has equated the February 2015 results to earlier test results, and that the resulting scaled scores represent a uniform level of achievement, NCBE is failing to give test-takers, bar examiners, and the public the information they need to interpret these scores.

The February 2015 MBE was not the same as previous versions of the test, it cannot be properly equated to those tests, and the resulting scaled scores represent a different level of achievement. The lower scaled scores on the February 2015 MBE reflect, at least in part, a harder test. To the extent that the test-takers also differed from previous examinees, it is impossible to separate that variation from the difference in the tests themselves.

Conclusion

Equating was designed to detect small, unintended differences in test difficulty. It is not appropriate for comparing a revised test to previous versions of that test. In my next post on this issue, I will discuss further ramifications of the recent change in the MBE. Meanwhile, here is an annotated list of sources related to equating:

Michael T. Kane & Andrew Mroch, Equating the MBE, The Bar Examiner, Aug. 2005, at 22. This article, published in NCBE’s magazine, offers an overview of equating and scaling for the MBE.

Neil J. Dorans, et al., Linking and Aligning Scores and Scales (2007). This is one of the classic works on equating and scaling. Chapters 7-9 deal specifically with the problem of test changes. Although I’ve linked to the Amazon page, most university libraries should have this book. My library has the book in electronic form so that it can be read online.

Michael J. Kolen & Robert L. Brennan, Test Equating, Scaling, and Linking: Methods and Practices (3d ed. 2014). This is another standard reference work in the field. Once again, my library has a copy online; check for a similar ebook at your institution.

CCSSO, A Practitioner’s Introduction to Equating. This guide was prepared by the Council of Chief State School Officers to help teachers, principals, and superintendents understand the equating of high-stakes exams. It is written for educated lay people, rather than experts, so it offers a good introduction. The source is publicly available at the link.


Old Ways, New Ways

April 14th, 2015

For the last two weeks, Michael Simkovic and I have been discussing the manner in which law schools used to publish employment and salary information. The discussion started here and continued on both that blog and this one. The debate, unfortunately, seems to have confused some readers because of its historical nature. Let’s clear up that confusion: We were discussing practices that, for the most part, ended four or five years ago.

Responding to both external criticism and internal reflection, today’s law schools publish a wealth of data about their employment outcomes; most of that information is both user-friendly and accurate. Here’s a brief tour of what data are available today and what the future might still hold.

ABA Reports

For starters, all schools now post a standard ABA form that tabulates jobs in a variety of categories. The ABA also provides this information on a website that includes a summary sheet for each school and a spreadsheet compiling data from all of the ABA-accredited schools. Data are available for classes going back to 2010; the 2014 data will appear shortly (and are already available on many school sites).

Salary Specifics

The ABA form does not include salary data, and the organization warns schools to “take special care” when reporting salaries because “salary data can so easily be misleading.” Schools seem to take one of two approaches when discussing salary data today.

Some provide almost no information, noting that salaries vary widely. Others post their “NALP Report” or tables drawn directly from that report. What is this report? It’s a collection of data that law schools have been gathering for about forty years, but not disclosing publicly until the last five. The NALP Report for each school summarizes the salary data that the school has gathered from graduates and other sources. You can find examples by googling “NALP Report” along with the name of a law school. NALP reports are available later in the year than ABA ones; you won’t find any 2014 NALP Reports until early summer.

NALP’s data gathering process is far from perfect, as both Professor Simkovic and I have discussed. The report for each school, however, has the virtue of both providing some salary information and displaying the limits of that information. The reports, for example, detail how many salaries were gathered in each employment category. If a law school reports salaries for 19/20 graduates working for large firms, but just 5/30 grads working in very small firms, a reader can make note of that fact. Readers also get a more complete picture of how salaries differ between the public and private sector, as well as within subsets of those groups.
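
A tiny illustration of why those counts matter: the sketch below computes salary coverage by category for a hypothetical school, echoing the 19/20 and 5/30 example above (the government row is invented to round out the table).

```python
# Salary coverage by employment category for a hypothetical school's report.
# The 19/20 and 5/30 rows echo the example above; the government row is invented.
reported = {
    "Largest firms": (19, 20),      # (salaries reported, graduates employed)
    "Very small firms": (5, 30),
    "Government": (12, 15),
}

for category, (salaries, grads) in reported.items():
    print(f"{category:<18} {salaries:>2}/{grads:<2} salaries reported "
          f"({salaries / grads:.0%} coverage)")
```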

Before 2010, no law school shared its NALP Report publicly. Instead, many schools chose a few summary statistics to disclose. A common approach was to publish the median salary for a particular law school class, without further information about the process of obtaining salary information, the percentage of salaries gathered, or the mix of jobs contributing to the median. If more specific information made salaries look better, schools could (and did) provide that information. A school that placed a lot of graduates in judicial clerkships, government jobs, or public interest positions, for example, often would report separate medians for those categories–along with the higher median for the private sector. Schools had a lot of discretion to choose the most pleasing summary statistic, because no one reported more detailed data.

Given the brevity of reported salary data, together with the potential for these summary figures to mislead, the nonprofit organization Law School Transparency (LST) began urging schools to publish their “full” NALP Reports. “Full” did not mean the entire report, which can be quite lengthy and repetitive. Instead, LST defined the portions of the report that prospective students and others would find helpful. Schools seem to agree with LST’s definition, publishing those portions of the report when they choose to disclose the information.

Today, according to LST’s tracking efforts, at least half of law schools publish their NALP Reports. There may be even more schools that do so; although LST invites ongoing communication with law schools, the schools don’t always choose to update their status for the LST site.

Plus More

The ABA’s standardized employment form, together with greater availability of NALP Reports, has greatly changed the information available to potential law students and other interested parties. But the information doesn’t stop with these somewhat dry forms. Many law schools have built upon these reports to convey other useful information about their graduates’ careers. Although I have not made an exhaustive review, the contemporary information I’ve seen seems to comply with our obligation to provide information that is “complete, accurate and not misleading to a reasonable law school student or applicant.”

In addition to these efforts by individual schools, the ABA has created two websites with consumer information about law schools: the employment site noted above and a second site with other data regularly reported to the ABA. NALP has also increased the amount of data it releases publicly without charge. LST, finally, has become a key source for prospective students who want to sort and compare data drawn from all of these sources. LST has also launched a new series of podcasts that complement the data with a more detailed look at the wide range of lawyers’ work.

Looking Forward

There’s still more, of course, that organizations could do to gather and disseminate data about legal careers. I like Professor Simkovic’s suggestion that the Census Bureau expand the Current Population Survey and American Community Survey to include more detailed information about graduate education. These surveys were developed when graduate education was relatively uncommon; now that post-baccalaureate degrees are more common, it seems critical to have more rigorous data about those degrees.

I also hope that some scholars will want to gather data from bar records and other online sources, as I have done. This method has limits, but so do larger initiatives like After the JD. Because of their scale and expense, those large projects are difficult to maintain–and without regular maintenance, much of their utility falls.

Even with projects like these, however, law schools undoubtedly will continue to collect and publish data about their own employment outcomes. Our institutions compete for students, US News rank, and other types of recognition. Competition begets marketing, and marketing can lead to overstatements. The burden will remain on all of us to maintain professional standards of “complete, accurate and not misleading” information, even as we talk with pride about our schools. Our graduates face similar obligations when they compete for clients. Although all of us chafe occasionally at duties, they are also the mark of our status as professionals.


The February 2015 Bar Exam

April 12th, 2015 / By

States have started to release results of the February 2015 bar exam, and Derek Muller has helpfully compiled the reports to date. Muller also uncovered the national mean scaled score for this February’s MBE, which was just 136.2. That’s a notable drop from last February’s mean of 138.0. It’s also lower than all but one of the means reported during the last decade; Muller has a nice graph of the scores.

The latest drop in MBE scores, unfortunately, was completely predictable–and not primarily because of a change in the test takers. I hope that Jerry Organ will provide further analysis of the latter possibility soon. Meanwhile, the expected drop in the February MBE scores can be summed up in five words: seven subjects instead of six. I don’t know how much the test-takers changed in February, but the test itself did.

MBE Subjects

For reasons I’ve explained in a previous post, the MBE is the central component of the bar exam. In addition to contributing a substantial amount to each test-taker’s score, the MBE is used to scale answers to both essay questions and the Multistate Performance Test (MPT). The scaling process amplifies any drop in MBE scores, leading to substantial drops in pass rates.
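For readers who want to see the mechanics, here is a minimal sketch, in Python, of mean-and-standard-deviation scaling, the general approach jurisdictions use to put written scores on the MBE scale. The function and the numbers are hypothetical, and actual procedures involve additional steps, but the sketch shows why a lower MBE mean pulls every scaled essay score down with it.

```python
import statistics

def scale_to_mbe(raw_essay_scores, mbe_scores):
    """Rescale raw essay scores so they share the MBE's mean and standard
    deviation (a simplified sketch of the scaling jurisdictions typically use)."""
    mu_e = statistics.mean(raw_essay_scores)
    sd_e = statistics.pstdev(raw_essay_scores)
    mu_m = statistics.mean(mbe_scores)
    sd_m = statistics.pstdev(mbe_scores)
    return [(r - mu_e) / sd_e * sd_m + mu_m for r in raw_essay_scores]

# Hypothetical raw essay scores for a small group of examinees.
essays = [55, 60, 65, 70, 75]

# The same essays scaled against a stronger and a weaker MBE cohort.
strong_mbe = [130, 136, 140, 144, 150]   # mean 140.0
weak_mbe   = [128, 134, 138, 142, 148]   # mean 138.0

print(statistics.mean(scale_to_mbe(essays, strong_mbe)))  # 140.0
print(statistics.mean(scale_to_mbe(essays, weak_mbe)))    # 138.0
# Identical essay performance earns lower scaled scores when the MBE mean
# falls, so a drop on the MBE depresses both components of the total score.
```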

In February 2015, the MBE changed. For more than four decades, the test had covered six subjects: Contracts, Torts, Criminal Law and Procedure, Constitutional Law, Property, and Evidence. Starting with the February 2015 exam, the National Conference of Bar Examiners (NCBE) added a seventh subject: Civil Procedure.

Testing examinees’ knowledge of Civil Procedure is not itself problematic; law students study that subject along with the others tested on the exam. In fact, I suspect more students take a course in Civil Procedure than in Criminal Procedure. The difficulty is that it’s harder to memorize rules drawn from seven subjects than from six. For those who like math, that’s a 16.7% increase in the body of knowledge tested.

Despite occasional claims to the contrary, the MBE requires lots of memorization. It’s not solely a test of memorization; the exam also tests issue spotting, application of law to fact, and other facets of legal reasoning. Test-takers, however, can’t display those reasoning abilities unless they remember the applicable rules: the MBE is a closed-book test.

There is no other context, in school or practice, where we expect lawyers to remember so many legal principles without reference to codes, cases, and other legal materials. Some law school exams are closed-book, but they cover a single subject that students have just spent a semester studying. The “closed book” moments in practice are far rarer than many observers assume. I don’t know any trial lawyers who enter the courtroom without a copy of the rules of evidence and a personalized cribsheet reminding them of common objections and responses.

This critique of the bar exam is well known. I repeat it here only to stress the impact of expanding the MBE’s scope. February’s test-takers answered the same number of multiple-choice questions (190 that counted, plus 10 experimental ones), but they had to remember principles from seven fields of law rather than six.

There’s only so much that the brain can hold in memory–especially when the knowledge is abstract, rather than gained from years of real-client experience. I’ve watched many graduates prepare for the bar over the last decade: they sit in our law library or clinic, poring constantly over flash cards and subject outlines. Since states raised passing scores in the 1990s and early 2000s, examinees have had to memorize many more rules in order to answer enough questions correctly. From my observation, their memory banks were already full to overflowing.

Six to Seven Subjects

What happens, then, when the bar examiners add a seventh subject to an already challenging test? Correct answers will decline, not just in the new subject, but across all subjects. The February 2015 test-takers, I’m sure, studied just as hard as previous examinees. Indeed, they probably studied harder, because they knew that they would have to answer questions drawn from seven bodies of legal knowledge rather than six. But their memories could hold only so much information. Memorized rules of Civil Procedure took the place of some rules of Torts, Contracts, or Property.

Remember that the MBE tests only a fraction of the material that test-takers must learn. It’s not a matter of learning 190 legal principles to answer 190 questions. The universe of testable material is enormous. For Evidence, a subject that I teach, the subject matter outline lists 64 distinct topics. On average, I estimate that each of those topics requires knowledge of three distinct rules to answer questions correctly on the MBE–and that’s my most conservative estimate.

It’s not enough, for example, to know that there’s a hearsay exemption for some prior statements by a witness, and that the exemption allows the fact-finder to use a witness’s out-of-court statements for substantive purposes, rather than merely impeachment. That’s the type of general understanding I would expect a new lawyer to have about Evidence, permitting her to research an issue further if it arose in a case. The MBE, however, requires the test-taker to remember that a grand jury session counts as a “proceeding” for purposes of this exemption (see Q 19). That’s a sub-rule fairly far down the chain. In fact, I confess that I had to check my own book to refresh my recollection.

In any event, if Evidence requires mastering 200 sub-principles at this level of detail, and the same is true of the other five traditional MBE subjects, that’s roughly 1,200 very specific rules to memorize and retain–all while trying to apply them to new fact patterns. Adding a seventh subject upped the ante to 1,400 or more detailed rules. How many things can one test-taker remember without checking a written source? There’s a reason why humanity invented writing, printing, and computers.
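For those who want the arithmetic spelled out, here is the back-of-the-envelope count in a few lines of Python. The topic count comes from the Evidence subject-matter outline; the three-rules-per-topic figure is my own estimate, and the round numbers are deliberately rough.

```python
topics_in_evidence = 64     # distinct topics on the MBE subject-matter outline
rules_per_topic = 3         # my conservative estimate of sub-rules per topic

rules_per_subject = topics_in_evidence * rules_per_topic     # 192, call it ~200

old_exam = 6 * 200          # roughly 1,200 detailed rules across six subjects
new_exam = 7 * 200          # roughly 1,400 once Civil Procedure is added

print(rules_per_subject, old_exam, new_exam)    # 192 1200 1400
print(round((7 - 6) / 6 * 100, 1))              # 16.7 (percent more material)
```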

But They Already Studied Civil Procedure

Even before February, all jurisdictions (to my knowledge) tested Civil Procedure on their essay exams. So wouldn’t examinees have already studied those Civ Pro principles? No, not in the same manner. Detailed, comprehensive memorization is more necessary for the MBE than for traditional essays.

An essay allows room to display issue spotting and legal reasoning, even if you get one of the sub-rules wrong. In the Evidence example given above, an examinee could display considerable knowledge by identifying the issue, noting the relevant hearsay exemption, and explaining the impact of admissibility (substantive use rather than simply impeachment). If the examinee didn’t remember the correct status of grand jury proceedings under this particular rule, she would lose some points. She wouldn’t, however, get the whole question wrong–as she would on a multiple-choice question.

Adding a new subject to the MBE hit test-takers where they were already hurting: the need to memorize a large number of rules and sub-rules. By expanding the universe of rules to be memorized, NCBE made the exam considerably harder.

Looking Ahead

In upcoming posts, I will explain why NCBE’s equating/scaling process couldn’t account for the increased difficulty of this exam. Indeed, equating and scaling may have made the impact worse. I’ll also explore what this means for the ExamSoft discussion and what (if anything) legal educators might do in response. To start the discussion, however, it’s essential to recognize that the exam has become harder.


Clueless About Salary Stats

April 11th, 2015 / By

Students and practitioners sometimes criticize law professors for knowing too little about the real world. Often, those criticisms are overstated. But then a professor like Michael Simkovic says something so clueless that you start to wonder if the critics are right.

Salaries and Response Rates

In a recent post, Simkovic tries to defend a practice that few other legal educators have defended: reporting entry-level salaries gathered through the annual NALP process without disclosing response rates to the salary question. Echoing a previous post, Simkovic claims that this practice was “an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government.”

Simkovic doesn’t seem to understand how law schools and NALP actually collect salary information; the process is nothing like the government surveys he describes. Because of the idiosyncrasies of the NALP process, the response rate takes on particular importance.

Here are the two keys to the NALP process: (1) law schools are allowed–even encouraged–to supplement survey responses with information obtained from third parties; and (2) NALP itself is one of those third parties. Each year NALP publishes an online directory with copious salary information about the largest, best-paying law firms. Smaller firms rarely submit information to NALP, so they are almost entirely absent from the Directory.

As a result, as NALP readily acknowledges, “salaries for most jobs in large firms are reported” by law schools, while “fewer than half the salaries for jobs in small law firms are reported.” That’s “reported” as in “schools have independent information about large-firm salaries.”

For Example

To see an example of how this works in practice, take a look at the most recent (2013) salary report for Seton Hall Law School, where Simkovic teaches. Ten out of the eleven graduates who obtained jobs in firms with 500+ lawyers reported their salaries. But of the 34 graduates who took jobs in the smallest firms (those with 2-10 lawyers), just nine disclosed a salary. In 2010, 2011, and 2012, no graduates in the latter category reported a salary.

If this were a government survey, the results would be puzzling. The graduates working at the large law firms are among those “high-income individuals” that Simkovic tells us “often value privacy and are reluctant to share details about their finances.” Why are they so eager to disclose their salaries, when graduates working at smaller (and lower-paying) firms are not? And why do the graduates at every other law school act the same way? The graduates of Chicago’s Class of 2013 seem to have no sense of privacy: 149 out of 153 graduates working in the private sector happily provided their salaries, most of which were $160,000.

The answer, of course, is the NALP Directory. Law schools don’t need large-firm associates to report their salaries; the schools already know those figures. The current Directory offers salary information for almost 800 offices associated with firms of 200+ lawyers. In contrast, the Directory includes information about just 14 law firms employing 25 or fewer attorneys. That’s 14 nationwide–not 14 in New Jersey.

For the latter salaries, law schools must rely upon graduate reports, which seem difficult to elicit. When grads do report these salaries, they are much lower than the BigLaw ones. At Seton Hall, the nine reported small-firm salaries yielded a mean of just $51,183.
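To see why those missing responses matter, here is a minimal sketch of the arithmetic in Python. It uses the Seton Hall counts above, but the $160,000 figure for the large-firm jobs is hypothetical (a typical BigLaw salary, not one reported for Seton Hall), and I simply assume the non-responding small-firm graduates earned about what the respondents did.

```python
def mean(values):
    return sum(values) / len(values)

# Salaries a school can document: Directory-style BigLaw figures (hypothetical
# here) plus the minority of small-firm graduates who answered the survey.
big_firm_known = [160_000] * 10      # 10 of 11 large-firm salaries "reported"
small_firm_known = [51_183] * 9      # 9 of 34 small-firm salaries reported

reported = big_firm_known + small_firm_known
print(round(mean(reported)))         # 108455: the mean a school could publish

# If the 25 missing small-firm salaries resemble the reported ones, the mean
# across these 44 graduates sits much closer to the small-firm figure.
all_small_firm = [51_183] * 34
print(round(mean(big_firm_known + all_small_firm)))   # 75914
```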

What Was the Problem?

I’m able to give detailed data in the above example because Seton Hall reports all of that information. It does so, moreover, for years going back to 2010. Other schools have not always been so candid. In the old days, some law schools merged the large-firm salaries provided by NALP with a handful of small-firm salaries collected directly from graduates. The school would then report a median or mean “private practice salary” without further information.

Was this “an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government”? Clearly not–unless the government keeps a list of salaries from high-paying employers that it uses to supplement survey responses. That would be a nifty way to inflate wage reports, but no political party seems to have thought of this just yet.

Law schools, in other words, were not just publishing salary information without disclosing response rates. They were disclosing information that they knew was biased: they had supplemented the survey information with data drawn from the largest firms. The organization supervising the data collection process acknowledged that the salary statistics were badly skewed; so did any dean I talked with during that period.

The criticism of law schools for “failing to report response rates” became a polite shorthand for describing the way in which law schools produced misleading salary averages. Perhaps the critics should have been less polite. We reasoned, however, that if law schools at least reported the “response” rates (which, of course, included “responses” drawn from the NALP data), graduates would see that reported salaries clustered in the largest firms. The information would also allow other organizations, like Law School Transparency, to explain the process further to applicants.

This approach gave law schools the greatest leeway to continue reporting salary data and, frankly, to package it in ways that may still overstate outcomes. But let’s not pretend that law schools have been operating social science surveys with an unbiased method of data collection. That wasn’t true in the past, and it’s not true now.


Law School Statistics

April 8th, 2015 / By

Earlier this week, I noted that even smart academics are misled by the manner in which law schools traditionally reported employment statistics. Steven Solomon, a very smart professor at Berkeley’s law school, was misled by the “nesting” of statistics on NALP’s employment report for another law school.

Now Michael Simkovic, another smart law professor, has proved the point again. Simkovic rather indignantly complains that Kyle McEntee “suggests incorrectly that The New York Times reported Georgetown’s median private sector salary without providing information on what percentage of the class or of those employed were working in the private sector.” But it is Simkovic who is incorrect–and, once again, it seems to be because he was misled by the manner in which law schools report some of their employment and salary data.

Response Rates

What did McEntee say that got Simkovic so upset? McEntee said that a NY Times column (the one authored by Solomon) gave a median salary for Georgetown’s private sector graduates without telling readers “the response rate.” And that’s absolutely right. The contested figures are here on page two. You’ll see that 362 of Georgetown’s 2013 graduates took jobs in the private sector. That constituted 60.3% of the employed graduates. You’ll also see a median salary of $160,000. All of that is what Solomon noted in his Times column (except that he confused the percentage of employed graduates with the percentage of the graduating class).

The fact that Solomon omitted, and that McEntee properly highlighted, is the response rate: the share of those graduates who actually reported a salary. The relevant numbers appear clearly on the Georgetown report, in the same line as the other information: 362 graduates obtained these private sector jobs, but only 293 of them disclosed salaries for those jobs. Salary information was thus unavailable for about one-fifth of the graduates holding these positions.

Why does this matter? If you’ve paid any attention to the employment of law school graduates, the answer is obvious. NALP acknowledged years ago that reported salaries suffer from response bias. To see an illustration of this, take a look at the same Georgetown report we’ve been examining. On page 4, you’ll see that salaries were known for 207 of the 211 graduates (98.1%) working in the largest law firms. For graduates working in the smallest category of firms, just 7 out of 27 salaries (25.9%) were available. For public interest jobs that required bar admission, just 15 out of 88 salaries (17.0%) were known.

Simkovic may think it’s okay for Solomon to discuss medians in his Times column without disclosing the response rate. I disagree–and I think a Times reporter would as well. Respected newspapers are more careful about things like response rates. But whether or not you agree with Solomon’s writing style, McEntee is clearly right that he omitted the response rate for the data he discussed.

So Simkovic, like Solomon, seems to be confused by the manner in which law schools report information on NALP forms. 60% of the employed graduates held private sector jobs, but that’s not the response rate for salaries. And there’s a pretty strong consensus that the salary responses on the NALP questionnaire are biased–even NALP thinks so.

Misleading By Omission

The ABA’s standard employment report has brought more clarity to reporting entry-level employment outcomes. Solomon and Simkovic were not confused by data appearing on that form, but by statistics contained in NALP’s more outmoded form. Once again, their errors confirm the problems in old reporting practices.

More worrisome than this confusion, Solomon and Simkovic both adopt a strategy that many law schools followed before the ABA intervened: they omit information that a reader (or potential student) would find important. The most mind-boggling fact about Georgetown’s 2013 employment statistics is that the school itself hired 83 of its graduates–12.9% of the class. For 80 of those graduates, Georgetown provided a full year of full-time employment.

Isn’t that something you would want to know in evaluating whether “[a]t the top law schools, things are returning to the years before the financial crisis”? That’s the lead-in to Solomon’s upbeat description of Georgetown’s employment statistics–the description that then neglects to mention how many of the graduates’ jobs were funded by their own law school.

I’m showing my age here, but back in the twentieth century, T14 schools didn’t fund jobs for one out of every eight graduates. Nor was that type of funding common in those hallowed years more immediately preceding the financial crisis.

I’ll readily acknowledge that Georgetown funds more graduate jobs than most other law schools, but the practice exists at many top schools. It’s Solomon who chose Georgetown as his example. Why, then, are he and Simkovic so silent about these school-funded jobs?

Final Thoughts

I ordinarily wouldn’t devote an entire post to a law professor’s errors in reading an employment table. We all make too many errors for that to be newsworthy. But Simkovic is adamant that law schools have never misled anyone with their employment statistics–and here we have two examples of smart, knowledgeable people misled by those very statistics.

Speaking of which, Simkovic defends Solomon’s error by suggesting that he “simply rounded up” from 56% to 60% because four percent is a “small enough difference.” Rounded up? Ask any law school dean whether a four-point difference in an employment rate matters. Or check back in some recent NALP reports. The percentage of law school graduates obtaining nine-month jobs in law firms fell from 50.9% in 2010 to 45.9% in 2011. Maybe we could have avoided this whole law school crisis thing if we’d just “rounded up” the 2011 number to 50%.


