Old Ways, New Ways

April 14th, 2015

For the last two weeks, Michael Simkovic and I have been discussing the manner in which law schools used to publish employment and salary information. The discussion started here and continued on both that blog and this one. The debate, unfortunately, seems to have confused some readers because of its historical nature. Let’s clear up that confusion: We were discussing practices that, for the most part, ended four or five years ago.

Responding to both external criticism and internal reflection, today’s law schools publish a wealth of data about their employment outcomes; most of that information is both user-friendly and accurate. Here’s a brief tour of what data are available today and what the future might still hold.

ABA Reports

For starters, all schools now post a standard ABA form that tabulates jobs in a variety of categories. The ABA also provides this information on a website that includes a summary sheet for each school and a spreadsheet compiling data from all of the ABA-accredited schools. Data are available for classes going back to 2010; the 2014 data will appear shortly (and are already available on many school sites).

Salary Specifics

The ABA form does not include salary data, and the organization warns schools to “take special care” when reporting salaries because “salary data can so easily be misleading.” Schools seem to take one of two approaches when discussing salary data today.

Some provide almost no information, noting that salaries vary widely. Others post their “NALP Report” or tables drawn directly from that report. What is this report? It’s a collection of data that law schools have been gathering for about forty years, but not disclosing publicly until the last five. The NALP Report for each school summarizes the salary data that the school has gathered from graduates and other sources. You can find examples by googling “NALP Report” along with the name of a law school. NALP reports are available later in the year than ABA ones; you won’t find any 2014 NALP Reports until early summer.

NALP’s data gathering process is far from perfect, as both Professor Simkovic and I have discussed. The report for each school, however, has the virtue of both providing some salary information and displaying the limits of that information. The reports, for example, detail how many salaries were gathered in each employment category. If a law school reports salaries for 19/20 graduates working for large firms, but just 5/30 grads working in very small firms, a reader can make note of that fact. Readers also get a more complete picture of how salaries differ between the public and private sector, as well as within subsets of those groups.
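To see how lopsided response rates distort a blended average, here is a quick sketch in Python. The 19/20 and 5/30 response counts are borrowed from the example above, but the salaries are invented for illustration; the arithmetic is the point. The mean of the *reported* salaries overstates what the full group earned.

```python
# Hypothetical school: 19 of 20 large-firm salaries are known (say $160,000
# each), but only 5 of 30 small-firm salaries are known (say $50,000 each).
large_firm_reported = [160_000] * 19
small_firm_reported = [50_000] * 5

reported = large_firm_reported + small_firm_reported
mean_reported = sum(reported) / len(reported)

# Suppose the missing salaries resembled the reported ones in each category.
# The mean across all 50 graduates would then be far lower.
all_salaries = [160_000] * 20 + [50_000] * 30
mean_true = sum(all_salaries) / len(all_salaries)

print(f"mean of reported salaries: ${mean_reported:,.0f}")  # about $137,000
print(f"mean if all were known:    ${mean_true:,.0f}")      # $94,000
```

Disclosing the response counts (19/20 and 5/30) is precisely what lets a reader see this gap coming.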

Before 2010, no law school shared its NALP Report publicly. Instead, many schools chose a few summary statistics to disclose. A common approach was to publish the median salary for a particular law school class, without further information about the process of obtaining salary information, the percentage of salaries gathered, or the mix of jobs contributing to the median. If more specific information made salaries look better, schools could (and did) provide that information. A school that placed a lot of graduates in judicial clerkships, government jobs, or public interest positions, for example, often would report separate medians for those categories–along with the higher median for the private sector. Schools had a lot of discretion to choose the most pleasing summary statistic, because no one reported more detailed data.

Given the brevity of reported salary data, together with the potential for these summary figures to mislead, the nonprofit organization Law School Transparency (LST) began urging schools to publish their “full” NALP Reports. “Full” did not mean the entire report, which can be quite lengthy and repetitive. Instead, LST defined the portions of the report that prospective students and others would find helpful. Schools seem to agree with LST’s definition, publishing those portions of the report when they choose to disclose the information.

Today, according to LST’s tracking efforts, at least half of law schools publish their NALP Reports. There may be even more schools that do so; although LST invites ongoing communication with law schools, the schools don’t always choose to update their status for the LST site.

Plus More

The ABA’s standardized employment form, together with greater availability of NALP Reports, has greatly changed the information available to potential law students and other interested parties. But the information doesn’t stop with these somewhat dry forms. Many law schools have built upon these reports to convey other useful information about their graduates’ careers. Although I have not made an exhaustive review, the contemporary information I’ve seen seems to comply with our obligation to provide information that is “complete, accurate and not misleading to a reasonable law school student or applicant.”

In addition to these efforts by individual schools, the ABA has created two websites with consumer information about law schools: the employment site noted above and a second site with other data regularly reported to the ABA. NALP has also increased the amount of data it releases publicly without charge. LST, finally, has become a key source for prospective students who want to sort and compare data drawn from all of these sources. LST has also launched a new series of podcasts that complement the data with a more detailed look at the wide range of lawyers’ work.

Looking Forward

There’s still more, of course, that organizations could do to gather and disseminate data about legal careers. I like Professor Simkovic’s suggestion that the Census Bureau expand the Current Population Survey and American Community Survey to include more detailed information about graduate education. These surveys were developed when graduate education was relatively uncommon; now that post-baccalaureate degrees are more common, it seems critical to have more rigorous data about those degrees.

I also hope that some scholars will want to gather data from bar records and other online sources, as I have done. This method has limits, but so do larger initiatives like After the JD. Because of their scale and expense, those large projects are difficult to maintain–and without regular maintenance, much of their utility is lost.

Even with projects like these, however, law schools undoubtedly will continue to collect and publish data about their own employment outcomes. Our institutions compete for students, US News rank, and other types of recognition. Competition begets marketing, and marketing can lead to overstatements. The burden will remain on all of us to maintain professional standards of “complete, accurate and not misleading” information, even as we talk with pride about our schools. Our graduates face similar obligations when they compete for clients. Although all of us chafe occasionally at duties, they are also the mark of our status as professionals.


Clueless About Salary Stats

April 11th, 2015

Students and practitioners sometimes criticize law professors for knowing too little about the real world. Often, those criticisms are overstated. But then a professor like Michael Simkovic says something so clueless that you start to wonder if the critics are right.

Salaries and Response Rates

In a recent post, Simkovic tries to defend a practice that few other legal educators have defended: reporting entry-level salaries gathered through the annual NALP process without disclosing response rates to the salary question. Echoing a previous post, Simkovic claims that this practice was “an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government.”

Simkovic doesn’t seem to understand how law schools and NALP actually collect salary information; the process is nothing like the government surveys he describes. Because of the idiosyncrasies of the NALP process, the response rate has a particular importance.

Here are the two keys to the NALP process: (1) law schools are allowed–even encouraged–to supplement survey responses with information obtained from third parties; and (2) NALP itself is one of those third parties. Each year NALP publishes an online directory with copious salary information about the largest, best-paying law firms. Smaller firms rarely submit information to NALP, so they are almost entirely absent from the Directory.

As a result, as NALP readily acknowledges, “salaries for most jobs in large firms are reported” by law schools, while “fewer than half the salaries for jobs in small law firms are reported.” That’s “reported” as in “schools have independent information about large-firm salaries.”

For Example

To see an example of how this works in practice, take a look at the most recent (2013) salary report for Seton Hall Law School, where Simkovic teaches. Ten out of the eleven graduates who obtained jobs in firms with 500+ lawyers reported their salaries. But of the 34 graduates who took jobs in the smallest firms (those with 2-10 lawyers), just nine disclosed a salary. In 2010, 2011, and 2012, no graduates in the latter category reported a salary.

If this were a government survey, the results would be puzzling. The graduates working at the large law firms are among those “high-income individuals” that Simkovic tells us “often value privacy and are reluctant to share details about their finances.” Why are they so eager to disclose their salaries, when graduates working at smaller (and lower-paying) firms are not? And why do the graduates at every other law school act the same way? The graduates of Chicago’s Class of 2013 seem to have no sense of privacy: 149 out of 153 graduates working in the private sector happily provided their salaries, most of which were $160,000.

The answer, of course, is the NALP Directory. Law schools don’t need large-firm associates to report their salaries; the schools already know those figures. The current Directory offers salary information for almost 800 offices associated with firms of 200+ lawyers. In contrast, the Directory includes information about just 14 law firms employing 25 or fewer attorneys. That’s 14 nationwide–not 14 in New Jersey.

For the latter salaries, law schools must rely upon graduate reports, which seem difficult to elicit. When grads do report these salaries, they are much lower than the BigLaw ones. At Seton Hall, the nine graduates who reported small-firm salaries yielded a mean of just $51,183.

What Was the Problem?

I’m able to give detailed data in the above example because Seton Hall reports all of that information. It does so, moreover, for years going back to 2010. Other schools have not always been so candid. In the old days, some law schools merged the large-firm salaries provided by NALP with a handful of small-firm salaries collected directly from graduates. The school would then report a median or mean “private practice salary” without further information.

Was this “an uncontroversial and nearly universal data reporting practice, regularly used by the United States Government”? Clearly not–unless the government keeps a list of salaries from high-paying employers that it uses to supplement survey responses. That would be a nifty way to inflate wage reports, but no political party seems to have thought of this just yet.

Law schools, in other words, were not just publishing salary information without disclosing response rates. They were disclosing information that they knew was biased: they had supplemented the survey information with data drawn from the largest firms. The organization supervising the data collection process acknowledged that the salary statistics were badly skewed; so did any dean I talked with during that period.

The criticism of law schools for “failing to report response rates” became a polite shorthand for describing the way in which law schools produced misleading salary averages. Perhaps the critics should have been less polite. We reasoned, however, that if law schools at least reported the “response” rates (which, of course, included “responses” provided by the NALP data), graduates would see that reported salaries clustered in the largest firms. The information would also allow other organizations, like Law School Transparency, to explain the process further to applicants.

This approach gave law schools the greatest leeway to continue reporting salary data and, frankly, to package it in ways that may still overstate outcomes. But let’s not pretend that law schools have been operating social science surveys with an unbiased method of data collection. That wasn’t true in the past, and it’s not true now.


Law School Statistics

April 8th, 2015

Earlier this week, I noted that even smart academics are misled by the manner in which law schools traditionally reported employment statistics. Steven Solomon, a very smart professor at Berkeley’s law school, was misled by the “nesting” of statistics on NALP’s employment report for another law school.

Now Michael Simkovic, another smart law professor, has proved the point again. Simkovic rather indignantly complains that Kyle McEntee “suggests incorrectly that The New York Times reported Georgetown’s median private sector salary without providing information on what percentage of the class or of those employed were working in the private sector.” But it is Simkovic who is incorrect–and, once again, it seems to be because he was misled by the manner in which law schools report some of their employment and salary data.

Response Rates

What did McEntee say that got Simkovic so upset? McEntee said that a NY Times column (the one authored by Solomon) gave a median salary for Georgetown’s private sector graduates without telling readers “the response rate.” And that’s absolutely right. The contested figures are here on page two. You’ll see that 362 of Georgetown’s 2013 graduates took jobs in the private sector. That constituted 60.3% of the employed graduates. You’ll also see a median salary of $160,000. All of that is what Solomon noted in his Times column (except that he confused the percentage of employed graduates with the percentage of the graduating class).

The fact that Solomon omitted, and that McEntee properly highlighted, is the response rate: the number of graduates who actually reported those salaries. That number appears clearly on the Georgetown report, in the same line as the other information: 362 graduates obtained these private sector jobs, but only 293 of them disclosed salaries for those jobs. Salary information was unavailable for about one-fifth of the graduates holding these positions.

Why does this matter? If you’ve paid any attention to the employment of law school graduates, the answer is obvious. NALP acknowledged years ago that reported salaries suffer from response bias. To see an illustration of this, take a look at the same Georgetown report we’ve been examining. On page 4, you’ll see that salaries were known for 207 of the 211 graduates (98.1%) working in the largest law firms. For graduates working in the smallest category of firms, just 7 out of 27 salaries (25.9%) were available. For public interest jobs that required bar admission, just 15 out of 88 salaries (17.0%) were known.
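The category-by-category gap is easy to compute directly from the counts quoted in this post; here is a short Python sketch using only those figures:

```python
# Salaries known / graduates in category, from the Georgetown NALP report
# figures discussed in this post.
categories = {
    "private sector overall": (293, 362),
    "largest law firms": (207, 211),
    "smallest firms": (7, 27),
    "public interest (bar required)": (15, 88),
}
rates = {name: known / total for name, (known, total) in categories.items()}
for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")
```

The spread, from 98.1% down to 17.0%, is the response bias in a nutshell: the best-paying category is almost fully reported, the worst-paying categories barely at all.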

Simkovic may think it’s ok for Solomon to discuss medians in his Times column without disclosing the response rate. I disagree–and I think a Times reporter would as well. Respected newspapers are more careful about things like response rates. But whether or not you agree with Solomon’s writing style, McEntee is clearly right that he omitted the response rate on the data he discussed.

So Simkovic, like Solomon, seems to be confused by the manner in which law schools report information on NALP forms. 60% of the employed graduates held private sector jobs, but that’s not the response rate for salaries. And there’s a pretty strong consensus that the salary responses on the NALP questionnaire are biased–even NALP thinks so.

Misleading By Omission

The ABA’s standard employment report has brought more clarity to reporting entry-level employment outcomes. Solomon and Simkovic were not confused by data appearing on that form, but by statistics contained in NALP’s more outmoded form. Once again, their errors confirm the problems in old reporting practices.

More worrisome than this confusion, Solomon and Simkovic both adopt a strategy that many law schools followed before the ABA intervened: they omit information that a reader (or potential student) would find important. The most mind-boggling fact about Georgetown’s 2013 employment statistics is that the school itself hired 83 of its graduates–12.9% of the class. For 80 of those graduates, Georgetown provided a full year of full-time employment.

Isn’t that something you would want to know in evaluating whether “[a]t the top law schools, things are returning to the years before the financial crisis”? That’s the lead-in to Solomon’s upbeat description of Georgetown’s employment statistics–the description that then neglects to mention how many of the graduates’ jobs were funded by their own law school.

I’m showing my age here, but back in the twentieth century, T14 schools didn’t fund jobs for one out of every eight graduates. Nor was that type of funding common in those hallowed years more immediately preceding the financial crisis.

I’ll readily acknowledge that Georgetown funds more graduate jobs than most other law schools, but the practice exists at many top schools. It’s Solomon who chose Georgetown as his example. Why are he and Simkovic then so silent about these school-funded jobs?

Final Thoughts

I ordinarily wouldn’t devote an entire post to a law professor’s errors in reading an employment table. We all make too many errors for that to be newsworthy. But Simkovic insists that law schools have never misled anyone with their employment statistics–and here we have two examples of smart, knowledgeable people misled by those same statistics.

Speaking of which, Simkovic defends Solomon’s error by suggesting that he “simply rounded up” from 56% to 60% because four percent is a “small enough difference.” Rounded up? Ask any law school dean whether a four-point difference in an employment rate matters. Or check back in some recent NALP reports. The percentage of law school graduates obtaining nine-month jobs in law firms fell from 50.9% in 2010 to 45.9% in 2011. Maybe we could have avoided this whole law school crisis thing if we’d just “rounded up” the 2011 number to 50%.


Timing Law School

March 10th, 2015

Michael Simkovic and Frank McIntyre have a new paper analyzing historic income data for law school graduates. In this article, a supplement to their earlier paper on the lifetime value of a law degree, Simkovic and McIntyre conclude that graduates reap most of the value of a JD whether they graduate in good economic times or poor ones. (Simkovic, by the way, just won an ALI Young Scholar Medal. Congratulations, Mike!)

Simkovic and McIntyre’s latest analyses, they hope, will reassure graduates who earned their degrees in recent years. If history repeats, then these JDs will reap as much financial benefit over their lifetimes as those in previous generations. Simkovic and McIntyre also warn prospective students against trying to “time” law school. It’s difficult to estimate business cycles several years in advance, when a 0L must decide whether to take the plunge. And, again according to historical data, timing won’t make much difference. Under most circumstances, delay will cost more financially than any reward that successful timing could confer.

But Is This Time Different?

History does repeat, at least in the sense of economic conditions that cycle from good to bad and back again. There’s no doubt that recent law school graduates have suffered poor job outcomes partly because of the Great Recession and slow recovery. It’s good to know that graduates may be able to recover financially from the business-cycle component of their post-graduation woes. Even here, though, Simkovic and McIntyre acknowledge that past results cannot guarantee future performance. The Great Recession may produce aftershocks that differ from earlier recessions.

All of this, though, edges around the elephant in the room: Have shifts occurred in the legal profession that will make that work less remunerative or less satisfying to law graduates? And/or have changes occurred that will make remunerative, satisfying work available to a smaller percentage of law graduates?

Simkovic and McIntyre have limited data on those questions. Their primary dataset does not yet include anyone who earned a JD after 2008. A supplemental analysis seems to encompass some post-2008 degree holders, but the results are limited. Simkovic and McIntyre remain confident that any structural change will help, rather than hurt, law graduates–but their evidence speaks to that issue only in historical terms at best. What is actually happening in the workplace today?

The Class of 2010

Five years ago, the Class of 2010 was sitting in our classrooms, anticipating graduation, dreading the bar exam, and worrying about finding a job. Did they find jobs? What kind of work are they doing?

I decided to find out by tracking employment results for more than 1,200 graduates from that year. I’ll be releasing that paper later this week, but here’s a preview: the class’s employment pattern has not improved much from where it stood nine months after graduation. The results are strikingly low compared to the Class of 2000 (the one followed by the massive After the JD study). The decline in law firm employment is particularly marked: just 40% of the group I followed works in a law firm of any size, compared to 62.3% for the Class of 2000 at a similar point in their careers.

A change of that magnitude, in the primary sector (law firms) that hires new law graduates, smacks of structural change. I’m not talking just about BigLaw; these changes pervaded the employment landscape. Stay tuned.


Small Samples

August 1st, 2013

I haven’t been surprised by the extensive discussion of the recent paper by Michael Simkovic and Frank McIntyre. The paper deserves attention from many readers. I have been surprised, however, by the number of scholars who endorse the paper–and even scorn skeptics–while acknowledging that they don’t understand the methods underlying Simkovic and McIntyre’s results. An empirical paper is only as good as its method; it’s essential for scholars to engage with that method.

I’ll discuss one methodological issue here: the small sample sizes underlying some of Simkovic and McIntyre’s results. Those sample sizes undercut the strength of some claims that Simkovic and McIntyre make in the current draft of the paper.

What Is the Sample in Simkovic & McIntyre?

Simkovic and McIntyre draw their data from the Survey of Income and Program Participation, a very large survey of U.S. households. The authors, however, don’t use all of the data in the survey; they focus on (a) college graduates whose highest degree is the BA, and (b) JD graduates. SIPP provides a large sample of the former group: Each of the four panels yielded information on 6,238 to 9,359 college graduates, for a total of 31,556 BAs in the sample. (I obtained these numbers, as well as the ones for JD graduates, from Frank McIntyre. He and Mike Simkovic have been very gracious in answering my questions.)

The sample of JD graduates, however, is much smaller. Those totals range from 282 to 409 for the four panels, yielding a total of 1,342 law school graduates. That’s still a substantial sample size, but Simkovic and McIntyre need to examine subsets of the sample to support their analyses. To chart changes in the financial premium generated by a law degree, for example, they need to examine reported incomes for each of the sixteen years in the sample. Those small groupings generate the uncertainty I discuss here.

Confidence Intervals

Statisticians deal with small sample sizes by generating confidence intervals. The confidence interval, sometimes referred to as a “margin of error,” does two things. First, it reminds us that numbers plucked from samples are just estimates; they are not precise reflections of the underlying population. If we collect income data from 1,342 law school graduates, as SIPP did, we can then calculate the means, medians, and other statistics about those incomes. The median income for the 1,342 JDs in the Simkovic & McIntyre study, for example, was $82,400 in 2012 dollars. That doesn’t mean that the median income for all JDs was exactly $82,400; the sample offers an estimate.

Second, the confidence interval gives us a range in which the true number (the one for the underlying population) is likely to fall. The confidence interval for JD income, for example, might be plus-or-minus $5,000. If that were the confidence interval for the median given above, then we could be relatively sure that the true median lay somewhere between $77,400 and $87,400. ($5,000 is a ballpark estimate of the confidence interval, used here for illustrative purposes; it is not the precise interval.)

Small samples generate large confidence intervals, while larger samples produce smaller ones. That makes intuitive sense: the larger our sample, the more precisely it will reflect patterns in the underlying population. We have to exercise particular caution when interpreting small samples, because they are more likely to offer a distorted view of the population we’re trying to understand. Confidence intervals make sure we exercise that caution.

Our brains, unfortunately, are not wired for confidence intervals. When someone reports the estimate from a sample, we tend to focus on that particular reported number–while ignoring the confidence interval. Considering the confidence interval, however, is essential. If a political poll reports that Dewey is leading Truman, 51% to 49%, with a 3% margin of error, then the race is too close to call. Based on this poll, actual support for Dewey could be as low as 48% (3 points lower than the reported value) or as high as 54% (3 points higher than the reported value). Dewey might win decisively, the result might be a squeaker, or Truman might win.
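The poll arithmetic is easy to check with the standard normal approximation. In this sketch, the 1,000-respondent sample size is my own assumption about a typical poll, not a figure from the example:

```python
import math

# 95% margin of error for a sample proportion, via the normal approximation.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A poll of about 1,000 respondents yields roughly the 3% margin above.
print(f"n=1,000: ±{margin_of_error(0.51, 1000):.1%}")

# Dewey at 51% with a ±3% margin: the interval [48%, 54%] straddles 50%,
# so the race is too close to call.
too_close = (0.51 - 0.03) < 0.50 < (0.51 + 0.03)
print(f"interval straddles 50%: {too_close}")
```

Note how the margin shrinks with the square root of the sample size: quadrupling the sample only halves the interval.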

Is the Earnings Premium Cyclical?

Now let’s look at Figure 5 in the Simkovic and McIntyre paper. This figure shows the earnings premium for a JD compared to a BA over a range of 16 years. The shape of the solid line is somewhat cyclical, leading to the Simkovic/McIntyre suggestion that “[t]he law degree earnings premium is cyclical,” together with their observation that recent changes in income levels are due to “ordinary cyclicality.” (pp. 49, 32)

But what lies behind that somewhat cyclical solid line in Figure 5? The line ties together sixteen points, each of which represents the estimated premium for a single year. Each point draws upon the incomes of a few hundred graduates, a relatively small group. Those small sample sizes produce relatively large confidence intervals around each estimate. Simkovic & McIntyre show those confidence intervals with dotted lines above and below the solid line. The estimated premium for 1996, for example, is about .54, but the confidence interval stretches from about .42 to about .66. We can be quite confident that JD graduates, on average, enjoyed a financial premium over BAs in 1996, but we’re much less certain about the size of the premium. The coefficient for this premium could be as low as .42 or as high as .66.

So what? As long as the premiums were positive, how much do we care about their size? Remember that Simkovic and McIntyre suggest that the earnings premium is cyclical. They rely on that cyclicality, in turn, to suggest that any recent downturns in earnings are part of an ordinary cycle.

The results reported in Figure 5, however, cannot confirm cyclicality. The specific estimates look cyclical, but the confidence intervals urge caution. Figure 5 shows those intervals as lines that parallel the estimated values, but the confidence intervals belong to each point–not to the line as a whole. The real premium for each year most likely falls somewhere within the confidence interval for each year, but we can’t say where.
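A toy illustration of that last point: only the 1996 interval below comes from the paper’s Figure 5 as quoted above; the other intervals are hypothetical, chosen merely to be similarly wide. A completely flat premium can sit inside every interval even though the point estimates wiggle:

```python
# 95% intervals by year. Only the 1996 figures come from Figure 5;
# the rest are hypothetical, chosen to be similarly wide.
intervals = {
    1996: (0.42, 0.66),  # point estimate about 0.54
    1997: (0.45, 0.69),  # hypothetical
    1998: (0.40, 0.64),  # hypothetical
    1999: (0.48, 0.72),  # hypothetical
}
flat_premium = 0.55
consistent = all(lo <= flat_premium <= hi for lo, hi in intervals.values())
print(f"a flat premium of {flat_premium} fits every interval: {consistent}")
```

If a constant premium fits comfortably inside every year’s interval, the wiggly line of point estimates cannot, by itself, establish cyclicality.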

Simkovic and McIntyre could supplement their analysis by testing the relationship among these estimates; it’s possible that, statistically, they could reject the hypothesis that the earnings premium was stable. They might even be able to establish cyclicality with more certainty. We can’t reach those conclusions from Figure 5 and the currently reported analyses, however; the confidence intervals are too wide for certain interpretation. All of the internet discussion of the cyclicality of the earnings premium has been premature.

Recent Graduates

Similar problems affect Simkovic and McIntyre’s statements about recent graduates. In Figure 6, they depict the earnings premium for law school graduates aged 25-29 in four different time periods. The gray bars show the estimated premium for each time period, with the vertical lines indicating the confidence interval. Notice how wide those confidence intervals are: The interval for 1996-1999 stretches from about 0.04 through about 0.54. The other periods show similarly wide intervals.

Those large confidence intervals reflect very small sample sizes. The 1996 panel offered income information on just sixteen JD graduates aged 25-29; the 2001 panel included twenty-five of those graduates; the 2004 panel, seventeen; and the 2008 panel, twenty-six. With such small samples, we have very little confidence (in both the everyday and statistical senses) that the premium estimates are correct.
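A back-of-the-envelope sketch shows how those panel sizes drive interval width, using the familiar 1.96 · sd / √n half-width for a mean. The standard deviation of 0.6 (on a log-earnings scale) is purely a hypothetical stand-in; only the panel counts come from the data above:

```python
import math

# Panel sizes for JD graduates aged 25-29, as quoted above.
panels = {"1996": 16, "2001": 25, "2004": 17, "2008": 26}
SD = 0.6  # hypothetical residual sd on the log-earnings scale

half_widths = {year: 1.96 * SD / math.sqrt(n) for year, n in panels.items()}
for year, hw in half_widths.items():
    print(f"{year} panel (n={panels[year]:>2}): ±{hw:.2f}")
```

With n in the teens or twenties, the half-width lands around ±0.23 to ±0.30, the same order of magnitude as the intervals shown in Figure 6.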

It seems likely that the premium was positive throughout this period–although the very small sample sizes and possible bimodality of incomes could undermine even that conclusion. We can’t, however, say much more than that. If we take confidence intervals into account, the premium might have declined steadily throughout this period, from about 0.54 in the earliest period to 0.33 in the most recent one. Or it might have risen, from a very modest 0.05 in the first period to a robust 0.80 more recently. Again, we just don’t know.

It would be useful for Simkovic and McIntyre to acknowledge the small number of recent law school graduates in their sample; that would help ground readers in the data. When writing a paper like this, especially for an interdisciplinary audience, it’s difficult to anticipate what kind of information the audience may need. I’m surprised that so many legal scholars enthusiastically endorsed these results without noting the large confidence intervals.


There has been much talk during the last two weeks about Kardashians, charlatans, and even the Mafia. I’m not sure any legal academic leads quite that exciting a life; I know I don’t. As a professor who has taught Law and Social Science, I think the critics of the Simkovic/McIntyre paper raised many good questions. Empirical analyses need testing, and it is especially important to examine the assumptions that lie behind a quantitative study.

The questions weren’t all good. Nor, I’m afraid, were all of the questions I’ve heard about other papers over the years. That’s the nature of academic debate and refining hypotheses: sometimes we have to ask questions just to figure out what we don’t know.

Endorsements of the paper, similarly, spanned a spectrum. Some were thoughtful, others seemed reflexive. I was disappointed at how few of the paper’s supporters engaged fully with the paper’s method, asking questions like the ones I have raised about sample size and confidence intervals.

I hope to write a bit more on the Simkovic and McIntyre paper; there are more questions to raise about their conclusions. I may also try to offer some summaries of other research that has been done on the career paths of law school graduates and lawyers. We don’t have nearly enough research in the field, but there are some other studies worth knowing.


Financial Returns to Legal Education

July 21st, 2013 / By

I was busy with several projects this week, so I didn’t have a chance to comment on the new paper by Michael Simkovic and Frank McIntyre. With the luxury of weekend time, I have some praise, some caveats, and some criticism for the paper.

First, in the praise category, this is a useful contribution to both the literature and the policy debates surrounding the value of a law degree. Simkovic and McIntyre are not the first to analyze the financial rewards of law school–or to examine other aspects of the market for law-related services–but their paper adds to this growing body of work.

Second, Simkovic and McIntyre have done all of us a great service by drawing attention to the Survey of Income and Program Participation. This is a rich dataset that can inform many explorations, including other studies related to legal education. The survey, for example, includes questions about grants, loans, and other assistance used to finance higher education. (See pp. 307-08 of this outline.) I hope to find time to work with this dataset, and I hope others will as well.

Now I move to some caveats and criticisms.

Sixteen Years Is Not the Long Term

Simkovic and McIntyre frequently refer to their results as representing “long-term” outcomes or “historic norms.” A central claim of the study, for example, is that the earnings premium from a law degree “is stable over the long term, with short term cyclical fluctuations.” (See slide 26 of the PowerPoint overview.) These representations, however, rest on a “term” of just sixteen years, from 1996 to 2011. Sixteen years is less than half the span of a typical law graduate’s career; it is too short a period to embody long-term trends.

This is a different caveat from the one that Simkovic and McIntyre express, that we can’t know whether contemporary changes in the legal market will disrupt the trends they’ve identified. We can’t, in other words, know that the period from 2012-2027 will look like the one from 1996-2011. Equally important, however, the study doesn’t tell us anything about the years before 1996. Did the period from 1980-1995 look like the one from 1996-2011? What about the period from 1964-1979? Or 1948-1963?

The SIPP data can’t tell us about those periods. The survey began during the 1980s, but the instrument changed substantially in 1996. Nor do other surveys, to my knowledge, give us the type of information we need to perform those historical analyses. Simkovic and McIntyre didn’t overlook relevant data, but they claim too much from the data they do have.

Note that SIPP does contain data about law graduates of all ages. This is one of the strengths of the database, and of the Simkovic/McIntyre analysis. This study shows us the earnings of law graduates who have been practicing for decades, not just those of recent graduates. That analysis, however, occurs entirely within the sixteen-year window of 1996-2011. Putting aside other flaws or caveats for now, Simkovic and McIntyre are able to describe the earnings premium for law graduates of all ages during that sixteen-year window. They can say, as they do, that the premium has fluctuated within a particular band over that period. That statement, however, is very different from saying that the premium has been stable over the “long term” or that this period sets “historic norms.” To measure the long term, we’d want to know about a longer period of time.

This matters, because saying something has been “stable over the long term” sounds very reassuring. Sixteen years, however, is less than half the span of a typical law graduate’s career. It’s less, even, than the time that many graduates will devote to repaying their law school loans. The widely touted Pay As You Earn program extends payments over twenty years, while other plans structure payments over twenty-five years. Simkovic and McIntyre’s references to the “long term” suggest a stability that their sixteen years of data can’t support.

What would a graph of truly long-term trends show? We can’t know for sure without better data. The data might show the same pattern that Simkovic and McIntyre found for recent years. On the other hand, historic data might reveal periods when the economic premium from a law degree was small or declining. A study of long-term trends might also identify times when the JD premium was rising or higher than the one identified by Simkovic and McIntyre. A lot has changed in higher education, legal education, and the legal profession over the last 25, 50, or 100 years. That past may or may not inform the future, but it’s important to recognize that Simkovic and McIntyre tell us only about the recent past–a period that most recognize as particularly prosperous for lawyers–not about the long term.

Structural Shifts

Simkovic and McIntyre discount predictions that the legal market is undergoing a structural shift that will change lawyer earnings, the JD earnings premium, or other aspects of the labor market. Their skepticism does not stem from examination of particular workplace trends; instead it rests largely on the data they compiled. This is where Simkovic and McIntyre’s claim of stability “over the long term” becomes most dangerous.

On pp. 36-37, for example, Simkovic and McIntyre list a number of technological changes that have affected law practice, from “introduction of the typewriter” to “computerized and modular legal research through Lexis and Westlaw; word processing; electronic citation software; electronic document storage and filing systems; automated document comparison; electronic document search; email; photocopying; desktop publishing; standardized legal forms; will-making and tax-preparing software.” They then conclude (on p. 37) that “[t]hrough it all, the law degree has continued to offer a large earnings premium.”

That’s clearly hyperbole: We have no idea, based on Simkovic and McIntyre’s analysis, how most of these technological changes affected the value of a law degree. Today’s JD, based on a three-year curriculum, didn’t exist when the typewriter was introduced. Lexis, Westlaw, and word processing have been around since the 1970s; photocopying dates back further than that. A study of earnings between 1996 and 2011 can’t tell us much about how those innovations affected the earnings of law graduates.

It is true (again, assuming for now no other flaws in the analysis) that legal education delivered an earnings premium during the period 1996-2011, which occurred after all of these technologies had entered the workforce. Neither typewriters nor word processors destroyed the earnings that law graduates, on average, enjoyed during those sixteen years. That is different, however, from saying that these technologies had no structural effect on lawyers’ earnings.

The Tale of the Typewriter

The lowly typewriter, in fact, may have contributed to a major structural shift in the legal market: the creation of three-year law schools and formal schooling requirements for bar admission. Simkovic and McIntyre (at fn 84) quote a 1901 statement that sounds like a melodramatic indictment of the typewriter’s impact on law practice. Francis Miles Finch, the Dean of Cornell Law School and President of the New York State Bar Association, told the bar association in 1901 that “current conditions are widely and radically different from those existing fifty years ago . . . the student in the law office copies nothing and sees nothing. The stenographer and the typewriter have monopolized what was his work . . . and he sits outside of the business tide.”

Finch, however, was not wringing his hands over new technology or the imminent demise of the legal profession; he was pointing out that law office apprentices no longer had the opportunity to absorb legal principles by copying the pleadings, briefs, letters, and other work of practicing lawyers. Finch used this change in office practices to support his argument for new licensing requirements: He proposed that every lawyer should finish four years of high school, as well as three years of law school or four years of apprenticeship, before qualifying to take the bar. These were novel requirements at the turn of the last century, although a movement was building in that direction. After Finch’s speech, the NY bar association unanimously endorsed his proposal.

Did the typewriter single-handedly lead to the creation of three-year law schools and academic prerequisites for the bar examination? Of course not. But the changing conditions of apprentice work, which grew partly from changes in technology, contributed to that shift. This structural shift, in turn, almost certainly affected the earnings of aspiring lawyers.

Some would-be lawyers, especially those of limited economic means, may not have been able to delay paid employment long enough to satisfy the requirements. Those aspirants wouldn’t have become lawyers, losing whatever financial advantage the profession might have conferred. Those who complied with the new requirements, meanwhile, lost several years of earning potential. If they attended law school, they also transferred some of their future earnings to the school by paying tuition. In these ways, the requirements reduced earnings for potential lawyers.

On the other hand, by raising barriers to entry, the requirements may have increased earnings for those already in the profession–as well as for those who succeeded in joining. Finch explicitly noted in his speech that “the profession is becoming overcrowded” and it would be a “benefit” if the educational requirements reduced the number of lawyers. (P. 102.)

The structural change, in other words, probably created winners and losers. It may also have widened the gap between those two groups. It is difficult, more than a century later, to trace the full financial effects of the educational requirements that our profession adopted during the first third of the twentieth century. I would not, however, be as quick as Simkovic and McIntyre to dismiss structural changes or their complex economic impacts.


I’ve outlined here both my praise for Simkovic and McIntyre’s article and my first two criticisms. The article adds to a much-needed literature on the economics of legal education and the legal profession; it also highlights a particularly rich dataset for other scholars to explore. On the other hand, the article claims too much by referring to long-term trends and historic norms; it examines labor market returns for law school graduates during a relatively short (and perhaps distinctive) recent period of sixteen years. The article also dismisses too quickly the impact of structural shifts. That is not really Simkovic and McIntyre’s focus, as they concede. Their data, however, do not provide the type of long-term record that would refute the possibility of structural shifts.

My next post related to this article will pick up where I left off, with winners and losers. My policy concerns with legal education and the legal profession focus primarily on the distribution of earnings, rather than on the profession’s potential to remain profitable overall. Why did law school tuition climb aggressively from 1996 through 2011, if the earnings premium was stable during that period? Why, in other words, do law schools reap a greater share of the premium today than they did in earlier decades?

Which students, meanwhile, don’t attend law school at all, forgoing any share in law school’s possible premium? For those who do attend, how is that premium distributed? Are those patterns shifting? I’ll explore these questions of winners and losers, including what we can learn about the issues from Simkovic and McIntyre, in a future post.


About Law School Cafe

Cafe Manager & Co-Moderator
Deborah J. Merritt

Cafe Designer & Co-Moderator
Kyle McEntee

ABA Journal Blawg 100 Honoree

Law School Cafe is a resource for anyone interested in changes in legal education and the legal profession.

Have something you think our audience would like to hear about? Interested in writing one or more guest posts? Send an email to the cafe manager at merritt52@gmail.com. We are interested in publishing posts from practitioners, students, faculty, and industry professionals.
