Timing Law School

March 10th, 2015

Michael Simkovic and Frank McIntyre have a new paper analyzing historic income data for law school graduates. In this article, a supplement to their earlier paper on the lifetime value of a law degree, Simkovic and McIntyre conclude that graduates reap most of the value of a JD whether they graduate in good economic times or poor ones. (Simkovic, by the way, just won an ALI Young Scholar Medal. Congratulations, Mike!)

Simkovic and McIntyre’s latest analyses, they hope, will reassure graduates who earned their degrees in recent years. If history repeats, then these JDs will reap as much financial benefit over their lifetimes as those in previous generations. Simkovic and McIntyre also warn prospective students against trying to “time” law school. It’s difficult to predict business cycles several years in advance, which is when a 0L must decide whether to take the plunge. And, again according to historical data, timing won’t make much difference. Under most circumstances, delay will cost more financially than any reward that successful timing could confer.

But Is This Time Different?

History does repeat, at least in the sense of economic conditions that cycle from good to bad and back again. There’s no doubt that recent law school graduates have suffered poor job outcomes partly because of the Great Recession and slow recovery. It’s good to know that graduates may be able to recover financially from the business-cycle component of their post-graduation woes. Even here, though, Simkovic and McIntyre acknowledge that past results cannot guarantee future performance: the Great Recession may produce aftershocks that differ from those of earlier recessions.

All of this, though, edges around the elephant in the room: Have shifts occurred in the legal profession that will make legal work less remunerative or less satisfying for law graduates? Or have changes occurred that will make remunerative, satisfying work available to a smaller percentage of law graduates?

Simkovic and McIntyre have limited data on those questions. Their primary dataset does not yet include anyone who earned a JD after 2008. A supplemental analysis seems to encompass some post-2008 degree holders, but the results are limited. Simkovic and McIntyre remain confident that any structural change will help, rather than hurt, law graduates–but their evidence speaks to that issue only in historical terms at best. What is actually happening in the workplace today?

The Class of 2010

Five years ago, the Class of 2010 was sitting in our classrooms, anticipating graduation, dreading the bar exam, and worrying about finding a job. Did they find jobs? What kind of work are they doing?

I decided to find out by tracking employment results for more than 1,200 graduates from that year. I’ll be releasing that paper later this week, but here’s a preview: the class’s employment pattern has not improved much from where it stood nine months after graduation. The results are strikingly poor compared to those of the Class of 2000 (the cohort followed by the massive After the JD study). The decline in law firm employment is particularly marked: just 40% of the group I followed works in a law firm of any size, compared to 62.3% for the Class of 2000 at a similar point in their careers.

A change of that magnitude, in the primary sector (law firms) that hires new law graduates, smacks of structural change. I’m not talking just about BigLaw; these changes pervaded the employment landscape. Stay tuned.


Small Samples

August 1st, 2013

I haven’t been surprised by the extensive discussion of the recent paper by Michael Simkovic and Frank McIntyre. The paper deserves attention from many readers. I have been surprised, however, by the number of scholars who endorse the paper–and even scorn skeptics–while acknowledging that they don’t understand the methods underlying Simkovic and McIntyre’s results. An empirical paper is only as good as its method; it’s essential for scholars to engage with that method.

I’ll discuss one methodological issue here: the small sample sizes underlying some of Simkovic and McIntyre’s results. Those sample sizes undercut the strength of some claims that Simkovic and McIntyre make in the current draft of the paper.

What Is the Sample in Simkovic & McIntyre?

Simkovic and McIntyre draw their data from the Survey of Income and Program Participation, a very large survey of U.S. households. The authors, however, don’t use all of the data in the survey; they focus on (a) college graduates whose highest degree is the BA, and (b) JD graduates. SIPP provides a large sample of the former group: Each of the four panels yielded information on 6,238 to 9,359 college graduates, for a total of 31,556 BAs in the sample. (I obtained these numbers, as well as the ones for JD graduates, from Frank McIntyre. He and Mike Simkovic have been very gracious in answering my questions.)

The sample of JD graduates, however, is much smaller. Those totals range from 282 to 409 for the four panels, yielding a total of 1,342 law school graduates. That’s still a substantial sample size, but Simkovic and McIntyre need to examine subsets of the sample to support their analyses. To chart changes in the financial premium generated by a law degree, for example, they need to examine reported incomes for each of the sixteen years in the sample. Those small groupings generate the uncertainty I discuss here.

Confidence Intervals

Statisticians deal with the uncertainty created by small sample sizes by reporting confidence intervals. The confidence interval, sometimes referred to as a “margin of error,” does two things. First, it reminds us that numbers plucked from samples are just estimates; they are not precise reflections of the underlying population. If we collect income data from 1,342 law school graduates, as SIPP did, we can then calculate the means, medians, and other statistics about those incomes. The median income for the 1,342 JDs in the Simkovic & McIntyre study, for example, was $82,400 in 2012 dollars. That doesn’t mean that the median income for all JDs was exactly $82,400; the sample offers an estimate.

Second, the confidence interval gives us a range in which the true number (the one for the underlying population) is likely to fall. The confidence interval for JD income, for example, might be plus-or-minus $5,000. If that were the confidence interval for the median given above, then we could be relatively sure that the true median lay somewhere between $77,400 and $87,400. ($5,000 is a ballpark estimate of the confidence interval, used here for illustrative purposes; it is not the precise interval.)
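
To make the idea concrete, here is a minimal sketch of how one might attach a confidence interval to a sample median. The income figures are synthetic stand-ins, not SIPP data, and the bootstrap is just one common way to estimate such an interval; it is not necessarily the method Simkovic and McIntyre used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for survey incomes -- NOT the SIPP data.
# A right-skewed distribution loosely centered near the reported median.
incomes = rng.lognormal(mean=np.log(82_400), sigma=0.6, size=1_342)

point_estimate = np.median(incomes)

# Bootstrap: resample the sample many times and look at the spread
# of the median across resamples.
boot_medians = np.array([
    np.median(rng.choice(incomes, size=incomes.size, replace=True))
    for _ in range(2_000)
])
lo, hi = np.percentile(boot_medians, [2.5, 97.5])

print(f"sample median ≈ ${point_estimate:,.0f}")
print(f"95% bootstrap CI ≈ (${lo:,.0f}, ${hi:,.0f})")
```

The point estimate is what gets quoted; the interval is the reminder that a different sample of 1,342 graduates would have produced a somewhat different number.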

Small samples generate large confidence intervals, while larger samples produce smaller ones. That makes intuitive sense: the larger our sample, the more precisely it will reflect patterns in the underlying population. We have to exercise particular caution when interpreting small samples, because they are more likely to offer a distorted view of the population we’re trying to understand. Confidence intervals make sure we exercise that caution.

Our brains, unfortunately, are not wired for confidence intervals. When someone reports the estimate from a sample, we tend to focus on that particular reported number–while ignoring the confidence interval. Considering the confidence interval, however, is essential. If a political poll reports that Dewey is leading Truman, 51% to 49%, with a 3% margin of error, then the race is too close to call. Based on this poll, actual support for Dewey could be as low as 48% (3 points lower than the reported value) or as high as 54% (3 points higher than the reported value). Dewey might win decisively, the result might be a squeaker, or Truman might win.
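
The poll example can be reproduced with a few lines of arithmetic. This sketch uses the standard normal approximation for a proportion; the 51%/49% split and the roughly 3-point margin come from the hypothetical Dewey-Truman illustration above, and the poll size of 1,000 is an assumption chosen to produce a margin of about that size.

```python
import math

p_hat = 0.51      # reported support for Dewey
n = 1_000         # assumed poll size (hypothetical)

# 95% margin of error under the normal approximation for a proportion.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"margin of error ≈ {margin:.1%}")   # roughly 3 points for n ≈ 1,000

low, high = p_hat - margin, p_hat + margin
print(f"plausible support for Dewey: {low:.0%} to {high:.0%}")
# Because the interval straddles 50%, the poll alone can't call the race.
```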

Is the Earnings Premium Cyclical?

Now let’s look at Figure 5 in the Simkovic and McIntyre paper. This figure shows the earnings premium for a JD compared to a BA over a range of 16 years. The shape of the solid line is somewhat cyclical, leading to the Simkovic/McIntyre suggestion that “[t]he law degree earnings premium is cyclical,” together with their observation that recent changes in income levels are due to “ordinary cyclicality.” (pp. 49, 32)

But what lies behind that somewhat cyclical solid line in Figure 5? The line ties together sixteen points, each of which represents the estimated premium for a single year. Each point draws upon the incomes of a few hundred graduates, a relatively small group. Those small sample sizes produce relatively large confidence intervals around each estimate. Simkovic & McIntyre show those confidence intervals with dotted lines above and below the solid line. The estimated premium for 1996, for example, is about .54, but the confidence interval stretches from about .42 to about .66. We can be quite confident that JD graduates, on average, enjoyed a financial premium over BAs in 1996, but we’re much less certain about the size of the premium. The coefficient for this premium could be as low as .42 or as high as .66.
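
As a rough illustration (and assuming the dotted lines mark a symmetric 95% interval, which may or may not match the paper’s construction), the reported range implies a standard error in the neighborhood of 0.06 for that year’s estimate:

```python
# Back out an approximate standard error from a reported 95% interval.
# The 1996 figures below are read off Figure 5; treating the interval as
# symmetric and normal-based is an assumption made for illustration.
estimate = 0.54
ci_low, ci_high = 0.42, 0.66

se = (ci_high - ci_low) / (2 * 1.96)
print(f"implied standard error ≈ {se:.3f}")

# A year-to-year swing smaller than about two standard errors
# (~0.12 here) is hard to distinguish from sampling noise.
```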

So what? As long as the premiums were positive, how much do we care about their size? Remember that Simkovic and McIntyre suggest that the earnings premium is cyclical. They rely on that cyclicality, in turn, to suggest that any recent downturns in earnings are part of an ordinary cycle.

The results reported in Figure 5, however, cannot confirm cyclicality. The specific estimates look cyclical, but the confidence intervals urge caution. Figure 5 shows those intervals as lines that parallel the estimated values, but the confidence intervals belong to each point–not to the line as a whole. The real premium for each year most likely falls somewhere within the confidence interval for each year, but we can’t say where.

Simkovic and McIntyre could supplement their analysis by testing the relationship among these estimates; it’s possible that, statistically, they could reject the hypothesis that the earnings premium was stable. They might even be able to establish cyclicality with more certainty. We can’t reach those conclusions from Figure 5 and the currently reported analyses, however; the confidence intervals are too wide to support firm conclusions. All of the internet discussion of the cyclicality of the earnings premium has been premature.
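
One way to run the kind of supplemental test described above is a chi-square heterogeneity test: ask whether the sixteen yearly estimates scatter around their precision-weighted mean more than sampling error alone would explain. The sketch below uses made-up estimates and standard errors purely to show the mechanics; it is not a reanalysis of the paper’s data, and other tests (for cyclicality specifically) would also be possible.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical yearly premium estimates and standard errors -- placeholders,
# not the values underlying Figure 5.
estimates = np.array([0.54, 0.58, 0.61, 0.57, 0.52, 0.60, 0.63, 0.59,
                      0.55, 0.57, 0.62, 0.58, 0.56, 0.60, 0.57, 0.54])
std_errs = np.full(16, 0.06)

# Precision-weighted mean under the null of a single constant premium.
weights = 1 / std_errs**2
pooled = np.sum(weights * estimates) / np.sum(weights)

# Cochran's Q: excess scatter of the estimates around the pooled value.
Q = np.sum(weights * (estimates - pooled)**2)
p_value = chi2.sf(Q, df=len(estimates) - 1)

print(f"pooled estimate = {pooled:.3f}, Q = {Q:.1f}, p = {p_value:.2f}")
# A large p-value means the yearly wiggles are consistent with a stable
# premium plus noise; a small one would support real year-to-year movement.
```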

Recent Graduates

Similar problems affect Simkovic and McIntyre’s statements about recent graduates. In Figure 6, they depict the earnings premium for law school graduates aged 25-29 in four different time periods. The gray bars show the estimated premium for each time period, with the vertical lines indicating the confidence interval. Notice how wide those confidence intervals are: the interval for 1996-1999 stretches from about 0.04 to about 0.54. The other periods show similarly wide intervals.

Those large confidence intervals reflect very small sample sizes. The 1996 panel offered income information on just sixteen JD graduates aged 25-29; the 2001 panel included twenty-five of those graduates; the 2004 panel, seventeen; and the 2008 panel, twenty-six. With such small samples, we have very little confidence (in both the everyday and statistical senses) that the premium estimates are correct.
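
The effect of those tiny subsamples on precision follows directly from the way standard errors scale with sample size (roughly 1/sqrt(n)). A quick sketch of that scaling, using an assumed spread of log earnings chosen only for illustration, shows why an estimate based on sixteen graduates is so much noisier than one based on the full 1,342:

```python
import math

# Assumed standard deviation of log earnings within a group -- a placeholder
# chosen only to illustrate how the standard error shrinks with sample size.
sigma = 0.7

for n in (16, 25, 100, 1_342):
    se = sigma / math.sqrt(n)
    print(f"n = {n:>5}: standard error ≈ {se:.3f}, "
          f"95% CI half-width ≈ {1.96 * se:.3f}")

# With n = 16 the half-width is about 0.34 -- wide enough to swallow
# most of the differences one might want to detect.
```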

It seems likely that the premium was positive throughout this period–although the very small sample sizes and possible bimodality of incomes could undermine even that conclusion. We can’t, however, say much more than that. If we take confidence intervals into account, the premium might have declined steadily throughout this period, from about 0.54 in the earliest period to 0.33 in the most recent one. Or it might have risen, from a very modest 0.05 in the first period to a robust 0.80 more recently. Again, we just don’t know.

It would be useful for Simkovic and McIntyre to acknowledge the small number of recent law school graduates in their sample; that would help ground readers in the data. When writing a paper like this, especially for an interdisciplinary audience, it’s difficult to anticipate what kind of information the audience may need. I’m surprised that so many legal scholars enthusiastically endorsed these results without noting the large confidence intervals.

Onward

There has been much talk during the last two weeks about Kardashians, charlatans, and even the Mafia. I’m not sure any legal academic leads quite that exciting a life; I know I don’t. As a professor who has taught Law and Social Science, I think the critics of the Simkovic/McIntyre paper raised many good questions. Empirical analyses need testing, and it is especially important to examine the assumptions that lie behind a quantitative study.

The questions weren’t all good. Nor, I’m afraid, were all of the questions I’ve heard about other papers over the years. That’s the nature of academic debate and refining hypotheses: sometimes we have to ask questions just to figure out what we don’t know.

Endorsements of the paper, similarly, spanned a spectrum. Some were thoughtful; others seemed reflexive. I was disappointed at how few of the paper’s supporters engaged fully with the paper’s method, asking questions like the ones I have raised about sample size and confidence intervals.

I hope to write a bit more on the Simkovic and McIntyre paper; there are more questions to raise about their conclusions. I may also try to offer some summaries of other research that has been done on the career paths of law school graduates and lawyers. We don’t have nearly enough research in the field, but there are some other studies worth knowing.


Financial Returns to Legal Education

July 21st, 2013

I was busy with several projects this week, so I didn’t have a chance to comment on the new paper by Michael Simkovic and Frank McIntyre. With the luxury of weekend time, I have some praise, some caveats, and some criticism for the paper.

First, in the praise category, this is a useful contribution to both the literature and the policy debates surrounding the value of a law degree. Simkovic and McIntyre are not the first to analyze the financial rewards of law school–or to examine other aspects of the market for law-related services–but their paper adds to this growing body of work.

Second, Simkovic and McIntyre have done all of us a great service by drawing attention to the Survey of Income and Program Participation. This is a rich dataset that can inform many explorations, including other studies related to legal education. The survey, for example, includes questions about grants, loans, and other assistance used to finance higher education. (See pp. 307-08 of this outline.) I hope to find time to work with this dataset, and I hope others will as well.

Now I move to some caveats and criticisms.

Sixteen Years Is Not the Long Term

Simkovic and McIntyre frequently refer to their results as representing “long-term” outcomes or “historic norms.” A central claim of the study, for example, is that the earnings premium from a law degree “is stable over the long term, with short term cyclical fluctuations.” (See slide 26 of the PowerPoint overview.) These representations, however, rest on a “term” of just sixteen years, from 1996 to 2011. Sixteen years is less than half the span of a typical law graduate’s career; it is too short a period to embody long-term trends.

This is a different caveat from the one that Simkovic and McIntyre express, that we can’t know whether contemporary changes in the legal market will disrupt the trends they’ve identified. We can’t, in other words, know that the period from 2012-2027 will look like the one from 1996-2011. Equally important, however, the study doesn’t tell us anything about the years before 1996. Did the period from 1980-1995 look like the one from 1996-2011? What about the period from 1964-1979? Or 1948-1963?

The SIPP data can’t tell us about those periods. The survey began during the 1980s, but the instrument changed substantially in 1996. Nor do other surveys, to my knowledge, give us the type of information we need to perform those historical analyses. Simkovic and McIntyre didn’t overlook relevant data, but they claim too much from the data they do have.

Note that SIPP does contain data about law graduates of all ages. This is one of the strengths of the database, and of the Simkovic/McIntyre analysis. This study shows us the earnings of law graduates who have been practicing for decades, not just those of recent graduates. That analysis, however, occurs entirely within the sixteen-year window of 1996-2011. Putting aside other flaws or caveats for now, Simkovic and McIntyre are able to describe the earnings premium for law graduates of all ages during that sixteen-year window. They can say, as they do, that the premium has fluctuated within a particular band over that period. That statement, however, is very different from saying that the premium has been stable over the “long term” or that this period sets “historic norms.” To measure the long term, we’d want to know about a longer period of time.

This matters, because saying something has been “stable over the long term” sounds very reassuring. Sixteen years, however, is less than half the span of a typical law graduate’s career. It’s less, even, than the time that many graduates will devote to repaying their law school loans. The widely touted Pay As You Earn program extends payments over twenty years, while other plans structure payments over twenty-five years. Simkovic and McIntyre’s references to the “long term” suggest a stability that their sixteen years of data can’t support.

What would a graph of truly long-term trends show? We can’t know for sure without better data. The data might show the same pattern that Simkovic and McIntyre found for recent years. On the other hand, historic data might reveal periods when the economic premium from a law degree was small or declining. A study of long-term trends might also identify times when the JD premium was rising or higher than the one identified by Simkovic and McIntyre. A lot has changed in higher education, legal education, and the legal profession over the last 25, 50, or 100 years. That past may or may not inform the future, but it’s important to recognize that Simkovic and McIntyre tell us only about the recent past–a period that most recognize as particularly prosperous for lawyers–not about the long term.

Structural Shifts

Simkovic and McIntyre discount predictions that the legal market is undergoing a structural shift that will change lawyer earnings, the JD earnings premium, or other aspects of the labor market. Their skepticism does not stem from examination of particular workplace trends; instead it rests largely on the data they compiled. This is where Simkovic and McIntyre’s claim of stability “over the long term” becomes most dangerous.

On pp. 36-37, for example, Simkovic and McIntyre list a number of technological changes that have affected law practice, from “introduction of the typewriter” to “computerized and modular legal research through Lexis and Westlaw; word processing; electronic citation software; electronic document storage and filing systems; automated document comparison; electronic document search; email; photocopying; desktop publishing; standardized legal forms; will-making and tax-preparing software.” They then conclude (on p. 37) that “[t]hrough it all, the law degree has continued to offer a large earnings premium.”

That’s clearly hyperbole: We have no idea, based on the Simkovic and McIntyre analysis, how most of these technological changes affected the value of a law degree. Today’s JD, based on a three-year curriculum, didn’t exist when the typewriter was introduced. Lexis, Westlaw, and word processing have been around since the 1970s; photocopying dates back further than that. A study of earnings between 1996 and 2011 can’t tell us much about how those innovations affected the earnings of law graduates.

It is true (again, assuming for now no other flaws in the analysis) that legal education delivered an earnings premium during the period 1996-2011, which occurred after all of these technologies had entered the workforce. Neither typewriters nor word processors destroyed the earnings that law graduates, on average, enjoyed during those sixteen years. That is different, however, from saying that these technologies had no structural effect on lawyers’ earnings.

The Tale of the Typewriter

The lowly typewriter, in fact, may have contributed to a major structural shift in the legal market: the creation of three-year law schools and formal schooling requirements for bar admission. Simkovic and McIntyre (at fn 84) quote a 1901 statement that sounds like a melodramatic indictment of the typewriter’s impact on law practice. Francis Miles Finch, the Dean of Cornell Law School and President of the New York State Bar Association, told the bar association in 1901 that “current conditions are widely and radically different from those existing fifty years ago . . . the student in the law office copies nothing and sees nothing. The stenographer and the typewriter have monopolized what was his work . . . and he sits outside of the business tide.”

Finch, however, was not wringing his hands over new technology or the imminent demise of the legal profession; he was pointing out that law office apprentices no longer had the opportunity to absorb legal principles by copying the pleadings, briefs, letters, and other work of practicing lawyers. Finch used this change in office practices to support his argument for new licensing requirements: He proposed that every lawyer should finish four years of high school, as well as three years of law school or four years of apprenticeship, before qualifying to take the bar. These were novel requirements at the turn of the last century, although a movement was building in that direction. After Finch’s speech, the NY bar association unanimously endorsed his proposal.

Did the typewriter single-handedly lead to the creation of three-year law schools and academic prerequisites for the bar examination? Of course not. But the changing conditions of apprentice work, which grew partly from changes in technology, contributed to that shift. This structural shift, in turn, almost certainly affected the earnings of aspiring lawyers.

Some would-be lawyers, especially those of limited economic means, may not have been able to delay paid employment long enough to satisfy the requirements. Those aspirants wouldn’t have become lawyers, losing whatever financial advantage the profession might have conferred. Those who complied with the new requirements, meanwhile, lost several years of earning potential. If they attended law school, they also transferred some of their future earnings to the school by paying tuition. In these ways, the requirements reduced earnings for potential lawyers.

On the other hand, by raising barriers to entry, the requirements may have increased earnings for those already in the profession–as well as for those who succeeded in joining. Finch explicitly noted in his speech that “the profession is becoming overcrowded” and it would be a “benefit” if the educational requirements reduced the number of lawyers. (P. 102.)

The structural change, in other words, probably created winners and losers. It may also have widened the gap between those two groups. It is difficult, more than a century later, to trace the full financial effects of the educational requirements that our profession adopted during the first third of the twentieth century. I would not, however, be as quick as Simkovic and McIntyre to dismiss structural changes or their complex economic impacts.

Summary

I’ve outlined here both my praise for Simkovic and McIntyre’s article and my first two criticisms. The article adds to a much-needed literature on the economics of legal education and the legal profession; it also highlights a particularly rich dataset for other scholars to explore. On the other hand, the article claims too much by referring to long-term trends and historic norms; it examines labor market returns for law school graduates during a relatively short (and perhaps distinctive) recent period of sixteen years. The article also dismisses too quickly the impact of structural shifts. That is not really Simkovic and McIntyre’s focus, as they concede. Their data, however, do not provide the type of long-term record that would refute the possibility of structural shifts.

My next post related to this article will pick up where I left off, with winners and losers. My policy concerns with legal education and the legal profession focus primarily on the distribution of earnings, rather than on the profession’s potential to remain profitable overall. Why did law school tuition climb aggressively from 1996 through 2011, if the earnings premium was stable during that period? Why, in other words, do law schools reap a greater share of the premium today than they did in earlier decades?

Which students, meanwhile, don’t attend law school at all, forgoing any share in law school’s possible premium? For those who do attend, how is that premium distributed? Are those patterns shifting? I’ll explore these questions of winners and losers, including what we can learn about the issues from Simkovic and McIntyre, in a future post.

