Article

Law Review vs. Peer Review

A Qualified Defense of Student Editors

Law professors have been complaining about law journals since at least 1936, when Yale’s Fred Rodell published Goodbye to Law Reviews in, somewhat ironically, the Virginia Law Review.1 Rodell’s critique was unsparing, although it was directed primarily at academic authors.2 “There are two things wrong with almost all legal writing,” he wrote. “One is its style. The other is its content. That, I think, about covers the ground.”3 Rodell did not have much to say about the student editors of law reviews, other than to lump their own written work—notes and comments, which were unsigned in those days—in with the deadly writing of the lead articles.4 He said nothing about the selection and editing process, perhaps because his article had actually been solicited by the Virginia Law Review, which was obviously open to defying convention by publishing Rodell’s jeremiad in the first place.

In any case, the lamentations have continued to the present day, although now the complaints tend to be much more focused on the role of student editors (law review authors having evidently made their peace with content and style).5 The specifics will be immediately familiar to anyone who has ever worked in the legal academy: the selection criteria are arbitrary, the edits are ill-informed, the “submission window” is absurdly narrow, and the demand for footnotes is obsessive.6 As James Lindgren once put it—though of course writing, like Rodell, in a law review—the student editors “often select articles without knowing the subject, without knowing the scholarly literature, without understanding what the manuscript says, without consulting expert referees, and without doing blind reads. Then they try to rewrite every sentence.”7

The alternative would be peer review, as is practiced in virtually every other academic discipline, in which submitted articles are blind-read by one or more experts in the field, who then make recommendations to the journal editors.8 Expertise is meant to ensure the quality of the eventual publications, and blind-reading is designed to eliminate the sort of “letterhead bias” that privileges prestigious authors over relatively unknown authors.9

I have learned from my recent forays into the social and natural sciences, however, that peer review is attended by deep problems of its own. Most seriously, peer-reviewed journals typically—perhaps even universally—require exclusive submission.10 An author may send a manuscript to only one journal at a time and must then await the journal’s decision; only if the manuscript is rejected may the author submit it elsewhere. That would not be terribly burdensome if academic journals provided relatively quick decisions, but the reality is just the opposite. Turnaround time for peer-reviewed journals is measured in many months and sometimes even years, especially if the initial response is a “revise and resubmit,” meaning that the author must rework the paper with no guarantee of acceptance.11

I. Peer Review and Its Discontents

In one extreme, but ultimately unsurprising, example, the sociologist Philip Cohen described what he called a “saga,” in which 583 days elapsed between the initial submission of his article and its ultimate acceptance (and 776 days until actual publication).12 Along the way, his article was considered by four journals, subjected to thirteen reviews, and revised multiple times, although the end product, according to Cohen, did not differ significantly from the initial manuscript.13

Cohen concluded that the peer review process is broken, and Wisconsin’s Professor Pamela Oliver came to the same conclusion:

The sociology review process is broken. There are too many reviews per paper and the norm that all papers go through a revise and resubmit process burns through reviewers, slows down the publication process, creates distorted papers as authors try to satisfy incompatible demands, and puts too high a premium on novelty over methodological rigor.14

All that delay and frustration might be worthwhile if it resulted in the publication of higher quality work, but the record in that regard is decidedly mixed. As has lately become well known, the field of social psychology is in the midst of a reproducibility crisis: fewer than half of a sample of published peer-reviewed studies held up when retested.15 As explained by the Open Science Collaboration:

We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. . . . [C]ollectively these results offer a clear conclusion: A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes.16

The peer-reviewed articles with which I am most familiar all turned out to have severe methodological errors that were not identified by referees prior to publication.

II. Flawed Social Science

In 2009, the ethnographer Alice Goffman published an article in the flagship American Sociological Review titled On the Run, which later became the basis for her book of the same name.17 The article (and the book, published by the University of Chicago Press in 2014) told the stories of young African American men in Philadelphia whose lives were afflicted by repeated encounters with the police.18 It included the results of a purported survey that Goffman and a colleague conducted “of the 217 households that make up the 6th Street neighborhood”:19

[W]e found 308 men between the ages of 18 and 30 in residence. Of these men, 144 reported that they had a warrant issued for their arrest because of either delinquencies with court fines and fees or for failure to appear for a court date within the past three years. Also within the past three years, warrants had been issued to 119 men for technical violations of their probation or parole (e.g., drinking or breaking curfew).20

Although the survey apparently passed muster with the peer reviewers for both the American Sociological Review and the University of Chicago Press, it turned out to be not only flawed but virtually impossible. As Philip Cohen explained, the claimed response rate appears to have been 100%, which is obviously too high. The number of young men counted—which worked out to 1.4 per household—was also far too high based on census tract information.21 Cohen submitted a formal comment to the American Sociological Review, detailing the impossibility of the reported results and explaining that “errors in the survey need to be described and acknowledged if the results are to be interpretable.”22
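The arithmetic behind Cohen’s objection can be checked directly against the figures reported in the article itself (the calculation below is my own restatement, not a quotation from Cohen):

\[
\frac{308 \text{ men aged 18--30}}{217 \text{ households}} \approx 1.42 \text{ young men per household}
\]

An average of nearly one and a half men in that narrow age band in every household is demographically implausible on its face, which is what the census tract data confirmed.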

The journal editors admitted the errors in the survey23 but nonetheless declined to publish the comment. Said Cohen of his inability to obtain the correction of a peer-reviewed article: “It’s just wrong that now the editors acknowledge there is something wrong in their journal—although we seem to disagree about how serious the problem is—but no one is going to formally notify the future readers of the article.”24

III. Medical Errors

The problems with peer review are not unique to psychology, sociology, or the other social sciences. They bedevil medicine and natural science as well. John Ioannidis, of the Stanford Medical School, has opined that “most published research findings are false,”25 and Richard Horton, editor-in-chief of The Lancet—which is the U.K.’s leading medical journal—has cautioned that “much of the scientific literature, perhaps half, may simply be untrue.”26

In 2011, a group of British researchers published the encouraging results of their years-long study of the devastating disease known as chronic fatigue syndrome (also called myalgic encephalomyelitis or ME/CFS).27 Writing in The Lancet, they announced that the subjects of their clinical trial had responded favorably to cognitive behavior therapy (“CBT”) and graded exercise therapy (“GET”), showing improvement at far greater rates than the two control arms.28 The outcome of the PACE trial, as it was known, was reported enthusiastically in the press, since it seemed to offer relief for patients for whom there were no other treatments.29 Two years later, the PACE team, now writing in Psychological Medicine, reported even better results.30 Fully 22% of patients had actually “recovered” following either CBT or GET, triple the rate for the untreated controls.31

Psychological Medicine is one of the most prominent journals in its field,32 and The Lancet is among the top medical journals in the world.33 Both subject articles to extensive peer review.34 Nonetheless, it turns out that the published results of the PACE trial had been dramatically overstated, for reasons that should have been obvious upon a first reading of the manuscripts.35

In a series of devastating post-publication reviews—published by the Virology Blog,36 the Journal of Health Psychology,37 and Fatigue,38 among other platforms39—it was shown that the PACE investigators had relaxed several of their original standards for improvement and recovery in the middle of the non-blinded trial. This made it far more likely that they would obtain positive results for the treatment arms.40

This problem was immediately noticed by several ME/CFS patients who were skeptical of the outcomes, but it had evidently been missed by the peer reviewers. Upon reanalysis using the measures specified in the original research protocol, it was determined that the actual recovery rate for CBT and GET was only 7%, which was statistically indistinguishable from the two control arms.41
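The scale of the revision is easiest to grasp when the figures are set side by side (the numbers come from the sources cited above; the tabulation is my own):

\[
\begin{array}{lcc}
 & \text{CBT/GET} & \text{Control arms} \\
\text{As published (relaxed criteria)} & 22\% & \approx 7\% \\
\text{Reanalysis (original protocol)} & 7\% & \approx 7\%
\end{array}
\]

Under the measures the investigators had originally committed to, in other words, the apparent threefold treatment effect disappears entirely.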

Over 100 ME/CFS researchers, clinicians, and organizations have called upon Psychological Medicine to withdraw or correct the “recovery” article.42 But, as with the American Sociological Review, the editors have declined.43 Law journals, on the other hand, are typically more than willing to publish critiques, responses, and rejoinders to earlier articles. It is unlikely that law review editors would have been equally unresponsive in similar circumstances.

IV. A Thought Experiment

Given their respective shortcomings, are there actual circumstances in which law review editors might outperform peer reviewers? Would authors—or more importantly, readers—ever benefit from law students’ penchant for obsessive footnoting, as opposed to peer reviewers’ demands for multiple rewrites? A thought experiment may provide a partial answer.

The book version of Alice Goffman’s On the Run tells the story of Miss Deena, who provided Goffman’s introduction to the 6th Street neighborhood.44 Goffman got started in ethnography during her freshman year at the University of Pennsylvania, when she took a cafeteria job in connection with a sociology seminar.45 Miss Deena was her boss—“a short and reserved Black woman in her sixties . . . [who] was entering her third decade of service at the university’s cafeteria, and her fifteenth year in management.”46 Despite Miss Deena’s loyalty to Penn, the university betrayed her in the spring of 2002, at the end of Goffman’s freshman year:

[Miss Deena] got laid off from the cafeteria—seven months before her retirement would have kicked in. To this day, she receives no pension from the University, though she worked there full time for twenty-two years. Her daughter Rochelle and I were horrified, and made a number of futile attempts to right the wrong.47

It is indeed a haunting scenario, in which a devoted worker is cruelly fired after decades of service, right on the brink of obtaining pension benefits.48 The editors and reviewers at the University of Chicago Press evidently saw no problems with the plausibility of Goffman’s account, which was included in the book’s “Methodological Supplement.”49 To the professional editors, the story of Miss Deena evidently evoked the familiar theme of institutional cold-heartedness and worker exploitation, and it therefore made complete sense to them.50

Law review editors, on the other hand, might well have questioned the claim. As it happens, the practice described by Goffman was made illegal in 1974, with the passage of the Employee Retirement Income Security Act (“ERISA”).51 The law in effect during the entire time of Miss Deena’s employment—roughly 1980 through 2002—included a schedule of maximum time periods before pension rights would have to vest under an employer’s defined benefit plan (which is the only sort that “kicks in”).52 Although the schedule changed over time—the vesting period could once have been as long as ten years—there was never a point when an employee could have worked at the University of Pennsylvania for twenty-two years without acquiring some rights in a “kick-in” type plan.53
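The underlying arithmetic is worth spelling out (the hire date is an approximation drawn from Goffman’s own account; the comparison is mine):

\[
1980 \text{ (approximate hire)} + 10 \text{ years (longest permissible vesting period)} = 1990 \ll 2002 \text{ (layoff)}
\]

On even the most demanding lawful schedule, that is, twenty-two years of continuous full-time service would have left Miss Deena fully vested more than a decade before she was let go.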

Would a law student editor have wondered, as the University of Chicago Press did not, whether Goffman’s account was possible under ERISA? Not every law student is aware of the details of pension law, but law reviews are structured in a way that brings many eyes, and thus the knowledge gained in many previous courses, to each article.54 Submitted articles are first considered for publication by a committee of editors, who may number between four and eight.55 Articles selected for publication are then assigned to at least two students: a senior articles editor, who is generally responsible for revising the manuscript, and a second-year student, who performs a “source and cite” check.56 Successive drafts are typically read by a managing editor and an editor-in-chief.57 In the usual case, then, an article will have been read by at least four, and as many as ten or more law students before publication.58

Thus, if even one student editor had been aware of ERISA—perhaps having taken a course covering, say, employment, pension, insurance, disability, or health law—the story of Miss Deena’s pension denial would at least have raised a question. Perhaps there was something that merited research or investigation, if for no reason other than to drop an appropriate footnote. Calling for additional footnotes, after all, is one of the main functions of a law review editor.59

An inquiring law student would have turned first to the ERISA statute itself, which clearly spells out the vesting requirements for defined benefit employee pension plans60 (the other sort of plan, called “defined contribution,” vests immediately with no waiting period).61 That alone would have been enough to raise questions about the story, and might well have prompted a call for a citation—the go-to demand of student editors in the face of uncertainty.

If that were not forthcoming, a more enterprising student might even have asked for confirmation from the University of Pennsylvania’s human resources office. I made such an inquiry, and learned that Penn’s defined benefit plan “kicked in” after only five years of employment as of 1989; it had been ten years previously, which was still well short of Miss Deena’s twenty-two.62 In either case, Goffman’s depiction of a pension injustice was patently untenable.

There is a reasonable explanation for the discrepancy. Even when vested, most defined benefit plans do not begin making payments until the ex-employee reaches age sixty-five.63 Recall that Miss Deena was “in her sixties” when Goffman began working in the cafeteria, which would have been during the 2001–2002 academic year.64 It therefore appears most likely that she simply had not yet turned sixty-five when she was laid off. If Miss Deena’s sixty-fifth birthday was still seven months away, then she would not have been eligible to begin receiving payments on the day she lost her job.

The termination of Miss Deena was no doubt callous and unfair, related as it was (according to Goffman) to outsourcing and de-unionization. It is remotely conceivable, one supposes, that the University of Pennsylvania additionally cheated her by violating both ERISA and the provisions of its own pension plan while stonewalling Goffman’s alleged attempts to “right the wrong”—all at the risk of sparking ruinous class action litigation.65 But it is overwhelmingly more likely that there had been only a temporary and routine delay in Miss Deena’s payments, which might well have been discovered—as a matter of law, if not in fact—by scrupulous editors. The story of Miss Deena’s work and pension experience could have remained in a law review version of On the Run, but without the inaccurate assertion that it continued “to this day.”66

There is no guarantee, of course, that law review editors would have caught and corrected the ERISA problem in On the Run, but we know that the peer reviewers failed to notice it. Law students, by definition, have at least some baseline knowledge of law. The book itself, however, is about the effects of law enforcement, so it would be a further problem if it turned out that no one with a legal background had been asked to review the manuscript.

V. Conclusion

The Harvard economist George Borjas has also written about the shortcomings of peer review:

The point is that many human emotions, including nepotism, professional jealousies, methodological disagreements, and ideological biases go into the peer review process. It would be refreshing if we interpreted the “peer-reviewed” sign of approval as the flawed signal that it is, particularly in areas where there seems to be a larger narrative that must be served. The peer-review process may well be the worst way of advancing scientific knowledge—except for all the others.67

Let me suggest that Borjas may well be wrong about the latter point. For all of their flaws and naiveté, law review editors are likely to demand proof, or at least citations, for assertions that go unquestioned by peer reviewers—not because they know more than the experts, but because they recognize that they know less. And therein lies their virtue.

 

* Edna B. and Ednyfed H. Williams Memorial Professor of Law, Director, Fred Bartlit Center for Trial Advocacy, Northwestern University Pritzker School of Law. Thanks are due to Eva Derzic for research assistance.

1.   Fred Rodell, Goodbye to Law Reviews, 23 Va. L. Rev. 38 (1936).

2.   See, e.g., id. at 40–41, 43.

3.   Id. at 38.

4.   Id. at 44–45.

5.   Adam Liptak, The Lackluster Reviews That Lawyers Love to Hate, N.Y. Times: Sidebar (Oct. 21, 2013), http://www.nytimes.com/2013/10/22/us/law-scholarships-lackluster-reviews.html.

6.   See, e.g., Alfred L. Brophy, The Signaling Value of Law Reviews: An Exploration of Citations and Prestige, 36 Fla. St. U. L. Rev. 229, 231 (2009) (“It really is extraordinary that students pick articles in areas in which they have little expertise.”); Erik M. Jensen, The Law Review Manuscript Glut: The Need for Guidelines, 39 J. Legal Educ. 383, 384–85 (1989) (noting “haphazard review” of submissions by student editors and their preference for “sexy topics”); Richard A. Posner, Law Reviews, 46 Washburn L.J. 156, 157–58 (2006) (“On the side of substance, [editors of law reviews have an] especial preoccupation . . . with trying to maximize the number of footnotes, citations, and cross-references.”); Kalyani Robbins, The Dwindling Timeline for Law Review Submissions and its Impact on Scholarship, PrawfsBlawg (Nov. 1, 2015, 2:59 PM), http://prawfsblawg.blogs.com/prawfsblawg/2015/11/the-dwindling-timeline-for-law-review-submissions-and-its-impact-on-scholarship.html.

7.   James Lindgren, An Author’s Manifesto, 61 U. Chi. L. Rev. 527, 527 (1994).

8.   Richard A. Posner, Against the Law Reviews: Welcome to a World Where Inexperienced Editors Make Articles About the Wrong Topics Worse, Leg. Aff., Nov./Dec. 2004, at 57 (“[I]n other academic fields, except law, the most prestigious journals are edited by seasoned specialists . . . [and] the scholar-editors usually are strongly influenced by the advice they receive from other professors, to whom they refer the submitted articles for peer review.”).

9.   See, e.g., Elsevier, What is Peer Review?, Elsevier, https://www.elsevier.com/reviewers/what-is-peer-review (last visited July 25, 2017).

10.   See, e.g., Publishing Your Article or Book, Harv. Kennedy School, http://guides.library.harvard.edu/hks/publishing (last visited July 25, 2017) (“Most scholarly journals require that you submit your article to them exclusively for review.”).

11.   See, e.g., Michael S. Harris, The ‘Revise and Resubmit,’ Inside Higher Ed. (Aug. 3, 2015), https://www.insidehighered.com/advice/2015/08/03/essay-how-academics-should-approach-revise-and-resubmit-responses-journals (noting there is “no guarantee” that the journal will accept an article after revisions).

12.   Philip N. Cohen, Our Broken Peer Review System, Fam. Inequality (Oct. 5, 2015, 6:00 AM), https://familyinequality.wordpress.com/2015/10/05/our-broken-peer-review-system-in-one-saga/.

13.   Id.

14.   Pamela Oliver, The Revolt of the Reviewers: Towards Fixing a Broken Publishing Process, Am. Soc., no. 47, 2016, at 344 (abstract).

15.   Jessica Firger, Science’s Reproducibility Problem: 100 Psych Studies Were Tested and Only Half Held Up, Newsweek: Tech & Science (Aug. 28, 2015, 3:05 PM), http://www.newsweek.com/reproducibility-science-psychology-studies-366744.

16.   Open Sci. Collaboration, Estimating the Reproducibility of Psychological Science, Sci., Aug. 2015, at 943.

17.   Alice Goffman, On the Run: Wanted Men in a Philadelphia Ghetto, Am. Soc. Rev., June 2009, at 339.

18.   Id.

19.   Id. at 342.

20.   Id. at 343.

21.   Philip N. Cohen, On the Ropes (Goffman Review), Fam. Inequality (May 28, 2015, 6:00 AM), https://familyinequality.wordpress.com/2015/05/28/on-the-ropes-goffman-review/.

22.   Philip N. Cohen, Survey and Ethnography: Comment on Goffman’s “On the Run,” at 4 (Working Paper, June 22, 2015), https://www.terpconnect.umd.edu/~pnc/working/GoffmanComment-06-22-15.pdf; see also Philip N. Cohen, On Goffman’s Survey, Fam. Inequality (June 19, 2015, 10:49 AM), https://familyinequality.wordpress.com/2015/06/19/on-goffmans-survey/.

23.   Philip N. Cohen, Comment on Goffman’s Survey, American Sociological Review Rejection Edition, Fam. Inequality (Aug. 26, 2015, 1:19 PM), https://familyinequality.wordpress.com/2015/08/26/comment-on-goffmans-survey-american-sociological-review-rejection-edition/.

24.   Id.

25.   John P. A. Ioannidis, Why Most Published Research Findings Are False, PLoS Med., Aug. 2005, at 696, 699.

26.   Richard Horton, Offline: What is Medicine’s 5 Sigma?, Lancet, Apr. 11, 2015, at 1380, 1380.

27.   P. D. White et al., Comparison of Adaptive Pacing Therapy, Cognitive Behaviour Therapy, Graded Exercise Therapy, and Specialist Medical Care for Chronic Fatigue Syndrome (PACE): A Randomised Trial, Lancet, Mar. 5, 2011, at 823.

28.   Id.

29.   See, e.g., Daily Mail Reporters, Got ME? Fatigued Patients Who Go Out and Exercise Have Best Hope of Recovery, Finds Study, Daily Mail (Feb. 18, 2011), http://www.dailymail.co.uk/health/article-1358269/Chronic-fatigue-syndrome-ME-patients-exercise-best-hope-recovery-finds-study.html.

30.   P. D. White et al., Recovery From Chronic Fatigue Syndrome After Treatments Given in the PACE Trial, Psych. Med., Oct. 2013, at 2227.

31.   Id.

32.   According to Thomson Reuters, Psychological Medicine had an impact factor of 5.491 in 2015, ranking 7th out of 76 journals in Psychology; 17th out of 170 in Psychiatry; and 6th out of 121 in Clinical Psychology. Thomson Reuters, 2015 InCites Journal Citation Reports (2015).

33.   The Lancet’s impact factor in 2016 was 44.002. Thomson Reuters, 2016 InCites Journal Citation Reports (2017).

34.   See Lancet, Information For Authors (Feb. 2017), http://www.thelancet.com/pb/assets/raw/Lancet/authors/lancet-information-for-authors.pdf.

35.   See, e.g., David Tuller, Trial by Error, Continued: Did the PACE Study Really Adopt a ‘Strict Criterion’ for Recovery?, Virology Blog (Nov. 4, 2015), http://www.virology.ws/2015/11/04/trial-by-error-continued-did-the-pace-study-really-adopt-a-strict-criterion-for-recovery/ (summarizing many inconsistencies in initial 2011 PACE trials).

36.   David Tuller, ME/CFS, Virology Blog, http://www.virology.ws/mecfs/ (last visited July 25, 2017) (collecting links to a series of critical trial reviews by multiple authors noting serious errors in PACE methodology).

37.   Keith J. Geraghty, ‘PACE-Gate’: When Clinical Trial Evidence Meets Open Data Access, J. Health Psych. (Jan. 11, 2016), http://journals.sagepub.com/doi/pdf/10.1177/1359105316675213.

38.   Carolyn Wilshire et al., Can Patients with Chronic Fatigue Syndrome Really Recover After Graded Exercise or Cognitive Behavioural Therapy? A Critical Commentary and Preliminary Re-Analysis of the PACE Trial, 5 Fatigue, no. 1, 2017, at 43.

39.   The PACE trials also drew the attention of the New York Times, among others. See, e.g., Julie Rehmeyer & David Tuller, Getting It Wrong on Chronic Fatigue Syndrome, N.Y. Times: Sunday Review (Mar. 18, 2017), https://www.nytimes.com/2017/03/18/opinion/sunday/getting-it-wrong-on-chronic-fatigue-syndrome.html?ribbon-ad-idx=2&rref=opinion&_r=0; see also Steven Lubet, How a Study About Chronic Fatigue Syndrome Was Doctored, Adding to Pain and Stigma, The Conversation (Mar. 22, 2017), http://theconversation.com/how-a-study-about-chronic-fatigue-syndrome-was-doctored-adding-to-pain-and-stigma-74890.

40.   See Rehmeyer & Tuller, supra note 39.

41.   See Wilshire et al., supra note 38, at 44, 47.

42.   See Vincent Racaniello, An Open Letter to Psychological Medicine About “Recovery” and the PACE Trial, Virology Blog (Mar. 13, 2017), http://www.virology.ws/2017/03/13/an-open-letter-to-psychological-medicine-about-recovery-and-the-pace-trial/ (I am one of the signatories of the open letter).

43.   See Vincent Racaniello, An Open Letter to Psychological Medicine, Again!, Virology Blog (Mar. 23, 2017), http://www.virology.ws/2017/03/23/an-open-letter-to-psychological-medicine-again/ (noting that the “misleading claims” were allowed to stand without correction or retraction).

44.   Alice Goffman, On the Run: Fugitive Life in an American City (2014).

45.   Id. at 211.

46.   Id. at 211–12.

47.   Id. at 192.

48.   Id.

49.   Id. at 211.

50.   See, e.g., Howard Becker, Additional Praise, in Goffman, supra note 44 (“On the Run tells, in gripping, hard-won detail, what it’s like to be trapped on the wrong side of the law with no way out—the situation of so many young black Americans today. A brilliant fieldworker.”).

51.   Pub. L. No. 93–406, 88 Stat. 829.

52.   See U.S. Dep’t of Lab., Employee Benefits Sec. Admin., Retirement Plans and ERISA FAQs, https://www.dol.gov/agencies/ebsa/about-ebsa/our-activities/resource-center/faqs/retirement-plans-and-erisa-consumer (last visited July 25, 2017) (noting that employees do not have an “immediate right” to retirement contributions made by employers); see also 29 U.S.C. § 1053 (current minimum vesting standards).

53.   See Workplace Flexibility 2010, Geo. U. L. Center, A Timeline of the Evolution of Retirement in the United States (2010), http://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=1049&context=legal.

54.   Michael L. Closen & Robert J. Dzielak, The History and Influence of the Law Review Institution, 30 Akron L. Rev. 15, 43–48 (1996).

55.   Id. at 43–45 (describing the process of article consideration).

56.   Id. at 45 (summarizing the process of “source and cite”); see also Darby Dickerson, Citation Frustrations—And Solutions, 30 Stetson L. Rev. 477, 478 (2000) (“A principal duty of law review staff members is to ‘cite and source’ articles selected for publication. ‘Cite and source’ is the process through which law review members check the substantive accuracy of articles, place citations in the proper form, ensure that cited sources are still good law, and correct grammatical and typographical errors.”).

57.   See, e.g., Closen & Dzielak, supra note 54, at 45–46.

58.   See generally id. at 44–49 (providing a general overview of the law review submission process and tasks of various law review members).

59.   Emails from Bradley Williams, May 15 and 16, 2017; see also Dickerson, supra note 56, at 478.

60.   See 29 U.S.C. § 1002(35) (defining “defined benefit” plan), § 1054(b)(1) (explaining criteria for “defined benefit” plan).

61.   See id. § 1002(34) (defining “defined contribution” plan), § 1054(b)(2) (explaining the criteria for “defined contribution” plan).

62.   Email to author from Joanne Blythe, University of Pennsylvania Division of Human Resources, August 19, 2015.

63.   See 29 U.S.C. § 1056(a)(1) (providing that, unless the participant elects otherwise, payment of benefits under most pension plans must begin no later than sixty days after the close of the plan year in which occurs “the date on which the participant attains the earlier of age 65 or the normal retirement age specified under the plan,” unless certain other criteria are present).

64.   Goffman, supra note 44, at 212.

65.   Id. at 192.

66.   Id.

67.   George J. Borjas, A Rant on Peer Review, LaborEcon (June 30, 2016, 7:35 PM), https://gborjas.org/2016/06/30/a-rant-on-peer-review/.
