Friday, September 6, 2013

Syria intervention and the law, continued

Thanks for the many terrific comments (on- and offline) on my recent post regarding possible U.S. intervention in Syria.  I don't have time today to respond to all of them, but a couple points made by my colleague Garrett Epps are particularly compelling.

Garrett suggests that if I'm right that there is (morally speaking) a legitimate choice to be made between obeying international law and following the dictates of morality, then the U.S. ought to at least formally request Security Council authorization before taking unilateral action.  I agree with this suggestion.  Of course, the Council's response probably would be a foregone conclusion, given the virtual inevitability of a Russian (and possibly a Chinese) veto.  But formally requesting Council authorization would be an important exercise in self-discipline and in respect for the law.  It would force the administration to hone its arguments regarding why military intervention really is necessary and to present those arguments publicly.  It would at least imply an acknowledgment that any unilateral action by the U.S. would be against international legal norms.  And if (as seems inevitable) the Council denies the request, the resulting chastisement, while falling short of formal sanctions for breaking the law, would at least impose some cost (in the coin of international public opinion) on the U.S. for doing so.

As I wrote in my previous post, law should have some teeth to prevent legal subjects, acting in all good conscience, from disobeying the law too readily out of overconfidence in their own moral judgments.  A formal request of, and rejection by, the Security Council may be the best international law can provide in this instance in the way of teeth.  Of course, I am under no illusions that this will actually happen; I can't envision any U.S. President (even a second-term former law professor) willing to pay the domestic political price that surely would follow from the perception of begging the U.N. for permission to act and coming away with nothing to show for it.

Garrett also points out another wrinkle:  the fact that part of international law is customary law, that is, law that is formed not by formal treaties, proclamations, or court decisions but rather by patterns of practice over time.  Subjects of international law can actually make the law to which they are subject by taking some action that, while not legal (at least not clearly legal) when taken, becomes accepted as legitimate by other participants in international law.  This complicates matters; it suggests that if the U.S. acts unilaterally, that action may eventually come to be regarded, ex post, as legal.  (Some argue that NATO's intervention in Kosovo in 1999, which occurred without Security Council approval, should now be regarded as legal for this reason.)  And while I'm no international law expert, it strikes me that the case for the customary legality of intervention might be weakened if the U.S. were to ask the Security Council's permission and that permission were to be denied.

I doubt very much, however, that the hope of making new customary international law by itself is enough to justify unilateral action.  For one thing, the precedent that would be set if unilateral "humanitarian" intervention became the norm would be dangerous and not necessarily to our liking.  The need for humanitarian intervention will often be a matter of reasonable dispute -- hence the presence of international procedures, however imperfect, for determining its existence -- and if the U.S. can (legally) make these decisions unilaterally, then so can China, Russia, or for that matter North Korea or Syria or Iran.

In other words, it's one thing for the U.S. to act, and to be seen as acting, against established norms of international law; that is unlikely to set a legal precedent.  It's another thing entirely for the U.S. to help create a customary norm that unilateral intervention is legally justified (or for that matter to argue now, in defense of unilateral action, that such a customary norm already exists).

We can learn a lot about this distinction from Justice Robert Jackson's dissenting opinion in Korematsu v. United States, the now-infamous Supreme Court decision validating the internment of Japanese Americans during World War II.  Jackson acknowledged that the military cannot always be held to legal standards in time of war.  "It would be impracticable and dangerous idealism to expect or insist that each specific military command in an area of probable operations will conform to conventional tests of constitutionality," he wrote.  "When an area is so beset that it must be put under military control at all, the paramount consideration is that its measures be successful, rather than legal."  Jackson thus acknowledged the possibility that the internment order was morally, if not legally, permissible.

But Jackson objected to the Court's legal validation of the order.  "A military order, however unconstitutional, is not apt to last longer than the military emergency," Jackson noted.  "But once a judicial opinion rationalizes such an order to show that it conforms to the Constitution, or rather rationalizes the Constitution to show that the Constitution sanctions such an order, the Court for all time has validated the principle .... The principle then lies about like a loaded weapon ready for the hand of any authority that can bring forward a plausible claim of an urgent need."

A legal norm permitting military intervention for humanitarian purposes might turn out to be Jackson's "loaded weapon," waiting to be deployed by any nation with a quasi-plausible argument of "humanitarian" exigency.  A saliently illegal humanitarian intervention is dangerous, to be sure, but at least it doesn't establish a legal principle we all might come to regret later.

Finally, a word about my very tentative "leaning" in the previous post toward "militarily appropriate" strikes against Syria.  Part of me regrets writing that; there are many complex moral and policy considerations in the mix, most of them well beyond my capacity for well-informed judgment.  I am in fact profoundly conflicted about what should be done as a practical matter; I'm not sure how I would vote on the proposed resolution if I were a member of Congress, though for various reasons I still lean somewhat in favor.  A vote for intervention, however, has enormous potential to haunt those who cast it two or three years down the road, if "surgical strikes" against Syria become, as they very well might, "boots on the ground," "nation building," and all the horrible things we are justifiably sick of thanks to the last decade in Iraq and Afghanistan.  Only a fool would deny the huge actual and potential costs of intervention.  I didn't mean to deny them in my previous post.  What morality and policy have to say about intervention are extremely difficult questions.  My point was simply that these questions are different from, and not necessarily preempted by, the question whether intervention is illegal.

Thursday, September 5, 2013

Thanks to Larry Solum ...

... for the kind mention in his Legal Theory blog of my paper "What Lies Beneath:  Interpretive Methodology, Constitutional Authority, and the Case of Originalism," which I recently posted on SSRN and which is forthcoming in the BYU Law Review.

Wednesday, September 4, 2013

Striking at Syria would be illegal. Should we do it anyway?

In yesterday's New York Times, Yale law professors Oona Hathaway and Scott Shapiro engage the question of whether the U.S. ought to unilaterally strike at Syria despite the absence of a U.N. Security Council authorization.  Hathaway and Shapiro suggest, though they don't quite declare, that the answer is no.  Striking without U.N. approval, they note, would violate the U.N. Charter to which the U.S. is a party and thus would be illegal.  Doing so would set a precedent that would make it easier for other nations, in the future, to take unilateral military action, perhaps with less justification than the U.S. (in light of Syria's use of chemical weapons on its own people) now can claim.  This in turn would threaten to return the world to the pre-U.N. status quo in which nations (at least powerful nations) routinely struck at other nations on trumped-up grounds.  "The question Congress and Mr. Obama must ask now," write Hathaway and Shapiro, "is whether employing force to punish Mr. Assad’s use of chemical weapons is worth endangering the fragile international order that is World War II’s most significant legacy."

I think this is the right way to put the question, although there is some nuance here that, to their credit, Hathaway and Shapiro seem to recognize.  The temptation for many Americans, including many lawyers, will be to question whether international law is really the sort of law that imposes an obligation of obedience on its subjects, particularly in the face of a pressing moral crisis like Assad's use of chemical weapons.  But international law is law; as a party to the U.N. Charter, the United States has an obligation to respect its authority just as Americans have an obligation to respect the authority of our Constitution and laws.  (Indeed one could argue that the U.N. Charter has more binding force than most domestic laws.  After all, the U.S. voluntarily became a party to the charter, but among U.S. citizens, only naturalized citizens and government officials have affirmatively sworn allegiance to the U.S. government and its laws.)

The fact that international law, including the U.N. Charter, is legally binding on the United States does not end the debate, however.  Even valid law cannot impose an absolute, indefeasible obligation to obey its commands; sometimes the demands of morality outstrip those of law.  This is why I like the way Hathaway and Shapiro have put the issue, as a choice between the arguments for obeying the law (preserving "fragile international order") and the arguments for disobeying it in the name of morality (punishing Assad's use of chemical weapons -- and, one might add, possibly deterring their future use).  It's important to recognize that the presence of valid law implies a choice, not a blind duty of obedience.  If the law generally is good law, of course, then in the vast majority of cases the choice is an easy one, so easy perhaps that it doesn't seem like a choice at all:  Of course we should obey the law.  But in some extraordinary cases the choice becomes very hard, because obeying the law is likely to bring dire moral consequences.  The Syria situation seems to me like such an extraordinary case.  There is much to be said for disobeying the law in this instance in the name of preventing a greater evil.

One consideration that Hathaway and Shapiro don't emphasize, however, is the importance of being willing to take one's medicine when one willfully disobeys the law in the name of a greater moral good.  The modern tendency is to equivocate around inconvenient legal obligations -- to mount sophistic arguments for why one isn't really disobeying the law at all.  But the choice between obeying the law and serving a higher moral cause must be a real choice, with real teeth attached to it.  If one decides to disobey the law in the name of morality, then one must be prepared to suffer the legal consequences of one's actions.  Without the very real threat of legal consequences for disobedience -- even for justified disobedience -- people would be far too ready to engage in unjustified disobedience, wrongly confident in the strength of their own unilateral moral judgment.  The threat of legal sanctions for disobedience is necessary to make would-be disobeyers think at least twice before deciding that morality justifies their actions.

It is highly unlikely, however, that the U.S. would suffer any formal legal sanctions for striking Syria without Security Council authorization.  This is true for the same reason that the Security Council is highly unlikely to provide authorization in the first place:  At least one Council member would veto any such resolution.  (The U.S. itself holds such a veto as a permanent Security Council member.)  The absence of any real threat of sanctions for disobeying the law serves, to my mind at least, as a reason against disobeying the law and striking Syria unilaterally; we (that is, we collectively -- the United States, as represented by our government) are that much more likely to be overconfident in our own judgment and insufficiently respectful of the value of international law without the prospect of significant punishment for disobeying that law.

At the same time, the blatant procedural flaws within the Security Council -- the presence of five powerful permanent members (China, France, Russia, the UK, and the US) with absolute power to veto Council resolutions -- strengthen our reasons to act unilaterally by undermining the procedural fairness of the law.  I have argued in my scholarly work that the authority of law rests in large part on the fairness of its procedures -- on the capacity of those procedures to avoid or resolve disputes in a way that can be perceived as relatively impartial and thus can be accepted by those subject to the law.  The stacked deck that is the Security Council flies in the face of this essential impartiality, weakening the U.N. Charter's claim to be validly binding law in the first place.

All of which makes this, for me, a very close case.  The U.N. Charter is law, binding upon the U.S. and other nations, and it is law that, as Hathaway and Shapiro note, serves a vital dispute-avoiding purpose.  But it is law that is rather saliently compromised by procedural dysfunction within the institutions charged with carrying it out.  And, in this instance, it is law whose claim to our obedience is strongly opposed by a forceful moral argument that action must be taken in Syria -- although we should be wary of being overconfident about our own moral judgments in this respect.

In the end, I lean toward doing the following:  Striking against Syria in a militarily appropriate fashion, unilaterally if need be, and doing so in full acknowledgment that the action is a violation of our legal obligations under the U.N. Charter.  But there is no good, clean choice here -- especially since people are likely to die either way.  Thank God I'm not the President.

Thursday, August 8, 2013

Our autocratic Chief Justice

A fascinating op-ed by Linda Greenhouse in yesterday's Times points out that Congress has delegated to one man (or, hypothetically, one woman) an enormous amount of unreviewable power, including the sole authority to appoint judges to the Foreign Intelligence Surveillance Court (much in the news these days) and to a surprising number of other important judicial bodies.

Greenhouse first looked into the issue because of reports that the current Chief Justice, John Roberts, has filled 10 of the FIS Court's 11 seats with judges nominated by Republican presidents.  (Only current life-tenured federal judges are eligible to serve on that court.)  But, as she discusses, the issue is much bigger than the FIS Court or John Roberts.

We tend to think of appeals-court judges, and particularly Supreme Court Justices, as effectively sharing power with other coequal judges or Justices on the same court:  Roberts can't unilaterally impose his will in deciding a case because he needs at least four other Justices to go along with him.  In a strikingly large number of instances, however, Roberts (or any other Chief Justice) can in effect unilaterally impose his will, by deciding who will serve on other important judicial bodies created by Congress.  And, unlike with most Presidential appointments -- which must be confirmed by the Senate -- the Chief Justice's decision of whom to appoint is entirely unreviewable in these cases.

We can't blame Roberts or any particular Chief Justice for this (although we might wonder why 91% of Roberts's appointments to the FIS Court apparently share his own political party).  The blame lies with Congress, which repeatedly has followed the path of least resistance by delegating to someone else the responsibility of filling these important positions.  Not that Congress should be in the business of making every special-court appointment itself by legislation -- that would be way too cumbersome.  But to delegate huge swaths of that authority to a single official (the Chief Justice) is dangerously irresponsible, especially given the increasing longevity of any given Chief Justice's tenure.  (Roberts himself, now 58, was only 50 years old when he assumed office in 2005.  There is every reason to expect him to serve for another 20 years at least -- that is, until 2033.  That's a long time in which to exercise the sole discretion to choose members of the FIS Court and other judicial bodies.)

Is there a better alternative?  This is a discussion that's just beginning, but here's a tentative idea:  a single panel of sitting federal judges charged with making appointments to these other special courts and judicial bodies.  The panel's members could be nominated by the President and confirmed by the Senate, like other high-ranking officers.  Each could serve for a limited term, with the terms initially staggered so that the panel's membership would turn over gradually.  There are weaknesses in this proposal, of course, including the problem of securing appointments by Senate confirmation in today's polarized political climate.  But the notion of a single panel with rotating membership to make these appointments seems superior, to me, to the current default mode of simply letting the Chief Justice do it.

Much debate about the institutional role of the Supreme Court centers on the constitutional requirement of lifetime tenure, which can't be changed without a constitutional amendment.  The vast appointment authority of the Chief Justice, though, is a problem created solely by Congress, and so it's a problem that Congress can and should fix.

Friday, June 28, 2013

Winning ugly: the Court's same-sex marriage rulings, Part II

In my post yesterday, I explained my view that the Supreme Court's decision this week in United States v. Windsor was an "ugly win" -- a victory for gay rights, but a victory that is heavily qualified and that came at the expense of good legal craft.  I also suggested that the ugliness of Windsor might have been by design and might even have been necessary.

Hollingsworth v. Perry was an ugly win too, probably for many of the same reasons, although Perry was in some respects both less ugly and less of a win than Windsor.  At issue in Perry was California's Proposition 8, a voter initiative that amended the state's constitution in 2008 to legally define marriage as a union between one man and one woman.  (California permits same-sex "domestic partnerships," which carry the same legal rights and obligations as marriage without the name.)  Same-sex couples wishing to marry in California sued the state in federal court, claiming that Prop. 8 violated both the equal-protection and due-process guarantees of the federal Constitution.  The district court ("district court" is the name for trial courts in the federal system) issued a sweeping opinion ruling for the plaintiffs, asserting that Prop. 8 should be subjected to strict scrutiny (for an overview of levels of scrutiny, see yesterday's post) and holding that the law failed to survive even the more-deferential rational-basis review.  On appeal, the Ninth Circuit affirmed the district court's ruling, but on substantially narrower grounds, holding that because Prop. 8 deprived same-sex couples of a then-existing right to marry for no good reason, it could only have been motivated by animosity against homosexuals, which is not a legitimate state interest under the Romer decision (also discussed in yesterday's post).

We have a pretty good indication from Windsor that five current Justices (Kennedy, Ginsburg, Breyer, Sotomayor, and Kagan) believe that laws prohibiting same-sex marriage violate the Constitution.  So it was well within the realm of possibility that a majority of the Court would rule against Prop. 8, either on the broad grounds employed by the district judge or, more likely, on the narrow basis relied upon by the Court of Appeals.  But of course that's not what happened.

Instead, three of the Court's center-left Justices (Ginsburg, Breyer, and Kagan) joined with two conservatives (Scalia and Chief Justice Roberts) in an opinion dismissing the appeal for lack of what lawyers call "standing," that is, capacity to pursue a claim or an appeal in federal court.  And while Roberts's opinion for the Court is a finer specimen of legal argument than Kennedy's in Windsor, it's hardly a Holmesian model of persuasion.

Thursday, June 27, 2013

Winning ugly: the Court's same-sex marriage rulings, Part I

I don't take naturally to blogging, as will be obvious to anyone who's tried to follow this blog since its inception earlier this year.  I suppose I'm too verbose for the blogging format; I persist in thinking that lots of things are too complicated to explain adequately in a few quickly composed paragraphs.  That, and I'm not a very good on-the-spot thinker.  It usually takes me a while to organize my thoughts enough to say something even marginally worthwhile and to say it well.

And then sometimes there's an additional problem:  I find myself feeling profoundly ambivalent about something I know I should post on, and thus struggling for something incisive to say about it.

This is the difficulty with the Supreme Court's same-sex marriage rulings yesterday, and it's the reason I haven't managed to post about them until today.  As someone who supports gay rights (including same-sex marriage) in a moral sense, and who believes they properly find some protection in the Constitution in a legal sense, I celebrate yesterday's results.  But as someone who cares about legal craft -- about not just what courts decide but how they decide it and justify that decision in writing -- I'm disheartened by the decisions that produced those results.

To indulge a sports metaphor, these were ugly wins for gay rights.  The best team won, but it relied on sloppy play and some questionable calls by the umpires.

And to make matters worse (or at least more complicated), there are, on reflection, very good reasons for the sloppiness of the decisions, or at least for most of it.  It may in fact be the case that winning ugly was the only way to win at this stage of the game.

In this post, I want to focus on what is in some ways the more frustrating of the two decisions:  United States v. Windsor, in which the Court, by a five-to-four vote (dividing along predictable ideological lines), held that part of the federal Defense of Marriage Act (DOMA) violates the Constitution's equal-protection guarantee.  Unlike in the other case, Hollingsworth v. Perry -- more on Perry in a subsequent post -- the Court in Windsor directly resolved the core constitutional issue posed by the litigation.  The frustrating part -- the ugly part -- is the reasoning (if it can be called that) that the Court, in the person of Justice Anthony Kennedy, used to resolve the issue.  Kennedy's opinion was, with all due respect (and much is in fact due -- again, stay tuned), so thinly reasoned and so riddled with red herrings that I would have required a rewrite if it had been a student paper.

The DOMA provision invalidated in Windsor in essence prohibited recognition of same-sex marriages for purposes of federal law.  The plaintiff, Edith Windsor, married her same-sex partner in Canada while both were residents of New York; New York law recognized their marriage as valid (and has since been changed to allow same-sex marriages to be performed in that state).  When Windsor's spouse died, Windsor, thanks to DOMA, was denied the federal estate-tax exemption that applies to surviving spouses.  She paid the resulting $363,000 tax and then sued the government, claiming DOMA's denial of her tax exemption violated her constitutional right to the equal protection of the laws.

A majority of the Court -- Kennedy, joined by the four Justices on the center-left of the Court (Ginsburg, Breyer, Sotomayor, and Kagan) -- agreed with Windsor.  But Kennedy's opinion for the majority meandered blithely around the central issues and stated its conclusion almost as fiat.  Reading it reminded me of long car trips taken with my family as a kid; I'd be half-asleep for most of the way, lulled into drowsiness by the rhythmic hum of the engine, and would awaken suddenly to find us at our destination, with little clue about exactly how we got there.

Tuesday, June 25, 2013

Who watches the watchmen? The Court's gutting of a "crowning achievement of the Civil Rights movement" (and, not incidentally, Congress' power to enforce civil rights)
So the Voting Rights Act decision is in. In Shelby County v. Holder today, the Court invalidated section 4 of the VRA, which determines which states and localities are subject to section 5. (Section 5 is the operative provision of the Act; it requires states and localities identified pursuant to section 4 -- in practice, states and localities with a history of racial discrimination in voting, most of them in the South -- to obtain "preclearance" from the Justice Department before altering their voting policies.) Based on Adam Liptak's report in the N.Y. Times, it appears the five-Justice majority (the predictable suspects: Roberts, Scalia, Kennedy, Thomas, Alito) held that Congress could not continue to rely on voting data compiled in the 1960s in renewing the list of states identified by section 4, as Congress did most recently in 2006. As Liptak's article points out, the ruling technically leaves Congress free to reauthorize section 4 based on more-recent data; but the currently polarized Congress is highly unlikely to do that anytime soon. The result is that section 5's preclearance requirement is now meaningless, as there are no states or localities identified by section 4 to which that requirement can be applied.
There are a number of troubling things about this ruling, but the most troubling to me is the self-interested shift of constitutional power from Congress to the Court that the ruling manifests. The Fifteenth Amendment to the Constitution prohibits racial discrimination in voting by the states (or the federal government) and, in section 2 of that Amendment, gives Congress "the power to enforce" its provisions "by appropriate legislation." Section 5 of the Fourteenth Amendment similarly grants Congress power to enforce the important operative provisions of that Amendment, including its guarantees of equal protection and due process of law. But the Court, under the banner of states' rights but, one suspects, worried in fact about its own authority as the arbiter of constitutional meaning, has from the start been parsimonious in interpreting the scope of Congress' powers under these provisions. Under the Court's current doctrine, Congress cannot exercise its section 2 or section 5 enforcement powers unless it does so in a way that is "proportional" to and "congruent" with the denials of rights by the states that Congress is trying to remediate or prevent. In practice, this means the Court itself retains the authority to say, in essence, whether the problem Congress is trying to redress is severe enough to justify congressional action, and whether the action Congress has taken is an appropriate way to redress the problem.
Today's ruling continues this trend. By negating congressional enforcement under section 2 where that enforcement is not backed up with current data about on-the-ground voting practices, the Court is in essence subjecting Congress to a heightened level of scrutiny when it wields its civil-rights enforcement powers. It isn't enough, apparently, that Congress has a rational, reasonable argument that threats to voting rights still exist and that preclearance in these states is an effective way to meet those threats. Now Congress must actually prove its case to the courts before it can exercise its constitutionally delegated authority.
I think this has it exactly backwards. It makes sense to require the government to affirmatively establish a strong justification whenever it acts to deprive someone of a constitutionally guaranteed right -- although even in rights cases, as any 1L Con. Law student knows, the Court is very deferential to the government in all but a handful of contexts. But it makes little sense to require the government to affirmatively establish a strong justification whenever it acts pursuant to one of its constitutionally granted powers -- particularly when the power in question was granted precisely for the purpose of preventing or remediating rights violations by the state governments. The whole point of section 5 of the Fourteenth Amendment and section 2 of the Fifteenth Amendment was to allow Congress, using general legislation, to bypass the cumbersome process of litigation in the courts that otherwise would be necessary whenever voting or other rights are violated. By subjecting congressional enforcement of voting rights to what is essentially heightened judicial scrutiny, the Court substantially defeats that constitutional purpose.
Worse, the Court does so in a way that enhances its own institutional power. Under current law (as magnified by today's ruling), Congress in effect must get the Court's permission before enforcing voting rights, permission that (today's decision suggests) will not be forthcoming without a well-developed and contemporaneous factual record. Presumably even a statute that passes muster would have to be revisited by Congress every few years, lest the factual basis for the statute go stale. The result is a substantial shift of constitutional power from Congress to the courts, one that parallels what the Court has been doing to Congress' power to regulate commerce since the early 1990s.
The Court's legitimacy is on its firmest ground when it acts as a relatively impartial arbiter of disputes between the government and its citizens, disputes that the political branches themselves cannot be trusted to resolve because their own authority is at stake. Of course, every time the Court invalidates some state or federal statute, in some sense the relative power of the Court is enhanced. But this becomes a serious problem when the very subject of the dispute is the Court's own power; in such cases the Court cannot claim to be a relatively neutral arbiter. By subordinating Congress' authority to enforce civil rights to its own supposed authority to interpret the scope of those rights, the Court opens itself to reasonable charges that it is simply playing power politics.
Who watches the watchmen?

And now for something completely different ...

Departing momentarily from my breathless anticipation of SCOTUS's same-sex-marriage and Voting Rights Act rulings, allow me to recommend this trenchant post from Stanley Fish (not that he needs my recommendation).

Fish decries the failure of humanities teachers (and liberal-arts teachers more broadly) to carefully describe the value of the threatened enterprise in which they are engaged.  He cites a recent pro-liberal-arts report by the American Academy of Arts and Sciences as a case in point.

Reiterating a common theme in his blog posts for the Times, Fish suggests that there is a kind of noninstrumental value in studying the humanities, as "a cloistered and separate area in which inquiry is engaged in for its own sake and not because it yields useful results."  Maybe so, but emphasizing the monastic nature of the enterprise is hardly an effective way to sell it to a skeptical, penny-pinching public.  And anyway, I think there is much more to be said in defense of the humanities.

Much more to be said, in fact, than can comfortably fit into a single blog post.  (And I'm supposed to be on vacation.)  But allow me to suggest in rough outline two central social goods that can flow, under the right conditions, from liberal-arts education.

First, liberal-arts education can teach young people to think and communicate effectively, skills that are every bit as valuable in a global economy as know-how in math and science.  As Fish intimates, the abilities to think analytically, and to communicate effectively the results of one's thinking, are threatened not just by the ongoing deemphasis of the humanities, but also by other forces, some internal to our educational system and some external.  Government education funding depends increasingly on quantifiable performance metrics, which in practice means standardized tests, which in turn pushes K-12 schools (and increasingly colleges) to "teach to the test" -- emphasizing discrete parcels of knowledge that are susceptible to multiple-choice assessment along with tactics for "gaming" the tests themselves.  The Internet, with its "preference for chunked-up bits of information" (Fish's apt phrase), has become our primary source of knowledge.  Texting and e-mailing, which reward informality and spontaneity and make physical proximity irrelevant, have replaced letter-writing (and increasingly even face-to-face conversation) as our primary means of communication.

I see the results of these forces in my law school classrooms.  Students are every bit as bright as they were ten or fifteen years ago, but on average they are noticeably less well-prepared.  The act of reading carefully for content and context often flummoxes them; indeed the very point of the enterprise often escapes them.  The notion that information they are fed by "authority" figures (including their professors) might not be fully trustworthy or comprehensive rarely occurs to them.  The ideas that there might be multiple viewpoints regarding an issue, or multiple reasonable arguments about the proper resolution of that issue, strike many of them as unnatural.  The capacities to develop an argument based on evidence and to communicate that argument in an orderly and effective manner typically must be learned almost from scratch.

Liberal-arts education, if done well, can lay solid foundations for each of these important skills.  In reading literature or history, a student is not just (or even primarily) absorbing a set of facts; she is developing the capacity to understand complex ideas and arguments, to evaluate those ideas and arguments for strengths and weaknesses, and to identify in them potential biases and information gaps.  In writing about literature or history -- a key component of a good liberal-arts education -- the student is learning to organize her own thoughts into a careful analysis based on evidence and to present that analysis in a way that can be understood by and persuasive to others.

These skills in analytical thinking and communication are not luxuries; they are core components of a person's ability to contribute to society.  Not everyone needs them in the same degree -- the world does need scientists, mathematicians, and engineers, after all -- but everyone should have them to some degree.  Even scientists and engineers need to be able to evaluate others' work and to present their own work to others.  No wonder a majority of Fortune 500 executives would choose a liberal-arts education for their own children, according to the AAAS report.

Which leads me to the second central point:  Analytical thinking and communications skills among the citizenry -- and the educational methodologies that cultivate them -- are crucial to a well-functioning democracy.  To put it more bluntly:  Liberal-arts education is essential to democracy.  The core democratic premise of fundamental political decisionmaking by the people themselves -- not by a disconnected, unaccountable elite -- becomes unobtainable, unrealistic, chimerical if many or most of the people themselves cannot effectively absorb, understand, and evaluate information, reach reasonable conclusions based on that evaluation, and communicate those conclusions effectively to other citizens.  These are the skills a liberal-arts education can teach.  And they are fading fast in our instant-gratification, hands-off-my-pocketbook, what-have-you-done-for-me-lately society.

Monday, June 24, 2013

Fisher v. University of Texas -- a very brief reaction while on vacation



I’m on vacation and have only skimmed the Fisher opinions, but it doesn’t look like much has changed as a result of this case.

The Court said the 5th Circuit failed to really apply strict scrutiny by essentially deferring to the University regarding whether (and how much) race-consciousness was necessary and scrutinizing the program only for “good faith,” which I guess means that the asserted diversity objective is in fact genuine (as opposed to some insidious motive, e.g., racism or attempting to gain a racial advantage).

The 5th Cir. can be forgiven for its deferential posture, I think, given that the language of O’Connor’s opinion for the Court in Grutter suggested that a certain amount of deference to the judgment of professional educators was appropriate.  Nonetheless, it’s understandable and not at all surprising that the current Court majority would think some more-exacting scrutiny of means is required.

But of course the Court still hasn’t given us any sort of formula for what kind of exacting scrutiny is needed.  Kennedy’s opinion for the Court says we need to ask whether the means chosen are “‘necessary’ … to achieve the educational benefits of diversity.”  But there’s not much discussion of what this might entail.  Which, again, is not surprising, partly because an in-depth explanation would require a good explanation of precisely what the “educational benefits of diversity” really are (which, even if Kennedy were willing to embark on such an explanation, probably would have scared off some of the more-conservative Justices); partly because the Court continues to reject “quotas” or other quantitative measures of diversity; and (largely for this latter reason) partly because the narrow-tailoring assessment inevitably will be extremely fact-sensitive.

So the application of strict scrutiny in these cases will remain a matter of “I know it when I see it”:  If you can convince five Justices on the Court (or, in most cases, two judges on a Court of Appeals) that a program is not truly “necessary … to achieve the educational benefits of diversity,” whatever they are and whatever that means, then you can win your challenge.  Nothing really new here.

This case also reaffirms that Kennedy is on the fence about affirmative action – he’s not against it in theory, but he hasn’t found an actual example of it that he likes.  We knew this from his opinions in Grutter and Gratz and his subsequent opinion in Parents Involved, and his opinion here tells us that nothing’s changed.

And this case reaffirms that Scalia and Thomas will vote against any affirmative-action program, although Scalia was uncharacteristically coy about stating that in his brief concurrence.

The main thing this case adds to our understanding of the issue is that Roberts is not categorically opposed to affirmative action – all indications are that his stance is closer to Kennedy’s than to Scalia’s and Thomas’s (otherwise presumably he would have joined one of their opinions or written a separate concurrence).  So that’s modest good news for affirmative-action supporters.

Tuesday, February 19, 2013

Online learning and law schools

An editorial in today's New York Times appropriately cautions against a wholesale, reflexive move toward online courses in higher education, and the caution is especially appropriate in the context of legal education.

As anyone following developments in legal education knows, law schools are, like other components of higher education, scrambling to find ways to cut skyrocketing costs.  I'd be surprised if many law schools don't consider increased reliance on online courses as a cost-cutting measure.  But many of the drawbacks of online education are magnified in the law school context.

The Times editorial mentions one such drawback:  the inability of at-risk students to get individualized attention from instructors in an online-only format.  Law schools, particularly (but not exclusively) those outside the "elite" top twenty or thirty, have at-risk students just like undergraduate institutions do.  These students often are surprised to find themselves at risk; most law students performed well in undergrad.  But learning the law is a challenging and unfamiliar process, very different from most undergraduate programs, and many previously strong students have trouble with it.  The best way to overcome those difficulties usually is to meet face-to-face with professors and academic support personnel; another good technique is to form study groups with fellow students.  None of these steps is easy to take in an online-only world.

And there is another consideration that is, if not unique to law schools, then especially salient in that context.  Legal education, even in a large lecture course, typically is more interactive than undergraduate education, and for good reason.  In my first-year Civil Procedure and Constitutional Law courses, for example, I spend a lot of class time calling on students without advance notice and asking them questions relating to the cases and other materials we have read.  The point of this technique -- often (and sometimes pejoratively) called "the Socratic method" -- is fourfold.

First, I want to make the presentation of the material more interesting by teasing out the central points interactively rather than declaring them didactically.  If the Socratic method is done well, not only the student "on call" but the other students in the classroom will try to think of answers to the questions the instructor poses and will attempt to predict where the instructor is going with a line of questioning.  This is a kind of active learning that I believe is more effective, most of the time, than passively listening to a lecture and taking notes.

Second, I want to give students experience thinking on their feet under pressure -- something virtually every lawyer will be called upon to do in practice, some of them quite regularly indeed.

Third, I want the students to know they're not alone in their frequent confusion.  When an on-call student struggles a bit (maybe a lot) with an answer, other struggling students realize there's nothing wrong with the process of struggle:  Everyone goes through it.  Of course this benefit too depends entirely on doing Socratic right; an instructor who humiliates a struggling student sends the message that struggle is unacceptable rather than a natural part of learning.

And fourth -- particularly in my first-year courses -- I want the students to get the message that preparation is essential.  An unprepared student will be embarrassed if he or she is called on, and that's not a bad thing.  Lawyers, after all, can't go to court or attend a meeting with a client unprepared.  Better to be momentarily embarrassed in front of a sympathetic group of fellow students than to cost your client a case or your firm a client.

None of these goals can be achieved to anywhere near the same degree in an online format, at least not until "virtual classroom" technology makes dramatic improvements.  Which is not to say that online instruction won't work for some aspects of the law-school curriculum -- specialized upper-level courses, perhaps.  But it is to suggest that law schools should strongly resist the trend toward online education in the context of first-year and other building-block courses.  To take these courses out of the live classroom would be to drain them of much of their value in training future lawyers.

Monday, February 18, 2013

A post-law school residency requirement?

John Farmer, dean of the law school at Rutgers-Newark and a distinguished practitioner, proposes in the New York Times a requirement that law-school graduates apprentice for two years in a sort of residency program like that required of medical grads.

While there are many details to be worked out -- chief among them the far greater number of law graduates than medical graduates -- I like the basic substance of Farmer's proposal.  It would at least begin to redress two huge structural problems in the existing legal market.

The first problem is the mismatch between demand (lots of demand for low-cost legal services; not so much for high-cost ones) and supply (lots of law grads with enormous debt; not enough high-paying jobs available to help them pay it off).  Requiring a two-year legal residency after law school, while suspending debt payment for that period, would allow recent grads to meet a wide variety of legal needs at relatively low cost, thus fulfilling much currently unmet demand.  And while it wouldn't make graduates' debts go away, it would at least make recent grads more marketable, by giving them the kind of experience paying employers are looking for.

Which leads to the second current problem:  Most legal clients, and thus many legal employers, are no longer willing to subsidize the training of newly-minted lawyers fresh from law school, and law schools are not particularly well suited to supply the hands-on training in diverse areas that clients and employers want.  A residency requirement would provide the practical training that employers won't and law schools can't.

I also like the proposal because of its source:  someone inside the legal academy who also seems likely to have credibility among practitioners.  (According to the brief bio following the Times op-ed, Dean Farmer was "a former attorney general of New Jersey and senior counsel to the 9/11 Commission" before accepting the Rutgers-Newark deanship.)  There is a danger that the current crises in legal practice and legal education will be "addressed" with simplistic panaceas (e.g., doing away with the third year of law school) and scapegoating (e.g., blaming law professors for everything).  Both tendencies are on display in this disappointingly sloppy and one-sided Times article from Feb. 10, which reports a series of complaints and tentative suggestions aired at an ABA task-force meeting in Dallas as if they were consensus recommendations of the ABA.

Real reform is going to take cooperation among the law schools (including their faculties), the bar, and the bodies charged with setting standards for admission to practice (the state supreme court in most states), all of whom are partially to blame for the current state of things, and all of whom will have to pitch in to create solutions.  Many law schools (mine included) are working hard to rethink their roles and their methods in light of the changes in the market they serve.  Dean Farmer's proposal is a good example of this, and it deserves to be taken seriously.

Thursday, February 14, 2013

R.I.P. Ronald Dworkin, 1931-2013

Extremely sad news today that Ronald Dworkin has died.  (I assume the Times will supplement this notice with a more extensive obituary soon.)  Dworkin was one of the most influential legal philosophers of the past century, notable for the importance of his views to central debates in jurisprudence, for his influence on constitutional theory, and for his dissemination of ideas in legal theory to a broader audience through his books and outlets such as the New York Review of Books.  Larry Solum's entry on Dworkin's death is here; I'm sure many will follow.

UPDATE:  Here is a link to the Times's full obituary of Dworkin.

News of his death prompted me to look back through my files -- the few remaining ones that consist of paper documents -- in an attempt to locate a letter I remembered receiving from Dworkin following the publication of my first law review article.  I found the letter, dated July 16, 1996.  (Almost seventeen years ago, which seems a very long time, though perhaps not that long when you consider that we were still communicating almost exclusively by "snail mail" back then.)

I had sent Dworkin a copy of the article -- published in the Yale Law Journal, an incredibly lucky strike for a first article and one I'm still trying to live up to -- in part because a large section of it was devoted to critiquing an aspect of Dworkin's well-known "law as integrity" concept.  Dworkin's reply was short and polite, though not entirely sweet:

Dear Professor Peters,

I appreciate your sending me a copy of your article Foolish Consistency, and I look forward to reading it.  Just glancing at it I found the remark that I think integrity distinct from both justice and equality.  I do think it is distinct from justice, as I defined that term, rather specially.  But not from equality:  on the contrary I think of integrity as a mode or aspect of equality, deriving from the requirements of a community of equals.  I just mention this, though I will probably discover, when reading the article, that the verbal point makes no difference to your argument.

Thanks again.

Sincerely,

Ronald Dworkin

That is the entirety of the letter, and it is the only time I can be said to have "communicated with" Ronald Dworkin.  I was utterly thrilled by it, despite the nit he picked with my characterization of his arguments -- not least because he referred to me as "Professor Peters" in the salutation.  (At the time I was a mere Fellow.)  Dworkin was one of my intellectual heroes -- still is, I suppose.

I don't know whether Dworkin did in fact discover that his "verbal point" made no difference to my arguments, or even if he ever read the article at all.  I responded to his letter with a more extensive letter of my own, explaining my interpretation of "integrity" and inviting him (if that's the word) to comment further on my paper.  I didn't hear back from him.  But that one brief missive remains, in its own way, a highlight of my career.

Monday, January 14, 2013

Should we give up on the Constitution? Part III: Beyond constitutive rules


This post is the third in a series inspired by Mike Seidman's provocative op-ed in the New York Times, in which Seidman seems to be arguing for some form of disobedience to the Constitution.  In my previous post, I contended that some degree of constitutional law is necessary for functional democracy to exist.  To continue a metaphor I used in that post, democracy without constitutional law would be like trying to play a game of baseball while both teams continually argue about what the rules should be.  At least a basic level of constitutional law is necessary to literally constitute democracy -- to create it and define it so that we can participate in it without constantly fighting about what it means.

In the American experience, however -- and increasingly in constitutional systems around the globe -- constitutional law appears to extend beyond basic democratic ground rules.  Many of our most familiar constitutional provisions take the form, not of structural rules necessary to establish a working democratic government (such as rules governing how laws are made, enforced, and interpreted), but rather of "rights" that impose limits on what democratic government may do.  The First Amendment to the federal Constitution, for example, prohibits laws "respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press."  The Second Amendment prevents the government from "infring[ing]" "the right of the people to keep and bear Arms."  The Fifth and Fourteenth Amendments forbid government deprivations of "life, liberty, or property, without due process of law."

These and similar constitutional provisions don't appear to fit comfortably within the category of constitutive democratic rules.  Instead, they seem like restrictions on what our democratic government is allowed to do once it is constituted.  To further indulge the baseball metaphor, constitutional rights appear less like the constitutive rules that require three strikes for an out and four balls for a walk, and more like a hypothetical rule requiring pitchers to throw a fastball on a 3-2 count.  They seem to dictate how the game should be played, not what the game is in the first place.

Can constitutional rights against democracy be justified just as constitutional rules establishing democracy can?  And what about seemingly trivial constitutive rules -- rules that don't really seem necessary to establish democracy?  Seidman mentions one such rule in his essay:  the provision of Article I, section 7 requiring "Bills for raising Revenue" to "originate in the House of Representatives" rather than in the Senate.  It's difficult to imagine that such a rule is essential to the existence of a functioning democratic government; if it were, then many so-called "democracies" around the world that lack such a provision would not in fact be democracies at all.  (A rough analogy here might be the designated-hitter rule in Major League Baseball's American League.  Clearly that rule is not essential to the game of baseball -- otherwise the National League, which has no designated hitter, would be doing something other than playing baseball, as would the many Little League, high school, and college teams that do without a designated hitter.)

Seidman and many other critics of constitutional law -- including most prominently NYU law professor and philosopher Jeremy Waldron, who devoted an influential 1999 book to such a critique -- typically suggest that constitutional rights and other provisions that go beyond basic constitutive rules are illegitimate.  Once we have a basic working democracy in place, these critics contend, we ought to use the ordinary processes of that democracy -- lawmaking by the elected legislature, in whatever form our democracy happens to require -- to work out these other details for ourselves.  We ought to decide democratically, not constitutionally, what rights people have and what the details of our democratic system should look like.  In Seidman's words, we ought to "extricat[e] ourselves from constitutional bondage" and "settle our disagreements through mature and tolerant debate" rather than by obeisance to the "archaic, idiosyncratic and [sometimes] downright evil provisions" of a centuries-old document.

Can this critique of constitutional law be answered persuasively?  I think it can.  To see how, though, we first need to consider two popular but ultimately unpersuasive attempts to defend constitutional law.

Constitutional law and natural rights


People often defend constitutional rights -- provisions that impose substantive limits on what democratic government may do, like the First Amendment's prohibition of laws "abridging the freedom of speech" or the Fourteenth Amendment's requirement of "equal protection of the laws" -- on the ground that these provisions are necessary to protect preexisting "natural" rights from unjustified interference by the democratic majority.  The idea, which finds some support in the philosophy of the seventeenth- and eighteenth-century European Enlightenment and in the rhetoric of American founders like Thomas Jefferson, James Madison, and George Mason, is that government is instituted largely for the purpose of defending people's natural rights, and therefore it must be prohibited from doing things that vitiate that purpose by infringing those rights.

People may indeed have natural rights, and government may indeed have the function (at least in part) of protecting them.  But the aim of protecting natural rights is not, by itself, a persuasive justification of constitutional rights.  That goal cannot explain why we ought to prefer the constitutional Framers' opinions about rights over whatever conclusions we reach through ordinary democratic processes.

I wrote above that people "may" have natural rights.  I used this tentative construction because of course people reasonably disagree about natural rights -- about whether they exist, what they are if they do exist, and what they entail in any given set of circumstances.  As Supreme Court Justice James Iredell put it in 1798, "[t]he ideas of natural justice are regulated by no fixed standard:  the ablest and the purest men have differed upon the subject."  Some people believe, for example, that an individual has a natural right to liberty that immunizes her from any requirement to aid others -- to attempt to rescue a drowning man, for example, or even to spend a few dollars to buy a meal for a starving person when the money would not be missed -- unless the individual in question is responsible for the other person's peril.  Others disagree.  And in a society as culturally and ethnically diverse as most modern democracies, disagreement on these issues will be endemic and rampant.

Given the fact that people inevitably will disagree about natural rights, why then should we resolve those disagreements by deferring to the views of the constitutional Framers rather than working them out through regular democratic procedures?  Democratic procedures, after all, are the way our society normally settles its differences about important issues.  It would indeed seem, in Seidman's words, "bizarre" to settle them instead by reference to the opinions of late-eighteenth or mid-nineteenth-century constitutional Framers.

The answer to this quandary -- why we ought to defer to the Constitution's conclusions about rights rather than working things out for ourselves -- cannot be simply that "the Framers got it right" or "the Framers did a pretty good job of protecting natural rights, all things considered."  These questions -- whether the Framers did in fact get it right or do a pretty good job of protecting natural rights -- are precisely the questions that people will disagree about.  In order for law to possess what legal philosophers call "authority" -- to require obedience to its commands -- it must provide some strong reason to obey those commands even for those who disagree with them.  Otherwise people who disagree with what the law requires would lack any good reason to obey the law.  But the idea that the Constitution protects natural rights provides no such reason, because it cannot explain why someone who disagrees with what the Constitution says about rights nonetheless should obey it.  When someone who disagrees with a constitutional command asks why she should obey that command, the answer cannot be simply "because the Constitution is right and you are wrong."

So the goal of protecting natural rights, standing alone, cannot justify constitutional rights or other aspects of constitutional law.  People disagree on questions involving natural rights, and there is no obvious reason to prefer the Constitution's resolution of those disagreements over the resolutions reached by democratic procedures.

The Moral-Guidance approach

Many defenders of constitutional rights offer a more nuanced justification, however.  They acknowledge that people disagree, quite reasonably, about questions involving natural rights, but they claim that we are more likely to reach the correct answers to these questions by deferring to what the constitutional Framers decided than by using ordinary democratic processes to answer them.  They point to some special features of the constitutional process that they think make that process more reliable than ordinary democracy on questions of rights:  the exceptional wisdom of the Framers, perhaps, or (more commonly) the extraordinarily deliberative and participatory nature of the Framing process.  And these theorists say we ought to obey the Constitution, rather than work out questions of rights through democratic means, because these special features make the Constitution a more-reliable source of the truth about rights (whatever that truth may be) than ordinary democracy.

We might call this defense of constitutional rights a Moral-Guidance approach, because it rests on the notion that the Constitution can guide us toward the moral truth about rights.  Note that the Moral-Guidance defense provides a reason for people to obey the Constitution even when they disagree with it.  That reason is that the Constitution is more likely than the alternative -- regular democratic government -- to generate true or correct or good answers to questions about rights.  The rationale is analogous to the practice of deferring to experts' opinions in other areas of life -- to a patient relying on her doctor's medical advice, for example, or a client relying on his attorney's legal counsel.  We should obey the Constitution, the Moral-Guidance approach holds, because the Constitution, compared with ordinary democracy, is an expert on matters of rights.

The main problem with this Moral-Guidance approach is that its premise of relative moral expertise typically will be unconvincing.  Our Constitution provides an especially acute example of the difficulty.  Most of the Framers of the original Constitution and the Bill of Rights in the late eighteenth century practiced, endorsed, or at least tolerated slavery; virtually all of them thought women should have no place in public life.  These and other glaring moral gaffes make it unlikely that the Framers themselves, as a group, possessed some special moral expertise that is superior to our own.  Moreover, many capable adults were excluded altogether from participation in the ratification process:  not just women, slaves, and most other African-Americans, but also Indians and many non-propertied citizens.  The arbitrary exclusivity of the Framing undermines the claim that the process possessed special moral expertise by virtue of its exceptionally participatory and deliberative nature.

And even if we can somehow overlook the obvious moral failings of the Framers and the deficiencies of their process, the fact remains that the world they lived in looked radically different from our own.  Whatever the Framers' views on free speech, they could not have taken account of developments like television and the Internet; whatever their views on guns, they could not have predicted the advent of hand-held automatic weapons; whatever their beliefs about due process of law, they could not have anticipated the threat of international terrorism.  The list goes on and on.  Even if the Framers were moral experts in the abstract, it is hard to see why we should defer to their supposed expertise on contemporary moral questions that they simply could not have considered.

So the idea that the Framers were relative moral experts is undercut by the obvious moral gaffes they made, by the salient deficiencies in their process, and by their inability to apply their moral views to future conditions they could not predict.  Of course, we can imagine a constitution that mitigates these difficulties.  A constitution (or a constitutional provision) that has been enacted relatively recently would be less likely to feature clear moral errors, either in its substance or in the process of its enactment, because the moral sensibilities of its framers would be more likely to match our own.  And, at least shortly after the framing, it would be more likely that the framers of such a constitution had anticipated any given circumstance in which the law they created might apply to us today.

But constitutions typically are designed to last a long time; despite Thomas Jefferson's suggestion, replacing a constitution every 19 years probably is not the recipe for a stable democracy.  As a constitution grows older, the moral gap between its framers and those subject to it will grow ever larger, and the problem of moral obsolescence will emerge in ever more potent form.  In fact, the phenomenon of unanticipated conditions is likely to arise almost immediately, especially in our era of blindingly rapid technological change.  Imagine, say, a new Free Speech Clause, reconceived to meet the problems posed by television, smart phones, and the Internet.  How quickly would the original understandings behind the Clause become obsolete in our quickly evolving world?  Ten years?  Five?

The fact of moral obsolescence, then, is not unique to our 220-year-old Constitution; it probably is inevitable in any relatively stable constitutional system.  And that fact poses a problem for Moral-Guidance accounts of constitutional authority, because it undermines the persuasiveness of relative moral expertise as a reason to obey a constitution.  As a constitution grows older, the idea that its framers were comparative experts on the moral quandaries that face subsequent generations becomes less and less convincing.

To these difficulties, we can add one final shortcoming of Moral-Guidance accounts.  Remember that the premise of these accounts is that the Constitution is morally wiser than the alternative decisionmaking procedures, namely those of everyday democracy.  Note, however, that people who disagree in substance with what the Constitution requires therefore have reason to doubt this premise.  If I think the Constitution gets it wrong on the question, say, of whether individuals have a right to keep and bear arms, I necessarily must doubt the proposition that the Constitution is a moral expert on this question.  How could the Constitution be a moral expert on the question of gun rights if it answers that question (in my view) so blatantly incorrectly?  Indeed, how can the Constitution be a moral expert on anything if it gets the question of gun rights so badly wrong?  My disagreement with what the Constitution requires implies that the Constitution lacks moral expertise, on that question and perhaps on others.  And since, on a Moral-Guidance approach, the Constitution's authority rests on the premise of its superior moral expertise, my disagreement with it implies that there is no basis for the Constitution's authority.

In other words, on a Moral-Guidance account of the Constitution's authority, people who strongly disagree with the Constitution also will have reason to question the Constitution's authority over them.  This would make the Constitution weakest when it is needed most -- in cases where there is strong disagreement with its commands.  The Moral-Guidance approach thus leads ultimately to a risk of constitutional anarchy.

Two common answers to the question of the Constitution's authority -- of why we ought to obey the Constitution when we disagree with it -- therefore turn out to be deeply problematic.  The simple notion that we should obey the Constitution because it protects natural rights will not satisfy those who disagree with the Constitution's treatment of rights; it cannot provide a reason to obey the Framers' views of rights rather than our own.  The more nuanced idea that we should obey the Constitution because the Framing process possessed some special moral expertise is unpersuasive in light of the glaring moral deficiencies of that process, the inability of the Framers to foresee modern moral issues, and the evidence of our own moral disagreement with what the Constitution sometimes tells us to do.

Fortunately there is a better account available -- a better answer to the question of why we ought to obey the Constitution, even (especially) when we disagree with it.  I will describe that account in my next post.

Sunday, January 6, 2013

Should we give up on the Constitution? Part II: Constitutionalism and constitutive democratic rules

Three distinct questions about the Constitution

In this sequel to my previous post on this topic, I want to distinguish among three questions that I think are implicit in the op-ed by Mike Seidman in the New York Times that prompted that post.  The three questions are these:  (1) Should we give up on the idea of constitutionalism altogether?  (2) Should we "give up on" our own Constitution on the ground that it is irredeemably "archaic, idiosyncratic [or] downright evil," as Seidman claims?  And (3) if the answer to question (2) is yes, does "giving up" on the Constitution imply simply ignoring it -- ceasing to obey it -- or rather using the Constitution's own procedures to fix it?  It's not particularly clear from Seidman's piece which of these questions he means to answer when he advocates "giving up on the Constitution."  But it's important to distinguish among them, because it is entirely possible to answer some of them affirmatively and others negatively.

In this post, I tackle question (1), focusing on a very elementary case in favor of constitutionalism:  We need it to, literally, constitute democratic government.  In my next post, I'll continue the discussion of constitutionalism by exploring the role of constitutional law beyond the establishment of basic, constitutive democratic rules.  In subsequent posts, I'll discuss questions (2) and (3), taking question (3) first because it flows most naturally from my discussion of question (1).

The idea of constitutionalism

So let's begin with the most fundamental question:  whether we should give up on the idea of constitutionalism altogether.  First it will be helpful to define what we mean by "constitutionalism."

By constitutionalism, I mean the practice of deferring to legal rules that are both entrenched and secondary.  "Entrenched" rules are rules that are especially difficult to eliminate or change.  A typical way of entrenching a legal rule is to require particularly onerous procedures in order to change that rule.  Most written constitutions do this by establishing amendment procedures that are outside the ordinary legislative process -- approval by direct popular vote rather than by the legislature, for example, or ratification by supermajorities in the legislature or by a special convention.  The amendment provisions of Article V of the United States Constitution are particularly demanding:  They require a two-thirds vote of both houses of Congress (or a majority vote by a special constitutional convention called by two-thirds of the state legislatures), followed by ratification by conventions or legislatures in three-fourths of the states.  These procedures are burdensome enough that the Constitution has been amended only twenty-seven times in its nearly 225-year history (ten of them all at once within the first two years of its existence).
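To see just how demanding Article V's supermajorities are, it helps to translate them into actual vote counts.  Here is a minimal back-of-the-envelope sketch, assuming the current figures of a 435-member House, a 100-member Senate, and 50 states:

```python
import math

def article_v_thresholds(house=435, senate=100, states=50):
    """Convert Article V's supermajority fractions into vote counts,
    assuming current chamber sizes and number of states."""
    return {
        # Proposal route 1: two-thirds of both houses of Congress
        "house_two_thirds": math.ceil(house * 2 / 3),
        "senate_two_thirds": math.ceil(senate * 2 / 3),
        # Proposal route 2: convention called by two-thirds of state legislatures
        "legislatures_to_call_convention": math.ceil(states * 2 / 3),
        # Ratification: three-fourths of the states
        "states_to_ratify": math.ceil(states * 3 / 4),
    }

print(article_v_thresholds())
# {'house_two_thirds': 290, 'senate_two_thirds': 67,
#  'legislatures_to_call_convention': 34, 'states_to_ratify': 38}
```

Put differently, because 38 of 50 states must ratify, as few as 13 states can block any amendment -- one reason the amendment count stands at only twenty-seven.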

The concept of "secondary" legal rules might be a bit less familiar.  Legal philosophers often distinguish secondary legal rules from "primary" legal rules.  Primary legal rules are rules that govern people's everyday conduct, from traffic laws to tax regulations to the legal requirements for forming contracts to laws prohibiting assault and murder.  Secondary legal rules are rules that govern the procedures for creating and changing these primary legal rules and, sometimes, the permissible content of these primary rules.  Rules about who makes law and how (e.g., bicameralism in Congress, the division of power between the federal and state governments) are examples of secondary rules; so are rules about whether and to what extent the government can regulate areas of conduct like speech and religion.  These secondary rules don't directly apply to the conduct of most people living their everyday lives.  Instead, they govern the making and content of the primary legal rules that apply to everyday conduct.  Most constitutional provisions are examples of secondary legal rules.  (The only current exception in the U.S. Constitution is section 1 of the Thirteenth Amendment, which directly prohibits private actors from practicing "slavery [or] involuntary servitude.")

Constitutionalism, then, is the practice of deferring to legal rules that govern the processes of ordinary democracy and that cannot easily be changed using those processes.  In the United States, constitutionalism includes canonical written texts (on the federal and state levels) that communicate entrenched secondary rules.  It also typically includes the practice of judicial review -- the authoritative interpretation of these (often vague) texts by judges who are relatively insulated from electoral politics.  It is not clear that either of these features is necessary for true constitutionalism, however.  Great Britain, for example, has done without either a written constitution or judicial review for most of its modern history, and yet most Britons consider their system to be a "constitutional" one.

It's important to note, however, that a system must possess some binding, entrenched, secondary rules, whatever their source or mode of interpretation, in order to be considered "constitutional" in a meaningful sense.  Britain can be called a "constitutional" system because political participants in Britain typically consider themselves bound by constitutional rules, even though most of those rules are not codified in a canonical document or enforced by a politically insular judiciary.  The lack of a written constitution or judicial review need not be fatal to constitutionalism, but a lack of binding constitutional rules (whatever their form, and however they are interpreted) would be fatal.

Constitutionalism and constitutive rules

Seidman's essay is not entirely transparent with respect to its primary target.  Is he attacking constitutionalism generally?  Or only constitutionalism as we practice it in the United States -- that is, the U.S. Constitution?  If the latter, which aspects of the Constitution does he oppose, and what does he suggest we do about it?  None of these questions is clearly answered in his piece (though to be fair, I should note that the Times op-ed apparently represents a tiny snippet of what undoubtedly will be much more extensive arguments in Seidman's forthcoming book).

Sometimes Seidman seems to be challenging the idea of constitutionalism itself, not simply objecting to our own Constitution or provisions of it.  For example, Seidman advocates "extricating ourselves from constitutional bondage" and debating issues "solely on the merits, without shutting down the debate with a claim of unchallengeable constitutional power." I read Seidman as suggesting here that democracy would be better without the yoke of constitutionalism around its neck -- that we would be better off deciding issues (some issues, anyway) in a purely democratic fashion rather than feeling ourselves bound by, and thus continually adverting to, constitutional limits on how we can decide those issues and on what we can decide.

Seidman's position thus suggests the possibility of a pure, unadulterated, unfettered system of democracy, free of annoying constitutional limitations.  But this assumption is a mistake, one commonly made by critics of constitutionalism.  In fact it is impossible to have democracy without some degree of constitutionalism.  Constitutional law is necessary to literally constitute democracy.

Consider the fact that even Seidman seems to acknowledge the need for a basic level of constitutional law.  He thinks we should not "have a debate about, for instance, how long the president’s term should last or whether Congress should consist of two houses.  Some matters are better left settled, even if not in exactly the way we favor."  What Seidman is recognizing here is that some degree of entrenched constitutional rules -- literally constitutive rules -- is necessary to get democracy up and running in the first place.  We cannot have a functioning system of democracy if we are continually debating the details of how laws get made, who has the authority to interpret and enforce them, and so on.  Constitutive constitutional law is in this sense a necessary condition of democracy.

It is interesting, however, that many of Seidman's examples of what he calls the "archaic, idiosyncratic and downright evil provisions" of our actual Constitution arguably fall into this constitutive category.  He asks "why [we] should ... care" about Article I, sec. 7's requirement that "Bills for raising Revenue" originate in the House rather than in the Senate.  He thinks the president should "have to justify military action against Iran solely on the merits," without relying on his Article II power as commander-in-chief.  He asserts that Congress' power of the purse, conferred by Article I, should "be defended on contemporary policy grounds, not abstruse constitutional doctrine."

So, on the one hand, we have Seidman recognizing the need for some constitutive democratic rules -- that "[s]ome matters are better left settled, even if not in exactly the way we favor."  But on the other hand, we have Seidman questioning the authority of certain constitutive democratic rules (the requirements for revenue bills, the allocation of military power to the president, the conferral of the taxing and spending power on Congress).  Why does Seidman think some constitutive rules are "better left settled" while others should be open to debate?

The answer is not that there is some definitive, logical line to be drawn between the "settled" provisions and the debatable ones.  Just about any constitutive provision reasonably can be debated on its merits -- from seeming minutiae like the question of where revenue bills should originate to big-picture issues like whether and how power should be divided between the federal and state governments, or between the different branches of the federal government.  There is no single inherently correct way to organize a democracy, and even if there were, people inevitably would disagree about what it is.

Such disagreements are all the more inevitable given the fact that the participants frequently will stand to gain or lose depending on how they are resolved.  (Members of the House gain power relative to members of the Senate if revenue bills must originate there; state government officials lose power if more authority is ceded to the federal government; and so on.)  The inevitability of disagreement is precisely the reason we need constitutive rules -- legally enforceable provisions that settle these debates by stipulating one authoritative way of doing things.  Without them, continual fights about the meaning of democracy would make the actual operation of democracy impossible.  It would be like trying to play baseball with the teams constantly debating the definition of a strike.

So we need constitutional law, at least at the constitutive level, in order to have democracy itself.  Seidman's implied dichotomy between "pure" democracy and adulterated constitutional democracy is a false one.  To debate the issue of, say, the president's military power "solely on the merits," without reference to constitutional provisions, would be to throw the very idea of democracy up for grabs.

Seidman tries to finesse this point when he cites the examples of countries "like Britain and New Zealand," which "have systems of parliamentary supremacy and no written constitution, but are held together by longstanding traditions, accepted modes of procedure and engaged citizens."  These countries may not have written constitutions or judicial review (although, perhaps revealingly, both Britain and New Zealand recently have taken steps in that direction), but, as I suggested earlier, this doesn't mean they don't have constitutional law.  It means only that their constitutive rules take the form of entrenched statutes, traditional institutions, and established procedures rather than provisions of a single canonical document.  Political participants in Britain frequently debate what their "constitution" requires, despite the lack of a single governing text; they believe themselves bound by a set of constitutive rules even if those rules are not written down on a definitive piece of parchment.  The examples of Britain and similar systems prove only that not all constitutions must derive from a unitary constitutional text, not that constitutional law is unnecessary.

By the same token, the fact that we lack clear constitutive rules governing every detail of democratic procedure does not mean that we could function without constitutive rules altogether.  In the United States, there are many details of democratic government that either are not governed by constitutional rules at all (such as the filibuster in the Senate) or are subject to rules whose precise meaning is in dispute (such as the allocation of power over foreign and military affairs between Congress and the president).  We can get away with uncertainty (and the resulting disputes -- which, it should be noted, are frequent and heated) on these issues because the more-fundamental mechanisms of government are settled.  Power struggles between the president and Congress are tolerable because at least we know that there is a president and a Congress; we know, for example, that the president is the candidate who won the most electoral votes in the most recent quadrennial election, and that Congress consists of two Senators from each state and representatives apportioned by population as determined by a decennial census.  At some point, however, the absence of settled constitutive rules would pose a serious threat to the stable operation of democracy.  (Imagine serious recurring disputes over how many senators could be seated from each state, or the length of a president's term in office.)

Nor does the lack of clear constitutive rules on some issues imply that we can pick and choose which rules to follow when the rules are clear.  Seidman suggests as much when he questions some constitutive rules (the revenue-bill provision, the commander-in-chief power, Congress' power of the purse) while asserting that other, similar matters are "better left settled" (bicameralism, terms of office).  But people inevitably will disagree about which constitutive issues are "better left settled" and which are not; and so an approach that says "contest those constitutive rules that should be contested and leave the others alone" is a recipe for chaos.  Even if we could establish a rule for which constitutive rules may be contested and which must be considered settled, that rule itself would be a constitutive rule potentially subject to contestation.  (Indeed, one might understand our Constitution itself as embodying an implicit constitutive rule to this effect:  Those rules included in the Constitution (say, the president's power as commander-in-chief) are, by virtue of that fact, considered settled, while those rules not included in the Constitution (say, the Senate filibuster) are, by virtue of that fact, open to democratic debate.)  There is no logical limiting principle once we open the door to disobeying or contesting some constitutive rules.

The idea of democratic government necessitates constitutive rules to establish and maintain that government; and the necessity of constitutive rules implies that all such rules must be binding on the participants in the democratic process governed by them.  In other words, the very notion of democratic government presupposes some degree of constitutional law.  So we have a partial answer to question (1), the question of whether to give up on constitutionalism itself.  We can't give up on it altogether, not if we want to remain a democratic society.  At the very least we need basic constitutive rules upon which to build our democracy.

Note that this answer does not imply any particular answer to question (2), the question of whether we ought to give up on our actual Constitution.  It might be that our Constitution cannot be defended as a set of constitutive democratic rules.  For one thing, maybe our Constitution, or parts of it, can't properly be understood in terms of constitutive rules at all.  (Can the right to choose an abortion, for instance, credibly be characterized as a basic ground rule of democratic government?)  In my next post, I will suggest that the case for constitutionalism extends beyond bare-bones constitutive rules.  And in a future post, I'll argue that many (though probably not all) aspects of our current Constitution can be defended, at least in the abstract, as legitimate expressions of constitutionalism.

Even if most or all of our Constitution can be understood as constitutive rules, however, it might be the case that those rules are so bad that they are not "better left settled" -- that we are worse off obeying these "archaic, idiosyncratic and downright evil" rules than we would be if we left everything up for grabs or, perhaps better, started over and tried to draft a better set of constitutive rules.  I will address this possibility, too, a few posts down the line.

Note too, however, that we have learned something, albeit something rather incomplete and tentative, about question (3), the question of whether we should simply ignore or stop obeying our Constitution rather than following its own procedures for changing it.  The necessity of constitutive democratic rules provides a strong argument against simply abandoning all our constitutive rules without agreeing on something to take their place or, perhaps worse, allowing democratic participants (government officials, judges, citizens) to pick and choose which rules to follow and which to ignore.  Either of these approaches likely would result, not in the realization of the sort of substantive democracy Seidman seems to envision, but rather in a kind of chaos in which all bets are off and no one agrees on how government is supposed to function.  The ultimate outcome of this chaos might be democracy, although there is no guarantee of that.  But it seems probable that the short-term result would be far more painful than the result of continuing to live under an admittedly far-from-perfect constitutional system.