Helping the New Gerrymandering Standard Survive in the Supreme Court

Thanksgiving brings the nation a win for fair representation, in the form of a way to deal with partisan gerrymandering. A three-judge court ruled that the Wisconsin state legislative map is a partisan gerrymander: a map drawn to favor one major political party over the other (decision: Whitford Op. and Order, Dkt. 166, Nov. 21, 2016). The court applied a mathematical standard created by Nicholas Stephanopoulos and Eric McGhee, the “efficiency gap.” The case is now headed for consideration by the Supreme Court.
If this standard, or another that addresses the same need, were adopted widely, it would close a major gap in election law. This is an important case: in this year's House elections, Democrats would have had to win the popular vote by at least 9 percentage points to take control. That is the largest partisan asymmetry on record. It would be reduced considerably if districting were done according to principles that treated both parties equally.
Let me outline the state of play and some potential weaknesses in the proposed standard. As an additional approach, I have developed two standards, based on longstanding statistical practice, which could help overcome skepticism by the Supreme Court. My standards can be calculated automatically at gerrymander.princeton.edu, and are described in detail in the Stanford Law Review.
The current state of play on the partisan-gerrymandering issue is as follows:
Partisan gerrymanders are considered justiciable, which means that courts are empowered to strike them down. (Davis v. Bandemer, 1986)
Supreme Court justices have not come to agreement on a manageable standard which could be applied in general.
A majority of the current Supreme Court has expressed interest in the idea of partisan symmetry, loosely defined as the idea that if the parties switched statewide vote totals, they would also switch seat totals. (LULAC v. Perry, 2006)
Any solution must fit within a large body of existing law. Here are a few principles that have been considered but rejected:
Odd shapes of districts are considered insufficient evidence. Indeed, sometimes odd shapes are needed to connect communities of interest, or to comply with the Voting Rights Act. Generally speaking, single-district gerrymanders are not justiciable, except on grounds of race.
It is insufficient to point out that a minority of votes could elect a majority of representatives. Such an event can occur by chance.
Sub-proportional representation is also out as evidence, e.g., 40% of the votes yielding less than 40% of the seats. Winner-take-all systems generally do not produce proportional outcomes, so sub-proportionality alone is not proof of a gerrymander.
Justice Anthony Kennedy, who is necessary* for a five-vote majority for a possible gerrymandering standard, says that partisan gerrymanders are justiciable under the Fourteenth Amendment (equal protection) and the First Amendment (freedom of association). So the legal underpinnings are there for a standard to be adopted.
To me, “manageable” suggests that a judge should be able to apply the standard without too much help from expert witnesses. There is nothing wrong with expert witnesses. But this is a Constitutional question, and a judge might not want to outsource the critical thinking.
For example, one could take the approach of drawing thousands of possible maps, and make a statistical argument from the results. But to do this, experts have to start with some set of districting standards, which implicitly contain priorities that do not reflect the give-and-take of the legislative process or of requirements such as satisfying the Voting Rights Act or joining communities of interest. In short, redistricting is not a game of chance. A randomly-generated process only reveals the range of outcomes that are possible, not what is desirable. So this is shaky ground.
Now let’s turn to the standard used in the Wisconsin case. It revolves around the key principle that partisan gerrymandering must consider the statewide map as a whole. This is likely to be the basis for any successful standard.
In particular, here is the basic concept of the efficiency gap: Look at the statewide pattern of results. When one party gets just enough votes to win its races by tiny margins, it has used its votes efficiently. If a party’s wins are large, then votes have been wasted. If the two major parties differ in their total number of wasted votes, that is an efficiency gap.
Skipping over the details of their assumptions, Stephanopoulos and McGhee define the efficiency gap in a way that is equivalent to the following formula (see footnote 88 of the Wisconsin decision):
efficiency gap = S – 0.5 – 2*(V – 0.5) = S – 2*V + 0.5
where S is the party’s seat share and V is the party’s vote share. Any point on the blue diagonal of the following plot has an efficiency gap of zero percent:
I have plotted Wisconsin elections from 2010 to 2016. The vertical distance between each data point and the blue diagonal is the efficiency gap. The points are approximately lined up from left to right, which means that across a wide range of outcomes, Democrats are held to a similar number of seats, fewer than 40 out of 99 total. In this way, the Republican majority in the Wisconsin Assembly is protected from changes in the will of the voters.
The efficiency gap works because the diagonal line is close to where the relationship between seats and votes has been observed to fall historically, based on decades of elections in winner-take-all systems worldwide. (It is possible to derive the exact votes-to-seats relationship from basic mathematical principles. Today I will skip that.)
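To make the arithmetic concrete, here is a minimal Python sketch of the calculation. The district vote counts are invented for illustration, and equal turnout across districts is assumed; under that assumption, the wasted-vote tally and the shortcut formula S - 2*V + 0.5 agree exactly.

```python
# A minimal sketch of the efficiency gap, computed two ways.
# District vote counts are invented for illustration only.

districts = [
    # (Democratic votes, Republican votes); equal turnout assumed
    (75, 25),
    (40, 60),
    (40, 60),
    (55, 45),
]

total_votes = sum(d + r for d, r in districts)

# Way 1: wasted votes. A losing party's votes are all wasted; a winning
# party wastes every vote beyond the 50% needed to carry the district.
wasted_dem = wasted_rep = 0
for d, r in districts:
    half = (d + r) / 2
    if d > r:
        wasted_dem += d - half
        wasted_rep += r
    else:
        wasted_rep += r - half
        wasted_dem += d

# Positive = net Democratic advantage (Republicans wasted more votes).
eg_wasted = (wasted_rep - wasted_dem) / total_votes

# Way 2: the shortcut formula S - 2*V + 0.5, with S and V the Democratic
# seat share and vote share. Equivalent when district turnout is equal.
S = sum(1 for d, r in districts if d > r) / len(districts)
V = sum(d for d, r in districts) / total_votes
eg_formula = S - 2 * V + 0.5

print(f"efficiency gap (wasted votes):  {eg_wasted:+.3f}")   # -0.050
print(f"efficiency gap (S - 2*V + 0.5): {eg_formula:+.3f}")  # -0.050
```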
Despite its virtues, the efficiency gap has some weak points.
It relies on “wasted votes,” a phrase that may rankle a judge. I can imagine Justice Alito or Chief Justice Roberts asking: how can a legitimately cast vote be said to be “wasted”? There are other details of the mathematical argument that could be examined, though I do not think the justices will drill into them beyond what I have said above.
A critical justice could call the blue diagonal a form of enforced proportionality – treif, forbidden. It’s not proportionality, exactly – but it could be argued that the efficiency gap establishes a norm for what level of representation is appropriate. If the Supremes don’t like such a baldly stated standard, they might instead want to see a definition of asymmetry that does not explicitly recommend a specific number of seats.
In some years the efficiency gap almost goes away. See the data point for 2014, during which Republicans won by fat margins in Wisconsin, and “wasted” as many votes as Democrats, giving a very small efficiency gap. So the Wisconsin gerrymander “wastes” more votes in some years than others. If the measure of gerrymandering is the efficiency gap, why not just wait until the next election, when it may very well shrink?
Older, simpler statistical tests might resolve these difficulties. Here are two tests for gerrymandering, based on textbook principles established nearly 100 years ago.
1. The mean-median difference. As I wrote in the New York Times, a partisan advantage in a closely-divided state is revealed when the median of a party’s district vote shares (i.e., the middle value) differs from the mean (i.e., the average). When these two numbers are far apart, there is an advantage to whichever party is favored by the median. And the statistical properties of the mean-median difference were worked out decades ago.
Wisconsin redistricting came under single-party Republican control in the post-2010 redistricting cycle. Previous rounds of redistricting were done by court order, after the two parties failed to agree on a map. From 1984 to 2000, the mean-median difference averaged 0.1% toward Republicans – basically zero. From 2002 to 2010, it averaged 3.5% toward Republicans. In the 2012/2014/2016 elections, it averaged 6.4% toward Republicans. This is a large difference, comparable to the most extreme Congressional gerrymanders in Pennsylvania and North Carolina.
(Figure 5B from my Election Law Journal article analyzing Wisconsin; asterisks indicate statistical significance.)
Note that the mean-median difference varies considerably less than the efficiency gap from year to year, and is a good measure of partisan asymmetry even in years like 2010 and 2014, when the efficiency gap was low. This is because any structural gap between the mean and the median is likely to persist, even if one party is lifted by a wave of popular support.
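For readers who want to try the first test, here is a minimal sketch in Python. The district-level Democratic vote shares are invented for illustration; a real analysis would use actual two-party shares, with uncontested races handled as discussed in the technical note below.

```python
from statistics import mean, median

# Hypothetical district-level Democratic shares of the two-party vote.
dem_share = [0.28, 0.33, 0.36, 0.41, 0.44, 0.46, 0.47, 0.62, 0.71, 0.78]

# Mean minus median of one party's district vote shares. A positive value
# (mean above median) means the median district leans more Republican than
# the state as a whole: the packing signature that advantages Republicans
# in a closely divided state.
mm_diff = mean(dem_share) - median(dem_share)
print(f"mean-median difference: {mm_diff:+.3f}")  # +0.036 with these numbers
```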
2. Are individual district wins more lopsided for one party than the other? The core strategy of partisan gerrymandering is to pack opponents into a few districts for lopsided wins, while spreading one’s own voters more thinly. We can simply ask whether, statistically, Democratic and Republican win margins differ. This can be done using the two-sample t-test, probably “the most widely used statistical test of all time.”
The p-values give the probability (one-tailed) that Republicans would have gained this advantage under chance conditions. The advantage arose suddenly in 2012, too fast to be explained by slow trends such as the accumulation of Democrats in high-density population centers.
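A minimal sketch of that comparison, again with invented winners’ vote shares, takes a few lines with scipy. (The one-tailed p-value is half the two-tailed value when the effect runs in the predicted direction.)

```python
from scipy.stats import ttest_ind

# Hypothetical winners' shares of the two-party vote, grouped by party.
dem_win_shares = [0.61, 0.67, 0.72, 0.74, 0.78]        # lopsided Democratic wins
rep_win_shares = [0.52, 0.53, 0.55, 0.56, 0.58, 0.60]  # narrower Republican wins

# Two-sample t-test: do the two parties' win margins share the same mean?
t_stat, p_two_tailed = ttest_ind(dem_win_shares, rep_win_shares)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

print(f"t = {t_stat:.2f}, one-tailed p = {p_one_tailed:.4f}")
```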
As a technical note, Wisconsin does present special problems for statistical analysis. In 2016, nearly half of Assembly races were uncontested. For statistical analysis, something must be done to estimate voter preference in those districts. These details are discussed in my Stanford Law Review article** and my Election Law Journal article; they usually do not have a major effect on the outcomes of the tests.
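The articles cover the details, but as a toy illustration of the general idea: assign the winner of an uncontested race a nominal vote share. The 0.75 figure below is a convention sometimes used in the political science literature, shown here as an assumption, not as the method from my articles.

```python
def impute(dem_share, winner_share=0.75):
    """Replace uncontested results with a nominal two-party share.

    A Democratic share of 1.0 or 0.0 marks an uncontested race; the
    winner is assigned winner_share, an illustrative convention only.
    """
    if dem_share == 1.0:
        return winner_share
    if dem_share == 0.0:
        return 1.0 - winner_share
    return dem_share

raw = [1.0, 0.62, 0.0, 0.44, 0.38, 1.0]
print([impute(s) for s in raw])  # [0.75, 0.62, 0.25, 0.44, 0.38, 0.75]
```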
Overall, I am optimistic that the Supreme Court will at least give this issue a fair hearing. There are two similar cases brewing, one in Maryland, where Democrats perpetrated the gerrymander; and one in North Carolina, where Republicans are the culprits. Within the coming 12 months, we may know whether or not partisan gerrymandering will be allowed in post-2020 redistricting.
*I am assuming that whoever is appointed to the vacancy on the Supreme Court will vote as Scalia did, against the justiciability of partisan gerrymanders. In LULAC, Scalia opined that it was time to give up looking for a clear standard. Will the standards described here win out over the new justice’s likely political preference? I am skeptical, but then again this issue may seem like a technical one rather than a more emotional one such as voting rights. In my view, it is at least as consequential. I note that the Maryland case brings all motivations into alignment.
**The SLR article also gives a third test, one that uses computer simulation to calculate how many seats were ill-gained. However, that is best applied to House redistricting schemes. It has one notable virtue: it can take into account the natural advantages that come from population clustering. If you want to try it out, it is available at gerrymander.princeton.edu.
I thank Stephen Wolf for reading and commenting on this post.
