electionlineWeekly--June 9, 2011

Table of Contents

I. In Focus This Week

Data Dispatches
Exploring Election Performance Metrics

By Vassia Stoilov and Andreas Westgaard

As state and local governments continue to struggle to balance their budgets and look for every possible way to cut costs, performance management has become a critical component in linking performance to budgeting.

Good performance management requires good data collection and the establishment of core metrics to assess how well a state or county handles day-to-day administration; the result is a goal-setting agenda that increases efficiency and reduces costs.

While the concept of government performance is not new, it has become increasingly prevalent in the past two decades, with the passage of the Government Performance and Results Act (GPRA) in 1993 and the introduction of the Program Assessment Rating Tool (PART) by the Office of Management and Budget under President George W. Bush in 2002 serving as markers of this trend.

GPRA mandates that federal agencies develop strategic plans, annual performance plans, and performance reports. PART complements GPRA in that it identifies an agency’s intended outcomes, establishes quantifiable long-term and annual performance measures, and rates the agency on its achievement of those measures.

Even though election administration has not been immune to the trend toward incorporating performance assessments, it has mostly stayed “under the radar.”

At the federal level, both the U.S. Election Assistance Commission and the Federal Election Commission are obligated to produce annual performance reports.

Performance assessment in local election administration has also been marked by one of the federalist system’s defining features — decentralization.

Typically, counties and city governments play the central role in assessing election administration performance, with much of that data — usually broken down by performance standard and strategic goal — found in annual reports and annual budgets issued by counties or, in some cases, by city governments (as in the case of Minneapolis).

Decentralization, of course, begets substantial diversity in the level of performance measurement across counties. A look at a sample of performance reports from various states and counties across the U.S. suggests a few general observations.

First, because state election codes vary, the performance goals and measures used by counties in different states vary. There are, however, some common goals across counties in different states, such as providing or increasing voter registration among eligible voters.

Second, performance goals and measures used by counties within the same state also vary, as counties typically finance the conduct of elections and appear to determine for themselves what outcomes they hope to achieve.

For example, North Carolina’s Randolph and Wake counties have different goals in spite of their close proximity. In prior performance reports, Wake County’s performance measurements relating to voter registration included the cost per voter registration card processed and the time required to electronically transmit results, while Randolph County’s performance measurements included the number of structures that meet ADA compliance requirements and the timely processing of death certificates to maintain accurate voter registration files.

Third, performance goals and measures used by the same county have also changed from year to year. For instance, the Randolph County Elections Office specified in its annual report that beginning in FY06, the county would change how it measures its strategic goal “to alleviate crowded conditions at polling places on Election Day.” The metric for this goal would change from “percent of votes cast reconciled with number of voters on Canvass Day” to the “number of voters participating in one-stop [early] voting.” This decision came after the county decided that the former metric did not actually measure the set goal.
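As a rough illustration of the difference between those two metrics, here is a minimal sketch in Python using made-up figures (these are not Randolph County’s actual numbers):

    # Former metric: percent of votes cast reconciled with the number of
    # voters credited on Canvass Day (hypothetical totals for illustration).
    votes_cast = 48750
    voters_credited = 48750
    reconciliation_pct = 100 * min(votes_cast, voters_credited) / voters_credited

    # Replacement metric: the number (and share) of voters using one-stop
    # (early) voting, which speaks more directly to Election Day crowding.
    one_stop_voters = 12400
    one_stop_share = 100 * one_stop_voters / votes_cast

    print(f"{reconciliation_pct:.1f}% reconciled; {one_stop_share:.1f}% voted early")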

Fourth, outcomes and goals vary in breadth. Examples of concrete performance goals are those set by Fairfax County, Va., possibly the result of a detailed strategic plan by the State Board of Elections in accordance with Virginia Performs.

One of Fairfax’s key performance measures is “to provide the legally mandated one voting machine for each 750 registered voters in each precinct with a minimum of three voting machines per precinct and a countywide average of 4.46 voting machines per precinct” (a goal for FY10 that actually changed for FY12).
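The arithmetic behind a rule like Fairfax’s can be made concrete. The sketch below, in Python with made-up precinct registration counts, shows how an allocation under that FY10 measure might be checked against the countywide average target; the data and function name are illustrative assumptions, not any county’s actual system:

    import math

    # Hypothetical allocation rule modeled on the Fairfax FY10 measure:
    # one machine per 750 registered voters in each precinct, with a
    # minimum of three machines per precinct.
    def machines_for_precinct(registered_voters: int) -> int:
        return max(3, math.ceil(registered_voters / 750))

    # Illustrative (made-up) precinct registration counts.
    precincts = {"101": 2100, "102": 3400, "103": 950}

    allocation = {p: machines_for_precinct(n) for p, n in precincts.items()}
    countywide_average = sum(allocation.values()) / len(precincts)

    print(allocation)                    # {'101': 3, '102': 5, '103': 3}
    print(round(countywide_average, 2))  # compare against the 4.46 target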

In contrast, New York’s Schuyler County uses the broader goal of trying “to fully staff all polling places with well trained, knowledgeable election inspectors.”

Finally, some counties stand out for the quality and availability of the reports they produce. Both Montgomery County and Prince George’s County in Maryland publish detailed performance measures in their annual operating budgets, outlining the mission set by the department and the dollar amount allocated to the operating costs of an election.

While much of the debate has focused on the difficulty of finding performance indicators that would work across states, that debate has almost assumed that election administrators were not conducting performance assessments. As far as we can tell, that is not the case, and there may be an even greater wealth of indicators across county and state lines, with the potential for each of the more than 3,000 counties to set unique performance metrics.

Thus, the search for national election performance indicators may be better served by starting from the bottom up rather than the top down, especially since performance indicators, however basic, may have been in use since the late 1990s. If this is the case, certain counties may have had considerable time to hone their performance metrics and could provide valuable insight to those counties that are just getting started.