Know Your Election Audit Types
What Works and What Does Not Work!
Do you understand the various ways elections in the United States are audited? Here are the details of each audit type and how it applies.
Risk Limiting Audit
Picture an election like a huge jar full of mixed-color marbles, and the tabulator’s job is to tell you which color won. A Risk-Limiting Audit (RLA) is how you check the winner without recounting every single marble unless the check says you must. The “risk limit” is the promise: if the reported winner is wrong, the audit has a known, small maximum chance of failing to catch that (for example, a 5% risk limit means at most a 5% chance the audit would incorrectly “pass” a wrong outcome). (NIST; Colorado Secretary of State)
Here’s what an RLA looks like, step by step, in plain language:
Start with a voter-verifiable paper trail
RLAs depend on having paper ballots (or voter-verifiable paper records) that are the “ground truth.” The audit checks the reported outcome against what a careful human inspection of the paper would show. (NIST)
Pick the “risk limit” and the contests to audit
Election officials choose how strict the check should be (the risk limit) and which races/contests are being audited. A closer race generally requires checking more ballots than a landslide does. (Colorado Secretary of State)
Build the “map” so any randomly chosen ballot can be found
The audit needs a reliable ballot manifest, basically a directory that says where ballots live (containers, batches, box numbers, etc.), so when the random process selects Ballot #X, officials can retrieve the right physical ballot with chain-of-custody controls. (U.S. Election Assistance Commission)
Generate randomness in public (the “seed”)
To prevent cherry-picking, many RLA programs generate a public random seed (often in a public meeting) and use it to drive a pseudo-random number generator that selects which ballots will be audited. (GovDelivery)
Randomly select ballots (or batches) to audit
Using that seed and the manifest, the process produces a list of specific ballots to pull (or, in some methods, batches). (NIST)
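To make the seed-to-manifest step concrete, here is a minimal Python sketch of one common hash-based construction. It is a simplified illustration, not any state’s actual implementation; the seed and manifest below are made up.

```python
import hashlib

def draw_ballots(seed: str, manifest: list[tuple[str, int]], n_draws: int) -> list[tuple[str, int]]:
    """Map a public seed to specific ballots with a deterministic, replicable hash draw.

    manifest: (container_id, ballot_count) pairs -- the "map" described above.
    Returns (container_id, position_within_container) for each draw.
    """
    total = sum(count for _, count in manifest)
    selections = []
    for i in range(1, n_draws + 1):
        # Hash the seed plus a draw counter; anyone holding the seed and the
        # manifest can reproduce exactly the same list of ballots.
        digest = hashlib.sha256(f"{seed},{i}".encode()).hexdigest()
        index = int(digest, 16) % total  # 0-based index into all ballots
        # (Real tools correct the slight bias a plain modulo introduces.)
        for container_id, count in manifest:
            if index < count:
                selections.append((container_id, index + 1))
                break
            index -= count
    return selections

# Example: a publicly rolled 20-digit seed and a toy three-container manifest.
manifest = [("Box-01", 400), ("Box-02", 350), ("Box-03", 250)]
print(draw_ballots("98731462085527193846", manifest, n_draws=5))
```

The point of the design is replicability: observers who record the seed and hold a copy of the manifest can recompute the selection themselves.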
Retrieve those ballots under tight procedures
Officials pull the selected ballots using documented handling rules so observers can be confident the right ballots were retrieved and nothing was swapped. (This is where chain-of-custody discipline matters.) (U.S. Election Assistance Commission)
Humans interpret those ballots, and you compare to what the system said
There are two common flavors:
Ballot-polling RLA: humans read the sampled ballots and use statistics to see if that sample provides strong evidence the reported winner really won. (NIST)
Ballot-level comparison RLA: humans read the sampled ballots and compare each one to the system’s corresponding cast vote record (CVR). Discrepancies become quantified “evidence” and affect whether you must expand the sample. (electionline)
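To make the comparison flavor concrete, here is a minimal sketch of the per-ballot discrepancy bookkeeping, under the simplifying assumption of a two-candidate contest; the function and vote labels are hypothetical.

```python
def margin_overstatement(cvr_vote: str, paper_vote: str) -> int:
    """Score one sampled ballot in a two-candidate contest.

    cvr_vote / paper_vote: "winner", "loser", or "other" (undervote, etc.).
    Positive result = the CVR inflated the reported winner's margin on this
    ballot (an overstatement); negative = it understated the margin.
    """
    def margin(vote: str) -> int:
        return {"winner": 1, "loser": -1}.get(vote, 0)
    return margin(cvr_vote) - margin(paper_vote)  # ranges from -2 to +2

# Worst case: CVR recorded a winner vote, humans read a loser vote -> +2.
print(margin_overstatement("winner", "loser"))  # 2
# Agreement -> 0; this is what the audit hopes to see on every sampled ballot.
print(margin_overstatement("other", "other"))   # 0
```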
Do the math: “Is the evidence strong enough yet?”
After each round, RLA formulas compute whether the observed agreement between paper and reported results is strong enough to “confirm” the outcome at the chosen risk limit. (NIST)
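Here is a minimal sketch of how such a formula can work, in the spirit of the BRAVO ballot-polling method, under the simplifying assumption of a two-candidate contest; it is an illustration, not production audit software.

```python
def bravo_check(sample: list[str], winner_share: float, risk_limit: float) -> str:
    """Sequential ballot-polling check in the spirit of BRAVO, for a contest
    reduced to two candidates. winner_share is the reported winner's share
    (must exceed 0.5); sample entries are "winner" or "loser"."""
    t = 1.0  # running likelihood ratio: evidence the reported winner really won
    for ballot in sample:
        t *= winner_share / 0.5 if ballot == "winner" else (1 - winner_share) / 0.5
        if t >= 1 / risk_limit:  # e.g., 20-to-1 evidence for a 5% risk limit
            return "outcome confirmed at this risk limit"
    return "keep sampling (or escalate toward a full hand count)"

# Example: a 55% reported winner, a 5% risk limit, and a sample that roughly
# tracks the reported result (two winner ballots for every loser ballot).
print(bravo_check(["winner", "winner", "loser"] * 40, winner_share=0.55, risk_limit=0.05))
```

With a 5% risk limit, the test demands at least 20-to-1 evidence before confirming, which is exactly the risk-limit promise described at the top of this section.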
If problems show up, the audit automatically expands
This is the heart of RLAs: if discrepancies are too frequent or too severe, the audit doesn’t “shrug”; it pulls more randomly selected ballots and checks again. (NIST)
Worst case: it escalates to a full hand count for that contest
If the evidence never becomes strong enough, the RLA “fails safe” by escalating until the outcome is resolved, potentially by a full hand count (effectively a recount of the contest being audited). (NIST; Verified Voting)
One last plain-English boundary: an RLA is designed to verify the correctness of the outcome (who won), not to investigate everything that could possibly go wrong in an election (like voter eligibility, coercion, or whether someone should have received a ballot).
LOGIC AND ACCURACY TESTING
A Logic & Accuracy test (often called an L&A test) is the “dress rehearsal” election officials run on voting machines before real voting and tabulation to make sure two things are true:
Logic: the system is set up correctly (right ballot styles, contests, rules like overvotes, etc.).
Accuracy: the equipment counts votes exactly as marked. (U.S. Election Assistance Commission)
Here’s what it looks like in real life, in plain language:
Ballots are finalized and the election is programmed
Officials load the official ballot definitions into the election system (the candidates, contests, precinct/ballot styles, and rules). L&A is typically when equipment is about to be placed into “election mode,” so documentation and control matter. (U.S. Election Assistance Commission)
They create a “test deck” (a pre-audited set of marked ballots)
This is a stack of ballots marked in a known pattern so the expected totals are already calculated. The deck usually includes normal votes and tricky cases like overvotes, undervotes, and blank ballots to confirm the system handles them correctly. (U.S. Election Assistance Commission)
They prove the machines start at zero
They verify the equipment shows no votes are stored (often by printing a “zero tape” / zero report).
They run the test deck through every relevant device
Test ballots are scanned through the same tabulators that will count real ballots (precinct scanners and/or central-count scanners). If ballot-marking devices are used, they’re tested too, by producing test ballots that are then scanned.
They compare the machine totals to the known expected totals
If the system totals don’t match the pre-audited results, the test fails.
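A minimal sketch of that comparison step, with hypothetical contests and totals:

```python
# Hypothetical test-deck totals: the pre-audited expectations vs. what the
# tabulator reported after scanning the deck.
expected = {"Candidate A": 25, "Candidate B": 17, "overvotes rejected": 3, "blanks": 2}
reported = {"Candidate A": 25, "Candidate B": 16, "overvotes rejected": 3, "blanks": 2}

mismatches = {k: (expected[k], reported.get(k)) for k in expected if reported.get(k) != expected[k]}

if mismatches:
    for item, (want, got) in mismatches.items():
        print(f"FAIL {item}: expected {want}, machine reported {got}")
    print("Test fails: fix the setup and rerun the full deck.")
else:
    print("Errorless count: totals match; equipment can be sealed for the election.")
```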
If there’s an error, they fix it and rerun until it’s clean
The point is an errorless count before equipment is approved for use. (Arizona Legislature)
They lock it down for the election
After it passes, equipment/media are secured (sealed, logged, controlled access) until used for real voting or tabulation.
Using Arizona as an example:
Arizona makes this concept explicit in A.R.S. § 16-449:
Requires testing the tabulating equipment and programs before the election to ensure it “will correctly count” votes.
Requires public notice (at least 48 hours), bipartisan observation (inspectors not of the same party), and that the test be open to parties/candidates/press/public.
Requires using a preaudited group of ballots with predetermined votes, and specifically includes overvote test ballots to confirm rejection behavior.
Requires correcting errors and achieving an errorless count before approval, plus certain reporting/filing steps if changes occur. (Arizona Legislature)
What an L&A test is not
It’s not a full “hack test” or a guarantee against every threat. It’s a functional correctness check: “Given this election setup, does the system display/record/tabulate correctly and handle edge cases the way the law and ballot rules require?” (U.S. Election Assistance Commission)
Post-Election Field Audit of the Tabulation System (for example, Pro V&V in Arizona)
What Pro V&V audited for:
Pro V&V’s report says the core purpose was to confirm that the software and hardware used in the November 2020 General Election matched what was federally and state certified for use. (Maricopa County)
Concretely, their scope included:
Software integrity verification (hash matching)
For EMS and ICC workstations/servers, they generated SHA-256 hash values from copied software and compared them to known SHA-256 values (i.e., “known good” certification values). (Maricopa County)
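As an illustration of what hash matching involves, here is a minimal sketch; the file path and “known good” digest are hypothetical placeholders, not values from the Pro V&V report.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical path and digest: compare installed software against the
# "known good" value recorded at certification.
known_good = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = file_sha256("/opt/ems/tabulator.bin")
print("MATCH" if actual == known_good else "MISMATCH: software differs from the certified build")
```

The logic is simple: any change to the installed software, however small, produces a completely different digest, so a match is strong evidence the installed build is the certified one.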
Firmware verification on precinct scanners (sample testing)
They extracted firmware from a sample of 35 ImageCast Precinct 2 (ICP2) units, generated SHA-256 hashes for that firmware, and compared those hashes to known values from the EAC Federal Test Campaign. (Maricopa County)
Malware/virus screening
A malware/virus detection tool was run on the workstations/servers to check for malicious software. (Maricopa County)
Hardware configuration verification
They performed visual hardware verification on selected devices, checking components/subcomponents against known references; they reported no discrepancies in the units inspected. (Maricopa County)
Network analysis (internet connectivity)
Pro V&V evaluated network wiring/switching and ran commands testing connectivity to known internet/public IP addresses; they concluded the network they evaluated was a “Closed Network” with no internet access. (Maricopa County)
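In the same spirit, here is a rough sketch of a scripted connectivity probe; the targets are just well-known public services, and this is an assumed illustration, not Pro V&V’s actual procedure.

```python
import socket

# Hypothetical probe in the spirit of a closed-network check: on a genuinely
# air-gapped network, every connection attempt below should fail.
PROBES = [("8.8.8.8", 53), ("1.1.1.1", 443)]  # well-known public DNS/HTTPS hosts

for host, port in PROBES:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable -- this is NOT a closed network")
    except OSError:
        print(f"{host}:{port} unreachable (consistent with a closed network)")
```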
Accuracy test (does it capture/store/report votes correctly)
They ran a formal Accuracy Test (VVSG-referenced) using a county-provided test deck, processed through tabulators to reach at least 1,549,703 ballot positions, and reported the votes were tallied/adjudicated to an accurate ballot count (they note two ballot-jam anomalies handled during the test). (Maricopa County)
What the scope wasn’t
Based on Pro V&V’s own description, this was primarily a system integrity + malware + network + accuracy audit of the tabulation environment (EMS/ICC/ICP2): not a ballot-by-ballot recount, not a voter-roll eligibility investigation, and not a full process/chain-of-custody audit. (Maricopa County)
Statutory “hand count audit” / “early ballot audit”
In Arizona in 2020, counties conducted the statutory “hand count audit” (also called the “early ballot audit”) under A.R.S. § 16-602. It is meant to verify that the tabulation equipment counted correctly, by comparing a human hand count of a small sample of paper ballots to the machine totals for those same ballots. (Arizona Legislature)
What they did in Maricopa County for the 2020 General Election
Think of it like a quick “trust-but-verify” check done after Election Day:
They waited until the Election Day ballots were in
The law requires that the sampling not begin until ballots from polling places/vote centers are delivered and unofficial totals are public. (Arizona Legislature)
The sample wasn’t supposed to be “hand-picked” by officials
Arizona law says the county party chairs (or designees) for recognized parties do the selection by lot, without a computer, i.e., a public random draw process. (Arizona Legislature)
In Maricopa’s 2020 report, the Republican, Democratic, and Libertarian chairs met and the order of selection was chosen by lot. (Arizona Secretary of State)
They audited two different pools
Vote centers (in-person ballots): Maricopa reports it hand-counted 2% of vote centers (4 of 175). (Arizona Secretary of State)
Early ballots: They selected 26 early-ballot batches totaling about 5,000 ballots (the statute’s “1% or 5,000, whichever is less” rule). (Arizona Secretary of State)
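A small sketch of those two sample-size rules, assuming fractional results round up; the floor of 2 vote centers comes from the EPM rule discussed later in this piece.

```python
import math

def vote_center_sample(n_vote_centers: int) -> int:
    # At least 2% of vote centers, with a floor of 2 (per the EPM rule noted below).
    return max(2, math.ceil(0.02 * n_vote_centers))

def early_ballot_sample(n_early_ballots: int) -> int:
    # 1% of early ballots or 5,000, whichever is less.
    return min(math.ceil(0.01 * n_early_ballots), 5000)

print(vote_center_sample(175))         # 4 -- matches Maricopa's 4 of 175
print(early_ballot_sample(1_900_000))  # 5000 -- the cap binds in a large county
```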
They hand-counted a limited set of races, not every contest
The Maricopa report lists five races/categories selected for counting (examples shown in the report include President, a statewide race, a statewide ballot measure, a federal race, and a state legislative race). (Arizona Secretary of State)
They compared hand totals vs. machine totals and looked for discrepancies beyond a “designated margin”
A.R.S. § 16-602 sets the escalation rules: if the difference is below the designated margin, the machine count stands; if it meets or exceeds the margin, they repeat and can expand the audit, up to a full jurisdiction-wide count for that race if needed. (Arizona Legislature)
Maricopa’s report states no discrepancies were found in that hand count audit.
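To illustrate the margin check in miniature: the fractional-difference convention below is a simplifying assumption, since the real designated margin is defined in Arizona’s procedures, not by this sketch.

```python
def check_race(hand: dict[str, int], machine: dict[str, int], designated_margin: float) -> str:
    """Compare hand-count vs. machine totals for one audited race.

    designated_margin is treated here as an allowed fractional difference
    per candidate -- a simplifying assumption for illustration only."""
    for candidate, machine_votes in machine.items():
        diff = abs(hand.get(candidate, 0) - machine_votes)
        if machine_votes and diff / machine_votes >= designated_margin:
            return f"meets/exceeds margin: repeat and expand (starting with {candidate})"
    return "below the designated margin: the machine count stands"

# Hypothetical totals: a one-vote difference on a ~1,500-vote count.
hand = {"Candidate A": 1502, "Candidate B": 1414}
machine = {"Candidate A": 1503, "Candidate B": 1414}
print(check_race(hand, machine, designated_margin=0.005))
```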
THE VOTE CENTER/TALLY CENTER AUDIT
In Arizona, the “2% audit at the tech center” is the statutory hand-count audit under A.R.S. § 16-602: a paper-ballot spot-check done at the county’s central counting center (what people often call the “tech center”). Its purpose is simple: verify that the tabulation system’s reported totals match what the paper ballots actually say, within a small “designated margin.” (Arizona Legislature)
What they did (how it worked)
Waited until Election Day ballots were delivered and unofficial totals were public
The law says selection doesn’t start until ballots from polling places are delivered to the central counting center and the unofficial totals are made public. (Arizona Legislature)
Randomly selected 2% of polling locations (vote centers)
The county party chairs (or designees) for qualifying parties do the selection by lot (no computer). (Arizona Legislature)
In vote-center counties, the EPM treats each vote center as the equivalent of a precinct/polling place and audits at least 2% of vote centers (or 2, whichever is greater). (Arizona Courts)
Randomly selected which races to hand-count
For a general election, § 16-602 calls for (up to) five race categories (plus President in presidential elections): e.g., President, a statewide candidate race, a statewide ballot measure, a federal race, and a state legislative race, chosen by lot. (Arizona Legislature)
Hand-counted the paper ballots from those selected vote centers
Bipartisan hand-count boards manually tallied the votes for the selected races on those ballots and compared the hand totals to the machine totals. (Arizona Legislature)
Checked whether any difference exceeded the “designated margin,” and escalated if needed
If discrepancies exceed the designated margin, the statute requires repeating/expanding steps that can escalate the audit. (Arizona Legislature)
What it was auditing “for”
It was auditing for tabulation accuracy, i.e., did the scanners/tabulation system count the votes the same way a human reading the paper ballots would? It is not a forensic audit of voters, signatures, or chain of custody in the broad sense; it’s a machine-vs-paper verification check. (Arizona Legislature)
What Maricopa’s 2020 “2%” sample actually was
Maricopa’s official hand-count audit report shows:
“Total Vote Centers Counted (2%): 4” with 2,917 ballots cast in those vote centers. (Arizona Secretary of State)
The vote centers listed include Trinity Bible Church of Sun City West, ASU Polytechnic Campus, Betania Presbyterian Church, and Turf Paradise. (Arizona Secretary of State)
The report’s selection worksheet shows the race categories chosen (President, statewide candidate, statewide measure, federal candidate, state legislative). (Arizona Secretary of State)
DOOR KNOCKING CANVASS
In the widely circulated Maricopa-focused effort, canvassers went door to door and then compared what they heard to voter/history data. The public claims centered on things like:
“Ghost votes”: people listed as having voted even though canvassers believed the voter didn’t live at the registration address.
“Lost/missing votes”: people telling canvassers they voted, but the canvassers didn’t see a corresponding participation record (or drew conclusions from partial data).
An Associated Press fact check describing that report noted it was based on interactions with about 4,570 voters in a handful of precincts, and that the report then extrapolated to much larger countywide numbers. (Anchorage Daily News)
What they were trying to “audit for”
Not the machines and not the paper ballots themselves; rather, they were trying to “audit” (informally) for anomalies in voter participation vs. residency and for mismatches between self-reported voting and recorded participation. (Anchorage Daily News)


