Technology For Voting Remotely Explained


What is "Remote Voting"?

Also known as e-voting or mobile voting, remote voting allows individuals to participate in a public election without physically visiting a polling station (wiki).

What is the Objective of this Site?

The time has come when remote voting is no longer a matter of convenience. It's a matter of necessity.  The goal of this site is to explain, in plain language, how remote voting systems can be configured. The required technology is not easy for the general public to understand, but it must be made clearer, or it will never be adopted.

What Info & Tech are Required for Remote Voting?
  • Unique Identifier for Voter w/ Minimal PII
  • Liveness Detection & 3D Face Matching (FaceTec)
  • App/Webpage for Ballot Casting User Interface
  • Server-side Vote Counting System (Microsoft ElectionGuard)
  • Anonymous Vote Storage & Audit System
  • Election Results App/Webpage & Output API

Remote Voting Flow

The Importance of Liveness Detection & Facial Authentication in Remote Voter Verification:

In traditional voting, having a scrutineer check a voter's ID, information, and eligibility satisfies most security questions. Additionally, seeing the people who will be responsible for counting your ballot sustains the chain of trust voters must have after they deposit their ballots.

Until now, technology simply wasn’t capable of replicating the same security assurance afforded to in-person verification. 

Liveness is the only way that election administrators can be sure that the person is both eligible and casting their vote in real time. Those two components provide the same voter verification that makes in-person voting so secure.


Is Remote Voting Really Secure?

It can be, but the correct technical components shown above must be in place to ensure these five things:

  • Universal availability to all citizens regardless of hardware
  • 1 citizen = 1 vote
  • 100% voter anonymity
  • Immutable ballot selections 
  • Tamper-proof software systems


Why Don't We Use Remote Voting Already?
Two reasons: People are scared of things they do not understand, and, in many cases, it's not legal.

The process of voting is steeped in tradition going back to 139 B.C. Any change to the voting process elicits a very strong reaction because of voting's role as a fundamental element of our society. 

Solutions such as vote-by-mail and internet voting have been tested and well received, but have failed to meet the same security standards as in-person voting. But just as society has changed over the years, so has our ability to vote. Pen and paper were replaced by ballot marking devices and optical scan tabulators. 


Now the time has come when remote voting is no longer a matter of convenience. It's a matter of necessity. 

Antiquated laws calling for paper ballots, written 100+ years ago, remain on the books.  But in the face of the unprecedented world health crisis brought on by the coronavirus pandemic, the legislative roadblocks that have prevented remote voting will likely be removed as the technology becomes available and everyone agrees that public safety and the democratic process depend on it.

“While I know there is resistance to changing a Senate tradition to allow for remote voting during national emergencies, I believe this is an important issue and worthy of robust discussion amongst our Senate colleagues.” Sen. Rob Portman - The Hill


Can't We Just Mail in Our Votes Using The Absentee Ballot System? 

There have always been ways to mail in a vote in most US states, but an excuse is often required for why the voter cannot be physically present.  Called absentee ballots (wiki), they are also used by citizens and military personnel stationed overseas.  There are many problems with this method: it's slow, it's costly, there is no way for a voter to know their ballot wasn't lost in the mail, and it is an obvious regression.  Government processes must become more efficient and more transparent, not less.


Can The Remote Authentication Technology Be Used for Other Things? 

Yes!  A similar system can be used to enable citizens to access government benefits remotely while preventing fraud and abuse.  Digital unemployment payments, Social Security and pension checks, and aid and stimulus funds can all be distributed this way, and will be in the near future.


For More Information on Government Related Remote Authentication:

Remote Onboarding for Citizens:

Remote Passport Anti-Morphing:


Certified Liveness Was The Critical Missing Component For Remote Voting: 
Source -

What is “Liveness”?
In biometrics, Liveness Detection is an AI computer system’s ability to determine that it is interfacing with a physically present human being and not an inanimate spoof artifact.  Note: It’s not called “Liveliness”. Don’t make that rookie mistake!

The History of Liveness

In 1950, Alan Turing (wiki) developed the famous "Turing Test".  It measures a computer's ability to exhibit human-like behavior.  Conversely, Liveness Detection is AI that determines if a computer is interacting with a live human. 

Alan Turing, c. 1928


The "Godmother of Liveness"

Dorothy E. Denning (wiki) is a member of the National Cyber Security Hall of Fame and coined the term "Liveness" in her 2001 Information Security Magazine article, It's "liveness," not secrecy, that counts.  She states:

"A good biometrics system should not depend on secrecy," and,

“... biometric prints need not be kept secret, but the validation process must check for liveness of the readings."

Decades ahead of her time, Dorothy E. Denning’s vision for Liveness Detection in biometric authentication could not have been more correct.

Dorothy E. Denning

Early Academic Papers About Liveness & Anti-Spoofing

One of the earliest papers on Liveness, "Spoofing and anti-spoofing measures", was published by Stephanie Schuckers in 2002, and it is widely regarded as the foundation of today's academic body of work on the subject.  The paper states that "Liveness detection is based on recognition of physiological information as signs of life from liveness information inherent to the biometric".  

In 2016, her follow-up paper, "Presentations and Attacks, and Spoofs, Oh My", continued to influence presentation attack detection research and testing. 

Is Facial Recognition the Same as Liveness & Face Authentication?

No, and we all need to start using the correct terminology if we ever want to stop confusing people about biometrics! 

Facial Recognition is for surveillance; it's the 1-to-N matching of images captured with cameras the user doesn't control, like those in a casino or an airport.  And it only provides "possible" matches for the surveilled person from face photos stored in an existing database. 

Face Authentication (1:1 Matching+Liveness), on the other hand, takes User-initiated data collected from a device they do control and confirms that User's identity for their own direct benefit, like, for example, secure account access.

They may share a resemblance and even overlap in some ways, but don't lump the two together. Like any powerful tech, this is a double-edged sword; Facial Recognition is a threat to privacy while Face Authentication is a huge win for it.


Should We Fear Centralized Face Authentication?

Fear of biometric authentication stems from the belief that centralized storage of biometric data creates a "honeypot" that, if breached, compromises the security of all other accounts that rely on that same biometric data.

Biometric detractors argue, "You can reset your password if stolen, but you can't reset your face."  While this is true, it is a failure of imagination to stop there.  We must ask, "What would make centralized biometric authentication safe?"

The answer is Certified Liveness Detection.  With it, the biometric honeypot is no longer something to fear because our security doesn't rely on our biometric data being kept secret.

Learn more about how Certified Liveness Makes Centralized Safe in this comprehensive FindBiometrics white paper.

How Liveness Detection Protects Us

Ms. Denning's photo posted above is biometric data, and it is now cached on your computer.  Is she somehow more vulnerable now that you have it?  Not if her accounts are secured with Certified Liveness Detection, because that photo won't fool the AI.  Nor will a video, a copy of her driver's license, passport, fingerprint, or iris.  She must be physically present to access her accounts, so she need not worry about keeping her biometric data "secret".

Liveness Detection prevents bots and bad actors from using stolen photos, deepfake videos, masks, or other spoofs to create or access online accounts.  Liveness ensures only real humans can create and access accounts.

Liveness checks solve some very serious problems.  For example, Facebook had to delete 5.4 billion fake accounts in 2019 alone!  Requiring proof of Liveness would have prevented these fakes from ever being created.

Note: In 2019, the cryptocurrency wallet ZenGo offered a challenge: spoof Certified Liveness Detection and "steal" one Bitcoin (worth over $11,000 at the time).  A hi-res photo of the ZenGo CEO was provided, and the savvy cypherpunks gave it their best shot.  The ZenGo wallet remained unspoofed and the bitcoin stayed safe, proving the efficacy of Certified Liveness Detection in one of the most public displays of biometric security to date.

Liveness for Onboarding, KYC and Enrollment

Requiring every new user to prove their Liveness before they are even asked to present an ID Document during digital onboarding is itself a huge deterrent to fraudsters, who never want their real faces on camera.

If an onboarding system has a weakness, the bad guys will exploit it to create as many fake accounts as possible.  To prevent this, Certified Liveness Detection should be required during new account onboarding.  Then we know that the new account belongs to a real human, and their biometric data can be stored as a trusted reference for their digital identity in the future.


No Liveness Data = No Honeypot Risk
Two types of data are required for every Face Authentication: Face Data (for matching) and Liveness Data (to prove the Face Data was collected from a live person). 

Liveness Data must be timestamped, be valid only for a few minutes, and then deleted. Only Face Data should ever be stored.  New Liveness Data must be collected for every authentication attempt.  

Face photos are just "Face Data" without the corresponding Liveness Data, so they cannot be used to spoof Certified Liveness Detection, and thus, storing photos does not create honeypot risk.

Note: Think of the stored Face Data as the lock, the User's newly collected Face Data as a One-Time-Use key, and the Liveness Data as proof that key has never been used before. 
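The freshness rules above can be sketched as a small server-side gate. This is a hedged illustration under assumed names (the nonce, the timestamp field, and the 3-minute TTL are all hypothetical choices, not a vendor specification): liveness data must be recent, and the same "one-time-use key" must never be accepted twice.

```python
import time

SEEN_NONCES: set[str] = set()     # nonces already consumed (never reusable)
LIVENESS_TTL_SECONDS = 180        # liveness data is valid only for a few minutes

def liveness_data_is_fresh(nonce: str, collected_at: float) -> bool:
    """Accept liveness data only if it is recent and has never been replayed."""
    if time.time() - collected_at > LIVENESS_TTL_SECONDS:
        return False              # stale: the liveness data has expired
    if nonce in SEEN_NONCES:
        return False              # replay: this one-time-use key was already used
    SEEN_NONCES.add(nonce)        # consume the key; it can never pass again
    return True
```

Only after this gate passes would the server compare the incoming Face Data against the stored Face Data; the consumed nonce and expired liveness payloads are then deleted, so nothing replayable is ever kept.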

ISO/IEC 30107 - Liveness Testing Global Standard is the International Organization for Standardization's (ISO) testing guidance for the evaluation of Anti-Spoofing technology, a.k.a. Presentation Attack Detection (PAD).  Three document editions have been published to date, with a fourth edition in progress (as of November 2019).

The word "biometrics" literally means the measurement ("metrics") of living human ("bio") physical characteristics.  Ironically, it took until late 2017 for anyone to release official guidance on how to determine whether the subject of a biometric scan is actually alive.

Due to "hill-climbing" attacks (see Glossary, below), biometric systems should never reveal which part of the system did or didn't catch a spoof.  And while ISO 30107-3 gets a lot right, it unfortunately encourages testing both Liveness and Matching at the same time.  The scientific method requires that as few variables as possible be tested at once, so Liveness testing should return only a Boolean (true/false) response.  Tests should not allow systems to have multiple decision layers that could allow an artifact to pass Liveness but fail Matching because it didn't "look" enough like the enrolled subject. 
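A minimal sketch of the Boolean-only response idea (the score names and thresholds here are hypothetical, not from any standard or vendor): the server computes its internal scores but collapses them into a single pass/fail answer, so an attacker never learns which layer rejected the attempt.

```python
def liveness_decision(match_score: float, liveness_score: float,
                      match_threshold: float = 0.99,
                      liveness_threshold: float = 0.95) -> bool:
    """Collapse all internal scores into one Boolean.

    Returning sub-results (e.g., "liveness passed but match failed") or raw
    scores would hand an attacker a gradient to hill-climb against; a lone
    true/false leaks nothing about which layer caught the spoof.
    """
    return match_score >= match_threshold and liveness_score >= liveness_threshold
```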
Spoof Artifact Levels

When a non-living object that exhibits human traits (an "artifact") is presented to a camera or biometric sensor, it's called a "spoof".  Photos, videos, masks, and dolls are all common examples of spoof artifacts.

 Artifact Level – Testing – Description & Examples

 Level 1 (A) – iBeta testing available: Hi-res paper & digital photos, digital deepfakes, hi-def challenge/response videos and paper masks.

 Level 2 (B) – iBeta testing available: Commercially available lifelike dolls, and human-worn resin, latex & silicone 3D masks under $300 in price.

 Level 3 (C) – iBeta does NOT test: Custom-made ultra-realistic 3D masks, wax heads, etc., up to $3,000 in creation cost.


 Bypass Level – Testing – Description & Examples

 Level 4 – Spoof Bounty available: Decrypt & edit the contents of a 3D FaceMap™ to contain synthetic data not collected from the session, and have the Server process it and respond with Liveness Success.

 Level 5 – Spoof Bounty available: Successfully take over the camera feed & inject previously captured frames that result in the Server responding with Liveness Success.



Non-Certified Liveness

Unfortunately, some types of Liveness Detection are uncertifiable because they are not secure enough to pass the lowest level of the ISO 30107 Presentation Attack Detection guidance requirements. 

Uncertifiable Liveness Detection methods include: blink, smile, turn/nod, colored flashing lights, making random faces, speaking random numbers, and many more. All easily spoofed.

User security and hard-won corporate credibility are put at risk by trusting unscrupulous vendors' exaggerated claims. 

When vendors claim to have "Robust Liveness Detection", they should "Pass the test or give it a rest!"

Note: Watch USAA Bank's non-certified "Facial Recognition" app security get spoofed by a crude photo slideshow, easily unlocking one of their users' bank accounts.


The Threat of Deepfakes
So-called "deepfakes" have been around for years, but now even the general public understands that digital media can be manipulated easily.

If non-certified Liveness Detection is vulnerable to deepfake spoofs derived from photos or videos, it cannot be used for biometric security. 

Note: Watch as a basic "deepfake" puppet, able to spoof almost every non-certified liveness vendor on the market today, is created in 20 seconds.




Free 2D Liveness Detection Providers Listed Below

FaceTec provides Free 2D Liveness Detection to ALL of its Customers & Partners. These 2D Liveness Checks are 97% accurate against Level 1-3 Spoof Attack Vectors. While not as secure as 3D Liveness (99.997%+ accurate), there are scenarios where 2D Liveness Checks make sense, for example, at a Customs Checkpoint in an airport or at a retail store's self-checkout: scenarios where a fraudster is unlikely to be able to use a Deepfake avatar or bypass the camera and inject a pre-recorded video. 

2D Liveness doesn't require a Device SDK or a special user interface; it works on any mugshot-style 2D face photo, the number of checks is unlimited, and the 2D images are processed and stored 100% on the Customer's Server.

You can contact ANY* of the FaceTec Certified Vendors below and ask about Free 2D Liveness Detection, or visit 2D Passive Liveness Checks for more information. *Participation of FaceTec Distribution Partners may vary.

Certified FaceTec 3D Liveness Vendors
FaceTec created its $100,000 Spoof Bounty Program to prove real-world Level 1, 2 & 3 PAD security, plus Level 4 & 5 Biometric Template Tampering and Virtual-Camera & Video Injection Attack Detection.

All organizations have a fiduciary duty to provide the strongest Liveness Detection available to their users when they are asked to perform remote biometric onboarding, identity verification or face authentication.


 Certified 3D Face Liveness Vendors
With security powered & proven by the
$100,000 Spoof Bounty Program 

& NIST/NVLAP Lab Certified PAD: Level 1 & 2 AI*
        01 Systems
BTS Digital
Bryk Group
e4 Global
Gulf Data-gDi
PBSA Group
Pulsar AI
Solus Connect
Sum & Substance
TiC Now

South Africa
South Africa
United Kingdom
United Kingdom
United Kingdom

$100,000 Spoof Bounty Program 

Incentivized public bypass testing for Template Tampering,
Level 1-3 Presentation, Video Replay & Virtual Camera Attacks.

*Vendors listed above have not been individually tested by a NVLAP/NIST-accredited lab for Level 1 & 2 Presentation Attacks; they are distributing FaceTec's software, which has had v6.9.11 certified to Level 2, plus Level 1 regression testing.

$100,000 Spoof Bounty Program

Don't be a guinea pig; insist that your Biometric Vendor maintain a persistent Spoof Bounty Program to ensure that they are aware of, and robust to, any emerging threats, like Deepfakes.  As of today, the only Biometric Authentication Vendor with an active, real-world Spoof Bounty is FaceTec.  The program has now rebuffed over 8,000 real-world spoof attacks, and its goal remains to uncover unknown vulnerabilities in the Liveness AI and security scheme so they can be patched and the anti-spoofing capabilities elevated even further.  Visit to participate. 


Editors' Note: Should Liveness Detection Be Required By Law?

We believe that legislation must be passed to make Certified Liveness Detection mandatory if biometrics are used for Identity & Access Management (IAM).  Our personal data has already been breached, so we can no longer trust Knowledge Based Authentication (KBA).  We must turn our focus from maintaining databases full of "secrets" to securing attack surfaces.  Current laws already require organic foods to be certified, and every medical drug must be tested and approved.  In turn, governments around the world should require Certified Liveness Detection be used to protect the digital safety and biometric security of their citizens.


Resources & Whitepapers

Information Security Magazine - Dorothy E. Denning's (wiki) 2001 article, "It Is 'Liveness,' Not Secrecy, That Counts"

There's a New Sheriff in Town - Standardized PAD Testing & Liveness Detection - Biometrics Final Frontier

Gartner: "Presentation attack detection (PAD, a.k.a., "liveness testing") is a key selection criterion.  ISO/IEC 30107 "Information Technology — Biometric Presentation Attack Detection" was published in 2017."  (Gartner's Market Guide for User Authentication, Analysts: Ant Allan, David Mahdi, Published: 26 November 2018).  FaceTec's ZoOm was cited in the report.  For subscriber access:

Forrester: "The State Of Facial Recognition For Authentication - Expedites Critical Identity Processes For Consumers And Employees" by Andras Cser, Alexander Spiliotes, Merritt Maxim, with Stephanie Balaouras, Madeline Cyr, Peggy Dostie.  For subscriber access:

Ghiani, L., Yambay, D.A., Mura, V., Marcialis, G.L., Roli, F. and Schuckers, S.A., 2017. Review of the Fingerprint Liveness Detection (LivDet) competition series: 2009 to 2015. Image and Vision Computing, 58, pp. 110-128.

Schuckers, S., 2016. Presentations and attacks, and spoofs, oh my. Image and Vision Computing, 55, pp. 26-30.

Schuckers, S.A., 2002. Spoofing and anti-spoofing measures. Information Security Technical Report, (4), pp. 56-62.


Glossary - Biometrics Industry & Testing Terms:

1:1 (1-to-1) – Comparing the biometric data from a subject User to the biometric data stored for the expected User.  If the biometric data does not match above the chosen FAR level, the result is a failed match.

1:N (1-to-N) – Comparing the biometric data from one individual to the biometric data from a list of known individuals; the faces on the list that look similar are returned.  This is used for facial recognition surveillance, but can also be used to flag duplicate enrollments.

Artifact (Artefact) –  An inanimate object that seeks to reproduce human biometric traits. 

Authentication – Concurrent Liveness Detection and 1:1 biometric matching of the User.

Bad Actor – A criminal; a person with intentions to commit fraud by deceiving others.

Biometric – The measurement and comparison of data representing the unique physical traits of an individual for the purposes of identifying that individual based on those unique traits.

Certification – The testing of a system to verify its ability to meet or exceed a specified performance standard.  Testing labs like iBeta issue certifications.

Complicit User Fraud – When a User pretends fraud was perpetrated against them while actually participating in a scheme to defraud an institution, e.g., by stealing an asset and then asking the institution to replace it.

Cooperative User – When a testing organization is guided by ISO 30107-3, the human Subjects used in the tests must provide any and all biometric data that is requested.  This helps to assess the complicit User fraud and phishing risk, but only applies if the test includes matching (not recommended).

Centralized Biometrics – Biometric data is collected on any supported device, encrypted and sent to a server for enrollment and later authentication for that device or any other supported device.  When the User’s original biometric data is stored on a secure 3rd-party server, that data can continue to be used as the source of trust, and their identity can be established and verified at any time.  Any supported device can be used to collect and send biometric data to the server for comparison, enabling Users to access their accounts from all of their devices, new devices, etc., just like with passwords.  Liveness Detection is the most critical component of a centralized biometric system, and because certified Liveness did not exist until recently, centralized biometrics have not yet been widely deployed.

Credential Sharing – When two or more individuals do not keep their credentials secret and can access each other's accounts.  This can be done to subvert licensing fees or to trick an employer into paying for time not worked (also called "buddy punching").

Credential Stuffing – A cyberattack where stolen account credentials, usually comprising lists of usernames and/or email addresses and the corresponding passwords, are used to gain unauthorized user account access.

Decentralized Biometric – When biometric data is captured and stored on a single device and the data never leaves that device.  Fingerprint readers in smartphones and Apple's Face ID are examples of decentralized biometrics.  They only unlock one specific device, they require re-enrollment on any new device, and they do not prove the identity of the User whatsoever.  Decentralized biometric systems can be defeated easily if a bad actor knows the device's override PIN, allowing them to overwrite the User's biometric data with their own.

Deepfake – A deepfake (a portmanteau of “deep learning” and “fake”) is an AI-based technology that can produce or alter digital video content so that it presents something that did not in fact occur.

End User – An individual human who is using an application.

Enrollment – When biometric data is collected for the first time, encrypted and sent to the server.  Note: Liveness must be verified and a 1:N check should be performed against all the other enrollments to check for duplicates.

Face Authentication – 1:1 Face Matching + Liveness takes User-initiated data collected from a device they do control and confirms that User's identity for their own direct benefit, like, for example, secure account access.

Face Matching – Newly captured images/biometric data of a person are compared to the enrolled (previously saved) biometric data of the expected User, determining if they are the same.

Facial Recognition –  2D Face Matching used for surveillance; it's the 1-to-N matching of images captured with cameras the User doesn't control, like those in a casino or an airport. And it only provides "possible" matches for the surveilled person from face photos stored in an existing database.

Face Verification – Matching the biometric data of the Subject User to the biometric data of the Expected User.

FAR (False Acceptance Rate) – The probability that the system will accept an impostor’s biometric data as the correct User’s data and incorrectly provide access to the impostor.

FIDO – The acronym for Fast IDentity Online; FIDO is an independent standards body that provides guidance to organizations that choose to use Decentralized Biometric Systems.

FRR (False Rejection Rate) / FNMR (False Non-Match Rate) – The probability that a system will reject the correct User when that User's biometric data is presented to the sensor.  If the FRR is high, Users will be frustrated with the system because they are prevented from accessing their own accounts.

Hill-Climbing Attack – When an attacker uses information returned by the biometric authenticator (match level or liveness score) to learn how to modify their attacks to increase the probability of spoofing the system. 

iBeta – A NIST/NVLAP-accredited testing lab in Denver, Colorado; the only lab currently certifying biometric systems for anti-spoofing/Liveness Detection to the ISO 30107-3 standard.

Identity & Access Management (IdAM/IAM) – A framework of policies and technologies to ensure only authorized users have appropriate access to restricted technology resources, services, physical locations and accounts.  Also called identity management (IdM).

Impostor – A living person with traits similar enough to a Subject User that the system determines the biometric data is from the same person.

ISO 30107-3 – The International Organization for Standardization's testing guidance for evaluation of Anti-Spoofing technology.

Knowledge-Based Authentication (KBA) - Authentication method that seeks to prove the identity of someone accessing a digital service.  KBA requires knowing a user's private information to prove that the person requesting access is the owner of the digital identity.  Static KBA is based on a pre-agreed set of shared secrets.  Dynamic KBA is based on questions generated from additional personal information.

Liveness Detection – The ability for a biometric system to determine if User biometric data has been collected from a live human or an inanimate, non-living Artifact.

NIST (National Institute of Standards and Technology) – The U.S. government agency that provides measurement science, standards, and technology to advance economic advantage in business and government.

Phishing – When a User is tricked into giving a Bad Actor their passwords, PII, credentials, or biometric data.  Example: A User gets a phone call from a fake customer service agent and they request the User’s password to a specific website.

PII – Personally Identifiable Information is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context.

Presentation Attack Detection (PAD) – A framework for detecting presentation attack events. Related to Liveness Detection and Anti-Spoofing.

Root Identity Provider – An organization that stores biometric data appended to corresponding personal information of individuals, and allows other organizations to verify the identities of Subject Users by providing biometric data to the Root Identity Provider for comparison.

Spoof – When a non-living object that exhibits some biometric traits is presented to a camera or biometric sensor.  Photos, masks, or dolls are examples of Artifacts used in spoofs.

Subject User – The individual that is presenting their biometric data to the biometric sensor at that moment.

Synthetic Identity – When a Bad Actor uses a combination of biometric data, name, social security number, address, etc. to create a new record for a person who doesn't actually exist, and for the purposes of using an account in that name.

Editors & Contributors

Kevin Alan Tussy


John Wojewidka
Senior Editor


Josh Rose
Tech Editor



© 2021, All rights reserved.