Researcher: Biometrics Unproven, Hard To Test
Ann Harrison, SecurityFocus 2002-08-07

Just how accurate are the face identification systems being rolled out around the country? It turns out, testing them is harder than it looks.

SAN FRANCISCO--James Bond technologies like face recognition, fingerprint sensors, hand geometry, and other biometric security systems may be impossible to accurately evaluate, unless researchers also measure the performance of the testers and the demographics of the subjects, a key researcher said Wednesday.

"Vulnerability tests have been around for a decade; the problem is developing test protocols to test for vulnerabilities," says Dr. Jim Wayman, director of the biometric test center at San Jose State University, speaking at the 11th annual USENIX Security Symposium. "Going from technical results to what happens in a real-world system, you have to go through a mathematical modeling system."

Wayman is developing test protocols for evaluating the performance of biometric devices, which are slated to be published as an annex to the ISO 15408 Common Criteria. He notes that while testing protocols are still in their infancy, millions of dollars are already being poured into biometric systems.

In particular, the U.S. government is a believer in the technology. Wayman notes that DARPA -- the Pentagon's Defense Advanced Research Projects Agency -- has a $42 million, four-year project to develop human identification technologies for use at U.S. Embassies around the world. The office of the U.S. Secretary of Defense also recently requested proposals for face recognition systems that can pick faces out of crowded backgrounds.

Wayman says the government's faith in facial identification may owe something to the exaggerated claims of biometrics vendors, such as the unnamed company which claimed on its Web site that "facial surveillance can yield instant results, verify the identity of suspects instantly and search through millions of records for possible matches, automatically and reliably."

"The hype is basically factually correct; it is difficult to take it head-on with test results," says Wayman. But he notes that vendors exaggerate potential benefits. At best, he says, facial recognition systems limit the range of possible matches to a third of all possible positive match candidates.

High Error Rates
A system used by the Bellagio casino in Las Vegas, Nevada, for instance, which uses facial recognition to match people to a photo database of known card cheats, must wait until the subject actually looks up from their cards. Face recognition systems have a "failure to acquire" rate measuring how often a camera fails to detect a face against a background, says Wayman. "The idea that this operates automatically is entirely incorrect... They are nothing more than investigative tools, they are not automatic systems."

"If the surveillance photo is taken outdoors and the mug shot shows the face at 45 degrees, the failure-to-match rate is 80% for a typical automated system," says Wayman. Varying illumination, based on the color of clothing, creates a 40% chance that the system will not match the subject to a stored photograph.

Wayman notes that if a photo had been taken of a non-disguised Osama bin Laden entering an airport, bin Laden would have about a 60% chance of being identified with a facial recognition system. About one in a hundred people would be tagged by the system for a false positive match to a suspect, regardless of the size of the database, says Wayman.
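Wayman's one-in-a-hundred figure is easy to scale up. As a rough illustration (the airport scenario and traffic figure here are assumptions, not from the article), a short Python sketch:

```python
def expected_false_alarms(people_screened: int, false_match_rate: float = 0.01) -> float:
    """Expected number of innocent people flagged, given a per-person
    false positive rate (Wayman's cited one-in-a-hundred)."""
    return people_screened * false_match_rate

# A hypothetical checkpoint screening 30,000 travelers a day would
# generate roughly 300 false alarms daily at that rate.
print(expected_false_alarms(30_000))
```

Each of those alarms needs a human to resolve it, which is why Wayman calls these systems investigative tools rather than automatic ones.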

According to Wayman, the conclusion of the UK Defence Evaluation and Research Agency's study on facial recognition is that the technology can be an aid to surveillance and database searches, but the photo must be full face and there must be adequate staff to match the photo with the target photo in the database. Wayman adds that there is no research into privacy safeguards. "There is almost no research on privacy and biometrics," he says.

Current performance testing of biometric systems includes technology, scenario, vulnerability, security and operational testing. But Wayman adds that most biometric systems are not evaluated for other important factors such as user attitudes, public perception, and cost-benefit analysis. "Most [biometric] pilot projects are discontinued, not because of failure, but [because of an] inability to get cost-benefit analysis of the systems," says Wayman.

Fingerprint Systems Flawed
Wayman identified potential problems with less exotic biometric devices as well -- like fingerprint readers. Older people and children produce much less distinct fingerprints, which contributes to a higher "failure to enroll" rate for these groups. "If you want to know what the error rate is for fingerprinting, you have to know a whole lot more: what algorithm was being used and what the population looks like," says Wayman.

The right-thumbprint system used by the California Department of Motor Vehicles for negative identification has failed because two fingerprints are needed for an accurate match, says Wayman. He notes that there is about a 2% rate of error with a single print, but two fingerprints taken from the same person in a row are never the same. "One fingerprint doesn't buy you very much and the system does not work," says Wayman.
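Why a second print helps can be sketched with simple probability. Assuming each print misses independently at the 2% rate Wayman cites (the independence assumption is ours, not his), the chance that both fail drops sharply:

```python
def miss_rate(single_print_error: float = 0.02, num_prints: int = 2) -> float:
    """Probability that every supplied print fails to match, assuming
    each print errs independently at single_print_error."""
    return single_print_error ** num_prints

# One print: a 2% miss rate; two prints: roughly 0.0004, or 0.04%.
print(miss_rate(0.02, 1))
print(miss_rate(0.02, 2))
```

In practice correlated failures (scarred fingers, a dirty sensor) make the real improvement smaller than independence predicts, which is consistent with Wayman's caution about population effects.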

Wayman said fingerprints also change significantly over a six week interval and are subject to environmental degradation. "Fingerprints can be easily damaged either accidentally or on purpose by the use of caustic chemicals," he says.

Finally, Wayman observed that testers of biometric systems cannot predict people's performance in one biometric environment from their performance in another.

According to Wayman, an operational test of a hand geometry biometric system at Newark Airport revealed a false match error rate of one in one thousand, and a false non-match rate of four in one hundred people -- good results by any standard. But another hand geometry system tested at Sandia Labs, which had shown an error rate of two-tenths of a percent in the lab, returned error rates of 20% when it was deployed at nearby Kirtland Air Force Base.

He says the errors were linked to the fact that unlike the Sandia test subjects, the Kirtland users were untrained, used the devices outdoors, and were not rewarded for correct usage. "The performance results were taken of one group of people in one environment, and not the performance of technology as a whole," says Wayman.

The USENIX Security Symposium continues through Friday.
