The National Telecommunications & Information Administration launched its second stakeholder process to come up with voluntary privacy standards and best practice codes of conduct for facial recognition technology, and it was clear that the participants were faced with a lot of sticky issues.
It is the second in a series of efforts to put meat on the bones of the Obama Administration's privacy bill of rights.
The first was on mobile applications, and the codes developed and announced last July are currently being tested in the marketplace. That process was a little like herding cats, with complaints about the process as well as the result, and not everybody signing on.
The four-hour Feb. 6 discussion was not meant to produce standards by the end of the day—it will be a months-long process if past is prologue—or even to start the drafting of codes. Instead, it was an informational meeting about the state of the technology and its potential uses and/or abuses, and a chance to tee up the issues the code will need to address.
The goal, ultimately, said NTIA chief Larry Strickling, was to come up with voluntary standards, not to impose NTIA's views on the technology on stakeholders. "We are not regulators. We do not bring enforcement actions. Instead, we are in a unique position to encourage stakeholders to come together, cooperate, and reach agreement on important issues," he said.
"I think we all understand that facial recognition technology has the potential to improve services for consumers and support innovation by businesses," said Strickling. "However, the technology poses distinct consumer privacy challenges. Digital images are increasingly available in the commercial context, and the importance of securing faceprints and ensuring consumers’ appropriate control over their data is clear. And it is critical that privacy safeguards keep pace with innovation."
That will be a tough order if the stakeholder panelists were any gauge.
It was generally agreed that a time of effectively no anonymity is coming. Stores are using the technology to identify shoplifters and keep them out; researchers—like Nielsen—are using it to ensure that the people on their ratings panels are who they say they are; other marketers are using it to gauge reactions to ads—down to reading pupil dilation; casinos are using it to identify the "whales" they want to greet with a smile, or the card counters they want to greet with a boot toward the door; and the government is using it to try to identify terrorists, or to check faces against a database of millions of perps.
Clearly it is a genie that is not going back in the bottle, although Jeff Chester, executive director of the Center for Digital Democracy, warned against simply bypassing the hard questions of whether such scanning and marketing should be done at all.
Facial recognition definitely has an upside, said panelists, including fraud prevention, authentication—bank account security, say—or even helping those who are face blind to function more normally.
But the potential for sharing and storing that information to target or discriminate (or stalk) is a big downside, as are the loss of privacy and the possibility of misidentification, although one panelist said the accuracy with a good photo is over 99%.
Howard Fienberg, director of government affairs for the Marketing Research Association, said he saw benefits in helping people like him, who have trouble identifying faces. He also said that anonymity was a fairly new concept—a product of the rise of big cities and of travel that took people away from towns where, to paraphrase Cheers, everybody knew their name.
He cited Internet pioneer Vint Cerf's observation to that effect, although Chester pointed out that Cerf now worked for one of the biggest collectors of data (Google, where he is the aptly named Chief Internet Evangelist).
Among the issues the codes will likely need to address: how the info could be used for predictive judgments—is someone ill or drunk; how people can get off watch lists if they are incorrectly identified; how the technology can be used for marketing—a pay-per-emotion model for ad effectiveness, say; how long data can and should be stored; and what technical safeguards can be employed and how they can enhance transparency, control and security.