Microsoft wants to make sure you trust it with your face.

That’s at least part of the impetus behind the software giant’s call for better corporate citizenship and government regulation of facial recognition technology. In a July 13 letter, Microsoft President Brad Smith tries to plot the company’s course from a tense present through an imperfect future. Along the way he touches on everything from Google’s internal dissent over a Pentagon contract to accuracy thresholds for permissible facial recognition use.

To explain why he is calling for the government to regulate his own company’s industry, Smith points to the auto, air safety, food and pharmaceutical industries as examples where regulation has played a major role in ensuring public safety. It is one thing to compete with peers under the same terms to deliver a safe product; it is wholly another to compete against companies happy to seek advantage by forgoing safety measures when not compelled to adopt them.

“There will always be debates about the details, and the details matter greatly,” Smith writes. “But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.”

The harms Smith lists as stemming from unregulated use of facial recognition technology are many: governments (think: militaries) monitoring the political and private activities of individuals, chilling freedom of speech and freedom of assembly. Law enforcement (think: the intelligence community or the Department of Justice) relying on flawed or biased technology to track, investigate, or arrest people for crimes. Private companies using facial recognition to filter how people get credit, land jobs and make purchases. It is the stuff of the heavily monitored, individually tracked dystopia that cyberpunk was built to warn about.

How to prevent this? Smith doesn’t nail down any specific reforms beyond a bipartisan commission to craft the regulation itself. But he offers hints: Accuracy thresholds that the technology must clear before it can be trusted could prevent some abuses. Some form of human oversight and human-in-the-loop controls could ensure accountability and leave open the possibility for humans to halt harmful uses of the technology in progress. Affirming people’s legal rights over their own visage would help mitigate fraud and identity theft, and provide a path for recourse. And there are plenty of hypotheticals about how much consent an individual must give before a camera records their face and a computer processes it.
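Smith doesn’t spell out how such safeguards would work in practice, but as a rough illustration, here is a minimal sketch in Python of how a confidence threshold and a human-in-the-loop check might combine. Every name and number here is hypothetical, drawn from the ideas in the letter rather than from any actual Microsoft or government system:

```python
# Hypothetical sketch of two safeguards Smith hints at: a minimum
# confidence threshold and a human-in-the-loop sign-off. Names and
# numbers are illustrative only, not from any real system.

from dataclasses import dataclass

# Illustrative floor a match score must clear before it is trusted at all.
MIN_CONFIDENCE = 0.99

@dataclass
class Match:
    subject_id: str
    confidence: float  # model's score for this face match, 0.0 to 1.0

def human_review(match: Match) -> bool:
    """Stand-in for human oversight: a person inspects the match and
    approves or rejects it. Here we simply prompt on the console."""
    answer = input(f"Approve match for {match.subject_id} "
                   f"(confidence {match.confidence:.2%})? [y/N] ")
    return answer.strip().lower() == "y"

def act_on_match(match: Match) -> str:
    # Below the threshold, the system may not act at all.
    if match.confidence < MIN_CONFIDENCE:
        return "rejected: below confidence threshold"
    # Even above the threshold, a human must sign off before any
    # consequential action (an arrest, a denial of credit) is taken.
    if not human_review(match):
        return "rejected: human reviewer declined"
    return "approved: forwarded for action"
```

The point of the sketch is the ordering: the automated score can only ever say no on its own, while saying yes always routes through a person.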

But Smith posits that lawmakers – and maybe even technology executives – need to do a gut check.

For example, will Congress act to regulate facial recognition and other categorizing software, such as AI? Are tech companies comfortable producing these technologies no matter how they are used? And are there contracts (again, think the Department of Defense) that are simply not worth taking?

Smith quietly touches on Google’s internal dissent over the Pentagon’s Project Maven, which sought to develop an object-recognition AI that could process drone footage. But the more telling part of his statement is how it skirts Microsoft’s work for ICE, the agency created in 2003 and tasked largely with deportations. Was Microsoft’s facial recognition software being used to facilitate deportations, as a Microsoft announcement about the contract, published online in January, suggested it was?

In a blog post boasting about how government agencies use Microsoft products, Microsoft specifically listed the ways its software and services can help ICE agents make informed decisions faster, noting that Azure Government enables them to “utilize deep learning capabilities to accelerate facial recognition and identification.” As internment camps for migrant children and others awaiting deportation made national headlines, Microsoft employees and coders on GitHub, which Microsoft recently acquired, called for the company to cancel its contract with ICE.

Smith says that ICE is not using Microsoft’s facial recognition technology, clarifying that “the work under the contract instead is supporting legacy email, calendar, messaging and document management workloads.” Smith then muses, lightly, on the calls for the company to end the contract and cease supporting the agency altogether. He doesn’t answer that call here, other than to argue that these are questions faced across the technology sector, including at Microsoft and Google. Hence his call for collective action, and for a legislative solution that would bind all technology companies in this space to the same rules and the same norms, ideally leaving the political questions to the work of politics, and not to individual companies.

“A government agency that is doing something objectionable today may do something that is laudable tomorrow,” Smith writes. “We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.”

While the politics around a given technology, or a given agency, may change from administration to administration, setting public terms, regulations and norms at least gives the public and industry a baseline, and a way to recognize when that baseline shifts. Because someday, facial recognition may become as commonplace and familiar a tool as email, calendars, messaging and word documents, or even a tool for facilitating deportations.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
