Microsoft Calls For Facial Recognition Tech Regulation

Microsoft and the AI Now Institute are both calling for regulation as facial recognition software gains popularity.

As facial recognition continues to gain traction in public use cases, Microsoft on Thursday called for regulation of the technology, citing heightened concerns around privacy and consent.

Over the past year, facial recognition technology has started to pop up in various government-related applications across the country – from police departments to airports. Most recently, this week the Department of Homeland Security unveiled a facial recognition pilot program for surveilling public areas surrounding the White House.

However, Microsoft president Brad Smith said in a Thursday post that the race for developing facial recognition software in the tech space is forcing companies to “choose between social responsibility and market success.”

“We believe it’s important for governments in 2019 to start adopting laws to regulate this technology,” he said. “The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”

A new report on artificial intelligence and facial recognition by the AI Now Institute has further highlighted the dangers of facial recognition software and called for more regulation and testing of the technology.

“Facial recognition technology poses its own dangers, reinforcing skewed and potentially discriminatory practices, from criminal justice to education to employment, and presents risks to human rights and civil liberties in multiple countries,” the report said.

The White House pilot program is only the latest government use case for facial recognition technology.

In 2017, U.S. Customs and Border Protection launched a “Traveler Verification Service” that applies face recognition to all airline passengers, including U.S. citizens, boarding flights exiting the United States. Earlier this year, it was disclosed that the Orlando Police Department and the Washington County Sheriff’s department were using Amazon’s Rekognition system.

It’s not just the U.S. – China has also launched several alarming artificial intelligence-enabled surveillance applications this year.

Meanwhile, more tech giants are also tapping into the lucrative applications that facial recognition technology has to offer.

Microsoft itself has been developing “Face API” facial recognition software that enables devices to offer face verification. Facebook has dipped its toe into facial recognition, acquiring a company called Face.com that designs facial recognition software.

Perhaps the most widely known and used, Amazon’s Rekognition platform can sniff out large numbers of people in a single video or still frame (Amazon did not respond to a request for comment on facial recognition tech regulation from Threatpost).
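As a rough sketch of how output from a face detection service like Rekognition is typically consumed: the service’s DetectFaces call returns a JSON document with a `FaceDetails` list, one entry per detected face, each carrying a confidence score and bounding box. The sample payload below is illustrative, not real Rekognition output, and a real call would go through the AWS SDK rather than a canned dictionary.

```python
# Sketch: summarizing a face-detection response shaped like Amazon
# Rekognition's DetectFaces output. The sample payload below is
# illustrative only; a real call would be made via the AWS SDK, e.g.
#   boto3.client("rekognition").detect_faces(Image={...})

def count_confident_faces(response, min_confidence=90.0):
    """Count detected faces whose confidence meets a threshold."""
    return sum(
        1
        for face in response.get("FaceDetails", [])
        if face.get("Confidence", 0.0) >= min_confidence
    )

# Illustrative response: two high-confidence faces, one low-confidence.
sample_response = {
    "FaceDetails": [
        {"Confidence": 99.8,
         "BoundingBox": {"Left": 0.1, "Top": 0.2, "Width": 0.1, "Height": 0.2}},
        {"Confidence": 97.2,
         "BoundingBox": {"Left": 0.5, "Top": 0.3, "Width": 0.1, "Height": 0.2}},
        {"Confidence": 55.0,
         "BoundingBox": {"Left": 0.7, "Top": 0.6, "Width": 0.1, "Height": 0.2}},
    ]
}

print(count_confident_faces(sample_response))  # prints 2
```

The confidence threshold matters in practice: how many “people in a frame” a deployment reports depends directly on where that cutoff is set.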

The boost in facial recognition is partially due to the growing use of sensor networks, social media tracking, and “affect recognition” – a capability that lets systems infer not just identities from users’ faces, but their emotions as well.

Microsoft’s “Facial Recognition Principles”

Amid these advancements, roadblocks still exist in facial recognition technology. Smith said that the government needs to address three overarching issues: potential bias and discrimination in facial recognition tech, privacy concerns, and the potential encroachment of the tech on democratic freedoms.

Privacy issues in particular have been in the spotlight as facial recognition technologies spread.

On the heels of the DHS’ White House pilot program, for instance, Jay Stanley, senior policy analyst at the American Civil Liberties Union, this week urged policymakers to think carefully about the dangers of facial recognition technology as the tech continues to grow in popularity.

“The program is another blinking red light for policymakers in the face of powerful surveillance technologies that will present enormous temptations for abuse and overuse,” said Stanley. “Congress should demand answers about this new program and the government’s other uses of face recognition. And it should intercede to stop the use of this technology unless it can be deployed without compromising fundamental liberties.”

To address concerns around privacy, Smith stressed that legislation is needed to ensure that tech companies both obtain consent from people who will be impacted by facial recognition tech and provide advance notice that these services are being used.

Obtaining consent in particular is tricky. In the case of the White House pilot program, the Department of Homeland Security said that the public cannot opt out of the facial recognition pilot, except by avoiding the areas that will be filmed as part of the program.

Simply avoiding areas where facial recognition is deployed appears to be the emerging approach to building consent into regulation: “In effect, this approach will mean that people will have the opportunity to vote with their feet – or their keyboards or thumbs,” said Smith. “They’ll be informed, and they can ask questions or take their business elsewhere if they wish.”

The AI Now Institute said in its report that lawmakers should take issues of consent around facial recognition a step further beyond “mere public notice.”

“Such regulation should include national laws that require strong oversight, clear limitations, and public transparency,” the report said. “Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.”

Moving forward, Smith argued, new laws are also needed to enforce the testing of facial recognition services for “accuracy and unfair bias,” and to require transparency from big tech companies.

“New laws should also require that providers of commercial facial recognition services enable third parties engaged in independent testing to conduct and publish reasonable tests of their facial recognition services for accuracy and unfair bias,” Smith said. “A sensible approach is to require tech companies that make their facial recognition services accessible using the internet also make available an application programming interface or other technical capability suitable for this purpose.”
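In its simplest form, the kind of third-party test Smith describes could compare a service’s match accuracy across demographic groups and flag large gaps. Below is a minimal sketch of such an audit; the group labels, results data, and helper names are hypothetical, and a real audit would query the vendor’s API for each labeled test image rather than use canned predictions.

```python
# Sketch of a third-party bias audit: measure match accuracy per
# demographic group and report the largest disparity. All data here
# is hypothetical; a real audit would call the vendor's facial
# recognition API for each labeled test image.
from collections import defaultdict

def accuracy_by_group(results):
    """results: iterable of (group, predicted_match, true_match) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in results:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def max_disparity(per_group):
    """Largest accuracy gap between any two groups."""
    scores = list(per_group.values())
    return max(scores) - min(scores)

# Hypothetical audit results: (group, service's prediction, ground truth)
results = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, True), ("group_a", True, True),
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", False, True), ("group_b", False, False),
]

per_group = accuracy_by_group(results)  # {'group_a': 1.0, 'group_b': 0.5}
print(max_disparity(per_group))         # prints 0.5
```

Publishing per-group numbers like these, rather than a single overall accuracy figure, is precisely what makes the “unfair bias” in a service visible to outside reviewers.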
