After a hellish year of tech scandals, even regulation-averse executives have begun to call for legislation. But Microsoft President Brad Smith went a step further on Thursday, urging governments to regulate the use of facial recognition technology to ensure that it does not invade privacy or become a tool for discrimination or surveillance.
Technology companies are often forced to choose between social responsibility and profit, but the effects of facial recognition are too grave for business as usual, Smith said. "The only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition," he said in a speech at the Brookings Institution. "We must ensure that the year 2024 doesn't look like a page from the novel 1984."
To combat bias, Smith said legislation should require companies to provide documentation of what their technology can and cannot do, in terms that customers and consumers can understand. He also said laws should require meaningful human review of facial recognition results before final decisions are made in "consequential" use cases, such as decisions that could cause bodily or emotional harm or that could impinge on privacy or fundamental rights. As a further privacy safeguard, Smith said that when facial recognition is used to identify consumers, the law should require conspicuous notice clearly stating that these services are in use. He also pointed to a June decision of the US Supreme Court requiring authorities to obtain a search warrant to get cell phone records that reveal a user's location. "Do our faces deserve the same protection as our phones?" he asked. "In our view, the answer is a resounding yes."
"The only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition."

Microsoft President Brad Smith

Smith said companies and governments that use facial recognition should be transparent about their technology, including opening it to outside review. "As a society, we need legislation that enables impartial testing groups like Consumer Reports and their counterparts to test facial recognition services for accuracy and unfair bias," Smith said. Smith's speech on Thursday repeated a call to regulate facial recognition technology that he first made in July, but added new specifics. He listed six principles that he said should guide the use and regulation of facial recognition: fairness, transparency, accountability, nondiscrimination, notice and consent, and lawful surveillance. He said Microsoft will release a document next week with suggestions for putting those principles into practice.
As governments and businesses increasingly apply facial recognition technology in areas such as criminal justice and banking, critics and technologists have raised concerns. Amazon's facial recognition technology, Rekognition, is used by police in Orlando, Florida. The ACLU tested Amazon's tool and found that it misidentified members of Congress.
Also on Thursday, the research institute AI Now released a new report highlighting the urgent need for companies to open their algorithms to auditing. "AI companies should waive trade secrecy and other legal claims that prevent algorithmic accountability in the public sector," the report said. "Governments and public institutions must be able to understand and explain how and why decisions are made, particularly when people's access to healthcare, housing, welfare, and employment is at stake."
AI Now cofounders Kate Crawford and Meredith Whittaker said their focus on trade secrecy grew out of a symposium with leading lawyers held earlier this year. "They're suing algorithms, if you like," said Crawford. "It was extraordinary to hear dozens of lawyers tell stories about how hard it is to find basic information."
Their report also discusses the use of affect analysis, which employs facial recognition technology to detect emotions. The University of St. Thomas in Minnesota is already using a system built on Microsoft tools to monitor students in the classroom via webcam. The system predicts emotions and sends a report to the teacher. AI Now says this raises questions about the technology's ability to capture complex emotional states, a student's ability to contest the results, and the way it could shape what is taught. There are also data privacy concerns, particularly since students do not appear to have been told that the system would be used on them.
Microsoft did not immediately respond to a request for comment on the university's system.
Michael Posner, a professor at New York University's business school and director of the school's Center for Business and Human Rights, is familiar with the framework proposed by Microsoft; he has worked with tech companies on fake news and Russian interference. In his experience, companies have been reluctant to work with governments and consumers.
"They don't like the government regulating their work in any way. They have been slow and too reserved with disclosures that would give people a sense of what's going on. They have also been very reluctant to work together," says Posner.
Nonetheless, he is confident that leadership from a "mature" company like Microsoft could foster a more open approach. Amazon did not respond to questions about its facial recognition policies.