Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold them accountable when those systems fail. That is one of the key conclusions of a new report from AI Now, a research institute affiliated with New York University whose members include employees of technology companies like Microsoft and Google.
The report examines the social challenges posed by AI and algorithmic systems, homing in on what the researchers call an "accountability gap" as this technology is "integrated across core social domains." The authors make ten recommendations, including government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and "truth-in-advertising" laws for AI products, so that companies cannot simply trade on the technology's reputation to sell their services.
Big tech companies, caught up in an AI gold rush, have charged into a wide range of markets, from recruitment to healthcare, to sell their services. As Meredith Whittaker, co-founder of AI Now and head of Google's Open Research Group, tells The Verge, "many of their claims about benefit and utility are not backed by publicly accessible scientific evidence."
Whittaker cites the example of IBM's Watson system, which, according to internal documents, gave "unsafe and incorrect treatment recommendations" during cancer diagnoses at Memorial Sloan Kettering Cancer Center. "The claims their marketing department had made about [their technology's] near-magical properties were never substantiated by peer review," says Whittaker.
The authors of the AI Now report say this incident is just one of a number of "cascading scandals" in 2018 involving AI and algorithmic systems deployed by governments and big tech companies. Others range from allegations that Facebook helped facilitate genocide in Myanmar, to the revelation that Google was building AI tools for military drones as part of Project Maven, to the Cambridge Analytica scandal.
In all these cases, there has been public outcry as well as internal dissent within Silicon Valley's most valuable companies. This year, Google employees quit over the company's Pentagon contracts, Microsoft employees pushed the company to stop working with Immigration and Customs Enforcement (ICE), and workers at Google, Uber, eBay, and Airbnb staged protests over sexual harassment.
Whittaker says these protests, supported by labor alliances and research initiatives like AI Now, have become "an unexpected and gratifying force for public accountability."
But the report is clear: the public needs more. The danger to civic life is particularly acute when it comes to the government's adoption of automated decision systems (ADS). These include algorithms used to calculate prison sentences and to allocate medical aid. Usually, the report's authors say, software is introduced into these domains to cut costs and increase efficiency. But the result is often systems that make decisions that cannot be explained or appealed.
AI Now's report cites a number of examples, including that of Tammy Dobbs, an Arkansas resident with cerebral palsy whose Medicaid-provided home care was cut from 56 hours to 32 hours a week without explanation. Legal Aid successfully sued the state of Arkansas, and the algorithmic allocation system was ruled unconstitutional.
Whittaker and fellow AI Now co-founder Kate Crawford, a researcher at Microsoft, say the integration of ADS into government services has outpaced our ability to audit these systems. But, they say, there are concrete steps that can be taken to remedy this. These include requiring technology vendors that sell services to the government to waive trade-secrecy protections, allowing researchers to better examine their algorithms.
"You have to be able to say, 'you've been cut off from Medicaid, and this is why,' and you can't do that with black box systems," says Crawford. "If we want public accountability, we have to be able to audit this technology."
Another domain where immediate action is needed, the pair say, is the use of facial recognition and affect recognition. The former is increasingly used by police forces in China, the US, and Europe. Amazon's Rekognition software, for example, has been deployed by police in Orlando and Washington County, even though tests have shown that the software can perform differently across different races. In one test that used Rekognition to identify members of Congress, the error rate for non-white members was 39 percent, compared to just 5 percent for white members. As for affect recognition, where companies claim technology can scan someone's face and read their character and even their intent, AI Now's authors say companies are often peddling pseudoscience.
Despite these challenges, Whittaker and Crawford say 2018 has shown that when the problems of AI accountability and bias are exposed, tech employees, lawmakers, and the public are increasingly willing to act.
Referring to the algorithmic scandals that have beset Silicon Valley's biggest companies, Crawford says, "Their 'move fast and break things' ideology has broken a lot of things that are dear to us, like the public interest."
Whittaker adds, "What you're seeing is people becoming aware of the contradictions between the cyber-utopian tech rhetoric and the reality of these technologies' implications in everyday life."