
When algorithms go awry, we need strength to fight back, researchers say



Governments and private companies are deploying AI systems at a rapid pace, but the public lacks the tools to hold these systems accountable when they fail. That is one of the key conclusions of a new report from AI Now, a research group affiliated with New York University whose members include employees of technology companies like Microsoft and Google.

The report examines the social challenges of AI and algorithmic systems, homing in on what researchers call an "accountability gap" as this technology is "integrated across core social domains." It makes ten recommendations, including calling for government regulation of facial recognition (something Microsoft president Brad Smith also advocated for this week) and "truth in advertising" laws for AI products, so that companies cannot simply trade on the reputation of the technology to sell their services.

Large tech companies have found themselves caught up in an AI gold rush, charging into a wide range of markets, from recruitment to healthcare, to sell their services. But, as Meredith Whittaker, co-founder of AI Now and leader of Google's Open Research Group, tells The Verge, "a lot of their claims about benefit and utility are not backed by publicly accessible scientific evidence."

Whittaker cites the example of IBM's Watson system, which, during trial diagnoses at Memorial Sloan Kettering Cancer Center, gave "unsafe and incorrect treatment recommendations," according to leaked internal documents. "The claims that their marketing department made about [their technology's] near-magical properties were never substantiated by peer-reviewed research," says Whittaker.

The authors of the AI Now report say this incident is just one of a number of "cascading scandals" involving AI and algorithmic systems deployed by governments and big tech companies in 2018. Others range from accusations that Facebook helped facilitate genocide in Myanmar, to the revelation that Google is helping to build AI drone surveillance tools for the military as part of Project Maven, to the Cambridge Analytica scandal.

In all these cases there has been public outcry as well as internal dissent within Silicon Valley's most valuable companies. This year, Google employees walked out over the company's Pentagon contracts, Microsoft employees pressured the company to stop working with Immigration and Customs Enforcement (ICE), and employees at Google, Uber, eBay, and Airbnb staged protests over sexual harassment.

Whittaker says these protests, supported by labor alliances and research initiatives such as AI Now, have become "an unexpected and gratifying force for public accountability."


This year saw widespread protests over the use of AI, including Google's involvement in building drone surveillance technology.

Photo by John Moore / Getty Images

But the report is clear that the public needs more. The danger to civil rights is particularly acute when it comes to governments' adoption of automated decision systems (ADS). These include algorithms that calculate prison sentences and allocate medical aid. Usually, say the report's authors, software is introduced into these domains to cut costs and increase efficiency. But the result is often systems that make decisions that cannot be explained or appealed.

AI Now's report cites a number of examples, including that of Tammy Dobbs, an Arkansas resident with cerebral palsy whose Medicaid-provided home care was cut from 56 hours to 32 hours a week without explanation. Legal Aid successfully sued the state of Arkansas, and the algorithmic allocation system was ruled unconstitutional.

Whittaker and her fellow AI Now co-founder Kate Crawford, a researcher at Microsoft, say the integration of ADS into government services has outpaced our ability to audit these systems. But, they say, there are concrete steps that can be taken to remedy this. These include requiring technology vendors that sell services to the government to waive trade secrecy protections, thereby allowing researchers to better examine their algorithms.

"You must be able to say, 'You've been cut off from Medicaid, this is the reason,' and you can 'do it with black box systems,' says Crawford. "If we want public accountability, we need to be able to test that technology."

Another area where immediate action is needed, the pair say, is the use of facial recognition and affect recognition. The former is increasingly being used by police forces in China, the US, and Europe. Amazon's Rekognition software, for example, has been deployed by police in Orlando and Washington County, even though tests have shown that the software performs differently across different races. In one test that used Rekognition to identify members of Congress, the error rate for nonwhite members was 39 percent, compared to just 5 percent for white members. And with affect recognition, where companies claim technology can scan someone's face and read their character and even their intent, AI Now's authors say companies are often peddling pseudoscience.

Despite these challenges, Whittaker and Crawford say 2018 has shown that, when the problems of AI accountability and bias come to light, tech employees, lawmakers, and the public are increasingly willing to act.

Referring to the algorithmic scandals that have beset Silicon Valley's biggest companies, Crawford says the ideology of moving fast and breaking things has ended up breaking "a lot of things that matter to us and to the public interest."

Whittaker says," What you see are the people who are aware of the contradictions between cyber-utopian technical rhetoric and the reality of the implications of these technologies in everyday life. "

