
AI Weekly: Big Tech's antitrust reckoning is a cautionary tale for the AI industry



This week, as the leaders of four of the world's largest and most powerful technology companies sat before a congressional antitrust hearing and answered for how they built and ran their respective giants, you could see how far Big Tech's rosy heyday has faded. It should also be a sobering moment for those in the AI field.

Facebook’s Mark Zuckerberg, once the mischievous dropout genius you loved to hate, still doesn’t seem to grasp the scale of the globally destructive misinformation and hate speech on his platform. Tim Cook is struggling to defend Apple’s 30% cut of some App Store developers’ revenue – a policy he didn’t even set, a holdover from Apple’s vise-like grip on the mobile app market in the mid-2000s. The brave young upstarts who founded Google are now middle-aged and have stepped down from their leadership positions, quietly fading from view as Alphabet and Google CEO Sundar Pichai runs the show. And Jeff Bezos wears the carefree face of the world’s richest man.

Amazon, Apple, Facebook, and Google have developed new technologies, products, and services that have undeniably changed the world and are, in some ways, undeniably good. But because they all moved fast and broke things, they also largely absolved themselves of the burden of asking difficult ethical questions about how they built their business empires and about the impact of their products and services on the people who use them.

As AI sits at the center of the next wave of transformative technology, sidestepping these difficult questions is not an option. That is a mistake the world cannot afford to repeat. Besides, AI simply doesn’t work properly unless the problems bound up in those questions are solved.

The old Big Tech way was to be clever and ruthless; AI requires people to be clever and wise. Those who work in AI must not only ensure the efficacy of their work but also understand its potential for harm to the people on whom it is applied. This is a more mature and just way of building world-changing technologies, products, and services. Fortunately, many prominent voices in AI are leading the field in that direction.

The best example this week was the widespread response to a service called Genderify, which promised to use natural language processing (NLP) to help companies identify the gender of their customers from a name, username, or email address alone. The entire premise is absurd and problematic, and when AI folks got hold of it to test it, they found it predictably and terribly biased (which is to say, broken).

Genderify was such a bad joke that it almost seemed like a piece of performance art. In any case, it was laughed off the internet: just a day or so after launch, the Genderify website, Twitter account, and LinkedIn page were gone.

For many in AI, it is frustrating that such poorly conceived and poorly executed AI offerings keep cropping up. But Genderify’s swift and thorough erasure shows the power and resolve of this new, principled generation of AI researchers and practitioners.

Now, in its latest and greatest summer, AI is already facing the kind of reckoning Big Tech avoided for decades. Other recent examples include the outcry over a paper that promised to use AI to identify criminality from people’s faces (which is really just AI phrenology), leading to its withdrawal from publication. Pioneering research on bias in facial recognition has resulted in bans and moratoriums on its use in multiple U.S. cities, as well as proposed legislation to curb or combat potential abuses. New research has exposed intractable bias problems in well-established datasets like 80 Million Tiny Images and the canonical ImageNet – and prompted immediate changes. And more.

While stakeholders certainly play a role in driving these changes and pressing the difficult questions, the moral authority and the research-based evidence behind them come from those within the field of AI itself – ethicists, researchers working to improve AI techniques, and actual practitioners.

There is, of course, an immense amount of work still to be done, and many more battles to fight as AI becomes the next dominant set of technologies. Look no further than problematic AI in surveillance, the military, the courts, employment, policing, and more.

But when you see tech giants like IBM, Microsoft, and Amazon pump the brakes on facial recognition, that’s a sign of progress. It doesn’t much matter what their real motivations are – whether it’s a narrative cover for capitulating to other companies’ dominance in the field, a calculated move to avoid potential legal penalties, or just a PR stunt. The fact is that, for whatever reason, these companies now find it more advantageous to slow down and make sure they do no harm than to move fast and break things.
