
Facebook's AI boss says the field will soon hit the wall

Jerome Pesenti leads the development of artificial intelligence at one of the world's most influential and controversial companies. As vice president of artificial intelligence at Facebook, he oversees hundreds of scientists and engineers whose work shapes the direction of the company and its impact on the world.

AI is fundamental to Facebook. Algorithms that learn to grab and hold our attention help make the platform and its sister products, Instagram and WhatsApp, stickier and more addictive. And despite some notable AI flops, such as the personal assistant M, Facebook continues to use AI to build new features and products, from Instagram filters to augmented reality apps.

Mark Zuckerberg has promised to use AI to tackle some of the company's biggest problems: policing hate speech, false news, and cyberbullying (an effort that has had limited success so far). More recently, Facebook has been forced to consider how to stop AI-driven deception in the form of deepfake videos that convincingly spread misinformation and enable new forms of harassment.

Pesenti joined Facebook in January 2018 and inherited a research lab created by Yann LeCun, one of the biggest names in the field. Previously, he worked on IBM's Watson AI platform and at BenevolentAI, a company that applies the technology to medicine.

Pesenti met with Will Knight, senior writer at WIRED, near his New York office. The conversation has been edited.

Will Knight: AI has been presented as a solution to fake news and online abuse, but that may overstate its power. What progress are you really making there?

Jerome Pesenti: Moderating Facebook automatically, or even with people and computers working together, is a very challenging problem. But we have made great progress.

Early on, the field made progress in vision – understanding scenes and images. In recent years we've been able to use that to detect nudity, recognize violence, and understand what's going on in pictures and videos.

Lately, great progress has been made in language, giving us a much more sophisticated understanding of the interactions people have. We can understand whether people are trying to harass, whether it's hate speech, or whether it's just a joke. It is by no means a solved problem, but significant progress has been made.

WK: What about deepfakes?

JP: We take this very seriously. We actually went out and made new deepfake videos so people could test deepfake-detection techniques. It is a really important challenge that we want to tackle proactively. At the moment it is not a big issue on the platform, but we know it can be very powerful. We try to stay one step ahead of the game, and we've enlisted the industry and the community.

WK: Let's talk about AI more generally. Some companies, such as DeepMind and OpenAI, say their goal is to develop "artificial general intelligence." Is that what Facebook is pursuing?

JP: As a lab, our goal is to match human intelligence. We are still a long way from that, but we consider it a great objective. But I think many people in the lab, including Yann, believe that the concept of "AGI" is not really interesting and does not really mean much.

On one hand, there are people who assume that AGI is human intelligence. But I think that's a bit disingenuous, because if you really think about human intelligence, it is not very general. Then other people project onto AGI the idea of the singularity: if you had an AGI, you would have an intelligence that can improve itself and keep improving. But there is no real model for that. Humans cannot make themselves smarter. I think people are throwing that around to push a particular agenda.
