There is a surefire way to make Vivienne Ming wince: the bullish claims big tech companies like to make. Even as a federal investigation dogged Facebook and global protests plagued Google, the mantra remained the same: artificial intelligence will improve our lives. Poverty, mental health, climate change, inequality? All can be solved with AI.
A technologist, entrepreneur and theoretical neuroscientist in Silicon Valley, Ming could easily be as mesmerized as her peers in the tech industry. She firmly believes that AI will become an ever more powerful tool. And what could be more west coast than a maximizer of human potential?
But Ming's attitude is different, and her life experience may be the reason. Vivienne Ming was once Evan Smith, a miserable, directionless student at the University of California, San Diego, who dropped out, became homeless, and then clawed his way back to glittering success. It would be easy to call Ming a skeptic, but that would not be quite right. What bothers her is not AI itself. What makes her wince is the people behind it.
"We put some of the deepest problems in the history of human experience to a bunch of very young men who have never solved a problem in their lives," she says. "They have never done anything from scratch to improve a person's life."
Evan Smith nearly took his own life in his 20s. After living rough in Mountain View and fighting demons he struggled to understand, he was admitted to Carnegie Mellon University in Pittsburgh, where he studied neuroscience. There he met his future wife, Norma Chang, who stayed with him when he confessed his desire to become a woman. The couple, who use the mashed-up surname Ming, have two children.
After a career spent founding companies and organizations to tackle problems in education, health and work, the couple set up Socos, a non-profit think tank in Berkeley, California, where they now work together. Ming calls the work "mad science," but it is anything but. She advises startups, US states, public institutions and the United Nations, which ask her how AI and neuroscience can be used to overhaul hiring practices, treat employees better, and find better ways to help students. On Tuesday night, Ming will join a group of the world's leading AI thinkers at the Barbican in London for the final event in the Royal Society's You and AI series. Chaired by Brian Cox, the physicist and television presenter, the panel will field questions on the impact of AI on jobs, the potential risks to society, and its ability to make moral and ethical choices.
At the heart of the problem, Ming believes, is the training computer engineers receive and their uncritical faith in AI. Too often, she says, their approach is to train a neural network on a mass of data and expect the result to work well. She berates companies that fail to grapple with a problem first – establishing what is already known about good employees or successful students – before throwing AI at it.
"These are very smart men. They are not malicious. But we ask them, who should I hire, how should we deal with mental illness, who should go to prison for how long, and they have no idea how to solve these problems. AI is a really powerful tool for solving problems, but if you can not find the solution to a problem yourself, an AI will not do it for you.
Amazon is a case in point. Ming says the tech company once tried to recruit her as a chief scientist. The company would soon have a million employees, she was told, and it would be her job to improve their lives. "It became very clear that Jeff Bezos's idea of better was very different from mine," she says. The company's invention of a wristband that buzzed when warehouse workers reached for the wrong package did not win her approval.
It did not end there. Ming learned of the company's hopes of developing an algorithm that could automate the hiring process, an idea she criticized at the time. In October, news broke that the company had scrapped the project because it discriminated against women. The algorithm scoured applicants' CVs and ranked them on a five-star scale. But because it was trained on Amazon's own data, it learned that male candidates had fared best at the company. It penalized CVs containing the word "women's," as in "women's rowing champion," and those of graduates of women's colleges.
"They thought they could throw an algorithm at the problem and it would find out". says Ming. "But if a company does not know how to solve the problem of bias, it will not solve an AI. After working on it for a few years, they dragged it behind the barn and shot it in the back of the head. "
Ming, who says she has turned down offers from Uber and Netflix, took a different approach at a startup called Gild. The company found that traits such as resilience and what Ming calls a "growth mindset" – the flexibility to learn from mistakes – predicted which software engineers would be rated best by human programmers. So the company built small AIs to search blogs, LinkedIn and social media feeds for the strongest candidates, whether or not they were looking for work.
Sometimes a single tweet carried tremendous weight. One read: "Celery is fantastic." Out of context, it sounds like "someone who is fundamentally wrong about a gross food," says Ming. But "Celery" referred to an obscure job-queue tool written in the Python programming language. The tweet, and the passion it conveyed, was a "great predictor" of the candidate's coding ability.
To get things back on track, Ming says, we need a revolution in AI – one that moves beyond correlation to causation. Throw a neural network at a pile of data and it will find patterns that can predict a person's grades, their job prospects, or the likelihood that they will reoffend. But correlations can conceal the most appalling prejudices, such as the assumption that black men are at higher risk of reoffending. We need AIs that understand the true causes of what makes a reoffender, a great employee, or a grade-A student. "We need that knowledge, and I'm shocked at how rare it is in AI," she says.
It is not the only problem. The benefits of AI flow disproportionately to the rich. Take educational apps. "The whole reason you build these systems is to help kids who are struggling, but the biggest market is squeezing a few extra points on the SATs for elite performers, because in the end that's where the market is," says Ming. "Nearly every technology drives up inequality, because the people best placed to use it are the ones who need it least. 99.999% of the world's population has no say in how this technology is used. We have to think of AI as a human right, just as we think of judicial review and access to vaccines."
Every year, swarms of freshly minted PhDs flock to tech companies without the slightest grounding in the world's problems. That will not change quickly. Thorough training in ethics would help, but Ming believes it takes more than learning rules from a book. "Ethics is like resilience: you get good at it by failing. To act ethically, you need to understand all the consequences of the solution you are proposing," she says.
With big tech dominated by young, mostly white men, women and other marginalized people could drive the revolution Ming says it needs. "I think it's incredibly valuable for people who have suffered in some way to have a voice. If you come from a background like mine, you are skeptical. You recognize that technology increases inequality, and that only gets better if we actively work to prevent it."