
Let's not accidentally build depressive robots

Amid the never-ending flood of news about the imminent AI takeover of human society, a lecture at a recent conference gave me a break. Instead of asking whether intelligent robots will free us or kill us (it is one of those two options, apparently), it made me pause to worry about the machines themselves.

The conference, a meeting of computer scientists and neuroscientists at New York University, was called "Canonical Computations in Brains and Machines". Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown in Lisbon, took the stage to talk about depression and hallucinations in machines.

Right at the start of his lecture, he admitted that it all sounds "a bit strange". But he had solid reasoning. "I'm drawing on the field of computational psychiatry, which assumes that we can learn about a patient who is depressed or hallucinating by studying AI algorithms such as reinforcement learning," he told Science magazine after the lecture. "If you reverse the arrow, why would an AI not be subject to the things that go wrong with patients?"

If you ask how such moods would work in machine brains, things get more abstract. "Depression and hallucinations appear to depend on a chemical in the brain called serotonin," Mainen told Science. "It may be that serotonin is just a biological quirk. But if serotonin is helping solve a more general problem for intelligent systems, then machines might implement a similar function, and if serotonin goes wrong in humans, the equivalent in a machine could also go wrong."
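One way to picture what Mainen means is to treat the serotonin-like signal as a global parameter of a learning algorithm. The sketch below is purely my own illustration, not anything from the talk: a two-armed bandit learner whose learning rate is gated by a hypothetical serotonin_level variable, so that a chronically low level leaves the agent slow to update its beliefs.

```python
import random

# A minimal sketch, not from Mainen's talk: a two-armed bandit learner
# whose learning rate is scaled by a hypothetical "serotonin-like"
# global signal. All names here (serotonin_level, etc.) are assumptions
# made for illustration.

class ModulatedBandit:
    def __init__(self, n_arms=2, base_lr=0.5, serotonin_level=1.0):
        self.values = [0.0] * n_arms            # estimated reward per arm
        self.base_lr = base_lr                  # baseline learning rate
        self.serotonin_level = serotonin_level  # global modulator in [0, 1]

    def choose(self, epsilon=0.1):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # The effective learning rate is gated by the modulator: a
        # chronically low level makes the agent slow to revise its
        # estimates, one crude analogue of a "depressive" failure mode.
        lr = self.base_lr * self.serotonin_level
        self.values[arm] += lr * (reward - self.values[arm])

def run(agent, true_probs, steps=500):
    for _ in range(steps):
        arm = agent.choose()
        reward = 1.0 if random.random() < true_probs[arm] else 0.0
        agent.update(arm, reward)
    return [round(v, 2) for v in agent.values]

random.seed(0)
print("healthy:", run(ModulatedBandit(serotonin_level=1.0), [0.2, 0.8]))
print("blunted:", run(ModulatedBandit(serotonin_level=0.05), [0.2, 0.8]))
```

The point of the toy is only that a single scalar can shape the behaviour of the whole learner; whether real serotonin plays anything like this role is exactly the open question Mainen raises.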

There are many assumptions here, but it got me thinking. We are awash with stories about AI taking over much of human endeavour, from automation that threatens jobs to algorithms that could decide on medical care or help judges sentence people in court. In the black-and-white world of new technology, we form our picture of the future from snippets of these extreme scenarios. In virtually all the snippets we hear, the AI is a cold algorithm. However powerful and "intelligent" these machines may be, they are just code, and they all have off switches.

But what if an intelligent car got angry when it was cut off at a red light, or depressed when it hit a pedestrian? How would you wage war with a smart drone that felt remorse for bombing innocent civilians? Would you feel right sending a bomb-disposal robot into the field knowing that its code was somehow "scared"?

Emotional robots are not a radical departure from the stories we all know: C-3PO is timid and fussy, while R2-D2 is brave and fearless. Despite having a brain the size of a planet, Marvin is paranoid. HAL 9000 gets scared when Dave Bowman switches it off. The replicants of Blade Runner experience a mix of anger, hate and fear. We enjoy the stories of these emotional AIs precisely because the feelings help us understand them.

Many robots today are programmed to display emotions. Some modern "emotional" robots work by mimicking the facial expressions and body movements people make when they are sad, happy or angry, and letting the human mind fill in the gaps. If a robot is curled up in a ball, its face covered and trembling, we might read that as "fear" and, drawing on our own emotions, treat the machine accordingly.
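Mechanically, this display-only approach can be as simple as a lookup from an internal state label to a gesture sequence. The snippet below is a toy illustration of my own, not any real robot's API; the state and gesture names are invented.

```python
# A toy sketch (my own illustration, not any real robot's API) of the
# "display" half of an emotional robot: internal state labels are mapped
# to gestures a human observer will read as emotion.

DISPLAY_RULES = {
    "fear":    ["crouch", "cover_face", "tremble"],
    "sadness": ["lower_head", "slump", "slow_movements"],
    "joy":     ["raise_arms", "bounce", "open_posture"],
    "anger":   ["lean_forward", "stiffen", "quick_gestures"],
}

def display(state: str) -> list[str]:
    """Return the gesture sequence for an internal state label.

    The robot need not feel anything: the gestures only invite the
    human observer to fill in the emotional gap.
    """
    return DISPLAY_RULES.get(state, ["neutral_pose"])

print(display("fear"))   # ['crouch', 'cover_face', 'tremble']
```

Nothing in such a table feels anything; the gestures merely invite the observer to project an emotion onto the machine.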

Octavia, a humanoid robot, is designed to fight fires on US Navy ships. "She" can produce impressive facial expressions, looking confused by tilting her head to one side or alarmed by raising both eyebrows. Vast amounts of sensory input help Octavia gauge the world around her, and she can respond accordingly. It is unlikely, however, that she actually feels any of the emotions she displays.

The Japanese robot Pepper has an "endocrine-type multilayer neural network" to help it show emotions on a screen on its chest. Pepper sighs when it is unhappy, and can also "feel" joy, surprise, anger, doubt and grief.
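Public descriptions of Pepper say little about how that network actually works, so the following is only a guess at the general shape of an endocrine-style model, not SoftBank's design: hormone-like variables decay over time, are bumped by stimuli, and the dominant one determines what is shown on the chest screen.

```python
# A hedged sketch of an "endocrine-style" emotion model, loosely inspired
# by the public description of Pepper; none of this is SoftBank's code.
# Hormone-like levels decay each tick, are bumped by stimuli, and the
# dominant level determines the displayed emotion.

DECAY = 0.9          # per-tick decay factor (illustrative value)
THRESHOLD = 0.2      # below this, the robot displays a neutral state

class EndocrineModel:
    def __init__(self):
        self.levels = {"joy": 0.0, "surprise": 0.0,
                       "anger": 0.0, "doubt": 0.0, "grief": 0.0}

    def stimulate(self, emotion: str, amount: float) -> None:
        # A stimulus raises one hormone-like level, capped at 1.0.
        self.levels[emotion] = min(1.0, self.levels[emotion] + amount)

    def tick(self) -> str:
        # Decay all levels, then report the dominant emotion, or a
        # neutral state if nothing is strongly active.
        for k in self.levels:
            self.levels[k] *= DECAY
        emotion, level = max(self.levels.items(), key=lambda kv: kv[1])
        return emotion if level > THRESHOLD else "neutral"

pepper = EndocrineModel()
pepper.stimulate("surprise", 0.8)
print(pepper.tick())   # "surprise" while the level is still high
for _ in range(15):
    pepper.tick()
print(pepper.tick())   # "neutral" once the level has decayed away
```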

But what if, as Dr Mainen suggests, such feelings could emerge in machine brains as by-products of other complex processes running in the code? If you do not know where these "feelings" come from in an AI brain (and if they become yet another "black box" in the code), would you feel safe in the presence of that machine? Would a depressed AI need counselling and time off from its job? And what about our responsibility to give AIs treatments to make them feel better? I have no idea how to think about any of this, partly because it is difficult to fully grasp what these new, artificial minds might be able to feel. Let's hope we take this part of the AI future just as seriously.

This article was originally written by Alok Jha, a London-based science journalist. He was a science correspondent for the Guardian and ITV News and has presented several documentaries for BBC TV and radio. He is the author of three popular science books, including The Water Book (Headline, 2015).

