CIMON, an AI robot developed by IBM and Airbus, has recently been behaving normally in its interactions with the crew of the International Space Station (ISS). A lot of journalists disagree with that assessment, but judge for yourself.
When a crew member tried to interact with it, it became confused, misinterpreted certain voice commands, and generally failed to produce the expected results with any consistency.
Yes, that sounds about normal.
If you have a smart speaker, have ever interacted with a virtual assistant, or have ever played Zork (okay, maybe not that last one), you know exactly what it feels like to interact with a typical chatbot.
We love our AI-powered devices when they work, and they keep getting better. When they don't, they can be frustrating.
In Watson's defense, it is one of the first AI solutions to be tested in space, and certainly the first floating chatbot on the ISS.
But as you can clearly see in the above video (starting at 3:30), the worst you can say about it is that it isn't very useful.
Why won't it stop? Does anyone else think journalists should know better?
Did I miss a memo about a contest where we all try to come up with the most ridiculous headline and cram as many fictional AI/robot clichés into the article as possible? If so, kudos to everyone involved for rising to a difficult challenge.
It is time we all stopped reporting every little thing a machine-learning construct does as if it were showing signs of robot sentience. AI can do a lot, but even IBM's Watson – an AI we really like – cannot be offended or hurt, and it certainly hasn't gone rogue, turned belligerent, or somehow been defeated.
Just stop. These headlines are all ridiculous. No wonder Elon gets so annoyed.
Don't forget to check out our Artificial Intelligence section to learn more about what's really happening in the world of AI.