
Catching texting drivers is just the beginning for AI surveillance



Last week, the Australian state of New South Wales announced a plan to crack down on drivers using their cell phones. The state's transport agency said it had integrated image processing into roadside cameras to detect offenders. The AI automatically flags suspected violations, humans confirm what's happening, and a warning letter is sent to the driver.
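The broad shape of that pipeline is easy to sketch. Below is a minimal, hypothetical Python outline of the flag-review-notify loop described above; the threshold, data fields, and notification step are illustrative assumptions, not details of the actual NSW system.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold; the real system's is not public.
FLAG_THRESHOLD = 0.8

@dataclass
class Detection:
    plate: str       # vehicle registration read from the image
    score: float     # model's confidence the driver is on a phone
    image_path: str  # frame saved for human review

def review_queue(detections):
    """Yield only detections the model flags; the rest are discarded."""
    for d in detections:
        if d.score >= FLAG_THRESHOLD:
            yield d

def process(detections, human_confirms, send_warning):
    # A human acts as the final check: nothing goes out on model output alone.
    for d in review_queue(detections):
        if human_confirms(d.image_path):
            send_warning(d.plate)
```

The design point is the last function: the model only narrows the queue, while the decision to send anything rests with a person.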

"It's a system that is changing culture," Deputy Police Commissioner of New South Wales, Michael Corboy, told Australian media. The police hoped that the technology would reduce the number of deaths by one third within two years.

It seems an admirable scheme from top to bottom. The offense is clear, the desired outcome is incontestable, and there is even a human in the loop to keep the machines from making mistakes.

But it also shows the slow creep of artificial intelligence into state and corporate surveillance, a trend that experts say could lead to some dark places: chilling civil rights, automating prejudice and bias, and slowly pushing society toward authoritarianism.

Currently, AI's main use in the surveillance world is identifying people. The horror stories from China's Xinjiang region are well known: networks of facial recognition cameras are used to track the region's oppressed Uighur minority. In the United Kingdom, where there is roughly one surveillance camera for every ten citizens, private companies are using the technology to automate watchlists, able to spot known troublemakers as well as VIP customers entering a store. In the US, the team behind Amazon's popular Ring surveillance cameras planned to create similar watchlists for individual neighborhoods, though the company eventually dropped those plans.

But as the New South Wales street cameras show, identifying humans is just the beginning of AI surveillance: the real power, and the real threat, lies in identifying actions. That means building cameras that tell you not only who someone is, but what they're doing. Is that person moving items around? Could they be stealing something? Are they just loitering in a way you don't like?
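To make the distinction concrete, here is a minimal sketch of action recognition in code, using a pretrained video classifier from torchvision. The choice of model is ours, not anything the article's systems are known to use; the point is that the output is an action label, what someone is doing, rather than an identity.

```python
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

# Pretrained action-recognition model covering Kinetics-400 classes
# (including actions like "texting" and "smoking"). Purely illustrative:
# real surveillance systems are trained on task-specific footage.
weights = R3D_18_Weights.DEFAULT
model = r3d_18(weights=weights).eval()
labels = weights.meta["categories"]

def classify_clip(clip: torch.Tensor) -> str:
    """clip: a (3, T, H, W) video tensor, already preprocessed.

    Returns the most likely action label. Note the output answers
    'what is happening', not 'who is doing it'.
    """
    with torch.no_grad():
        logits = model(clip.unsqueeze(0))  # add a batch dimension
    return labels[logits.argmax(dim=1).item()]
```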

This type of capability is not yet widespread, but it is beginning to seep into everyday use.


Footage from an AI surveillance camera in Japan trained to detect shoplifters.
Image: Earth Eyes Corp

In Japan, for example, you can buy an AI surveillance camera that is claimed to automatically spot shoplifters. (A feature not too distant, functionally, from Amazon's Go stores, where image processing automatically tallies the cost of items customers take off the shelves.) In the US, one company is developing a "Google for CCTV" that uses machine learning to let users search surveillance footage for specific types of clothing or cars. And in India, researchers have developed software that can be loaded onto drones to automatically detect fights in the street.
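A "Google for CCTV" can be approximated with off-the-shelf parts. The sketch below assumes a joint image-text embedding model (CLIP, via the Hugging Face transformers library); this is our stand-in, since the article does not say what the actual product is built on.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# A joint image-text embedding model; the identifiers are real Hugging
# Face model names, but using CLIP at all is our assumption here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def search_frames(frames: list[Image.Image], query: str, top_k: int = 5):
    """Rank stored camera frames against a text query such as
    'person in a red jacket' or 'white pickup truck'."""
    inputs = processor(text=[query], images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    scores = out.logits_per_image[:, 0]  # one similarity score per frame
    best = scores.topk(min(top_k, len(frames))).indices.tolist()
    return [(i, scores[i].item()) for i in best]
```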

Such applications are often treated with suspicion by AI experts. With the drone system, for example, researchers have pointed out that the software has questionable accuracy and is likely to misidentify incidents. And as with other examples of algorithmic bias, experts warn that training systems on data that over-represents particular groups or ethnicities will be reflected in the software's judgments.
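A toy experiment shows the mechanism. The sketch below, using scikit-learn on entirely synthetic data, trains a classifier on labels skewed against one group and shows the model reproducing that skew, even though both groups behave identically.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two groups with identical true behavior,
# but group B is labeled "incident" far more often in the training set
# (mimicking biased historical data).
n = 5000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
features = rng.normal(size=(n, 3))             # behavior, same distribution
base_rate = np.where(group == 1, 0.30, 0.05)   # skew lives in the labels
labels = rng.random(n) < base_rate

X = np.column_stack([features, group])
model = LogisticRegression().fit(X, labels)

# The model learns to treat group membership itself as evidence:
for g in (0, 1):
    test = np.column_stack([rng.normal(size=(1000, 3)), np.full(1000, g)])
    print(f"group {'AB'[g]}: mean predicted incident risk "
          f"{model.predict_proba(test)[:, 1].mean():.2f}")
```

Running it prints a mean predicted risk of roughly 0.05 for group A and 0.30 for group B: what the model learned was the skew in the labels, not any difference in behavior.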

