Sony has announced the world’s first image sensor with integrated AI smarts. The new IMX500 sensor incorporates both processing power and memory, enabling it to perform machine-learning-powered computer vision tasks without additional hardware. The result, says Sony, will be faster, cheaper, and more secure AI cameras.
In recent years, devices from smartphones to surveillance cameras have benefited from the integration of AI. Machine learning can be used not only to improve the quality of the pictures we take, but also to understand video as a human would, identifying the people and objects in the frame. The applications of this technology are huge (and sometimes worrying), ranging from self-driving cars to automated surveillance.
However, many applications rely on sending images and video to the cloud for analysis. This round trip can be slow and insecure, exposing data to hackers. In other cases, manufacturers have to install dedicated processing cores on devices to handle the extra computational load, as with the latest high-end phones from Apple, Google, and Huawei.
According to Sony, however, the new image sensor offers a more streamlined solution than either of these approaches.
“There are several other ways to implement these solutions,” Mark Hanson, vice president of business and innovation at Sony, told The Verge, referring to edge computing designs that use dedicated AI chips not attached directly to the image sensor. “But I don’t think any of them will be nearly as cheap as ours once we’re shipping billions of dollars’ worth of image sensors.”
Sony’s enormous presence in the image sensor market should certainly help it deliver this technology to customers at scale. Hanson notes that the company has a market share of more than 60 percent and shipped around 1.6 billion sensors last year, including all three cameras in Apple’s iPhone 11 Pro.
This first-generation AI image sensor is unlikely to end up in consumer devices such as smartphones and tablets, however, at least not initially. Instead, Sony is targeting retailers and industrial customers, with Hanson citing Amazon’s cashier-less Go stores as a potential application.
In Amazon’s Go stores, the retailer uses numerous AI-enabled cameras to track shoppers and bill them for the items they take off the shelves. “They use hundreds of cameras and process petabytes of data every day for a small convenience store,” says Hanson. The resulting hardware costs have reportedly slowed the rollout of these stores. “But if we can miniaturize that capability and put it on the back of a chip, we can do all sorts of interesting things.”
In addition to cost savings, there are privacy benefits. With the AI chip bonded directly onto the back of the image sensor, object recognition happens on the device itself. Instead of sending data to the cloud or a nearby processor for analysis, the image sensor performs the required AI analysis internally and outputs only the resulting metadata.
So if you want to build a smart camera that detects whether or not someone is wearing a mask (a very real problem), an IMX500 image sensor can be loaded with the appropriate algorithm, allowing the camera to quickly send simple “yes” or “no” pings.
“Now you’ve eliminated a 4K video stream, typically at 60 frames per second, just to get that one ‘hey, I recognize this object’ ping,” says Hanson. “That can reduce data traffic [and] it also helps things like privacy.”
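The metadata-only pipeline Hanson describes can be sketched in a few lines. This is a hypothetical illustration, not Sony’s actual API: `detect_mask` stands in for whatever classifier has been loaded onto the sensor’s logic chip, and the dummy logic inside it exists only so the sketch runs.

```python
# Hypothetical sketch of the "metadata instead of video" idea.
# Nothing here is Sony's real API; detect_mask() is a stand-in
# for the model loaded onto the IMX500's logic chip.

def detect_mask(frame) -> bool:
    """Stand-in for the on-sensor classifier (assumption).
    A real deployment would run e.g. MobileNet V1 here."""
    return sum(frame) % 2 == 0  # dummy logic, for illustration only

def camera_output(frame) -> dict:
    # Instead of streaming the raw frame (a 4K frame is roughly
    # 3840 * 2160 * 3 bytes), emit only a few bytes of metadata.
    return {"mask_detected": detect_mask(frame)}

print(camera_output([1, 2, 3, 4]))  # -> {'mask_detected': True}
```

The point of the sketch is the shape of `camera_output`: the pixels never leave the function, only the yes/no result does.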
Another major application is industrial automation, where image sensors are needed to help co-bots (robots designed to work in close proximity to humans) avoid colliding with their flesh-and-blood colleagues. Here, the main advantage of an integrated AI image sensor is speed. When a co-bot spots a person where they shouldn’t be and has to come to a halt quickly, processing that information as fast as possible is of paramount importance.
For this kind of task, Sony says the IMX500 is much faster than many other AI cameras. It can apply a standard image recognition algorithm (MobileNet V1) to a single video frame in just 3.1 milliseconds. By comparison, says Hanson, competitors’ chips, such as Intel’s Movidius (used in Google’s Clips camera and DJI’s Phantom 4 drone), can take hundreds of milliseconds or even seconds.
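To put those latency figures in context, a quick back-of-envelope calculation (my own, derived from the numbers quoted above, not from Sony) shows what they mean for throughput:

```python
# Back-of-envelope throughput from the latencies quoted above.
imx500_latency_s = 0.0031      # 3.1 ms per frame (Sony's figure)
competitor_latency_s = 0.3     # "hundreds of milliseconds" (low end)

imx500_fps = 1 / imx500_latency_s          # ~323 frames per second
competitor_fps = 1 / competitor_latency_s  # ~3 frames per second

print(round(imx500_fps), round(competitor_fps))
```

In other words, 3.1 ms per frame comfortably keeps up with a 60 fps video feed, while a 300 ms chip would analyze only every twentieth frame or so.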
The major limitation, however, is the IMX500’s capacity for more complex analytical tasks. At the moment, according to Hanson, the sensor can only run fairly “basic” algorithms. That means more demanding and varied tasks, such as driving an autonomous car, will certainly still require dedicated AI hardware for the foreseeable future. Instead, think of the IMX500 as a device for simple, single-purpose applications.
However, this is only the first generation, and the technology will no doubt improve. Today, cameras are smart because they send their data to computers. In the future, the camera itself will be the computer, and it will be all the smarter for it.
Test samples of the IMX500 have already been shipped to early customers, with prices starting at 10,000 yen (about $93). Sony expects the first products equipped with the sensor to launch in the first quarter of 2021.