Adobe’s plans for a standard for the attribution of online content could have a major impact on misinformation – TechCrunch



Adobe’s work on technical solutions for tackling online misinformation at scale is still at an early stage, but the project is taking some big steps toward its ambition of becoming an industry standard.

The project was first announced last November, and the team is now releasing a white paper explaining how its system, known as the Content Authenticity Initiative (CAI), works. Beyond the new white paper, the next step in the system’s development is a proof of concept that Adobe plans to release for Photoshop later this year.

TechCrunch spoke to Adobe CAI director Andy Parsons about the project, which aims to develop a “robust content attribution system” that embeds data in images and other media, starting with Adobe’s own industry-standard image editing software.

“We believe that we can deliver a truly compelling, digestible story to consumers or fact-checkers who are interested in the accuracy of the media they are viewing,” said Parsons.

Adobe pitches the system’s appeal in two ways. First, it gives content creators a more robust way to attach their names to the work they create. More compelling, though, is the idea that the project could offer a technical solution to image-based misinformation. As we have written before, manipulated and even out-of-context images play a major role in misleading information online. A way to trace the origin – the so-called “provenance” – of the images and videos we encounter online could provide a chain of custody that we currently lack.

“… Down the line, you could imagine a social feed or a news site filtering out things that are probably fake,” said Parsons. “But the CAI avoids making those judgments. We just want to provide that layer of transparency and verifiable data.”

Of course, much of the misleading material that internet users encounter every day is not visual content at all. And even when you know where a piece of media comes from, the claims it makes or the scene it depicts can still mislead without editorial context.

The CAI was first announced in partnership with Twitter and The New York Times, and Adobe is now working to build partnerships across the board, including with other social platforms. Drumming up interest has not been difficult, and Parsons describes “widespread enthusiasm” for solutions that can make sense of where images and videos come from.

Beyond EXIF

Given Adobe’s involvement, the CAI may sound like a twist on EXIF data – the embedded metadata that lets photographers store information such as which lens was used and the GPS coordinates of where a photo was taken. But the CAI is meant to be much more robust.

“Adobe’s proprietary XMP standard, which is widely used across tools and hardware, can be edited but not verified, and is therefore relatively brittle for what we’re talking about,” said Parsons.

“When we talk about trust, we’re thinking about whether the data asserted by the person capturing or editing an image is verifiable. With traditional metadata, including EXIF, it is not, because any number of tools can alter the bytes and text of those EXIF claims. You can change the lens if you want … but when we talk about verifiable things like identity, provenance and asset history, [they] must always be cryptographically verifiable.”
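To make the distinction concrete, here is a minimal sketch in Python of what “cryptographically verifiable” means in practice, as opposed to EXIF-style metadata that any tool can rewrite. The claim fields, the sign/verify workflow and the use of Ed25519 keys from the `cryptography` library are illustrative assumptions for this example, not the CAI’s actual format.

```python
# Illustrative only: a toy provenance claim signed with Ed25519.
# The claim schema and workflow are assumptions, not the CAI specification.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A creator's signing key (in practice this would be tied to a verified identity).
creator_key = Ed25519PrivateKey.generate()

# Metadata asserted by the person who captured or edited the image.
claim = {
    "creator": "Jane Photographer",
    "tool": "Photoshop",
    "edits": ["crop", "exposure +0.3"],
}
claim_bytes = json.dumps(claim, sort_keys=True).encode()

# Unlike a plain EXIF field, the claim is signed; any later change breaks verification.
signature = creator_key.sign(claim_bytes)

def verify(public_key, data, sig):
    """Return True if the data matches the signature, False if it was tampered with."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

public_key = creator_key.public_key()
print(verify(public_key, claim_bytes, signature))        # True
tampered = claim_bytes.replace(b"Jane", b"Eve ")
print(verify(public_key, tampered, signature))           # False
```

The point of the sketch is simply that a signed claim fails verification the moment any byte of it changes, whereas an EXIF field can be rewritten without leaving a trace.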

The idea is that such a system would become ubiquitous over time – an outcome for which Adobe is arguably uniquely positioned. Down the line, an app like Instagram would have its own “CAI implementation” that the platform could use to extract data about where an image came from and display it to users.

The eventual solution relies on techniques such as hashing, a kind of pixel-level checksum that serves as a digital fingerprint for a piece of media. This kind of technology is already widely used by AI systems to identify child exploitation imagery and other illegal content online.
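The article does not say which hashing scheme the CAI will use, but a simple average hash conveys the general idea of a pixel-level fingerprint: visually identical images yield nearly identical hashes, while unrelated images do not. The sketch below assumes the Pillow imaging library and hypothetical file names.

```python
# Illustrative average-hash ("aHash") fingerprint; the CAI's actual hashing
# scheme is not described here, so treat this only as the general idea.
from PIL import Image

def average_hash(path, size=8):
    """Shrink to a size x size grayscale image and mark each pixel above/below the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming_distance(a, b):
    """Count differing bits; a small distance means visually similar images."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical files: an original and a re-encoded copy found elsewhere online.
original = average_hash("photo.jpg")
reposted = average_hash("photo_reposted.jpg")
print(hamming_distance(original, reposted))  # near 0 if it is the same picture
```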

While Adobe works to sign up partners to support the CAI standard, it is also building a website that can read an image’s CAI data, bridging the gap until the solution is widely adopted.

“… You can drag any asset into this tool and see the data displayed in a very transparent way. That kind of thing temporarily frees us from any dependency on a particular platform,” said Parsons.

For photographers, embedding this kind of data will initially be opt-in and somewhat modular. A photographer can embed data about their editing process while declining to attach their identity in situations where doing so would put them at risk, for example.
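As a rough illustration of that modularity, the embedded record could carry the edit history while leaving identity out entirely. The structure and field names below are hypothetical, chosen only to show the opt-in shape described above.

```python
# Hypothetical sketch of an opt-in, modular provenance record.
# Field names are invented for illustration; they are not the CAI schema.
def build_record(edits, identity=None):
    record = {"tool": "Photoshop", "edits": list(edits)}
    if identity is not None:  # the photographer decides whether to attach this
        record["creator"] = identity
    return record

# An anonymous record, e.g. for a photographer working in a risky environment.
print(build_record(["crop", "levels"]))
# A fully attributed record.
print(build_record(["crop", "levels"], identity="Jane Photographer"))
```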

Thoughtful implementation is key

While the project’s main aims are to make the internet a better place, the idea of an embedded data layer that can trace the origin of an image evokes Digital Rights Management (DRM), an access-control technology best known for its use in the entertainment industry. DRM has plenty of industry-friendly benefits, but it is a hostile system that has seen countless people pursued under the Digital Millennium Copyright Act in the United States, along with all sorts of other cascading effects that stifle innovation and threaten people with disproportionate legal consequences for harmless acts.

Because photographers and videographers are often individual content creators, they would ideally be the ones to benefit from what the CAI proposes, rather than some kind of corporate gatekeeper – but concerns nonetheless arise whenever systems like this come up, however nascent they are. Adobe emphasizes the benefit to individual creators, but it is worth noting that such systems can sometimes be co-opted by corporate interests in unforeseen ways.

Due diligence aside, the misinformation boom makes it clear that the way we currently share information online is deeply broken. With content so often divorced from its true origins as it goes viral on social media, platforms and journalists are all too often left scrambling to clean up the mess after the fact. Technical solutions, if implemented thoughtfully, could at least scale to meet the scope of the problem.

