Google says it uses AI and machine learning techniques to help identify breaking news about crises like natural disasters faster. Pandu Nayak, Google’s vice president of search, said it took the company’s systems minutes to detect breaking news, as opposed to 40 minutes a few years ago.
Faster detection of breaking news is likely to become critical as natural disasters strike around the world and US Election Day 2020 approaches. Wildfires like those in California and Oregon can change course quickly (and have), and timely, accurate voting information in the face of disinformation campaigns is key to protecting the integrity of electoral processes.
“Over the past few years, we have improved our systems to ensure we are returning the most authoritative information available,” Nayak wrote.
In a similar development, Google recently rolled out an update to its BERT-based language understanding models to better match news stories with available fact checks. (In April 2017, Google began surfacing publishers’ fact checks of public claims alongside search results.) According to Nayak, the systems now better understand whether a fact check is relevant to the topic of a story, and surface those fact checks more prominently in Google News’ Full Coverage feature.
According to Nayak, these efforts complement Google’s work on improving the quality of search results for topics susceptible to hateful, offensive, and misleading content. Progress has been made there as well: Google’s systems can now more reliably identify topic areas that carry a heightened risk of misinformation.
In search results that show snippets from Wikipedia, one of the sources feeding Google’s Knowledge Graph, machine learning tools can now better prevent potentially inaccurate information from being displayed, according to Nayak. When false information slips in from vandalized Wikipedia pages, the systems can detect these cases with 99% accuracy.
The improvements have also reached the systems that govern Google’s autocomplete suggestions, which automatically decline to show predictions when a search is unlikely to surface reliable content. These systems previously guarded against “hateful” and “inappropriate” predictions and have now been extended to cover elections. Google says it will remove predictions that could be interpreted as claims for or against a candidate or political party, as well as claims about voting methods, requirements, the status of polling places, and the integrity or legitimacy of electoral processes.
“We have long-standing policies to protect against hateful and inappropriate predictions from appearing in autocomplete,” Nayak wrote. “We design our systems to approximate those policies automatically, and have improved our automated systems to not show predictions if we detect that the query may not lead to reliable content. These systems are not perfect or precise, so we enforce our policies if predictions slip through.”