
Google Search is getting new AI tools to help you decipher your terrible spelling



At its “Search On” event, Google unveiled a number of enhancements coming to its core search service over the next weeks and months. The changes mainly focus on using new AI and machine-learning techniques to deliver better search results. Chief among them is a new spell-checking tool that Google promises will help decipher even the worst-written queries.

According to Prabhakar Raghavan, Google’s head of Search, 15 percent of the queries Google sees each day are ones it has never seen before, which means the company must constantly work to improve its results.

Part of the challenge comes from poorly written queries. According to Cathy Edwards, VP of Engineering at Google, one in ten search queries on Google is misspelled. Google has long tried to help with its “Did you mean” feature, which suggests the correct spelling. A major update to this feature will roll out by the end of the month: a new spelling algorithm powered by a neural network with 680 million parameters. It runs in under three milliseconds per search, and Google promises it will produce even better suggestions for misspelled words.
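Google hasn’t published how the new model works, but the basic idea behind a “Did you mean” suggester can be illustrated with a toy edit-distance approach. Everything below (the vocabulary, the distance threshold) is illustrative, not Google’s implementation:

```python
# Toy "Did you mean" sketch: suggest the vocabulary word with the
# smallest edit (Levenshtein) distance to a misspelled query term.
# This is a classical baseline, not Google's neural spelling model.

def edit_distance(a: str, b: str) -> int:
    """Dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def did_you_mean(word, vocabulary, max_dist=2):
    """Return the closest vocabulary word within max_dist edits, or None."""
    best = min(vocabulary, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= max_dist else None

vocab = ["exercise", "equipment", "windows", "glass"]
print(did_you_mean("exercize", vocab))  # -> exercise
```

A production system would work over whole queries and rank candidates by likelihood rather than a raw distance threshold, which is where a large neural model like the one Google describes comes in.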

Another new change: Google Search can now index individual passages of web pages rather than just the page as a whole. For example, if a user searches for the phrase “How do I know if my house windows are made of UV glass,” the new algorithm can surface a single paragraph on a DIY forum that answers the question. Edwards said that once the algorithm rolls out next month, it will improve 7 percent of queries across all languages.
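The shift from page-level to passage-level retrieval can be sketched with a toy example: split a page into paragraphs and score each one against the query, returning the best match. The term-overlap scoring here is a simple stand-in; Google’s system uses learned neural rankers:

```python
# Toy sketch of passage-level retrieval: instead of scoring a whole
# page against a query, split it into paragraphs and score each one.
# Term overlap is an illustrative stand-in for a real ranking model.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def best_passage(page: str, query: str) -> str:
    """Return the paragraph that shares the most terms with the query."""
    q = tokenize(query)
    paragraphs = [p for p in page.split("\n\n") if p.strip()]
    return max(paragraphs, key=lambda p: len(tokenize(p) & q))

page = ("Our forum covers many home improvement topics.\n\n"
        "UV glass windows block ultraviolet light; check the etched "
        "label in a corner of the pane to know what your windows are made of.\n\n"
        "Next week: choosing the right paint.")

print(best_passage(page, "how do I know if my house windows are UV glass"))
```

Even though the page as a whole is about home improvement in general, passage-level scoring pulls out the one paragraph that actually answers the question.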

Google is also using AI to break broader searches into subtopics for better results (for example, surfacing home exercise equipment suited to smaller homes rather than just general information about exercise equipment).

Finally, the company is starting to use computer vision and speech recognition to automatically tag and split videos into chapters, an automated version of the chapter tools Google already offers, which creators currently fill in by hand. Cooking videos or sports games, for example, can be analyzed and automatically divided into chapters, which can then be displayed in search results. It’s a similar effort to the company’s existing work of surfacing specific podcast episodes in Search rather than just the general feed.

