Andrew Moore, Google Cloud’s chief AI scientist, reiterated the company’s commitment to data protection today, pledging that Google will not look at the data corporate customers share when training AI models or storing it in the cloud “unless doing so is legitimately required to support your use of the service, and even then only with your permission.” As part of this effort, Google Cloud is introducing a redaction tool that lets customers remove personally identifiable information, such as addresses, bank account numbers, or details about family members, from text or medical images.
Removing such data can help customers comply with data protection laws such as the European Union’s General Data Protection Regulation (GDPR).
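Conceptually, redaction of this kind works by detecting spans that match known PII patterns and replacing each match with a typed placeholder. The sketch below is illustrative only; the patterns and function names are assumptions for the sake of example, not Google Cloud's actual tool or API, and production systems rely on far more robust detectors, including ML-based ones.

```python
import re

# Illustrative PII detectors; real redaction services ship many more,
# including machine-learned ones for names, addresses, and images.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [US_SSN].
```

Because a function like this is pure text-to-text, it can run on a customer's own systems before any data leaves for the cloud, which matches the deployment model described below.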
“It was one of the listed reasons for concern when you talk to large enterprises about what would prevent them from using AI in the cloud,” Moore told VentureBeat in a phone interview. “It is a mechanism that makes it clear, both contractually and technologically, that we cannot mix this data with anyone else’s data, use it to feed any of the machine learning models we have created, or integrate it into other Google products or services. As you can imagine, it is really important for Google to be clear about what is happening to this data, because the whole world is watching us.”
Moore said the redaction tool has been in development for nine months and that the timing of its release is unrelated to the antitrust report released last week. That report, the result of a 16-month investigation by a subcommittee of the U.S. House of Representatives, detailed the monopoly power of Amazon, Apple, Facebook, and Google and suggested that breaking up the companies could be part of the solution.
The 449-page report found that internal communications show Google covertly monitoring potential and actual competitors, and that the company “tracks real-time data across markets that – given the size of Google – provides near-perfect market intelligence.” In a separate development, the U.S. Department of Justice is expected to initiate proceedings against Google in the coming days. Regulators in Australia, China, the European Union, and the UK are also investigating whether Google has violated antitrust rules in their respective markets, which could lead to corrective action.
Concerns about user privacy, and about potential snooping by Google or other large cloud providers such as AWS and Microsoft Azure, have led some enterprise customers to forgo cloud services in favor of on-premises data centers.
Although Google already uses federated learning and differential privacy, for example to personalize keyboard predictions for Android smartphone users, Google Cloud does not currently offer privacy-preserving machine learning as a service.
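For context on what differential privacy means in practice: it adds carefully calibrated noise to aggregate results so that no individual's contribution can be inferred. Below is a minimal sketch of the Laplace mechanism, the standard building block of differential privacy; it is illustrative only and says nothing about how Google's production systems are implemented.

```python
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise as the difference of two exponentials."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one person's
    record changes the result by at most 1, so Laplace noise with scale
    1/epsilon is enough to provide epsilon-DP.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
# The released value is close to, but deliberately not exactly,
# the true count of 100.
print(private_count(100, epsilon=1.0, rng=rng))
```

Smaller values of `epsilon` mean more noise and stronger privacy; the analyst trades accuracy for protection, which is why such techniques suit aggregate statistics rather than record-level queries.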
Google Cloud teams that work with customers to build AI solutions now work only with data from which personally identifiable information has been removed. Customers can run the redaction tool on their own systems before sending data to the cloud, or use it within an encrypted cloud partition.
“As an academic, I was really concerned that machine learning was going to be shut down by data protection regulations. By developing this suite of redaction technologies, we have shown ourselves and our customers that we can do useful AI without ever having to hold personally identifiable information,” Moore said.
Moore joined Google Cloud from Carnegie Mellon University in 2018, after Stanford professor Fei-Fei Li left the role.
In related news, Google Cloud launched its confidential computing portfolio earlier this year, beginning with Confidential VMs (virtual machines).