
AI Weekly: Coronavirus chatbots use inconsistent data sources and data protection practices



It has been widely reported that US hospital systems, particularly in hotspots such as New York City, Detroit, Chicago, and New Orleans, are overwhelmed by the influx of COVID-19 patients. There is a nationwide shortage of ventilators. Convention centers and public parks have been converted into overflow facilities. Wait times at some call and testing centers average several hours.

There is clearly a real and immediate need for screening solutions that ensure vulnerable people get care quickly. Chatbots have been offered as one answer – technology giants like IBM, Facebook, and Microsoft have positioned them as effective information tools. The trouble is that these chatbots differ in how they collect and process data, which in the worst case can lead to inconsistent health outcomes.

We interviewed six companies that offer COVID-19 chatbot solutions to governments, nonprofits, and health systems – Clearstep, IPsoft, Quiq, Drift, LifeLink, and Orbita – about where their chatbots source COVID-19 information, how they vet it, and how they collect and handle personally identifiable information (PII).

Quality and information sources

It is unsurprising, but nevertheless worrying, that sourcing and vetting processes vary widely among chatbot providers. While some claimed to review data before it reaches their users, others deflected the question, insisting instead that their chatbots are intended for informational rather than diagnostic purposes.

Clearstep, Orbita, and LifeLink told VentureBeat that all of their chatbots' information is run past medical professionals.


Clearstep says it draws on data from the Centers for Disease Control and Prevention (CDC) and on protocols that "over 90% of nurse call centers across the country trust." The company recruits senior physicians and medical informaticists from its client institutions, as well as local internal medicine and emergency medicine physicians, to review content with its clinical review teams and provide actionable feedback.

Orbita likewise builds on CDC guidelines and has agreements in place for access to content from trusted partners such as the Mayo Clinic. The company's in-house clinical team prioritizes this content, which it says is reviewed by "leading institutions."

LifeLink also aligns its questions, risk algorithms, and care recommendations with the CDC's, supplemented by an optional question-and-answer module. It notes that hospitals' clinical teams must still sign off on all of its chatbot content.

By contrast, IPsoft says its chatbot sources content from the CDC and the World Health Organization (WHO), and that content is not further reviewed by internal or external teams. (To be clear, IPsoft notes that the chatbot is not meant to provide medical advice or a diagnosis but "to help users assess their own situation based on verifiable information from authorized sources.")

Quiq similarly says its bot is "passive," surfacing unvetted information from undisclosed approved COVID-19 sources and from materials published by local health authorities, the CDC, and the White House.

The Drift chatbot uses CDC guidelines as a template and can be customized based on organizations' response plans, but also includes a disclaimer that it should not be used as a substitute for medical advice, diagnosis, or treatment.

Data protection

The COVID-19 chatbots we surveyed are just as inconsistent in how they collect and process data as they are in their information sources. None, however, appears to violate HIPAA, the US law that sets standards for protecting individual medical records and other personal health information.

Clearstep says its chatbot does not collect information that could identify a specific person. All data the chatbot does collect is anonymized, and health information is encrypted in transit and at rest and stored on Healthcare Blocks, a HIPAA-compliant app hosting platform.

According to LifeLink, all chatbot conversations take place in a HIPAA-compliant browser session, and no PII is collected during screening; the only data stored is symptoms and travel, contact, and special-population risk factors. Medium- and high-risk patients are directed to book clinical appointments, at which point the chatbot collects health information and transmits it directly to the hospital system – via integration with its scheduling systems – for use during the visit.

IPsoft is somewhat vague about its data collection and storage practices, but it says its chatbot neither collects private health information nor records conversations or data. Quiq also says it collects no personal or health information. Drift says users must register for a self-assessment and agree to its clinical terms.

Orbita states that its premium chatbot platform – which is HIPAA-compliant – collects personal health information, but that its free chatbot does not.

Challenges ahead

The differences among the various publicly available COVID-19 chatbot products are problematic, to say the least. While we examined only a small sample, our review found that few share the same information sources, review processes, or data collection and storage policies. For the average user, who is unlikely to read the fine print of every chatbot they use, this can be confusing. Tests of eight COVID-19 chatbots conducted by Stat found that assessments of the same set of common symptoms ranged from "low risk" to "begin home isolation."

While companies are typically unwilling to disclose their internal development processes for competitive reasons, greater transparency around how COVID-19 chatbots are built could help make the bots' responses more consistent. A collaborative approach, combined with clear disclaimers about chatbots' capabilities and limitations, seems the responsible path forward as tens of millions of people seek answers to critical health questions.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and AI editor Seth Colaner – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel.

Thank you for reading,

Kyle Wiggers

AI Staff Writer

