
Researchers are studying the ethical implications of AI in surgical environments



A new white paper, co-authored by researchers from the Vector Institute for Artificial Intelligence, examines the ethics of AI in surgery. It finds that surgery and AI share similar expectations but diverge in their ethical understanding: surgeons are confronted with moral and ethical dilemmas as a matter of course, the paper points out, while ethical frameworks in AI have arguably only recently taken shape.

In surgery, AI applications are so far largely limited to machines that perform tasks fully controlled by surgeons. AI could also be used as a clinical decision support system, however, and in those circumstances responsibility could rest with the human, the machine, or the AI system’s designers, the co-authors argue.

Data protection is a primary ethical concern. AI learns to make predictions from large amounts of data – in surgical systems, chiefly patient data – a practice often described as being at odds with privacy norms. The Royal Free London NHS Foundation Trust, a division of the UK’s National Health Service based in London, provided Alphabet’s DeepMind with data on 1.6 million patients without their consent. And Google, whose health-data-sharing partnership with Ascension became the subject of scrutiny last November, abandoned plans to publish chest X-ray scans over concerns that they contained personally identifiable information.

State, local, and federal laws aim to make privacy a mandatory part of compliance. Hundreds of bills addressing data protection, cybersecurity, and data breaches are pending or have already been passed across the 50 U.S. states, territories, and the District of Columbia. Perhaps the most comprehensive of them all – the California Consumer Privacy Act – was signed into law roughly two years ago. That is not to mention the Health Insurance Portability and Accountability Act (HIPAA), which requires companies to seek authorization before disclosing individual health information, or international frameworks such as the EU’s General Data Protection Regulation (GDPR), which aims to give consumers greater control over the collection and use of their personal data.

However, the white paper’s co-authors argue that the measures taken so far are limited by jurisdictional interpretation and offer incomplete ethical models. For example, HIPAA focuses on health data from patient records but does not cover data generated outside of covered entities, such as by life insurers or fitness-tracker apps. And while the duty to respect patient autonomy alludes to a right to explanations of decisions made by AI, frameworks such as the GDPR mandate only a “right to be informed” and appear to lack language spelling out clearly defined safeguards against AI decision-making.

In addition, the co-authors raise the alarm about the potential effects of bias on surgical AI systems. Training-data bias, which concerns the quality and representativeness of the data used to build an AI system, could dramatically skew preoperative risk stratification. Underrepresentation of certain demographics can likewise lead to inaccurate assessments and flawed decisions, e.g., about whether a patient is treated first or offered extensive intensive-care resources. And contextual bias, which occurs when an algorithm is deployed outside the context it was trained in, can cause a system to overlook non-trivial caveats, e.g., whether a surgeon is right- or left-handed.
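To make the underrepresentation concern concrete, here is a minimal sketch of a training-data representativeness audit. The column names, group labels, and reference proportions are hypothetical illustrations, not details from the white paper; the idea is simply to compare a training cohort’s demographic mix against a reference population.

```python
# Minimal sketch of a training-data representativeness audit (illustrative
# only). Assumes a pandas DataFrame of surgical cases with a demographic
# column; the column name, group labels, and reference proportions below
# are hypothetical, not taken from the white paper.
import pandas as pd

# Hypothetical training cohort for a preoperative risk model.
cohort = pd.DataFrame({
    "age_band": ["18-39", "40-64", "40-64", "65+", "40-64", "18-39"],
    "outcome":  [0, 1, 0, 1, 0, 0],
})

# Hypothetical reference distribution, e.g. the hospital's overall
# surgical population.
reference = {"18-39": 0.25, "40-64": 0.45, "65+": 0.30}

observed = cohort["age_band"].value_counts(normalize=True)
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    flag = "UNDERREPRESENTED" if actual < 0.5 * expected else "ok"
    print(f"{group}: expected {expected:.0%}, observed {actual:.0%} -> {flag}")
```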

There are methods to mitigate this bias, including auditing data sets for discrepancies, guarding against overfitting to the training data, and probing new data during deployment. The co-authors advocate these techniques, along with transparency generally, to prevent patient autonomy from being undermined. “An increasing dependence on automated decision-making tools has already reduced the opportunity for meaningful dialogue between the healthcare provider and the patient,” they write. “If machine learning is still in its infancy, the sub-field tasked with making its inner workings explainable is so embryonic that even its terminology has yet to take recognizable shape. However, some basic properties of explainability have emerged … [which argue that] machine learning should be simultaneously decomposable and algorithmically transparent.”
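One of those techniques – probing new data during deployment – is commonly implemented as a distribution-drift check. Below is a minimal sketch of one standard approach, the population stability index (PSI); the feature, synthetic data, and alert threshold are illustrative assumptions, not details from the white paper.

```python
# Sketch of probing new data during deployment: compare an incoming
# feature's distribution against the training-time distribution using
# the population stability index (PSI). Data and threshold are
# illustrative assumptions.
import numpy as np

def psi(train_values, live_values, bins=10, eps=1e-6):
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(train_values, bins=bins)
    p_train, _ = np.histogram(train_values, bins=edges)
    p_live, _ = np.histogram(live_values, bins=edges)
    p_train = p_train / p_train.sum() + eps
    p_live = p_live / p_live.sum() + eps
    return float(np.sum((p_live - p_train) * np.log(p_live / p_train)))

rng = np.random.default_rng(0)
train_bmi = rng.normal(27, 4, 5000)   # training-time patient BMI (synthetic)
live_bmi = rng.normal(30, 4, 500)     # shifted deployment population

score = psi(train_bmi, live_bmi)
# A common rule of thumb: PSI > 0.25 signals a major shift worth review.
print(f"PSI = {score:.2f}" + ("  -> investigate drift" if score > 0.25 else ""))
```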

Despite AI’s shortcomings, particularly in the surgical context, the co-authors argue that the harm AI can prevent outweighs the downsides of adoption. For example, thyroidectomy carries a risk of permanent hypoparathyroidism and recurrent laryngeal nerve injury. Detecting a statistically significant change in such complication rates under a new technique may require thousands of procedures – more than a single surgeon may ever see, at least not quickly. A repository of AI-based analytics that aggregates those thousands of cases from hundreds of sites, however, could recognize and communicate such patterns.
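The arithmetic behind that claim is easy to sketch. Assuming purely illustrative complication rates (they are not figures from the paper), a standard two-proportion sample-size approximation shows why a single surgeon’s caseload rarely suffices:

```python
# Back-of-the-envelope sketch of why single-surgeon experience cannot
# detect rare-complication differences. Approximate sample size per arm
# for a two-proportion comparison; the rates below are purely
# illustrative assumptions, not figures from the white paper.
from math import ceil

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # power = 0.80

def n_per_arm(p1: float, p2: float) -> int:
    """Approximate patients per arm to distinguish rates p1 and p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

# E.g., detecting a drop in a complication rate from a hypothetical
# 2% baseline to 1% under a new technique:
print(n_per_arm(0.02, 0.01))  # -> roughly 2,300 cases per arm
```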

“Continuous technological advancement in AI will lead to a rapid increase in the breadth and depth of its tasks. From the progress curve, we can predict that machines will become more autonomous,” the co-authors write. “The increase in autonomy requires a greater focus on the ethical horizon that we have to question. Like ethical decision-making in current practice, machine learning will not be effective if it is merely designed carefully by committee – it requires real-world engagement.”

