Artificial intelligence is rapidly making its way into hospitals and operating rooms. Its advantages in these environments are undeniable. However, questions of liability in the event of a medical error or patient harm must also be clearly defined.
Medical errors are among the leading causes of unintentional death in the United States, killing an estimated 100,000 people every year.
The use of artificial intelligence has proved to be of major benefit in speeding up diagnoses and elevating the standard of care available to patients. Medical image-reading systems powered by artificial intelligence can help diagnose hard-to-detect forms of cancer, allow more accurate reading and interpretation of heart scans to catch heart attacks before they turn fatal, and help detect blindness-inducing forms of eye disease.
However, whenever these advanced medical systems are used, it is also important to determine how to handle issues of liability when errors or injuries occur. Reports of injuries involving artificial intelligence are not unheard of. Robotic systems used to perform complicated surgeries have caused grievous harm to patients. Although technology can help mitigate the risk of human error in health care, no machine is error-proof.
There is a vital need to stringently define liability whenever patient harm occurs as a result of the use of artificial intelligence. While Silicon Valley’s “move fast and break things” ethos may work well in other fields, it has no place where human lives and patient safety are at stake.
Robots are now positioned as physician assistants, meaning they help doctors make decisions. As a result, a doctor who uses the technology may be liable for any harm that results from its use. For example, an algorithm may misread certain measurements, such as blood pressure readings, at certain times of day, leading to a misdiagnosis. In such cases, doctors must weigh other forms of evidence, including human-gathered observations, the patient’s medical history, and test results, to determine whether the machine’s readings are correct. Failure to do so could expose the doctor to liability in a malpractice claim.
In conclusion, when artificial intelligence is used as a decision-support tool, the doctor relying on it may be held liable for errors that harm the patient.
The Indianapolis medical malpractice attorneys at Montross Miller represent persons injured as a result of medical negligence across Indiana.