Rarely in the history of science has a new technology been adopted, and scrutinized, as swiftly as Artificial Intelligence. Though AI has been around for more than half a century, it is only comparatively recently that it has generated an unparalleled wave of interest, excitement, and concern. The release of ChatGPT (a generative pre-trained transformer, or GPT) in November 2022 is clearly a watershed in the evolution of information technology.
GPTs are large language models built on artificial neural networks, and in a breathtakingly short time they have begun to reshape the way we think about appropriate uses and users of intelligent machines. This is especially the case in biomedical research and clinical practice, and it has led many organizations to develop policies, processes, and documents to govern or regulate the use of these tools. One such organization is the World Health Organization (WHO), which has included University of Miami Frost Institute for Data Science and Computing (IDSC) faculty in the drafting of key international guidance documents.
“This has been a rare opportunity to contribute to global efforts to identify best practices and foster the ethical use of AI in healthcare,” said Kenneth W. Goodman, Director of IDSC’s program on Data Ethics + Society and of the UM Miller School of Medicine Institute for Bioethics and Health Policy. The Institute is a WHO Collaborating Center in Ethics and Global Health Policy, one of 14 in the world and the only one in the U.S.
In 2021 the WHO published “Ethics and Governance of Artificial Intelligence for Health,” which concluded that “AI holds great promise for the practice of public health and medicine. WHO also recognizes that, to fully reap the benefits of AI, ethical challenges for health care systems, practitioners, and beneficiaries of medical and public health services must be addressed. Many of the ethical concerns described in this report predate the advent of AI, although AI itself presents a number of novel concerns.” Goodman served on the 21-member External Expert Group, whose scholars from 18 countries reached consensus on six core principles:
- Protect human autonomy;
- Promote human well-being, safety, and the public interest;
- Ensure transparency, explainability, and intelligibility;
- Foster responsibility and accountability;
- Ensure inclusiveness and equity; and
- Promote AI that is responsive and sustainable.
Most recently, and in response to the GPT phenomenon, the WHO is drafting a new guidance document, “The Use of Large Multi-Modal Models for Health-Related Applications: Governance and Ethical Considerations,” which is scheduled for publication in 2023. A year in development, the guidance document is likely to assume a central role in discussions and debates about the appropriate use of such tools, especially as they are embedded in electronic health records.
UM has long been a leader in research on issues at the intersection of ethics and health information technology. “That UM was included in this international initiative is a credit to our institution, which has for decades—indeed, long before AI was a hot topic—supported and fostered this research,” Goodman said.
In the middle of the COVID-19 pandemic, UM received a 2021 grant from the WHO to survey professionals about the use of computing tools to manage resource allocation in emergencies. That project, led by Goodman and Sergio G. Litewka, the Institute's Director of Global Bioethics, produced a number of deliverables, including an extensive compilation of “AI and Big Data Resources,” which they continue to maintain.
Tags: AI in Healthcare, ChatGPT, Ethics and Governance of Artificial Intelligence for Health, Ken Goodman, Sergio Litewka, World Health Organization