Ethics and human rights should be at the core of using artificial intelligence for health, WHO says

Geneva/New York, June 28 – In its first move to provide guidance for the application of artificial intelligence in the vast field of healthcare, the World Health Organization said humans should remain in full control of healthcare systems and medical decisions.

The health organization, headquartered in Geneva, has just released a 165-page report called Ethics and Governance of Artificial Intelligence for Health, which it said is the result of two years of consultations held by departments in the WHO Science Division and a panel of international experts appointed by the organization.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General.

“This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls.”

The report cited the benefits of AI in healthcare, where it is already being used in some wealthy countries: “to improve the speed and accuracy of diagnosis and screening for diseases; to assist with clinical care; strengthen health research and drug development, and support diverse public health interventions, such as disease surveillance, outbreak response, and health systems management.”

It said AI could also empower patients to take greater control of their own health care and better understand their evolving needs. It could also enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.

Dr Soumya Swaminathan, WHO Chief Scientist, said in the report’s foreword that if AI is employed “wisely,” it could empower patients and communities to assume control of their own health care.

“But if we do not take appropriate measures, AI could also lead to situations where decisions that should be made by providers and patients are transferred to machines, which would undermine human autonomy, as humans may neither understand how an AI technology arrives at a decision, nor be able to negotiate with a technology to reach a shared decision,” she warned.

“In the context of AI for health, autonomy means that humans should remain in full control of health-care systems and medical decisions,” Swaminathan said.

A press release issued by WHO summarized the report; parts of it are reproduced in full in this article to reflect the organization’s intent:

The report cautioned against overestimating the benefits of AI for health, especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.

It also points out that opportunities are linked to challenges and risks, including unethical collection and use of health data; biases encoded in algorithms; and risks of AI to patient safety, cybersecurity, and the environment.

For example, while private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.

The report also emphasizes that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.

AI systems should therefore be carefully designed to reflect the diversity of socio-economic and health-care settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.

Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology’s design, development, and deployment.

Six principles to ensure AI works for the public interest in all countries

To limit the risks and maximize the opportunities intrinsic to the use of AI for health, WHO provides the following principles as the basis for AI regulation and governance:

Protecting human autonomy. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

Promoting human well-being and safety and the public interest. The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.

Ensuring transparency, explainability and intelligibility. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.

Fostering responsibility and accountability. Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

Ensuring inclusiveness and equity. Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.

Promoting AI that is responsive and sustainable. Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to use of automated systems.

These principles will guide future WHO work to support efforts to ensure that the full potential of AI for healthcare and public health is used for the benefit of all.

