Saturday, February 8, 2025

Clinical Oncology Leader ASCO Releases Guidelines for Responsible AI Use

The American Society of Clinical Oncology (ASCO) has published its “Principles for the Responsible Use of Artificial Intelligence in Oncology,” underscoring the growing significance of AI in clinical oncology and scientific research. As AI is rapidly integrated into healthcare, ASCO aims to guide its members and the broader cancer care community in harnessing AI’s potential while mitigating its risks.

AI’s development and application in clinical oncology are advancing at a remarkable pace. Clinicians face the challenge of navigating the complexities of cancer care, interpreting vast datasets, and keeping up with new evidence and drug approvals. AI promises to address these challenges by improving efficiency, accuracy, quality, and accessibility in cancer care. However, the rapid adoption of AI also raises concerns about legal, ethical, and operational issues.

ASCO recognizes several concerns, including the potential for AI to present false information authoritatively, introduce bias through its algorithms, erode patient trust, and blur the roles of clinicians. At the same time, AI has the potential to enhance medical literacy, improve decision support tools, and increase operational efficiency in healthcare settings.

ASCO Establishes Ethical Guidelines for AI in Clinical Oncology, Emphasizing Transparency and Patient Care

As the leading organization for cancer care professionals, ASCO is committed to fostering a multidisciplinary dialogue and promoting ethical and legal guidelines for AI use in oncology. To this end, ASCO’s Board of Directors has appointed a task force to explore AI applications in cancer care and make recommendations on ASCO’s role in evolving AI uses. The task force has developed six guiding principles:

  1. Transparency: AI tools and applications should be transparent throughout their lifecycle.
  2. Informed Stakeholders: Patients and clinicians should be aware when AI is used in clinical decision-making and patient care.
  3. Equity and Fairness: Developers and users of AI should protect against bias in AI model design and use and ensure access to AI tools.
  4. Accountability: AI systems must comply with legal, regulatory, and ethical requirements, with developers assuming responsibility for their systems.
  5. Oversight and Privacy: Institutional compliance policies should govern AI use, protecting clinician and patient autonomy and personal health information privacy.
  6. Human-Centered Application: AI should complement, not replace, human interaction in healthcare delivery.

AI is already demonstrating its potential in enhancing diagnosis, treatment, and patient outcomes in clinical oncology. Clinical AI tools are used to recommend treatments, aid in diagnosis through image analysis, predict health outcomes, guide surgical care, monitor patients, and support population health management. Machine learning and deep learning are pivotal in advancing precision clinical oncology. AI algorithms can analyze genomic data and medical imaging, providing actionable clinical insights and improving early disease detection and personalized treatment strategies.

Clinical Oncology

ASCO Sets Standards for AI in Clinical Oncology: Reducing Burdens and Ensuring Equity

AI is also being leveraged to reduce administrative burdens, optimize operational processes, and streamline clinical trials. However, the implementation and maintenance costs of AI remain a challenge, necessitating further economic analysis to assess cost-effectiveness. ASCO highlights the potential for AI to amplify existing health disparities. There is a need for ongoing research to understand AI’s impact on health equity and to develop strategies to mitigate algorithmic bias and misuse.

Ensuring the validity and reliability of AI tools is critical. ASCO advocates for rigorous prospective evaluation of AI interventions to demonstrate their impact on health outcomes. Efforts are underway to develop standards and guidelines for AI model evaluation and validation. Liability risks associated with AI may hinder its adoption by providers. ASCO emphasizes the need for clear legal standards and accountability frameworks to address potential malpractice liability issues arising from AI use.

ASCO’s principles for responsible AI use in clinical oncology provide a framework for safely integrating AI into cancer care. By embracing these principles, ASCO aims to ensure that AI serves as a driver of innovation and clinician empowerment, ultimately benefiting patients and the healthcare system. As AI continues to transform cancer care and research, ASCO remains committed to guiding its members and shaping policy to harness AI’s potential while safeguarding against its risks. This proactive approach will help create a more efficient, accessible, and equitable healthcare system for all patients with cancer.

 


Source: American Society of Clinical Oncology, May 31, 2024


This article has been prepared with the assistance of AI and reviewed by an editor. For more details, please refer to our Terms and Conditions. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author.
