Publications


Viewpoint: Launching the Trustworthy and Responsible AI Network (TRAIN™)

A viewpoint in JAMA by Vanderbilt University Medical Center's Peter Embi, MD, MS, and Duke's Michael Pencina, PhD, introduces the Trustworthy and Responsible AI Network (TRAIN). TRAIN is a coalition of health systems, academic centers, and technology providers dedicated to advancing the responsible adoption of health AI. Established in 2024 by founding members including Vanderbilt University Medical Center, Duke Health, Advocate Health, and Microsoft, TRAIN focuses on addressing critical challenges in ethical and effective AI implementation. With more than 50 member organizations, TRAIN is committed to driving equitable and innovative AI solutions that benefit diverse populations and health care settings worldwide.

Read the viewpoint


Viewpoint: Translating AI for the Clinician

A viewpoint in JAMA by Duke’s Michael Pencina, PhD, and colleagues highlights a paradigm shift needed to maximize the impact of machine learning (ML) and artificial intelligence (AI) in medicine. The authors propose a clinical framework for AI development and testing that focuses on aligning AI technologies with specific clinical indications and use cases. This approach ensures that AI adoption is rooted in evidence of efficacy, safety, and monitoring in real-world clinical use. By drawing parallels to drug and medical device development, the authors advocate for rigorous testing processes, risk-based adoption strategies, and continuous monitoring to build trust among clinicians and patients. 

Read the viewpoint


Correspondence: External validation of AI models in health should be replaced with recurring local validation

A correspondence in Nature Medicine by Duke's Michael Pencina, PhD, and colleagues challenges the reliance on external validation as the ultimate standard for evaluating machine learning (ML) models in health care. The authors argue that external validation alone cannot guarantee a model’s safety, reliability, or clinical usefulness, especially given the diversity of health care settings and the risk of data shifts. Instead, they propose a paradigm of recurring local validation, inspired by Machine Learning Operations (MLOps), which emphasizes site-specific and ongoing validation before and after deployment. This approach ensures that ML models remain reliable, safe, and effective over time, accommodating local variations in data and care practices while maintaining alignment with the principles of responsible ML.

Read the correspondence


White Paper: AI Governance in Health Systems: Aligning Innovation, Accountability, and Trust

Duke Health AI Evaluation & Governance Program’s Nicoleta Economou, PhD, and the Duke-Margolis Institute for Health Policy’s Christina Silcox, PhD, and Valerie J. Parker, MSc-GH, have collaborated on an important white paper, titled ‘AI Governance in Health Systems: Aligning Innovation, Accountability, and Trust.’ The paper examines the transformative potential of AI in healthcare and draws valuable insights from US health systems on AI governance, identifying both core governance principles and diverse implementation strategies. The paper also provides actionable guidance for health systems and policymakers on establishing and standardizing AI governance, aiming to democratize trustworthy AI across the healthcare landscape. The recommendations focus on ensuring that all health systems, including those with limited resources, receive structured guidance that promotes best practices, accountability, and equitable access to the benefits of AI advancements.

Read the white paper


Viewpoint: A Federated Registration System for Artificial Intelligence in Health

A viewpoint in JAMA by Duke’s Michael Pencina, PhD, Jonathan McCall, MS, and Nicoleta Economou, PhD, discusses the need for a federated registration system for AI technologies in healthcare. The authors argue that the transparency requirements set forth by the Office of the National Coordinator should extend to all health AI technologies, which could be captured by local registries similar to institutional review board portals. The data could then be integrated nationally, creating a system akin to ClinicalTrials.gov for health AI technologies. By establishing local registries that feed into a national system, the approach aims to enhance safety, effectiveness, and quality in AI deployment, while building trust and ensuring accountability in the use of these technologies.

Read the article


Paper: Empowering nurses to champion health equity & BE FAIR

As AI becomes integral to healthcare, addressing the risks of algorithmic bias is critical for promoting health equity. Nurses, with their patient-centered focus and systems expertise, are uniquely positioned to lead these efforts. In this paper, a group of authors from the Duke Health AI Evaluation & Governance Program, Duke University School of Nursing, Duke AI Health, and North Carolina Central University, led by Michael Cary, PhD, RN, introduces the Bias Elimination for Fair AI in Healthcare (BE FAIR) framework, which empowers nurses to champion health equity by design in clinical algorithms. Drawing on real-world governance examples, the paper outlines how the BE FAIR framework equips nurses to mitigate bias, prevent discrimination, and ensure the ethical deployment of AI in healthcare.

Read the paper


Viewpoint: AI Could Improve Diagnosis, Treatment in Healthcare

A paper by Duke Health AI Evaluation & Governance Program's Nicoleta Economou, PhD, and University of Pennsylvania’s Lee Fleisher, MD, published in JAMA Health Forum, highlights the potential of AI to revolutionize healthcare decision-making by improving diagnosis and treatment, especially for rare diseases. Titled ‘AI can be regulated using current patient safety procedures and infrastructure in hospitals,’ the paper also underlines risks such as misdiagnoses and medical errors if AI is not properly tested, updated, and integrated into clinical practice. The authors emphasize the need for transparency, accountability, and continuous improvement in AI governance to protect patient safety. They urge hospitals to develop detailed policies for AI use, including qualifications and monitoring responsibilities. 

Read the viewpoint


Perspective: Use of artificial intelligence in critical care: opportunities and obstacles

A paper by Nicoleta Economou and colleagues, published in the journal Critical Care, delves into the potential and challenges of using artificial intelligence (AI) in critical care. The review highlights how AI-powered clinical decision support systems (CDSS) can assist clinicians in the fast-paced ICU environment. The authors examine hurdles such as the “black-box” nature of predictive algorithms, biases in existing datasets, and barriers to integrating real-time, multidimensional data streams into clinical workflows. They emphasize that while AI holds promise for improving acute care, its development and deployment require careful attention to situational awareness, fairness, and trustworthiness. As AI-based tools continue to evolve, the paper underscores the importance of responsible innovation to ensure these technologies effectively support clinicians and enhance patient outcomes.

Read the perspective


Journal Article: Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare

A groundbreaking paper published in JAMIA by Duke Health AI Evaluation & Governance Program’s Nicoleta Economou, PhD, the Duke Algorithm-Based Clinical Decision Support (ABCDS) Oversight Committee and other Duke University co-authors introduces an Implementation Guide for the Evaluation and Governance of Algorithmic Technologies in Healthcare, aiming to address the challenges of regulating rapidly advancing AI technologies. By streamlining oversight processes and promoting education within the Duke community, Duke researchers have developed a scalable framework supporting the trustworthy implementation of algorithmic technologies in patient care and healthcare operations. Grounded in key ethical and quality principles, the guide provides standardized evaluation criteria, fostering objective oversight at Duke. The results showcase improved safety, effectiveness, and fairness in algorithms through evidence-backed principles.

Read the article


Comment: Implementing quality management systems to close the AI translation gap and facilitate safe, ethical, and effective health AI solutions

An important paper co-authored by Duke Health AI Evaluation & Governance Program’s Nicoleta Economou, PhD, and Michael Pencina, PhD, recently published in npj Digital Medicine discusses leveraging a Quality Management System (QMS) for AI/ML development intended for healthcare. The authors explain how a tailored QMS framework can bridge the AI translation gap, ensuring the safe, ethical, and effective incorporation of AI into patient care. This approach can accelerate the translation of AI research into practical clinical applications, prioritizing patient safety and fostering trust in healthcare innovation.

Read the comment


Review: Addressing Algorithmic Bias

A group of authors led by Duke’s Michael Cary, PhD, RN, has published a scoping review devoted to issues of algorithmic bias in healthcare. The article surveys a broad range of applications, frameworks, reviews, and perspective articles addressing ways to mitigate bias in algorithms used to guide clinical decision-making, particularly in the context of proposed federal regulations prohibiting algorithmic discrimination in healthcare. The review, published in a special issue of the journal Health Affairs examining structural racism in healthcare, includes Duke coauthors Michael Pencina, PhD, Nicoleta Economou, PhD, and Sophia Bessias, MPH, MSA, as well as coauthors from Northwestern University, the University of Chicago, and the University of California, Berkeley.

Read the review


Journal Article: Introducing Framework for Clinical Algorithm Oversight

A group of Duke Health researchers led by Nicoleta Economou, PhD, recently published an account of their approach to evaluating and monitoring the use of algorithmic predictive models at Duke Health hospitals and clinics. The article, titled “A framework for the oversight and local deployment of safe and high-quality prediction models,” was published in the Journal of the American Medical Informatics Association (JAMIA). It showcases the processes and procedures by which an expert group at Duke Health known as Algorithm-Based Clinical Decision Support (ABCDS) Oversight reviews, approves, and manages predictive models intended for use in patient care settings. ABCDS Oversight is a collaborative effort between the Duke University School of Medicine and the Duke University Health System.

Read the article