December 14, 2024


NSF and Amazon award $1M for healthcare AI integrity


Duke University and the University of Connecticut are two of the 13 recipients of funding in the third round of the Fairness in AI program from the U.S. National Science Foundation and Amazon.

WHY IT MATTERS   

The accepted proposals set goals to improve access to care for underserved patients and to improve patient care in high-stakes hospital settings.

For this third round of Fairness in AI funding, which totals $9.5 million in awards, researchers were asked to submit proposals by August 3, 2021.

Applicants were asked to focus on what you might expect, such as theory and algorithms and principles of human interaction with AI, along with applications in hiring decisions, education, criminal justice and human services, with the goal of building a more equitable society.

Research led by Duke University, An Interpretable AI Framework for Care of Critically Ill Patients Involving Matching and Decision Trees, was awarded $625,000. The research will introduce a framework to use almost-matching-exactly (AME) techniques in machine learning and interpretable policy design through “sparse” decision trees for doctors and others. 

According to the funding announcement, the framework arose from the challenge of how to treat critically ill hospital patients who are at risk for subclinical seizures. The project also plans to release the AME code in several formats aimed at users with varying levels of expertise.
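The Duke framework itself is not described in code in the announcement, but to illustrate what a "sparse" decision tree looks like as an interpretable policy, here is a minimal sketch using scikit-learn's standard decision tree with tight limits on depth and leaf count as a stand-in. The data and feature names are hypothetical, and the almost-matching-exactly step is not shown.

```python
# Minimal sketch of an interpretable "sparse" decision tree as a stand-in
# for the kind of clinician-readable policy described above. Assumptions:
# scikit-learn as the library, synthetic data, hypothetical feature names;
# the almost-matching-exactly (AME) step is not represented here.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for patient features (e.g., vitals, labs, EEG markers).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
feature_names = ["age", "heart_rate", "lab_a", "lab_b", "eeg_marker", "med_dose"]

# Sparsity is enforced by hard limits on depth and leaf count, so the whole
# policy can be printed and read end to end.
tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=6, random_state=0)
tree.fit(X, y)

# Print the full decision policy as human-readable rules.
print(export_text(tree, feature_names=feature_names))
```

The Duke project's methods are expected to go beyond a greedy learner like this one, but the readability goal, a small set of rules a clinician can inspect, is the same.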

University of Connecticut researchers will use a $392,994 award for a large-scale computational study of biased health information, with a focus on reducing bias in the health domain.

The goals of the Bias Reduction in Medical Information (BRIMI) study include informing public policy and improving the well-being of historically underserved patients. BRIMI will work to develop novel AI approaches both to establish health information inequities empirically and to create triage guidelines that help public health officials and practitioners reduce them.

The NSF and Amazon-funded joint program was launched in 2019 to promote fairness in artificial intelligence and machine learning around sensitive, legally protected attributes. Amazon provides funding toward the awards, but does not participate in the grant-selection process, according to the announcement. 

THE LARGER TREND

In the inaugural Fairness in AI grant announcement, NSF said it works closely with academic researcher grant recipients to address issues of fairness, transparency and accountability and to develop bias-free AI systems.

UConn’s project award “includes significant outreach efforts, which will engage minority communities directly in our scientific process; broad stakeholder engagement will ensure that the research approach to the groups studied is respectful, ethical and patient-centered,” according to the funding announcement.

In the past, it was thought that adding more fairness constraints to machine learning models lowers their accuracy. However, as AI modeling has advanced, some research shows the tradeoff between fairness and efficacy can be minimized or avoided.

It is possible to build systems that are fair and equitable without sacrificing accuracy, say Carnegie Mellon researchers, who found that by defining fairness goals upfront in the machine learning process and then making design choices to achieve those goals, they could address slanted outcomes while keeping predictions accurate.
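As a rough illustration of what "defining fairness goals upfront" can look like in practice, the sketch below trains a classifier under a demographic-parity constraint using the open-source Fairlearn reductions API. The data and the sensitive "group" attribute are synthetic, and this is not the Carnegie Mellon researchers' method, just one common constraint-based approach.

```python
# Sketch of training a classifier with an explicit fairness constraint.
# Assumptions: the Fairlearn reductions API, synthetic data, and a made-up
# binary "group" attribute; this illustrates the general idea only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
group = rng.integers(0, 2, size=len(y))  # hypothetical sensitive attribute

# Unconstrained baseline model.
baseline = LogisticRegression(max_iter=1000).fit(X, y)

# Same model family, but trained with a demographic-parity goal stated
# upfront as part of the learning objective.
mitigator = ExponentiatedGradient(LogisticRegression(max_iter=1000),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)

# Compare accuracy and the gap in selection rates between groups.
for name, model in [("baseline", baseline), ("constrained", mitigator)]:
    pred = model.predict(X)
    acc = (np.asarray(pred) == y).mean()
    gap = demographic_parity_difference(y, pred, sensitive_features=group)
    print(f"{name}: accuracy={acc:.3f}, demographic parity gap={gap:.3f}")
```

On many datasets the constrained model narrows the between-group gap with only a small change in accuracy, which is the kind of result the research above points to.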

In healthcare settings, innocuous-seeming data can reproduce bias in AI, Chris Hemphill, VP of applied AI and growth at SymphonyRM, told Healthcare IT News last year.

For example, clinical measurements might fail to take into account hurdles, such as economic barriers or racial bias, that prevent patients from seeking care. But while sussing out the nuances with machine learning and user discussions takes additional work, they said, it is worth the investment.

“Modeling means nothing if you don’t have the user experience; the user discussions; the training about how and why people should use it,” they explained.  

ON THE RECORD

“These awards are part of NSF’s commitment to pursue scientific discoveries that enable us to achieve the full spectrum of artificial intelligence potential at the same time we address critical questions about their uses and impacts,” said Wendy Nilsen, deputy division director for NSF’s Information and Intelligent Systems Division, in a statement.

Andrea Fox is senior editor of Healthcare IT News.
Email: af**@hi***.org

Healthcare IT News is a HIMSS publication.

