Praduman Jain is CEO and founder of Vibrent Health, a digital health technology company powering the future of precision medicine.

In a clinical research ecosystem that’s not known for its diversity, AI brings something meaningful to the table — but as powerful as smart tech is, it can also exacerbate the very equity problems it was made to solve.

Those problems were multifactorial and mighty even before Covid-19 exposed them globally. Most people now acknowledge the lack of diversity in study recruitment and participation, not just by race but also by other factors like disability, comorbidities and low income.

But other diversity concerns matter, too. One is the elevated rate of drop-off and non-compliance in underrepresented communities, driven by barriers that go unaddressed, such as a low-income patient's inability to make time for site visits.

Worse still, research protocol design doesn’t often keep representation in mind — in part because even now, many principal investigators aren’t very diverse themselves. This creates a trial landscape that doesn’t reflect the broader population, further perpetuating inequities.

These research gaps are long overdue for a course correction. The U.S. Food and Drug Administration thinks so, too: The FDA released a guidance document in November 2020 making research equity a more pressing priority for all sponsors.

At this critical moment, AI can help — with one important caveat: It must be used ethically.

AI As The Enabler, But With Limitations

As a tool, AI stands to bolster greater diversity in many ways. For one thing, intelligent technologies can expand the net cast for recruitment so that more diverse participants are considered. This expansion is essential: In 2019, a median of more than seven in 10 clinical trial participants were white.

As one example, an AI-enhanced trial at Cedars-Sinai identified 16 candidates in one hour, while a human-based approach found two people in six months. Platforms like these excel when they find candidates outside the usual circles of knowing the right people or living in the right place.

There’s also the argument that machines can eliminate human biases during trial execution. This argument has merit, but only up to a point. After all, these technologies can perpetuate preexisting biases: The humans who build the machines transfer their own implicit biases to the algorithms.

You can see how this gets sticky fast: Equity concerns arise when AI exacerbates disparities in marginalized communities. And if you deploy imperfect machines built by imperfect humans, you’ve got problems. That’s why ethical AI is so important.

Driving Equity With Ethical AI

How to make healthcare AI more ethical in service of more people is a million-dollar question with no single answer. Several factors, taken together, can help:

Data Integrity

You’ve heard it before: Garbage in, garbage out. If the data used to create algorithms reinforces preexisting biases or disparities, AI can’t be the solution it claims to be. Preserving data integrity matters throughout the life cycle of clinical research, but particularly during recruitment and engagement.

Sometimes, this is more challenging than it seems: How do you know if your inputs promote health inequity? Often, the answer starts with identifying gaps in data collection: determining where white space exists and finding out why. Consider what types of data you’re collecting and whether you’re collecting them at the right frequency.
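One way to start finding that white space is to compare a dataset's demographic mix against population benchmarks. The sketch below is purely illustrative and not from any specific platform; the field names, benchmark figures and tolerance threshold are all assumptions for demonstration.

```python
# Illustrative sketch: flag demographic groups whose share of a recruitment
# dataset trails a benchmark (e.g., census-style proportions) by more than a
# chosen tolerance. All names and figures here are hypothetical.
from collections import Counter

def representation_gaps(records, field, benchmarks, tolerance=0.05):
    """Return groups underrepresented in `records` relative to `benchmarks`
    by more than `tolerance` (absolute proportion)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmarks.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Toy cohort: heavily skewed toward one group, with one group absent entirely.
participants = [{"race": "white"}] * 85 + [{"race": "black"}] * 5 + [{"race": "asian"}] * 10
benchmarks = {"white": 0.60, "black": 0.13, "asian": 0.06, "hispanic": 0.19}
print(representation_gaps(participants, "race", benchmarks))
# Flags "black" (0.05 observed vs. 0.13 expected) and "hispanic" (absent).
```

A real audit would, of course, use validated benchmarks and cover more dimensions than one field, but even a check this simple makes missing groups visible rather than implicit.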

Specifically, markers of disengagement are key and often overlooked. These are the indicators that someone might drop out because of an unaddressed barrier, such as lack of child care or language differences. When you can preempt drop-offs with predictive intelligence, you achieve greater equity, but that requires data integrity.
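To make the idea of preempting drop-offs concrete, here is a minimal sketch of scoring disengagement markers so coordinators can reach out before a participant leaves a study. This is not the system described above; the marker names, weights and threshold are assumptions chosen for illustration.

```python
# Hypothetical sketch: a weighted score over simple disengagement markers.
# Marker names, weights and the flagging threshold are illustrative only.
MARKER_WEIGHTS = {
    "missed_visits": 0.4,        # per missed site visit
    "needs_childcare": 0.3,      # unaddressed childcare barrier
    "language_barrier": 0.3,     # no materials in preferred language
    "days_since_contact": 0.01,  # per day without contact
}

def dropout_risk(participant):
    """Weighted sum of a participant's disengagement markers, capped at 1.0."""
    score = 0.0
    score += MARKER_WEIGHTS["missed_visits"] * participant.get("missed_visits", 0)
    score += MARKER_WEIGHTS["needs_childcare"] * participant.get("needs_childcare", False)
    score += MARKER_WEIGHTS["language_barrier"] * participant.get("language_barrier", False)
    score += MARKER_WEIGHTS["days_since_contact"] * participant.get("days_since_contact", 0)
    return min(score, 1.0)

def flag_at_risk(participants, threshold=0.5):
    """IDs of participants whose risk score meets or exceeds the threshold."""
    return [p["id"] for p in participants if dropout_risk(p) >= threshold]

cohort = [
    {"id": "P01", "missed_visits": 2, "needs_childcare": True, "days_since_contact": 14},
    {"id": "P02", "missed_visits": 0, "days_since_contact": 3},
]
print(flag_at_risk(cohort))  # ['P01'] (scores 1.24, capped at 1.0)
```

A production system would learn such weights from data rather than hand-set them, but the point stands either way: The model is only as good as whether those markers were collected in the first place.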

Training

Anyone can influence a technology for better or worse. By encouraging clinical research coordinators and other study stakeholders to recognize and flag instances where bias unintentionally creeps in, everyone has the potential to help re-engineer AI platforms to function more equitably.

This feedback loop requires the help of everyone across the research ecosystem, technology companies included. After all, they’re the ones who can calibrate and improve systems by training AI models on unbiased datasets, then testing the algorithms on real-world data to validate them (or not).
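That validation step is worth making explicit: Checking a model's error rate per demographic subgroup on held-out real-world data, rather than relying on one aggregate number, is how hidden bias surfaces. The sketch below uses stand-in data and a deliberately trivial model; it illustrates the bookkeeping, not any particular vendor's method.

```python
# Illustrative sketch: per-subgroup error rates on held-out data.
# The data and "model" below are stand-ins, not real trial artifacts.
from collections import defaultdict

def subgroup_error_rates(examples, predict):
    """examples: iterable of (features, group, label); predict: features -> label.
    Returns each group's fraction of misclassified examples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for features, group, label in examples:
        totals[group] += 1
        if predict(features) != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def always_yes(features):
    """Toy classifier that ignores its input; stands in for a trained model."""
    return 1

holdout = [
    ({"x": 1}, "group_a", 1), ({"x": 2}, "group_a", 1),
    ({"x": 3}, "group_b", 0), ({"x": 4}, "group_b", 1),
]
rates = subgroup_error_rates(holdout, always_yes)
print(rates)  # {'group_a': 0.0, 'group_b': 0.5} — a gap worth investigating
```

An aggregate accuracy of 75% would have hidden that split entirely; the per-group view is what lets a team catch and correct it.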

Awareness, Collaboration And Standards

The industry needs more awareness, and that responsibility lies with members throughout the research continuum. Together, we should create more understanding of how these technologies can both help and hurt diversity so that sponsors, coordinators, investigators and patients are more mindful of them.

Similarly, the sector needs more collaboration — much as it has already done with AI advancement and innovation. When we unite across public and private industries to define this problem of AI bias and work to resolve it together, there’s no stopping what we can accomplish. The AIM-AHEAD program from the National Institutes of Health is a gold-star example of just that.

Completing this important triad is the need to open up algorithms and standards for the public good, as IBM has done with its open source initiatives. When organizations share their progress with others — and, importantly, when the government standardizes regulatory oversight of that technology — it advances equity across industries, from financial accounting and employment to healthcare.

A Tech-Forward Path For Bias Reduction

The ongoing disparities in clinical research harm underserved communities, perpetuate inequities and hold back research worldwide. Disproportionate representation in trial participation is one consequence, but these disparities run far deeper, affecting adherence, drop-offs and even research protocol design.

Fortunately, AI offers extraordinary potential to reduce human bias, but not without ongoing supervision and maintenance to ensure it automates tasks ethically and responsibly. That starts with ensuring data integrity, promoting more training and generating more awareness and collaboration sector-wide.

I have no doubt we’ll get to a point where smart machines can finally catch the research landscape up to the diversity goals set out by the FDA and others. But first, we’ve got work to do — together.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
