DataRobot’s State of AI Bias Report Reveals 81% of Technology Leaders Want Government Regulation of AI Bias

Issued in collaboration with the World Economic Forum and global academic leaders, new research finds 1 in 3 organizations faced direct business impact due to an occurrence of AI bias

AI Cloud leader DataRobot released its State of AI Bias Report, conducted in collaboration with the World Economic Forum and global academic leaders. The report offers a deeper understanding of how the risk of AI bias can affect today’s organizations, and how business leaders can manage and mitigate that risk. Based on a survey of more than 350 organizations across industries, the findings reveal that many leaders share deep concerns about the risk of AI bias (54%) and a growing desire for government regulation to prevent it (81%).



AI is an essential technology to accelerate business growth and drive operational efficiency, yet many organizations struggle to implement AI effectively and fairly at scale. More than one in three (36%) organizations surveyed have experienced challenges or direct business impact due to an occurrence of AI bias in their algorithms, such as:

  • Lost revenue (62%)
  • Lost customers (61%)
  • Lost employees (43%)
  • Incurred legal fees due to a lawsuit or legal action (35%)
  • Damaged brand reputation/media backlash (6%)

“DataRobot’s research shows what many in the artificial intelligence field have long-known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long,” said Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum. “The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”


While organizations want to eliminate bias from their algorithms, many are struggling to do so effectively. The research found that 77% of organizations had an AI bias or algorithm test in place before the bias was discovered. Despite significant focus and investment in removing AI bias across the industry, organizations still face several clear challenges to eliminating it (see the sketch after the list below):

  1. Understanding the reasons for a specific AI decision
  2. Understanding the patterns between input values and AI decisions
  3. Developing trustworthy algorithms
  4. Determining what data is used to train AI
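
To make the first two challenges concrete, here is a minimal sketch of one kind of pre-deployment bias test an organization might run: comparing a model’s approval rates across two groups and flagging a large gap for human review. The data, group labels, and tolerance threshold below are hypothetical and are not drawn from the report; production bias testing involves far more than a single metric.

    # Minimal sketch of a demographic-parity check on model decisions.
    # All values here are hypothetical, for illustration only.
    predictions = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]   # model decisions (1 = approve)
    groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    def approval_rate(preds, grps, group):
        """Share of positive decisions the model gives to one group."""
        member_preds = [p for p, g in zip(preds, grps) if g == group]
        return sum(member_preds) / len(member_preds)

    rate_a = approval_rate(predictions, groups, "A")
    rate_b = approval_rate(predictions, groups, "B")
    gap = abs(rate_a - rate_b)

    TOLERANCE = 0.2  # hypothetical threshold; real policies vary
    print(f"Group A rate: {rate_a:.2f}, Group B rate: {rate_b:.2f}, gap: {gap:.2f}")
    if gap > TOLERANCE:
        print("Potential disparate impact: route the model for human review.")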

“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his report Predictions 2022: Artificial Intelligence. Purcell continues, “Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”

DataRobot’s Trusted AI Team of subject-matter experts, data scientists, and ethicists is pioneering efforts to build transparent and explainable AI with businesses and industries across the globe. Led by Ted Kwartler, VP of Trusted AI at DataRobot, the team’s mission is to deliver ethical AI systems and actionable guidance for a customer base that includes some of the largest banks in the world, top U.S. health insurers, and defense, intelligence, and civilian agencies within the U.S. federal government.

“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place,” said Kwartler. “Organizations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted and explainable.”
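
As an illustration of what understanding why an algorithm arrived at a decision can look like in practice, the sketch below breaks one decision from a simple, hypothetical linear scoring model into per-feature contributions that a reviewer can inspect. The weights and applicant values are invented for illustration; explainability tooling used in practice (for example, SHAP-style attributions) extends the same idea to more complex models.

    # Minimal sketch of decision-level explanation for a linear scoring model:
    # each feature's contribution is its (hypothetical) weight times its value,
    # so a reviewer can see which inputs pushed the score up or down.
    weights   = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3}
    applicant = {"income": 0.62, "debt_ratio": 0.45, "years_employed": 0.20}

    contributions = {name: weights[name] * applicant[name] for name in weights}
    score = sum(contributions.values())

    print(f"Score: {score:.2f}")
    for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {name:15s} contributed {value:+.2f}")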

