
Artificial Intelligence: Can We Trust Machines to Make Fair Decisions?

Data and Computer Scientists, Ecologists, Pathologists, and Legal Scholars Study AI’s Biases

Insights from CeDAR Director Thomas Strohmer were recently featured in a UC Davis news article about bias in artificial intelligence.

Unraveling the tangled roots of bias

While there’s growing awareness of bias in artificial intelligence, there’s no simple solution. Bias can be introduced at many points, not just by the software engineer developing a new technology. Artificial intelligence and machine learning algorithms rely on data, and that data is not always representative of minority populations and women. That’s because behind the data, the decisions about which data to collect and how to use it are still made by people.

“We cannot address bias and unfairness in AI without addressing the unfairness of the whole data pipeline system,” said Thomas Strohmer, director of UC Davis’ Center for Data Science and Artificial Intelligence Research, or CeDAR.

CeDAR is a hub for research activity focused on using AI for social good, from better healthcare to precision agriculture and combating climate change. Fighting bias and standing up for privacy are a natural part of that mission, Strohmer said.

“Things like racial profiling existed before these tools. AI just enhances an existing bias. If you feed a biased data set into an algorithm, the result will be a biased algorithm.”

Because new technologies are often adopted at scale, Strohmer noted, biases can quickly become widespread, and they’re not always easy to detect. To determine if there’s bias in a data set or an algorithm, you need access to the data and the algorithm.
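The article doesn’t prescribe a specific test, but one simple check that illustrates the point, assuming access to a model’s predictions and the demographic group of each record, is to compare positive-outcome rates across groups (often called a demographic parity check). The sketch below is a minimal, hypothetical example; the group labels, predictions, and what counts as a worrying gap are all made up for illustration.

```python
# Hypothetical sketch: comparing a model's positive-outcome rates across
# demographic groups. The data below is invented; a real audit requires
# access to the actual data and the actual algorithm's outputs.

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates between groups,
    along with the per-group rates."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + pred, total + 1)
    rates = {g: positives / total for g, (positives, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: the model approves group A far more often than group B.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Approval rates by group: {rates}")   # {'A': 0.8, 'B': 0.2}
print(f"Demographic parity gap: {gap:.2f}")  # 0.60: a large gap flags potential bias
```

Even a check this simple is impossible from the outside, which is why access to both the data and the algorithm matters.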

Facial recognition, video analytics, anomaly detection and other kinds of pattern matching are being used in law enforcement — often out of public view.

Read the article in its entirety.