Insight Diversity: Detecting bias in AI customer interactions

Submitted on Thursday, 28/03/2024
Insight’s Ed Curry, Ihsan Ullah, Andy Donald, and MA Waskow at the University of Galway recently published the results of a survey examining the development and quality of machine learning and AI tools for detecting bias in customer interactions.
Bias is the act of supporting or opposing a particular person or thing in an unfair way, allowing personal opinions to influence one’s judgment. As machine learning models are used in more and more aspects of customer interactions, it has become clear that bias detection in the associated customer interaction datasets requires critical focus: identifying bias before models are built, addressing the lack of understanding and transparency within models, and ultimately preventing biased predictions or classifications.

The purpose of the survey, carried out in conjunction with industry partner Genesys, was to establish how existing customer interaction-based use cases can make use of these techniques. The focus is primarily on tackling bias in unstructured text data as a pre-processing step prior to the machine learning model phase.
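As a rough illustration of what such a pre-processing step might look like, the sketch below screens raw interaction text against a small watch-list of terms before any model training takes place. The watch-list and the flag_biased_terms helper are illustrative assumptions for this article, not the survey’s or Genesys’s actual tooling.

```python
# A minimal sketch of where a bias screen could sit in a customer-interaction
# pipeline: run over the raw text before any model is trained.
# The watch-list and helper below are illustrative assumptions only.

from typing import Dict, List

# Hypothetical watch-list of terms a reviewer might want flagged for manual audit.
WATCHED_TERMS = {"housewife", "manpower", "chairman"}


def flag_biased_terms(interactions: List[str]) -> Dict[int, List[str]]:
    """Return, for each interaction index, any watched terms it contains."""
    flagged: Dict[int, List[str]] = {}
    for i, text in enumerate(interactions):
        hits = [term for term in WATCHED_TERMS if term in text.lower()]
        if hits:
            flagged[i] = hits
    return flagged


if __name__ == "__main__":
    corpus = [
        "The chairman approved the refund request.",
        "Customer asked about delivery times.",
    ]
    print(flag_biased_terms(corpus))  # {0: ['chairman']}
```

Interactions flagged in this way could then be reviewed or rewritten before the dataset reaches the model-building phase.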

There are many specific types of bias that can make their way into a machine learning or AI model, and identifying and labelling them is an important first step in developing software to detect them. The Insight team cite historical bias, popularity bias, social bias, linking bias and behavioural bias among the various bias labels. An example of linking bias would be the linking of particular occupations with a particular gender. Large language models (LLMs) of the sort that underpin systems such as ChatGPT may inherit such bias from the datasets they are built on, so it is vital that these biases can be identified and removed prior to the machine learning phase.
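One very simple way to surface the kind of occupation-gender linking described above is to count how often occupation words co-occur with gendered pronouns in the text. The sketch below does this with toy word lists; the lists and the occupation_gender_counts helper are assumptions made for illustration, not the team’s method.

```python
# Illustrative only: a rough co-occurrence count between occupation words and
# gendered pronouns, one simple signal for occupation-gender linking.
# The word lists below are assumptions, not a published lexicon.

from collections import Counter

OCCUPATIONS = {"nurse", "engineer", "doctor", "receptionist"}
GENDERED = {"he": "male", "she": "female"}


def occupation_gender_counts(sentences):
    """Count how often each occupation appears alongside a gendered pronoun."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        genders = {GENDERED[t] for t in tokens if t in GENDERED}
        for token in tokens:
            if token in OCCUPATIONS:
                for gender in genders:
                    counts[(token, gender)] += 1
    return counts


if __name__ == "__main__":
    sample = [
        "She said the nurse would call back.",
        "He asked whether the engineer had reviewed the ticket.",
    ]
    print(occupation_gender_counts(sample))
```

A strongly skewed count for a given occupation would be one hint that the dataset is reinforcing a stereotype worth examining before training.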

Through their work examining different datasets through this lens, researchers at Insight have identified a need for a schema of models to mitigate bias, and for the development of a tool to detect bias using sentiment analysis or emotion recognition software.
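To give a flavour of how sentiment analysis might feed into such a tool, the sketch below scores each interaction with a tiny lexicon-based scorer and compares average sentiment across interactions that mention different (hypothetical) group terms; a large gap between group means would be one signal worth auditing. The scorer, the group terms and the mean_sentiment_by_group helper are illustrative assumptions, standing in for whatever sentiment or emotion-recognition component the eventual tool would use.

```python
# Sketch: compare average sentiment across interactions that mention
# different group terms. Lexicons and group terms are illustrative assumptions.

from statistics import mean

POSITIVE = {"great", "helpful", "resolved"}
NEGATIVE = {"rude", "unresolved", "slow"}
GROUP_TERMS = {"group_a": {"mr"}, "group_b": {"ms", "mrs"}}


def sentiment_score(text):
    """Crude lexicon score: positive hits minus negative hits."""
    tokens = set(text.lower().replace(".", "").split())
    return len(tokens & POSITIVE) - len(tokens & NEGATIVE)


def mean_sentiment_by_group(interactions):
    """Average sentiment of interactions that mention each group's terms."""
    scores = {group: [] for group in GROUP_TERMS}
    for text in interactions:
        tokens = set(text.lower().replace(".", "").split())
        score = sentiment_score(text)
        for group, terms in GROUP_TERMS.items():
            if tokens & terms:
                scores[group].append(score)
    return {group: mean(vals) if vals else None for group, vals in scores.items()}


if __name__ == "__main__":
    data = [
        "Mr Smith said the agent was helpful and the issue resolved.",
        "Ms Jones reported the agent was rude and the issue unresolved.",
    ]
    print(mean_sentiment_by_group(data))
```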

Subsequent work, Towards a Semantic Approach for Linked Dataspace, Model and Data Cards, presents schemas to mitigate bias in datasets, models, and dataspaces.