Artificial Intelligence (AI) and Machine Learning (ML) algorithms have, in the past few years, been widely deployed in industrial applications. Sometimes these ML algorithms exhibit significant bias against certain groups, a phenomenon referred to as algorithmic bias. By definition, algorithmic bias is the inequity introduced by the application of algorithms with respect to personal attributes such as socioeconomic status, race, and gender. For example, in health care systems, an AI algorithm may make racially discriminatory predictions for White and Black patients at the same risk level, allocating fewer care resources to Black patients.
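To make the notion of bias concrete, one common way to quantify it is the demographic parity difference: the gap in favorable-prediction rates between groups. The sketch below is purely illustrative and is not part of the project; the predictions and group labels are hypothetical toy data, not real patient records.

```python
# Minimal sketch (illustrative only): measuring one form of algorithmic bias
# via the demographic parity difference -- the gap between groups in the rate
# of receiving a favorable prediction.

def demographic_parity_difference(predictions, groups, favorable=1):
    """Absolute gap in favorable-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(1 for p in member_preds if p == favorable) / len(member_preds)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical model outputs for two groups "A" and "B":
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5: group A is favored far more often
```

A value of 0 would indicate both groups receive favorable predictions at equal rates; larger values indicate a larger disparity. Demographic parity is only one of several competing fairness criteria (others include equalized odds and calibration), and which criterion is appropriate depends on the application.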
One of the most challenging problems in combating algorithmic bias is that some non-parametric ML models, such as Support Vector Machines (SVMs), and models with enormous numbers of parameters (hundreds of thousands or even hundreds of billions), such as deep neural networks (DNNs), are usually treated as black boxes that are difficult to interpret. Fair decision-making is difficult when humans do not understand the decision process at all. One way researchers address algorithmic bias is through Explainable AI (XAI) and interpretable AI, which aim to make the decision-making process transparent and easily understandable to humans.
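One simple, model-agnostic XAI technique is permutation importance: estimate how much a black-box model relies on each input feature by scrambling that feature's values and measuring how much the predictions change. The sketch below uses a toy stand-in "model" and synthetic data purely for illustration; it is not the project's method.

```python
# Illustrative sketch of permutation importance, a model-agnostic XAI technique.
# The "black box" here is a toy function, not a trained model.
import random

def black_box(x):
    # Toy model: depends strongly on feature 0, weakly on feature 1,
    # and not at all on feature 2.
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, X, seed=0):
    rng = random.Random(seed)
    base = [model(row) for row in X]
    importances = []
    for j in range(len(X[0])):
        # Shuffle feature j across rows, breaking its link to the output.
        col = [row[j] for row in X]
        rng.shuffle(col)
        perturbed = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        preds = [model(row) for row in perturbed]
        # Mean absolute change in prediction when feature j is scrambled.
        importances.append(sum(abs(a - b) for a, b in zip(base, preds)) / len(X))
    return importances

X = [[float(i), float(i % 3), float(i % 2)] for i in range(20)]
imp = permutation_importance(black_box, X)
# Feature 0 dominates, feature 1 matters slightly, feature 2 not at all.
```

An auditor could apply the same idea to a deployed model to check whether a sensitive attribute (or a close proxy for one) carries high importance, which is one way interpretability tools support fairness analysis.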
The insurance industry is embracing the wave of AI, and given that insurance companies have an extensive impact on personal lives, it is essential for them to justify their decision-making and impose fairness on their ML algorithms. As an educational project, we will conduct a literature review on the problem of algorithmic bias and its importance in the business environment. We will also survey current developments in solutions to algorithmic bias, especially in interpretable AI, and consider how to incorporate those methods in the context of insurance applications.
Supervisors: Zhiyu (Frank) Quan
Graduate Supervisor: Panyi Dong