Systems based on Artificial Intelligence (AI) are now frequently employed to make decisions with far-reaching consequences for individuals and society. Researchers developing these systems therefore bear a special responsibility with regard to fairness. However, given the complexity of the systems being developed and the uncertainty around their future impact, responsible innovation faces challenges. Consequently, this field of research is especially called upon to adopt an interdisciplinary approach. A research group led by Katharina Kinder-Kurlanda at the University of Klagenfurt’s Digital Age Research Center is exploring the interface between social sciences and computer science.
“The decisions made by AI systems can affect anyone, anywhere and at any time and can involve risks, such as the refusal of a loan, a job or medical treatment. In the worst case, AI can result in human rights violations when people are treated unfairly”, says Katharina Kinder-Kurlanda, who heads the “Digital Culture” research group at the Digital Age Research Center at the University of Klagenfurt. Among other things, she is part of the EU project NoBIAS, where researchers are developing new methods for unbiased AI-supported decision-making.
Katharina Kinder-Kurlanda goes on to explain: “This type of project brings together a wide variety of approaches, methods and roles. For instance, when we first started, the individual players had very different ideas about what they considered to be fair and how fairness could be achieved.” This is where computer science reaches its limits: by using methods to mitigate algorithmic biases, we can achieve an individual goal and thus satisfy a narrow concept of fairness. But when it comes to meeting the requirements of collaborative conceptualisations of fairness, these approaches fail. “Collaborative concepts of fairness require the strategic capacity to allow different approaches to co-exist,” Katharina Kinder-Kurlanda continues.
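To make the idea of a “narrow concept of fairness” concrete: one common formalisation in the fairness literature is demographic parity, which simply compares the rate of positive decisions (e.g. loan approvals) across demographic groups. The sketch below is purely illustrative and is not taken from the NoBIAS project; the group labels, decisions and function name are invented for this example. It shows how such a metric captures a single, quantifiable notion of fairness while saying nothing about the broader, negotiated conceptions the researchers describe.

```python
# Illustrative sketch (not NoBIAS code): demographic parity as one
# narrow, quantifiable fairness criterion. All data here is made up.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (1 = positive decision, e.g. loan approved)
    groups:    list of group labels, same length as decisions
    """
    rates = {}
    for g in set(groups):
        member_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(member_decisions) / len(member_decisions)
    a, b = rates.values()
    return abs(a - b)

# Toy loan decisions: group A is approved 3 times out of 4,
# group B only 1 time out of 4 -> a parity gap of 0.5.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
```

A mitigation method might adjust the decision rule until this gap is near zero; that achieves the metric's goal, but deciding whether demographic parity is the *right* goal at all is exactly the kind of question the interdisciplinary work addresses.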
This example illustrates the considerable complexity of taking an interdisciplinary approach to Artificial Intelligence. The challenge definitely deserves the effort: “Not only should AI algorithms be optimised technically, but we must also incorporate ethical and legal principles into the training, development and use of AI algorithms. Only in this way can we ensure social welfare and simultaneously benefit from the vast potential of AI.”
Photo credit: © Daniel Waschnig