A compliance advisor’s perspective: developing and deploying compliant and ethical AI models
In this interview, one of the compliance advisors in KBC’s Ethics unit sketches the ins and outs of developing, assessing, deploying and monitoring AI. He explains how AI can be used ethically in financial institutions and how these tools can remain compliant with changing regulations.
How does your financial institution integrate responsible behaviour into its AI strategy and ensure compliance with the AI Act?
Our strategy as a financial institution is based on the principles of responsible behaviour. Principles like acting with integrity and transparency and putting our customers at the centre are essential to the foundation of our organisation. We strive to treat our stakeholders and customers fairly and transparently when using AI. That is especially important in areas where algorithmic models can impact outcomes on an individual level, for example credit decisions and personalised financial services. In all of these cases, we apply the same principles of responsible behaviour that we would display in any face-to-face interaction.
We aim to integrate responsible behaviour into the way of working and output of our AI models. It is the key to building added value and trust in our AI solutions. The AI Act has become a regulatory necessity, and we have already taken steps to prepare, starting with the 2020 EU White Paper on AI, which builds on the European strategy for AI initially outlined in 2018. We have integrated these principles into a trusted AI framework that allows us to assess and mitigate risks in order to implement responsible AI.
Why does the KBC Group insist on overdelivering, compared to what is required from a regulatory standpoint?
Long before the AI Act came into force, we were already developing a minimum viable product that follows the principles outlined by the EU, putting us at a distinct advantage. We are committed to avoiding potential issues and misuse of technology, even in the absence of regulation.
Beyond complying with legislation, our approach is also focused on applying responsible behaviour to the output of our AI models. That is why we believe it is important to conduct impact assessments, which include identifying and addressing risks relating to discrimination, transparency, proper data usage, safety and oversight. We strive for explainability of our AI models, as well as maintaining accountability for their usage.
Can you elaborate on the decision-making process involved in developing and implementing the AI models that the KBC Group uses?
The trusted AI framework we apply helps us make clear and well-reasoned choices and ensures that the ultimate accountability lies explicitly with the business rather than the AI developers. Operating in a highly regulated sector, where regulators expect well-documented accountability, we strive for documented explanations on the different evaluation areas in our framework.
The approval process includes impact assessments. Whenever risks are detected during this process, we reflect on possible mitigating measures to manage them.
This is not only applicable to machine learning but also to generative AI, where we can reduce certain inherent risks through advanced prompting.
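Purely as an illustration of what such prompt-level mitigation can look like (the guardrail text and helper functions below are hypothetical examples, not KBC's actual implementation), a generative AI assistant can be wrapped in a system prompt that restricts it to its intended task, with a simple post-check on the answer:

```python
# Hypothetical sketch of prompt-level risk mitigation for a generative AI assistant.
# The guardrail wording and helper names are illustrative only.

SYSTEM_GUARDRAILS = (
    "You are an internal assistant for a financial institution. "
    "Answer only questions about the provided product documentation. "
    "Do not give personalised investment, credit or legal advice. "
    "If the request falls outside this scope, reply that you cannot help."
)

def build_guarded_prompt(user_question: str, context: str) -> list[dict]:
    """Combine the guardrail system prompt, retrieved context and the user question."""
    return [
        {"role": "system", "content": SYSTEM_GUARDRAILS},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_question}"},
    ]

def looks_out_of_scope(model_answer: str) -> bool:
    """Very simple post-check: flag answers that drift into advice-giving language."""
    banned_phrases = ("you should invest", "i recommend buying", "guaranteed return")
    return any(phrase in model_answer.lower() for phrase in banned_phrases)
```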
How does your organisation provide responsible governance and risk management in AI projects?
The standard governance of AI projects begins with an idea formulated by the business. Then, we assess whether AI can provide a solution. During the scoping phase, we estimate the costs, benefits and added value, and we complete a high-level impact assessment on the different evaluation areas to identify potential risks.
We have also been adapting the trusted AI framework to reflect the applicable regulatory requirements of the AI Act. It now includes a check for prohibited AI, and more developments are expected to follow. On the basis of the high-level impact assessment, preliminary advice is automatically generated, with clear actions for the creators.
Next, the modelling and piloting phase begins, an iterative process in which AI models are tested and adjusted where needed. Throughout this process, the final trusted AI impact assessment is completed, with more detailed questions to help assess the impact and address any identified risks. Final advice is drafted by the legal and compliance departments before the decision on deployment is taken. This ensures that we can present a coherent and documented narrative to regulators and external parties.
Evaluation areas for new AI models
- Data protection and privacy: This dimension focuses on GDPR and privacy, ensuring compliance with local and international data protection regulations.
- Diversity, fairness and non-discrimination: Here, we look at fair treatment and the prevention of discrimination and bias. We conduct statistical checks to see whether there are deviations in the treatment of, for example, different age groups. In the case of any deviations, we perform causal checks to determine whether there are logical explanations (a simple illustration of such a check follows after this list).
- Accountability and professional responsibility: This dimension includes assessing the quality of the AI models, data quality, documentation, quality monitoring and accountability. We make sure that there is clarity on which business line is accountable for the AI model.
- Safety and security: We assess how robust the AI models are, their technical vulnerabilities, interactions with third parties and the potential for internal manipulation. This helps us prevent security risks.
- Transparency and explainability: Here, we ensure that AI models are explainable and that there is human oversight where necessary. This is especially important for models with a potentially high impact on customers.
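To make the statistical checks mentioned under the fairness dimension concrete, here is a minimal, purely illustrative sketch of a deviation test on approval rates per age group. The age bands, field names and 10-percentage-point threshold are assumptions made for the example, not KBC's actual criteria.

```python
# Illustrative sketch of a simple fairness check across age groups.
# Group definitions and the deviation threshold are hypothetical.
from collections import defaultdict

def approval_rates_by_group(records: list[dict]) -> dict[str, float]:
    """Compute the share of positive outcomes per group, e.g. per age band."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["age_band"]] += 1
        positives[r["age_band"]] += int(r["approved"])
    return {g: positives[g] / totals[g] for g in totals}

def flag_deviations(rates: dict[str, float], max_gap: float = 0.10) -> list[str]:
    """Flag groups whose approval rate deviates from the overall mean by more than max_gap.
    Flagged groups would then be examined for logical (causal) explanations."""
    overall = sum(rates.values()) / len(rates)
    return [g for g, r in rates.items() if abs(r - overall) > max_gap]

# Example usage with toy data:
data = [
    {"age_band": "18-30", "approved": True},
    {"age_band": "18-30", "approved": False},
    {"age_band": "60+", "approved": False},
    {"age_band": "60+", "approved": False},
]
print(flag_deviations(approval_rates_by_group(data)))  # flags both bands in this toy case
```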
Do you have any concrete examples?
One example of our approach is CV screening. The KBC Talent Acquisition team uses AI for support, but always maintains the human touch to ensure fair treatment of all applicants. AI will never autonomously make a recruitment decision; that decision remains one made by the recruiter together with the manager.
During the modelling and piloting phase, technical fairness checks are performed. Before an AI model is deployed, the final advice and the trusted AI impact assessment are reviewed. That documentation contains the benefits of the AI model, the risks, and the advice from the legal and compliance departments.
All information about the AI model is documented in the same tool, providing an overview of the impact assessments, advice and mitigations. After deployment a monitoring process follows, to ensure maintained model performance. By addressing any issues early on, we can avoid future problems.
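As one hedged illustration of what such monitoring can look like in practice (the metric, baseline and tolerance below are assumptions, not KBC's actual setup), a scheduled job might compare current model performance with the level measured at deployment and raise an alert when it degrades:

```python
# Illustrative sketch of post-deployment performance monitoring.
# The accuracy metric and tolerance value are hypothetical examples.

def check_performance(baseline_accuracy: float,
                      recent_accuracy: float,
                      tolerance: float = 0.05) -> str:
    """Compare recent model accuracy against the accuracy measured at deployment.

    Returns a status string; a real monitoring job would raise an alert or
    open a review ticket so issues can be addressed early.
    """
    drop = baseline_accuracy - recent_accuracy
    if drop > tolerance:
        return f"ALERT: accuracy dropped by {drop:.1%}, review the model"
    return "OK: performance within tolerance"

# Example: model accepted at 91% accuracy, now measuring 84% on recent cases.
print(check_performance(baseline_accuracy=0.91, recent_accuracy=0.84))
```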
The entire process is outlined and scheduled for review by the AI steering committee, where specific points can be debated further and the process is formally approved and confirmed through thorough documentation.
How does your organisation address the friction between innovation and ethical use of AI?
Surprising as it may seem, I see friction as an opportunity. By integrating the ethical aspect from the beginning and applying the five dimensions of responsible behaviour, we can identify and mitigate many risks early on.
Although this initially takes more time, it pays off in the medium and long term. It creates trust, which is crucial when working with AI and generative AI. People are sometimes wary of AI models, but by ensuring transparency and ethical considerations, we can carefully build trust.