What should a regulated financial services business consider when deciding whether to use an AI tool?

02 January 2024 | Nicola Crowell

Introduction

2023 saw the use of Artificial Intelligence (AI) become increasingly common within financial services and other professions. AI tools already range from chatbots such as ChatGPT, and the automation of administrative tasks, through to image recognition and reaching diagnoses from radiology scans, and beyond. And we have barely started to learn what AI may be capable of.

It would be interesting to understand whether financial services businesses are comfortable that they understand the existing level of AI use by their personnel. Many people already use ChatGPT and other chatbot tools on a daily basis, but we have yet to see signs of businesses communicating to their staff how AI may or may not be used within the business. For example, is the use of chatbots acceptable and, if so, which chatbots may be used, for which purposes, and what controls or checks should be in place?

Obviously, the potential impact of a new AI tool will be determined by its use. Colleagues using a chatbot for tasks such as research or producing first drafts of documents will likely have relatively little impact. Using chatbots to produce documents that receive only a cursory review before being issued to clients is riskier and has a higher potential impact. At the other end of the spectrum, introducing an AI system into your business’ control framework is a significant decision requiring careful consideration, to ensure the business receives the benefits of the new technology whilst minimising the risks and maintaining compliance, risk management, and the overall effectiveness of your control framework.

Factors to be considered

A business should consider the following (frequently interconnected) key factors:

Regulatory compliance

A business must understand and comply with any legal and regulatory requirements governing the use of AI in its jurisdiction and industry. Regulations may vary by jurisdiction, and it's essential for the business to satisfy itself and document how it ensures that the AI system aligns with applicable laws and standards.

Many regulators, quite rightly, focus on the potential impact on clients, and any systems which may affect the provision of services and/or information to clients are considered particularly important. Regulators recognise that AI may bring important benefits to clients, such as improved outcomes, more effective matching of clients to products and services, increased financial access and an enhanced ability to identify and support vulnerable clients. However, they are also keen to avoid potential negative impacts such as the harmful targeting of consumers’ behavioural biases, discriminatory decisions, increased financial exclusion and, hence, reduced trust.

Transparency and explainability

AI systems, especially those involving machine learning, can be complex and opaque, so the transparency and explainability of more complex systems in particular can be a real problem. Businesses should consider how transparent the algorithms they use are and ensure they understand, to the extent possible, how the AI has reached a “decision”. Due to the sophistication of some AI, however, it may be the case that a business is simply not able to determine how the AI reached its decision; for example, it may have spotted a statistical trend beyond the capabilities of the average human, and AI is typically not capable of explaining its rationale. In such circumstances, a business must be rigorous in ensuring that a large and varied data set is used for training and testing the algorithm and that the business is comfortable with the approach used.

Without such a level of understanding, the business will not be able to assess the use of the algorithm, in particular the risks arising from its use, or to exercise effective control over the AI. In addition, it’s possible that regulators may require businesses to provide clear explanations of AI-driven systems and decisions.

Risk management and model validation

As part of understanding an AI system, a business should conduct a thorough risk assessment to identify potential risks associated with the system, including the operational, legal, and reputational risks. The business will then be able to implement risk mitigation strategies and controls to manage these risks effectively.

This risk assessment should likely include a rigorous process of validating and testing the AI model, including assessing its accuracy, fairness and reliability. In addition, the business should ensure that the AI system aligns with the business’ values and ethical standards and that it does not produce biased or discriminatory outcomes.

Once introduced, the business should monitor and validate the AI on an ongoing basis.

Human oversight and accountability

A key element will be to determine how human (senior management) oversight of the AI system will be established and maintained. This should include clearly defined roles and responsibilities within the business and access to external expert advisors if the business doesn’t have any such experts in-house. The business should consider establishing accountability mechanisms for decision-making relating to the system and ensure that there are procedures in place to address any issues that may arise.

Documentation and auditing

As with most controls, a business using an AI tool should maintain comprehensive documentation of the tool, how it is used within the business, any process followed to implement the tool or tailor it for the business, and its algorithms and decision-making logic.

This can then be used as the foundation for the governance of the tool or system, including ongoing monitoring and any internal or external reviews of its effectiveness.

Training and skill development

Businesses should, of course, invest in training and skill development for those employees who will be working with or overseeing the AI system or tool, with the aim of ensuring that staff have the necessary expertise to understand and manage the technology effectively.

As noted above, businesses may need to consider engaging external AI experts if such expertise is not available in-house.

Continuous monitoring and improvement

Any business adopting an AI system into its control framework should ensure there is a process for the continuous monitoring and improvement of the AI system. The business should regularly assess its system’s performance, make updates as necessary, and ensure that key personnel stay informed about relevant advancements in AI technology. Reports should be submitted to senior management to allow them to exercise effective governance and oversight.

Legal and contractual considerations

Depending on how the AI tool or system is being used within the business, it may be necessary for the business to address any legal and contractual considerations related to its use. These may include contractual agreements with AI vendors, intellectual property rights, and legal liabilities. Financial services businesses in Jersey should also consider whether the tool or system falls within the scope of the JFSC’s Outsourcing Policy and take any appropriate steps.

Data privacy and security

Finally, where applicable, businesses should address data privacy and security concerns associated with the use of AI. Any relevant AI system must comply with data protection regulations, and the business must ensure that it implements robust security measures to safeguard any sensitive information processed by the system.

These measures could include something as simple as reminding your colleagues that chatbots are not secure. Anyone using these tools to, for example, produce minutes or correspondence should not input any private or confidential information, such as client names, details of their structures, or perhaps even details of a transaction, into the chatbot.

Conclusions

By carefully considering these factors, a regulated financial services business should be able to integrate AI tools and systems into its control framework in a responsible, compliant and effective manner. Consulting legal and compliance advisers with expertise in AI and financial services may also be beneficial.

Needless to say, this is an incredibly fast-moving area and it’s difficult to keep up with developments. In the UK, the Financial Conduct Authority and the Bank of England have been proactively considering the potential impact of AI and other technologies on the businesses they regulate and on how they should therefore regulate those businesses. The Guernsey Financial Services Commission is working with Guernsey Finance and has made recordings from its conference on “AI Finance in Guernsey: The Power and the Implications” available on its website. And the Jersey Financial Services Commission has established its Innovations Hub to support the development and adoption of new and innovative technology aimed at financial services and to be a dedicated point of contact for firms to raise relevant queries.

We’d recommend keeping an eye out for relevant announcements from these organisations, as well as the information and events produced by Jersey Finance and Digital Jersey. As always, comprehensive documentation of your systems, implementation decisions, testing and ongoing monitoring is key, as is being prepared to take the necessary time to decide on and implement a new system and to seek external expert support where needed.
