Regulation and Risk Management of AI in Financial Services 2022 - Highlights and key takeaways from the event
At the recent Regulation and Risk Management of AI in Financial Services Summit, several key themes ran throughout the day’s many discussions. When debating the challenges that currently face AI and how it could potentially change financial services, the panellists repeatedly returned to how government policies, company implementation, and ethics would shape AI’s future in the industry.
Here are our main takeaways from the day’s events:
Collaborate to create coherence and clarity
The resounding conclusion from every policy discussion was that, for AI to be regulated efficiently and effectively, governments must take a collaborative approach - at both a national and international level. While there will undoubtedly be a ‘tidal wave’ of regulation around the globe, if government departments work together productively, it will be possible to create coherent public policy that provides much-needed clarity.
And it is clarity that businesses and companies want.
Clarity will help the industry integrate AI into its cultures and structures in an optimal manner. Through robust regulation, government policy can help ensure the smooth introduction and use of AI in both the short and long term - reducing risk and providing a clear framework for AI’s future.
Company policy implementation - the earlier, the better
Crucially, to ensure the smooth introduction of AI on a company level, C-suite execs need to be aware of the impact that AI can, and will, have. The earlier this awareness comes, the better.
Technological innovations have historically taken organisations by surprise: whether it was the internet, digitalisation or GDPR, businesses underestimated how big an impact they would have. To prevent this from happening again, C-suite execs not only need to grasp AI’s potential power, they also need to be steadfast in implementing regulatory-sound policy within their businesses.
The difficulty in doing so - and the reason so many companies have struggled to implement policy successfully so far - is that responsibility for compliance with AI regulation currently falls between the cracks. Is it the legal department’s responsibility, or the software engineers’? Or would it be better to create roles or departments dedicated to the problem, such as an AI ethics board? Many banks have created ethics teams that sit across multiple departments to monitor and advise on new technology, ensuring it is implemented responsibly and that bias is avoided where possible - a promising start.
Ethics - the art of regulating artificial intelligence
Every conversation at the summit touched on ethics. While every discussion acknowledged that data ethics would make regulating AI highly complex, there was a common understanding that ethical considerations must be taken into account, given how powerful and broad-reaching AI could be.
In fact, some of AI’s power can already be seen. Common use cases - from personalisation that enhances customer experience to sentence suggestions in Gmail that help craft better-worded emails - are now part of the everyday. At the cutting edge, a recent boom in generative AI software is driving a surge in demand: unsupervised or semi-supervised algorithms that use existing content to create new content, such as OpenAI’s GPT-3 and ChatGPT products.
However, while these applications seem harmless enough, ethical regulation needs to be in place before AI develops further. What will happen when AI technology advances to a level where it could put people out of jobs and potentially increase wealth inequality in our society?
Or what about social media algorithms that push certain content? We have already seen that this can lead to echo chambers and radicalisation. AI has even been used to generate news articles and political policies - the Synthetic Party in Denmark being the ultimate example. The party follows a platform created by an AI, with a chatbot named Leader Lars as its frontman. But having technology make automated government decisions arguably carries a great deal of risk. Effective regulation should ensure that we share a baseline understanding of this technology’s parameters.
Keeping ethics discussions live and open will be vital in establishing a comprehensive regulatory environment. Doing so will make the most of AI’s potential, while simultaneously minimising its risks.