The Hazy team attended the Future Data and FS Summit last month, an insightful day covering the current status, latest developments and foreseeable future of data and AI within financial services.
Several key themes emerged on the day:
- Synthetic data is at an inflection point, with many financial services firms using it to unlock value from their data
- The regulation space is growing rapidly; partly due to macro-economic factors, partly due to the pace of technological development
- Building trust in AI and in those using and regulating it is critical to successful adoption of these technologies
We delve into these below.
Data and AI: fundamental for FS but scaling is a challenge
Financial services is at the cutting edge of adopting new ways to use its data. For most firms, data is a core asset, central to their business activities.
Yet most aren’t using data to its full potential. Deploying and scaling data projects means weighing numerous factors - governance, regulation, privacy and ethical considerations, as well as organisational change and embedding a data-first culture - which makes driving sustained value a challenge.
Before embarking on any data or AI project, banks should consider all of the data protection and business change requirements from the outset. John Edwards from the ICO kicked off the event by stating that ‘data protection by design and by default is necessary. It is not optional; it is the law’. This firm line set the tone for the day’s discussions: governance and regulation should underpin all that banks do in this space.
Sholthana Begum from the FCA echoed the point and used the analogy of a house: as with most transformation projects, weak foundations will impact the structure and success of the whole thing. This is particularly true in the shifting sectors of data and AI. Data harmonisation is essential to getting the most value from the asset.
Ian Phoenix, also from the FCA, highlighted that the fundamental success of AI relies on good data. As ever, ‘GIGO’ (garbage in, garbage out) was a term most presenters used. Interestingly, when discussing some of the recent large language models, Gail Crawford from Latham & Watkins stated that 1 million images need to be read to get 90% accurate output. Whatever the size of the AI/ML model, you need large amounts of good-quality data.
The FCA’s recently published discussion paper gathers views on current market practices, the challenges posed by current regulation and what generative AI means for governance and accountability.
Trust can be easily tarnished
Whilst financial services was heralded as a sector at the forefront of innovation using data and AI, it also topped the charts as the sector customers complain about most, with over 40% of complaints relating to subject access rights under article 15 of the GDPR (source: ICO) - a statistic backed up by a review of the biggest fines issued since the GDPR came into force.
The audience was reminded of the regulators’ role to empower financial services to use data safely, maintain the trust of customers and stakeholders, and comply with the law. And if financial firms can’t keep systems and data safe and secure, who is going to trust them with their business and money?
‘A fine from the ICO is nothing if your customers see you cannot be trusted’ (John Edwards, ICO)
Synthetic data is no longer nascent; it is proven and being adopted
There was a noticeable shift from a year ago in how financial services firms view privacy-enhancing technologies (PETs), and synthetic data in particular. Synthetic data was discussed throughout the day and had a panel dedicated to it. No longer restricted to small projects or siloed teams, synthetic data is now a ‘when’, not an ‘if’.
Ian Phoenix touched upon the FCA’s exploration of synthetic data both as a technology needing to be regulated and as a tool it uses in practice. The recently formed Synthetic Data Expert Group will bring together cross-sector industry experts to explore how to properly govern and regulate the technology as its adoption grows. The regulator is also using synthetic data in its own organisation to enable data sharing and to build advanced AI models whilst protecting sensitive information. In practice that means using synthetic data to detect fraud and improve AML (anti-money laundering) capabilities - just one potential use case of many.
These applications are beneficial to both financial services and end consumers.
In the panel ‘The value of synthetic data in financial services. Other alternatives to “real data”’, Frank De Jonghe, EY, addressed the audience directly with the statement ‘synthetic data should be a part of your data strategy’. Harry Keen, Hazy, touched upon the broad application of synthetic data: ‘everybody needs data. The great thing about synthetic data is it is very universal. [At Hazy] our customers are starting to embed it as a strategic initiative now, as a layer between end users and their core data lake, with agnostic use cases across the business.’ He highlighted that the drivers for this technology are partly compliance and partly value generation.
Belinda Joshi, Lloyds, emphasised the need to establish guardrails to measure the robustness and quality of the synthetic data but also spoke of her excitement about the software: noting the ‘multitude of good and powerful reasons to use it’.
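Guardrails like those Belinda described can start as simple distribution-level checks. As an illustrative sketch (not Hazy’s or Lloyds’ actual methodology), a plain-Python two-sample Kolmogorov–Smirnov statistic can flag a synthetic column whose distribution has drifted too far from the source data:

```python
import bisect

def ks_statistic(real, synth):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the real and synthetic samples.
    0.0 means identical distributions; 1.0 means fully disjoint ones."""
    real_s, synth_s = sorted(real), sorted(synth)
    gap = 0.0
    for v in real_s + synth_s:
        cdf_real = bisect.bisect_right(real_s, v) / len(real_s)
        cdf_synth = bisect.bisect_right(synth_s, v) / len(synth_s)
        gap = max(gap, abs(cdf_real - cdf_synth))
    return gap

# Hypothetical guardrail: reject a synthetic column whose distribution
# drifts too far from the real one (the 0.3 threshold is illustrative).
real_ages = [23, 31, 35, 42, 47, 55, 61]
synth_ages = [25, 30, 36, 44, 46, 53, 60]
assert ks_statistic(real_ages, synth_ages) < 0.3
```

In practice, quality frameworks combine several such metrics (per-column distributions, cross-column correlations, downstream model performance), and acceptable thresholds are domain-specific.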
The prediction of the day was that over the next 12 months, the first financial services organisations will move from siloed synthetic data projects to deploying synthetic data across their whole organisation.
The explosion of generative AI
The generative AI market was described as ‘volatile’ due to its speed of growth and unknown parameters. Legal firms like Latham & Watkins are receiving numerous questions about open source usage of generative AI. The uncertainty about the space was apparent and Gail Crawford touched upon some weighty legal questions circling generative AI at the moment: what does it mean for data subject rights? Where does the IP sit? What does it mean for commercial usage and contracts? How does it sit within ESG, ethics and bias? She also highlighted a big potential risk in fraudulent activity.
The sense of uncertainty was shared by lawyers, regulators, vendors and FS firms. No one has a clear direction, and there was a sense of firms muddling through. The key to the success of this approach will be muddling through collaboratively, not individually. John Edwards said that ‘rapid development and deployment of this technology requires regulators to examine the emerging risks and opportunities.’ Whilst the opportunities of generative AI are novel, data protection principles and the need to respect privacy stay the same.
Andy Thornley, techUK, attributed the step change in public consciousness, compared with broad-term ‘general’ AI, to the pervasiveness of generative AI: ‘we are in a new salient time as the everyday person can use GPT; usually AI is hidden.’ Everyday corporate roles, from marketing to development to customer service, are using ChatGPT frequently and without friction. Considering the scale of adoption, this is a huge shift in a matter of months, and one that is increasingly embedded both inside and outside the corporate sphere. It will take proven generative technologies such as synthetic data to build trust in the space and enable use cases to be built out for more nascent applications of generative AI in enterprises.
Successful regulation demands a holistic approach
Many of the speakers reflected on the ever-increasing and sprawling nature of the regulatory space spanning data and AI (see our Q1 policy update and keep your eyes peeled for our Q2 release).
Julian Cunningham-Day, Linklaters, spoke of the omnibus data laws that now pervade the data and AI space. He grouped the global regulatory landscape into three core pillars: data regulation, platform regulation and fintech regulation.
Yet in order to manage the diversity and complexity of regulation, financial firms - especially those that operate globally - need to coordinate and efficiently assimilate all the different types of regulation in a holistic way.
‘If you’re a head of compliance, I’d say your game of whack-a-mole to control the regulatory risk in your business has just levelled up considerably.’ (Julian Cunningham-Day, Linklaters)
Running separate assessment processes for privacy risk, security risk, platform risk and model risk management against each new AI project is near impossible and will hugely hinder project timelines. Julian suggested that firms should follow the lead of regulators and undertake a holistic review of the ecosystem, especially as the same core principles sit at the heart of most of this regulation:
- Ensure oversight and accountability of senior management
- Understand and explain what’s going on
- Validate initial findings
- Keep a watchful eye on products as they evolve
- Manage the resources needed to build and maintain responsible AI
- Manage the supply chain, which will proliferate with data and technology inputs
Gail Crawford from Latham & Watkins echoed Linklaters’ view, suggesting a much ‘less siloed approach’ to the regulatory landscape, and advised the audience to expect the unexpected, educate up to board level and increase the cadence of updates on strategy, risk, opportunity and communication. Governance is crucial to the successful adoption of synthetic data - and ultimately any technology - in an enterprise.
Overall, regulated firms need a new universal approach to managing regulation: one that identifies risks in new business processes - in particular AI - and ensures those compliance efforts work together. This should allow teams within organisations, including compliance and legal functions, to support the pace of development and growth. One suggestion was to frame GDPR and other privacy and data regulation less as ‘risk-based law’ and more as a ‘risk-based prism’.
Lead from the top
Across all of the talks, a consistent theme was that executive and board members have to understand the technology - be it synthetic data, digital assets or AI tools - and its surrounding regulation.
Gaining buy-in from leaders should also reduce individual risk. Instead of a single C-suite role blocking a project, projects should be assessed holistically by a group of leaders who are educated on the subject and can weigh the benefits against the risks.
The day solidified the excitement around the changing shape of data and AI but was underpinned by the importance of openness and collaboration in governing it effectively. Herd mentality in this instance is not acceptable: at best it risks slowed or failed projects, at worst it can result in total loss of customer trust. Leaders should facilitate transparency with customers, regulators and employees and develop rigorous but not restrictive governance frameworks to drive true value from these enabling technologies in a safe way.
If you'd like to learn more about synthetic data and what it can do for you, get in touch with our team of experts here.