
LMA Technology Newsletter | April 2026


Welcome to the second edition of the LMA’s Tech Newsletter: your front-row seat to the regulatory shifts redefining technology in the loan markets, paired with insights designed to keep you informed, inspired, and one step ahead. It’s been a standout month for the LMA’s Technology and Innovation team, marked by an exciting series of firsts that goes to show how quickly this space is evolving.

LMA’s inaugural AI Innovation Seminar 

We proudly delivered our inaugural Innovation Seminar, titled “AI in Financial Services”. It was an outstanding success, with an impressive 600 attendees, of whom 50 attended in person. The event featured insightful contributions from Clifford Chance, Ace Consulting, Sirion, Nammu21, Linklaters, and, of course, the LMA team (watch the recording here). The energy carried through to the networking drinks that followed, creating a great opportunity to connect and exchange ideas. A huge thank you to everyone who attended and to those who helped bring the event to life. We’re already looking forward to welcoming you to our next Innovation event, where we’ll be diving into AI Assurance in the Agentic Era with guest speaker Professor Carsten Maple from the Alan Turing Institute. Register here.

AI in Financial Markets: Moving from Hype to Infrastructure  

We are also delighted to publish our very first technology paper, “AI in Financial Markets”, in which we explore how firms across financial markets are approaching the use of AI and where meaningful operational value is beginning to emerge. The paper considers how governance and regulatory expectations are shaping deployment, highlights areas where AI is delivering measurable benefits, and examines emerging capabilities in document-heavy and portfolio-level workflows. You can learn more about the long-term impact of AI within the financial markets here.

LMA’s FCA Innovation Roundtable 

We were thrilled to host the FCA Innovation Roundtable, bringing together senior representatives from member firms to explore the FCA’s priorities around innovation, growth, and competitiveness, as well as its evolving regulatory approach to emerging areas such as AI. With attendees from over 20 institutions across the market, it was fantastic to see such a diverse range of perspectives shared. The meeting was held under the Chatham House Rule, so please contact your LMA representative for details. All parties noted a strong appetite to keep the dialogue going, a great sign that we’re viewed as a constructive voice in this fast-moving space.


Regulation Watch (In partnership with Perkins Coie)

Pathbreaking AI models – risks and legal obligations 

Earlier this month, Anthropic announced Claude Mythos Preview, a general-purpose model with the unprecedented ability to find and exploit security flaws at scale. Among other things, the model has identified thousands of high-severity vulnerabilities, including some in every major browser and operating system. Given its potential for misuse, the company does not intend to release it publicly in its current form. Instead, through Project Glasswing, it will make the model available exclusively to select partners for their defensive security work and, as a starting point, to several organisations that build and maintain critical software infrastructure. OpenAI has also recently announced that over the next few months it will be fine-tuning its models specifically to enable defensive cybersecurity use cases. With these developments, the power of AI to simultaneously enhance and endanger cybersecurity is more real than ever, and should give organisations across the board renewed impetus to urgently strengthen their security protocols and vendor oversight.

As noted by the UK’s National Cyber Security Centre, AI will increasingly be used to expose organisations that have failed to take appropriate steps to safeguard their cybersecurity, and organisations will be under more acute pressure to patch vulnerabilities more quickly. With this in mind, organisations should look at ways to enhance their cybersecurity posture (including whether they can harness AI tools to detect vulnerabilities) and update incident response frameworks to account for both AI-powered attacks and any AI-enhanced defence capabilities they adopt. These enhancements will also reinforce compliance with existing legal obligations regarding cybersecurity, such as:

  • The requirement under Article 32 GDPR for controllers and processors to “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk” while taking into account the state of the art and cost of implementation, amongst other considerations.  
  • The requirement under Section II of DORA for financial entities to “continuously monitor and control the security and functioning of ICT systems and tools”, minimise ICT risks and detect anomalous activities. 
  • The requirement under Article 14 of the EU’s Cyber Resilience Act, applying from this September, for manufacturers of software and hardware products with digital elements to report “actively exploited vulnerabilities” and “severe incidents”. 

Industry authorities are currently wrestling with how to deal with this new frontier: it was reported soon after Anthropic’s announcement of Claude Mythos Preview that the UK’s National Cyber Security Centre, Financial Conduct Authority and Bank of England had begun urgent discussions on how to address new AI-related threats. In addition, as AI-powered vulnerability detection enters the commercial domain and its adoption spreads, it is plausible that privacy regulators and industry authorities will come to expect organisations to use AI tools for defensive cybersecurity purposes. Regardless, it is now increasingly clear that organisations must remain one step ahead of those who may exploit those same tools against them.

UK regulators intensify AI oversight in financial services 

The FCA launched AI Live Testing Cohort 2 this month, enabling firms to trial AI with actual consumers under FCA oversight. The January 2026 Treasury Committee report, which found that over 75% of UK financial services firms now use AI, called on HM Treasury to designate major AI and cloud providers as Critical Third Parties (no designations have yet been made under that regime) and on the FCA to publish, by the end of 2026, practical guidance on how consumer protection rules apply to AI and on SMCR accountability for AI use. LMA members deploying AI in credit risk, loan monitoring, compliance or client-facing functions should assess their governance frameworks now against these emerging supervisory expectations.

On 1 April 2026, the Bank of England and PRA published their response to the Government’s January 2026 request that regulators set out plans for enabling safe AI innovation. The letter confirms that the BoE and PRA will maintain their technology-agnostic, principles-based approach and will not introduce AI-specific rules. However, AI adoption is now a named supervisory priority, meaning firms should expect direct questions on governance, model risk management, and AI oversight in supervisory dialogues. Separately, the BoE’s Financial Policy Committee has directed both the BoE and the FCA to undertake further work on the financial stability risks from frontier AI agents, which could include systems capable of executing trades, moving funds or making independent decisions.

New cryptoasset regime in the UK 

The UK is introducing a financial services regulatory regime for cryptoassets through the Financial Services and Markets Act 2000 (Cryptoassets) Regulations 2026, published on 4 February 2026. Under this new regime, any firm conducting in-scope cryptoasset-related activities in or from the UK will require authorisation from the FCA. The new regime will cover “qualifying cryptoassets” and “specified investment cryptoassets” and will regulate activities related to these assets.

The regime is expected to come into force in October 2027. Firms that want to undertake these regulated activities will be able to apply to the FCA for authorisation between 30 September 2026 and 28 February 2027. The FCA encourages firms to engage with it early, before submitting their formal applications.

EU AI Act: Parliament votes for deadline extension – Council approval pending 

It seems increasingly likely that the compliance deadline for high-risk AI systems will be pushed back. On 26 March 2026, the European Parliament voted in favour of extending the high-risk AI compliance deadline to 2 December 2027 for standalone high-risk AI systems (Annex III), and to 2 August 2028 for AI systems embedded in regulated products (Annex I). The Parliament also backed an extension of the AI watermarking obligations to 2 November 2026. These changes form part of the Digital Omnibus package. The Council of the EU has not yet formally approved the extensions; a political agreement may be reached around 28 April, though the amended Regulation will not take effect until published in the Official Journal. If adopted, these delays should give firms additional breathing room to comply. However, because the original 2 August 2026 deadline remains legally in force until the delay is formally adopted, firms should still prioritise compliance efforts.

Tech News Update

UK financial regulators rush to assess risks of Anthropic’s latest AI model 

UK financial regulators are urgently engaging with cyber security authorities and major financial institutions to assess and address potential vulnerabilities identified by Anthropic’s latest AI model, amid concerns over risks to the resilience of the UK financial system — read more to find out how regulators and firms are responding.

Bank of England and FCA commit to action on AI following warnings from MPs 

The Bank of England is exploring the use of AI agents in trading markets, including risks such as ‘herding’ behaviour, as part of its response to the Treasury Committee’s review of AI in financial services. With the FCA set to issue best practice guidance and ongoing debate around Critical Third Parties, read more to find out what this means for firms and financial stability.

U.S.-UK Financial Regulatory Working Group Winter 2026: Joint Statement 

UK and US regulators are strengthening coordination on AI, digital assets and financial stability, signalling growing focus on cross-border risks and innovation. With AI, resilience and critical third-party oversight high on the agenda, read more to find out what this means for firms.

75% of new code at Google is AI-generated 

Sundar Pichai, CEO of Google, recently said that AI now writes 75% of Google’s new code. Read more to find out how this advancement is supporting the company’s growth.

Upcoming Events

Know your Agent: AI Assurance in the Agentic Era

8 June 2026

Key Contacts

Amandeep Luther

Head of Technology and Innovation at LMA | Amandeep.luther@lma.eu.com

Stacy Young

Tech & Data Lawyer at Perkins Coie | Stacyyoung@perkinscoie.com
