
A shortcut best avoided: The dangers of using client portfolio data in your test environments


Across the private banking and wealth management industry, high-net-worth individuals’ most confidential data is too often used inappropriately – quietly and complacently copied into less-secure test environments to support system upgrades, digital transformation programmes and vendor-led delivery. Firms pursuing this practice aren’t just risking their reputation: they may also be sleepwalking into civil or even criminal prosecution. In this month’s Assured Thought blog, Dominic Tovey, our Head of Delivery, explains why using production data in testing is a regulatory minefield best avoided entirely – and why routine practice is likely to result in adverse consequences for your firm eventually, however carefully you tread.

Dominic Tovey

Head of Delivery

In our industry, trust is far more than a marketing buzzword. In fact, it’s the foundation of every client relationship.

High-net-worth individuals entrust private banking and wealth management firms with some of their most sensitive and personal information – from identity and family details to investment portfolios, transaction histories and long-term financial plans. GDPR requires that such information – even after data masking – be tightly ring-fenced to prevent inappropriate or accidental sharing, internally, let alone externally. Despite this, it has become normal practice at many organisations to use this data in relatively insecure test environments.

At Assured Thought, we’ve even seen instances of firms using real, unobfuscated client data in test environments as a matter of course, seemingly without any awareness of potential negative consequences to their clients or themselves.

The uncomfortable reality, however, is that using such data in test environments creates far greater regulatory exposure and reputational risk than many private banks realise. It is a level of risk they would never approve in production – and production environments are usually heavily controlled, monitored and audited in ways that test environments very rarely are.

It’s no wonder this practice is attracting growing scrutiny from regulators, auditors and risk committees, who increasingly recognise it as a red flag: a sign of an organisation that perhaps doesn’t understand the consequences of its actions around client data.


A longstanding challenge

Sourcing data that’s appropriate and safe to test with is not a new challenge.

For as long as modern testing has existed, organisations have struggled to create realistic, usable test data that doesn’t rely on production systems. Over the years, our industry has seen a steady rise in tools designed to address this challenge, from data generation and obfuscation platforms to sophisticated test data management solutions and in-house frameworks built by banks themselves.

Today, more than ever before, regulatory scrutiny is sharper. Delivery ecosystems are broader. Cloud platforms, third-party vendors and distributed teams mean sensitive data now travels further and can be accessed by more people.

Yet despite these new tools and heightened dangers, the default behaviour in many organisations has remained largely unchanged: when delivery pressure increases and timelines tighten, unanonymised production data is still often seen as the fastest and easiest option. Many organisations treat these tools and solutions as optional enhancements rather than essential controls, failing to acknowledge that their routine use of sensitive production data in test environments places them in the middle of an increasingly dense regulatory and reputational minefield.

Why portfolio and KYC data are classified as high risk

Wealth platforms don’t just hold personal data. They also hold information that regulators increasingly view as ‘high impact’.

This includes full identity records, source of wealth and source of funds data, detailed transaction histories, beneficiary structures, investment strategies and sometimes even notes from relationship managers and advisors. Combined, this can create a highly detailed financial profile of an individual and their family.

Under GDPR, much of this data falls into categories that require enhanced protection, and failing to provide that protection might have regulatory consequences. From a business perspective, the impact can go even further: a breach involving high-net-worth client data might permanently damage brand reputation and client confidence.

If used in a typical, less-secure test environment, this data could be accessed by a wider range of people than under production protocols. Developers, testers, third-party vendors, systems integrators and offshore delivery teams may all have some level of access, and each additional access point increases exposure.

Navigating the regulatory minefield: UK GDPR

Under UK GDPR, unless personal data is truly anonymised, firms must justify a lawful basis for using it in testing – and this purpose must be compatible with the original reason the data was collected. Only the minimum data necessary for testing should be used, which must be properly pseudonymised, have rigorous access controls and a defined retention and destruction process, and not be retained for any longer than is strictly necessary.
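To make the pseudonymisation requirement concrete, here is a minimal sketch of keyed pseudonymisation using only Python's standard library. The field names and key handling are illustrative assumptions, not a compliance recipe: in practice the key would be managed and rotated outside the test environment, and pseudonymisation alone does not make data anonymous under UK GDPR.

```python
import hmac
import hashlib

# Hypothetical key for illustration only. In a real deployment the key
# lives outside the test environment, with its own rotation, access
# control and destruction policy.
SECRET_KEY = b"rotate-me-and-store-outside-test"

def pseudonymise(value: str) -> str:
    """Return a stable, non-reversible token for a client identifier.

    A keyed HMAC keeps referential integrity across tables (the same
    input always yields the same token) without exposing the raw
    identifier, so joins in test data still work.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Same client ID -> same token, so relationships survive the transform:
assert pseudonymise("CLIENT-0042") == pseudonymise("CLIENT-0042")
assert pseudonymise("CLIENT-0042") != pseudonymise("CLIENT-0043")
```

Note that because the mapping is deterministic, anyone holding the key can re-link tokens to clients – which is exactly why UK GDPR treats pseudonymised data as still personal.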

But there’s another hurdle: UK GDPR also requires that this personal data be used only in test environments whose security standards are at least equal to those of production. In our experience, few organisations have test environments that can meet this standard. Using unanonymised personal data in such typically less-secure testing environments may also breach the operational resilience and systems and controls obligations laid out by the FCA.

Firms using unanonymised live client data in their test environments must be able to demonstrate compliance with UK GDPR regulations and standards, or may be subject to fines or sanctions enforced by the Information Commissioner’s Office – or even, in certain circumstances, criminal liability under the Data Protection Act 2018.

True anonymisation of client data has its own inherent complications, and it’s clear that adhering to the regulations in order to use obfuscated, pseudonymised client data for testing is far from the simple activity many organisations seem to think it might be.

Though it’s certainly possible to safely navigate this regulatory and reputational minefield, it’s important to remember that test parameters, technology, and cybersecurity and regulatory factors can be subject to rapid change. The mines in this field move, so pursuing a long-term policy of using unanonymised client data for testing is perhaps less like crossing a complex minefield and more like sitting on a ticking time bomb.

The growing pressure from regulators and risk committees

Regulatory scrutiny is no longer limited to production systems.

In accordance with UK GDPR, auditors and regulators are increasingly asking where test data comes from, how it is handled, who can access it and how long it is retained. In many cases, organisations struggle to provide the clear, documented answers the regulations insist upon.

Internal risk and compliance teams are also raising concerns. As wealth platforms become more complex and delivery models more distributed, it becomes harder to track where client data is being replicated and who ultimately has access to it.

For private banks operating across multiple regions, this challenge is amplified by data residency requirements and cross-border data transfer rules. A test environment hosted in one jurisdiction but populated with client data from another can quickly create compliance issues that are difficult to unwind.

How third-party and offshore testing increases exposure

Few large wealth management programmes are delivered entirely in-house.

System upgrades, platform migrations and digital channels are often supported by external partners, cloud providers and offshore delivery teams. This creates an extended ecosystem in which sensitive data can move beyond the direct control of the bank.

Even with contractual safeguards in place, the operational reality is that once real client data enters these environments, visibility and traceability become harder to maintain. Simple questions, such as who accessed what data and when, can become difficult to answer.

From an audit perspective, this lack of transparency is often more concerning than the original decision to use production data in the first place.

Why masking alone often falls short

Many organisations rely on data masking as their primary control. While masking can reduce obvious identifiers such as names and account numbers, it often fails to address deeper risks.

In complex wealth platforms, data relationships matter. Portfolio structures, transaction patterns and behavioural data can still be used to re-identify individuals, especially when combined with external information.
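The re-identification risk can be shown with a toy sketch. The records and figures below are entirely fabricated for illustration: names are masked, yet a distinctive portfolio footprint still singles out one client once matched against outside information such as public filings or press coverage.

```python
# Fabricated, illustrative records: the names are masked, but the
# behavioural data (holdings count, largest trade) is left intact,
# as naive masking schemes often do.
masked_records = [
    {"name": "XXXX", "holdings": 3,  "largest_trade": 12_500},
    {"name": "XXXX", "holdings": 41, "largest_trade": 2_750_000},
    {"name": "XXXX", "holdings": 5,  "largest_trade": 9_800},
]

# An observer who knows from external sources that a particular client
# recently made a trade above 1m can pick out that client's record
# despite the mask, because the pattern is unique in the dataset.
candidates = [r for r in masked_records if r["largest_trade"] > 1_000_000]
assert len(candidates) == 1  # unique footprint => re-identifiable
```

The point is not that masking is useless, but that identifiers are only one part of what makes a record identifying; relationships and patterns must be transformed too.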

Poorly implemented masking can also damage data quality. Test scenarios may no longer reflect real-world behaviour, leading to gaps in coverage, missed defects and false confidence in system performance.

The result is a solution that neither fully protects client data nor fully supports effective testing.

What a bank-grade test data compliance model looks like

Leading organisations are starting to treat test data as a governance and risk issue, not just a delivery problem.

A bank-grade approach typically includes clear ownership of test data across technology, compliance and risk functions. It defines where data can be sourced, how it can be transformed, who can access it and how long it can be retained.

Automation plays a critical role, particularly through synthetic data generation and controlled data provisioning. Instead of copying production data, teams can create realistic, compliant datasets that support multiple test types without exposing real client information.
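As a rough illustration of the synthetic approach, the sketch below generates fabricated client records using only Python's standard library. The field names and value ranges are assumptions, not a real platform schema; dedicated test data management tools would normally produce richer, schema-aware and referentially consistent datasets.

```python
import random
import string

# Seeding makes the synthetic dataset deterministic, so a failing test
# can be reproduced exactly - a practical benefit production copies lack.
random.seed(42)

def synthetic_client() -> dict:
    """Return one fabricated client record (illustrative schema)."""
    return {
        "client_id": "SYN-" + "".join(random.choices(string.digits, k=8)),
        "risk_profile": random.choice(["cautious", "balanced", "adventurous"]),
        "portfolio_value": round(random.uniform(250_000, 25_000_000), 2),
        "holdings": random.randint(1, 60),
    }

# Every record is fabricated, so no real client is ever exposed, yet the
# shape and spread of the data still support realistic test scenarios.
dataset = [synthetic_client() for _ in range(1_000)]
```

Because nothing in the dataset derives from production, it can move freely between vendors, offshore teams and cloud environments without the access, retention and residency constraints that real client data carries.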

Just as importantly, mature organisations maintain evidence. Access logs, approval workflows and documented data lineage allow them to demonstrate compliance during audits rather than relying on verbal assurances.

Turning test data from a risk into a control

Private banks and wealth managers are under pressure to modernise platforms, improve digital services and integrate new technologies. Testing will always be a critical part of that journey.

The question is whether test data remains an unmanaged risk in the background or becomes a visible, governed and auditable control that supports both delivery and compliance.

Firms that make this shift are better positioned to move faster, work more effectively with partners and demonstrate to regulators and clients alike that confidentiality is embedded into every layer of their operation.

Assess your test data risk posture before your next audit or platform migration

Here at Assured Thought, we advise our clients to use entirely synthetic data for testing wherever possible – steering them clear of the minefield altogether – but we’re also adept at helping them minimise risk when synthetic data can’t be used.

If you’d like to understand how your organisation’s test data practices compare to regulatory and industry expectations, we can help you evaluate your current model and define a compliant, scalable approach that supports secure delivery at enterprise scale.