Why insurers must rethink fraud and compliance in the age of AI
20 Feb, 2026

 

Chantelle Frier, National Sales Manager at SW360

 

A new customer applies for a policy online. All the relevant documents have been submitted and appear legitimate. The face on the selfie matches the person’s ID. The address exists. The bank account checks out. The policy is approved in minutes. Months go by, and this customer establishes an activity history before submitting a high-value claim. Nothing immediately stands out. But it turns out that the person behind the policy never existed.

 

This scenario may sound unbelievable, but it’s an example of synthetic identity fraud, a new type of financial crime that is fast becoming a major threat to the global insurance industry. Synthetic identity fraudsters use a combination of real and fabricated information to create a completely new, but entirely fake, digital person. Rather than stealing and using one real person’s full identity, criminals could use a real ID (which has typically been stolen or leaked) and combine it with a made-up name and surname, along with a false address, email, and phone number. Additionally, fraudsters are increasingly using AI-powered tools and deepfake technology to generate realistic faces, voices, documents, and videos that can bypass traditional verification systems. They might even “age” these synthetic identities over time or generate images of them in different locations to create the illusion of a genuine, active and evolving person.

 

When fraud becomes an identity problem

 

According to the Association for Savings and Investment South Africa (ASISA), South African life insurers and investment companies incurred losses of at least R131.6 million in 2024 due to fraud by criminals and dishonest individuals. ASISA members detected 16 520 cases of fraud and dishonesty in 2024, a 26% increase on the previous year.

 

Traditionally, insurers have attempted to curb these crimes by focusing on claims behaviour – looking out for unusual patterns, inflated values, or suspicious timing. But with synthetic identity fraud, the insurer may have completed the entire onboarding process using seemingly valid documents and data, only to discover later that the insured party never existed.

 

Unsurprisingly, synthetic identity fraud presents a complex compliance challenge for insurers. As accountable institutions, insurers have certain obligations around customer due diligence, anti-money laundering (AML), counter-terrorist financing (CTF), and data protection. These frameworks are in place to make sure that insurers know who they are doing business with. But unfortunately, as insurers try to simplify and speed up their sales and onboarding processes for customers, many have inadvertently created new opportunities for fraud. Streamlined digital applications, reduced identity checks, and automated approvals can easily be exploited by fraudsters.

 

To strengthen onboarding processes, insurers should implement enhanced verification for higher-risk or suspicious applications. This could include requesting multiple documents as proof of identity, verifying contact details through independent channels, and using biometric checks such as facial recognition or liveness detection. Combining these measures with automated cross-checks against government records, credit bureaus, and fraud watchlists can help to confirm an applicant’s legitimacy and reduce the risk of synthetic identity fraud. It is equally important that insurers inform underwriters, claims handlers, and brokers about synthetic identity red flags and actively encourage reporting of suspicious applicants.
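The layered checks described above can be sketched as a simple verification pipeline. This is a minimal illustration only, not SW360's actual implementation; the check names, applicant fields, and pass criteria are all assumptions standing in for real document, biometric, and registry services.

```python
from typing import Callable

# Each check returns True when the applicant passes.
# All function names and applicant fields below are illustrative
# stand-ins for real verification services, not a real API.

def verify_documents(applicant: dict) -> bool:
    # Multiple documents as proof of identity.
    return len(applicant.get("id_documents", [])) >= 2

def verify_contact_independently(applicant: dict) -> bool:
    # Contact details confirmed through an independent channel.
    return applicant.get("phone_confirmed_via_callback", False)

def liveness_check(applicant: dict) -> bool:
    # Biometric facial recognition / liveness detection.
    return applicant.get("liveness_passed", False)

def cross_check_registries(applicant: dict) -> bool:
    # Government records, credit bureaus, fraud watchlists.
    return applicant.get("registry_match", False)

ENHANCED_CHECKS: list[Callable[[dict], bool]] = [
    verify_documents,
    verify_contact_independently,
    liveness_check,
    cross_check_registries,
]

def enhanced_verification(applicant: dict) -> tuple[bool, list[str]]:
    """Run every check; return overall pass and the names of failed checks."""
    failed = [check.__name__ for check in ENHANCED_CHECKS if not check(applicant)]
    return (not failed, failed)
```

The point of combining checks is that a synthetic identity may pass any single one – the pipeline only approves an applicant who clears them all, and surfaces exactly which checks failed for manual review.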

 

Today, modern insurers can’t afford to view proper compliance processes as a box-ticking exercise. Yes, streamlining processes to improve the customer experience is a must, but this cannot be achieved by taking verification shortcuts. Automated due diligence can help insurers detect and prevent synthetic identities by continuously verifying customer information against multiple trusted sources in real time. When these systems detect inconsistencies or anomalies, they can immediately send an alert that the identity may be fabricated.

 

If, for example, an applicant uses multiple addresses in different locations within a short period of time, their phone number fails validation, their online footprint is non-existent, or they complete the application unusually fast (which could mean the paperwork was filled in by a bot), an automated due diligence tool will pick up this suspicious activity. Individually, these inconsistencies may be harmless, but when multiple red flags appear together, they strongly suggest a fabricated identity. The advantage of automated compliance tools, like VOCA from SW360, is that they can cross-check hundreds of data points in real time, something humans can’t do at scale.
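The “several red flags together” logic can be sketched as a simple rule-based score. This is a hedged illustration of the general technique, not VOCA’s actual logic; the field names, the two-address and sixty-second thresholds, and the review cut-off are all assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class Application:
    # Illustrative fields only – real systems draw on many more data points.
    addresses_last_90_days: int
    phone_validates: bool
    has_online_footprint: bool
    completion_seconds: float  # time taken to fill in the online form

def red_flags(app: Application) -> list[str]:
    """Collect the individual red flags described in the article."""
    flags = []
    if app.addresses_last_90_days > 2:          # assumed threshold
        flags.append("multiple recent addresses")
    if not app.phone_validates:
        flags.append("phone number fails validation")
    if not app.has_online_footprint:
        flags.append("no online footprint")
    if app.completion_seconds < 60:             # assumed threshold: possible bot
        flags.append("application completed unusually fast")
    return flags

def needs_review(app: Application, threshold: int = 2) -> bool:
    # One flag alone may be harmless; several together suggest fabrication.
    return len(red_flags(app)) >= threshold
```

The design choice worth noting is that no single flag blocks an applicant – the decision is made on the combination, which mirrors how individually innocent inconsistencies only become suspicious in aggregate.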

 

In a world where fraudsters can create fake identities, insurers have a responsibility to continuously verify their customers’ identities. If insurers want to curtail fraud, they must stay one step ahead of criminals and, in doing so, safeguard both their business and their reputation.

 

ENDS
