Account takeover scams saw a dramatic increase of 250 per cent last year. Victims of these scams not only face financial losses, averaging about $180 per incident, but 40 per cent also suffer subsequent identity theft. The use of deepfake technology and generative AI is also on the rise, compounding the threat.
Ant International, a leading global digital payment and financial technology provider headquartered in Singapore, has been intensifying its integration of Artificial Intelligence (AI) technologies to enhance and secure millions of daily cross-border transactions for merchants across over 200 global markets.
In the contemporary digital world, the proliferation of deepfake technology and generative AI heralds an era fraught with online scam challenges, notably within the financial sector in Asia. Economic Ramifications of Deepfake Scams: The global impact of impersonation scams can be far-reaching and expensive.
The rise of AI: The report also identifies artificial intelligence (AI) as a growing trend in the Indian cybercrime landscape, with attackers increasingly leveraging AI to make identity-based attacks more sophisticated and pervasive. The surge of deepfakes: The rise of AI has also led to a surge in deepfake attacks.
Crooks are circulating AI-generated deepfake videos of Prince William and UK prime minister Keir Starmer on Facebook and Instagram to dupe viewers into scam cryptocurrency investments.
The article explores the growing threat of AI-enabled fraud in the payments sector and how firms can combat it with advanced technologies. It highlights the urgent need for payments firms to address AI-driven fraud to protect financial security, maintain customer trust, and comply with regulations. Why is it important?
Fraudsters are exploiting GenAI tools such as large language models (LLMs), voice cloning, and deepfakes to carry out increasingly sophisticated scams that are harder to detect and prevent. Romance fraud: Deepfake images and videos help fraudsters create convincing personas to manipulate victims emotionally and financially.
Singapore authorities are warning businesses of a rise in scam video calls which use deepfake AI to impersonate business bosses with the aim of duping employees into transferring funds to criminal accounts.
“Using AI-driven tactics such as deepfake selfies and synthetic identities, organised fraudsters are testing traditional security measures like never before.” It says future-proofing means adopting AI-driven validation and multi-layer defences to combat deepfakes, synthetic identities, and emerging threats.
Ant International is employing advanced artificial intelligence (AI) technologies to streamline and secure cross-border transactions for nearly 100 million SMEs across over 200 markets. One of its key innovations is an AI-powered foreign exchange (FX) model capable of predicting currency exchange rates hourly.
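The excerpt gives no detail on how such an hourly FX prediction model works. As a purely illustrative sketch, assuming a simple autoregressive framing in which the previous 24 hourly rates serve as features (synthetic data and hypothetical choices, not Ant International's actual system), a baseline hourly forecaster might look like this:

```python
# Illustrative hourly FX forecasting: linear autoregression on lagged hourly rates.
# Synthetic data and hypothetical setup; not Ant International's production model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic hourly exchange-rate series (random walk around 1.35)
hourly_rates = 1.35 + np.cumsum(rng.normal(0, 0.0005, size=500))

LAGS = 24  # use the previous 24 hourly observations as features
X = np.array([hourly_rates[i - LAGS:i] for i in range(LAGS, len(hourly_rates))])
y = hourly_rates[LAGS:]

model = LinearRegression().fit(X, y)
next_hour = model.predict(hourly_rates[-LAGS:].reshape(1, -1))[0]
print(f"Last observed rate: {hourly_rates[-1]:.5f}, predicted next hour: {next_hour:.5f}")
```

A production system would presumably use far richer features (order flow, volatility, macro signals) and continuous retraining, but the hourly-prediction loop follows the same shape: build lagged features, fit, forecast the next interval.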
Generative AI, when used correctly, can be a great tool that aids innovation. In the wrong hands, however, it can be incredibly harmful, especially with the emergence of deepfakes. Seventy per cent believe these fakes using generative AI will have a big impact on their organisations. AI: friend or foe?
This includes a global, fourfold increase in AI-driven deepfake scams. Through the partnership, banks and fintechs working with Tuum will benefit from automated identity verification and AI-powered fraud detection and transaction monitoring.
In recent years, the banking sector in the Association of Southeast Asian Nations (ASEAN) has witnessed a significant surge in scams and fraud activities. DBS isn’t the only bank making news for phishing scams. Losses exceeded S$13 million (US$9.59 million).
As much as 53 per cent of Brits have either never heard of the term deepfake or misunderstood its meaning, British bank Santander has revealed, as part of a new initiative to warn consumers about the dangers of AI deepfake scams.
The advancement of generative artificial intelligence (gen AI) has opened up new commercial, social, and technological opportunities. The document, released in May 2024, outlines challenges associated with gen AI and shares a comprehensive gen AI risk framework to guide financial institutions in using the technology in a responsible manner.
This strain on resources has largely come as a result of generative AI, according to the report. Specifically, 62 per cent of businesses cite generative AI as a key driver behind the surge in invoice fraud, according to The Rise in AP Fraud report by Basware.
Artificial intelligence (AI) has emerged as a key component in the modern fight against fraud. In its sixth edition, the 2025 Identity Fraud Report found that attacks involving deepfakes happened every five minutes in 2024, and digital document forgeries increased by 244 per cent year over year.
Sophisticated scams dominate the fraud landscape: BioCatch’s report uncovers a startling surge in financial cybercrime in Asia-Pacific. Scams account for 54 per cent of all cases, and voice scams have increased 200 per cent from the previous year.
As financial institutions navigate a rapidly digitizing landscape, the rise of AI-generated deepfakes is no longer a fringe concern—it’s a growing enterprise risk.
AI in payments: The battle against fraud's evolving threat (May 2, 2025, by Payments Intelligence). What’s the article about? With fraud accounting for a significant portion of UK crime, understanding AI’s role is critical for developing effective, future-ready defences. Why is it important?
The article examines the UK’s increasing fraud and scam problem, focusing on new regulations mandating automatic reimbursement for APP fraud victims. The UK’s fraud and scams problem is not going away. With a wealth of AI and deepfake technology at their fingertips, even the most novice of criminals can perpetrate sophisticated fraud.
In Myanmar and other Southeast Asian countries, cyber scam rings target victims with fraudulent schemes like fake jobs or investments. “Fraud networks, however small they may seem right now, will gain prominence, just like AI-powered deepfakes,” said Pavel Goldman-Kalaydin, Head of AI/ML at Sumsub.
From AI-driven scams to rising chargeback rates, the challenges are growing more complex and costly. 1) AI-driven fraud and deepfakes: Fraudsters are increasingly leveraging Artificial Intelligence (AI) to conduct highly convincing scams. In 2024 alone, businesses lost $8.9
Financial institutions must adopt AI-driven solutions and collaborate closely to proactively combat evolving fraud threats. As fraud continues to rise, especially with the emergence of AI-powered scams, is this new regulation enough to tackle the ever-evolving threat of financial fraud? What’s next?
As digital payments increase post-COVID and scammers implement new solutions such as artificial intelligence (AI), the report takes a look at how regulators around the world are seeking to combat fraud.
Of the 18 distinct dimensions investigated in all markets, those that correlate most closely with the overall trust score were trust that new technology makes payment safer and trust in AI tools. This illustrates the inherent economic value of innovative payments and AI technologies.
Getting a grip on identity fraud (Sumsub): The growing prevalence of AI-driven deepfakes, digital forgeries and identity ‘spoofing’ to obtain valuable personal and business data is impacting industries across the board.
You can find Part 1 on impersonation scams here and Part 2 on money mules here. This is the third piece in an ongoing conversation between BioCatch Global Advisor Seth Ruden and BioCatch Threat Analyst Justin Hochmuth about how various fraud trends impact smaller financial institutions.
Banks are coming under an increasingly intense barrage of cybersecurity attacks, and many of these now use deepfakes and generative AI to make the initial breach. As deepfakes proliferate, a trickle of lawsuits has the potential to become a flood – and one which absolutely could sink the banks.
AI-powered fraud-as-a-service (FaaS) platforms have enabled these mass-scale attacks, contributing to APAC now holding the highest global fraud rate, at 3.27 per cent. Bots and deepfakes: The rise of bots using deepfake technology to create convincing fake profiles poses an additional challenge.
From high-profile ransomware attacks and terrorist financing to scams that wiped out millions in savings, global crypto crime has become an urgent concern. In Asia, investment scams, Ponzi schemes, and romance frauds, also known as “pig butchering” scams, continue to target unsuspecting retail investors.
Artificial intelligence (AI) has emerged as a new fraud challenge finds ComplyAdvantage , the AI-driven fraud and AML risk detection firm, as it launches ‘The State of Financial Crime 2024’ report. Risks include deepfakes, sophisticated cyber hacks, and the use of generative AI to create malware.
With its mimetic capabilities, AI-generated fraud is harder to detect and occurs at unprecedented volumes and velocities. How AI-Generated Fraud Works: Understanding the Threats. AI-generated fraud represents a significant evolution in malicious activities, leaning on advanced technologies to create and execute sophisticated schemes.
Corsound AI: Corsound AI utilizes innovative technology to verify customers’ identities for financial institutions, leveraging over 200 patents to detect AI scams and voice fraud. Finerative: Finerative is a Gen-AI native startup bringing generative technology to the financial space.
The cybersecurity world is witnessing a potentially new, dangerous threat: according to insurance firm Euler Hermes, one of its corporates fell victim to cyber fraud after attackers used sophisticated artificial intelligence (AI) technology to impersonate the firm’s chief executive officer by mimicking his voice on the phone.
We don’t know what AI will look like one year from today. The evolution of AI adds layers of complexity, presenting unprecedented opportunities and significant threats. On the one hand, AI’s capabilities enhance defense mechanisms, enabling the detection and counteraction of fraud with remarkable efficiency.
Leflambe continues: “In parallel, fraudsters leverage new technology very quickly (for instance, using deepfakes to circumvent liveness checks) and compliance teams must remain very vigilant about new controls not being outdated as a result.” Regular resilience testing and reporting further strain resources.
Additionally, the rise of AI and machine learning is introducing newly advanced, sometimes opaque fraud detection systems based on black-box machine learning, leaving teams without clear insights or customization options to tailor solutions to their business use case and workflows.
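One way teams commonly regain some visibility into an otherwise opaque scoring model is model-agnostic inspection. The sketch below is illustrative only, with synthetic data and hypothetical transaction features; it uses permutation importance to check which inputs a fitted black-box fraud model actually relies on, and does not describe any particular vendor's tooling.

```python
# Inspecting a black-box fraud classifier with permutation importance.
# Synthetic data and hypothetical feature names; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5000
feature_names = ["amount", "hour_of_day", "device_age_days", "txn_velocity"]
X = np.column_stack([
    rng.lognormal(4, 1, n),     # transaction amount
    rng.integers(0, 24, n),     # hour of day
    rng.exponential(200, n),    # device age in days
    rng.poisson(2, n),          # transactions in the last hour
])
# Synthetic label: large amounts on brand-new devices with bursty velocity look fraudulent
y = ((X[:, 0] > 150) & (X[:, 2] < 30) & (X[:, 3] > 3)).astype(int)

blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(blackbox, X, y, n_repeats=5, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>16}: {score:.3f}")
```

Ranked importances like these do not make the model transparent, but they give a fraud team a first check that the system is keying on plausible signals rather than artefacts.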
The rise of artificial intelligence (AI), machine learning (ML) and automation has added new layers of complexity to fraud prevention at an unprecedented scale. For instance, fraudsters now leverage innovative technologies to create deepfakes, bypassing traditional identity verification methods like document ID checks and biometrics.
“Fighting deepfakes and fraudulent identities – Jumio’s holistic approach to building identity trust” with “Jumio Delivers Adaptive Verification as AI Fraud Projected to Hit US$40 Billion”. Hong Kong police recently arrested 27 individuals linked to a deepfake scam that swindled victims out of $46 million.
Is it even a regular Tuesday if someone in Singapore hasn’t fallen for a scam? It’s starting to feel like scams are as common as bubble tea outlets in this country: they’re everywhere, always popping up in new flavours, and somehow, people just keep going back for more. And let’s be real for a second.
There has been a significant decline in consumer trust in the digital world, largely driven by the rise of AI-powered fraud and deepfakes, according to a recent study by Jumio. Globally, 69% of respondents said AI-enabled fraud now poses a greater threat to personal safety than traditional identity theft.
Since the pandemic, fraud and scams have surged significantly, with mature markets like Singapore and Hong Kong facing increasingly complex challenges, including authorised push payment fraud and deepfakes. In countries like Cambodia, Laos and Myanmar, organised criminal groups, primarily from China, operate cyber scam centres.
AI-driven fraud is evolving fast; banks must adopt adaptive AI models to detect and prevent scams in real time. The ease with which criminals can now target victims on their trusted devices, combined with the rise of sophisticated AI tools, has made these attacks significantly more difficult to detect.
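The excerpt does not say what an "adaptive" model looks like in practice. As one minimal sketch, assuming synthetic data and hypothetical feature names rather than any bank's real system, an unsupervised anomaly detector can score each incoming transaction and be refit on a rolling window so its baseline shifts as customer behaviour does:

```python
# Minimal sketch of real-time scam scoring with a periodically refit IsolationForest.
# Synthetic stream and hypothetical features; illustrative only, not a bank's real system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
WINDOW_SIZE, REFIT_EVERY, MIN_HISTORY = 1000, 200, 100
window: list[np.ndarray] = []   # rolling window of recent transaction feature vectors
model, seen = None, 0

def score_transaction(features: np.ndarray) -> float:
    """Add the transaction to the rolling window and return its anomaly score
    (lower = more anomalous). Returns 0.0 until enough history has accumulated."""
    global model, seen
    seen += 1
    window.append(features)
    if len(window) > WINDOW_SIZE:
        window.pop(0)
    if len(window) < MIN_HISTORY:
        return 0.0
    if model is None or seen % REFIT_EVERY == 0:   # periodic refit keeps the baseline current
        model = IsolationForest(n_estimators=50, random_state=0).fit(np.array(window))
    return float(model.decision_function(features.reshape(1, -1))[0])

# Simulate a stream of ordinary transactions: [amount, hour_of_day, txns_in_last_hour]
for _ in range(500):
    score_transaction(np.array([rng.lognormal(4, 1), rng.integers(0, 24), rng.poisson(1)], dtype=float))

# An unusually large, odd-hour, bursty transaction should receive a lower (more anomalous) score
print("outlier score:", score_transaction(np.array([50000.0, 3.0, 20.0])))
```

Real deployments layer supervised models, device and behavioural signals, and human review on top; the point of the sketch is only the adapt-as-you-go loop of score, accumulate, refit.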