AI-Driven Scams: The New Face of Fraud

Written by CyberPro  »  Updated on: November 10th, 2024

Increasingly Convincing Scammers

Advancements in artificial intelligence are making it harder for people to recognize scams. Gone are the days of easily identifiable, poorly written messages. Today’s scammers are better writers and conversationalists, often indistinguishable from real people, according to bank and tech investigators who track these schemes. Generative AI tools such as ChatGPT let fraudsters produce convincing imitations of voices and identities, sometimes impersonating senior executives to demand wire transfers.

Matt O’Neill, a former Secret Service agent and co-founder of the cybersecurity firm 5OH Consulting, warns that traditional instincts may no longer protect people from scams. The old tactics remain, but AI lets scammers target larger groups with more personalized information, making their schemes appear more legitimate. Modern AI-driven scams often lack the traditional giveaways, such as malicious links and poor grammar. Criminals now forge IDs and generate computer-crafted faces to bypass identity-verification checks, posing significant challenges for fraud-prevention officials.

To combat this, institutions like JPMorgan Chase have begun employing large language models to detect identity fraud. Carisma Ramsey Fields, vice president of external communications at JPMorgan Chase, emphasized the bank’s efforts to educate customers about these evolving threats. Even so, individuals remain the last line of defense. Experts advise against sharing financial or personal information unless you are absolutely certain of the recipient’s identity. Paying by credit card offers the strongest protection, since fraudulent charges can often be disputed and recovered. Lois Greisman, associate director of the Federal Trade Commission, cautions that requests for payment via crypto, cash, gold, wire transfer, or payment apps are likely scams.
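
Greisman’s list translates directly into a simple screening rule. The sketch below is a hypothetical Python helper, not any bank’s actual system; the function name and the exact string matching are illustrative assumptions:

    # Illustrative only: flags the payment methods the FTC names as red flags.
    RISKY_METHODS = {"crypto", "cash", "gold", "wire transfer", "payment app"}

    def is_likely_scam(requested_method: str) -> bool:
        """Return True if a requested payment method is on the FTC red-flag list."""
        return requested_method.strip().lower() in RISKY_METHODS

    print(is_likely_scam("Wire Transfer"))  # -> True: treat the request with suspicion

The point is less the code than the rule it encodes: legitimate businesses rarely insist on irreversible payment methods.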

AI and Tailored Targeting

AI has amplified scammers’ success, driving record financial losses. In 2023, people reported losing $10 billion to scams, up from $9 billion the previous year, according to the FTC. Given that only about 5% of fraud victims report their losses, the true figure could be closer to $200 billion. AI enables fraudsters to mine social media and data breaches for detailed personal information, letting them craft highly personalized and convincing scams.
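
The arithmetic behind that estimate is a straightforward extrapolation: divide reported losses by the assumed reporting rate. A quick sketch using the FTC figure and the roughly 5% reporting rate cited above:

    # Back-of-the-envelope extrapolation from the figures above.
    reported_losses = 10e9   # dollars reported lost to scams in 2023 (FTC)
    reporting_rate = 0.05    # roughly 5% of victims report their losses

    implied_total = reported_losses / reporting_rate
    print(f"${implied_total / 1e9:.0f} billion")  # -> $200 billion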

Joey Rosati, owner of a small cryptocurrency firm, learned this the hard way. In May, he received a call from someone claiming to be a police officer, informing him that he had missed jury duty. The caller knew Rosati’s Social Security number and recent address, making the scam appear legitimate. When asked to wire $4,500 to settle the fine, Rosati realized it was a scam and hung up.

Social-engineering attacks such as the jury-duty scam have grown more sophisticated with AI. Scammers use AI tools to gather details and generate personalized messages, mimicking trusted individuals to persuade victims to part with money or sensitive information. In another case, David Wenyu, desperate for a job, fell for what appeared to be a legitimate job offer. His emotional state led him to overlook red flags, and he realized it was a scam only when he was asked to make purchases before funds were transferred to his account.

A survey by BioCatch in April 2024 found that 70% of fraud-management officials at banks and financial institutions believe criminals are more adept at using AI for financial crime than banks are at preventing it. Kimberly Sutherland of LexisNexis Risk Solutions noted a significant rise in AI-related fraud attempts in 2024.

[Image: Experts warn of AI-driven scams on the rise in 2024]

The Battle Against AI-Enhanced Fraud

Financial institutions are stepping up their game by employing AI to protect against scams. Banks now monitor how users enter credentials, noting patterns such as hand preference and IP addresses to build behavioral profiles. Unusual login attempts are flagged and prompt additional verification. Banks also detect coerced data entry through changes in typing patterns, and they treat unnaturally polished voice samples and text as red flags for AI generation.
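
To make the typing-pattern idea concrete, here is a minimal, hypothetical sketch of keystroke-rhythm anomaly detection: it summarizes a user’s historical inter-keystroke timings and flags a session that deviates sharply. Real banking systems combine many more signals, and the three-standard-deviation threshold here is an illustrative assumption:

    import statistics

    def build_profile(past_intervals: list[float]) -> tuple[float, float]:
        """Summarize a user's historical inter-keystroke intervals (seconds)."""
        return statistics.mean(past_intervals), statistics.stdev(past_intervals)

    def is_anomalous(session: list[float], profile: tuple[float, float],
                     threshold: float = 3.0) -> bool:
        """Flag a session whose mean typing interval strays far from the profile."""
        mean, stdev = profile
        return abs(statistics.mean(session) - mean) > threshold * stdev

    profile = build_profile([0.18, 0.21, 0.19, 0.22, 0.20, 0.17])
    # A much slower, more hesitant session (or pasted text) gets flagged.
    print(is_anomalous([0.55, 0.60, 0.52, 0.58], profile))  # -> True

A flagged session would not block the user outright; as described above, it would typically trigger additional verification.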

Consumers lost $1.4 billion to cryptocurrency scams in 2023, a significant increase over previous years. To guard against these threats, security officials recommend enabling two-factor authentication for account logins. Pausing to assess a situation helps as well, since many fraudsters manufacture a false sense of urgency to pressure their targets. Seeking a second opinion from a trusted contact can be another crucial step in validating a transaction or request. As Matt O’Neill advises, if the stakes are high, always validate the information.
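
For the two-factor authentication recommendation, the snippet below sketches time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using the third-party pyotp library. It is a minimal illustration of the flow, not a production enrollment setup:

    import pyotp  # third-party library: pip install pyotp

    # At enrollment the service generates a shared secret and the user loads
    # it into an authenticator app; the secret must be stored securely.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()         # the six-digit code the user's app would display
    print(totp.verify(code))  # -> True while the code is within its time window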
