Get critical info on generative AI fraud threats, learn about the LockBit ransomware, and more.

In this edition:

  • 2 generative AI threats you should prepare for now
  • Fraud-as-a-Service of the Month: LockBit
  • Our CEO André: AI will supercharge conversion rates in social engineering

 


2 generative AI threats you should prepare for now

 

 

Most people fall into one of two camps when it comes to generative AI.

 

They either feel that “The sky is falling!” or “The sky’s the limit!”

 

In cybersecurity, most of us are probably somewhere in the middle. And in our industry, like in many others, there’s endless buzz on this topic right now.

 

We’ve all heard the alarms being sounded about potential future threats, like:

  • Generative AI being used to write malware from scratch
  • Chatbots staging social engineering attacks against everyday consumers
  • The use of prompt injection to convince AI to divulge confidential information

But generative AI is not just a threat for the future. It’s a threat we have to face now.


There are two specific generative AI threats we think your team should be preparing for now:

 

Chatbots and deepfakes.

 

Let’s start with chatbots. Of course, they can do lots of helpful things like generating ideas, writing code, or helping you plan an event. But bad actors can also use chatbots to scale fraud and improve their attacks.

 

Take phishing as an example. It’s a numbers game for fraudsters: the more messages they send, the wider the net they cast and the higher their chances of finding a victim. When fraudsters ask chatbots to draft phishing messages for them, they can get scams off the ground with fewer resources, making it easier to scale. And because the chatbot does the drafting, those messages come out in impeccable, compelling language.

 

So how can we guard against attacks that leverage AI chatbots?

 

A few key takeaways (see the sketch after this list):

  • Have a phishing-resistant authentication factor in place
  • Reduce the user friction of MFA as much as possible
  • Leverage passive authentication factors that don’t introduce friction
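
To make that concrete, here’s a minimal Python sketch of how passive signals could decide when to ask for an active, phishing-resistant factor. The signal names (device_is_trusted, location_matches_history, device_integrity_ok) and the decision rules are illustrative assumptions, not a description of any particular product.

    from dataclasses import dataclass

    @dataclass
    class LoginSignals:
        """Passive signals collected without asking the user for anything."""
        device_is_trusted: bool         # device previously bound to this account
        location_matches_history: bool  # login fits the user's usual locations
        device_integrity_ok: bool       # no tampering or emulator indicators

    def required_challenge(signals: LoginSignals) -> str:
        """Decide how much active friction to add to a login attempt."""
        if not signals.device_integrity_ok:
            return "block_and_review"
        if signals.device_is_trusted and signals.location_matches_history:
            return "none"  # passive factors are enough, no friction added
        # Anything suspicious steps up to a phishing-resistant factor
        # (e.g., a passkey prompt) rather than a phishable OTP.
        return "phishing_resistant_factor"

    # A login from a known device in a familiar place adds no friction:
    print(required_challenge(LoginSignals(True, True, True)))    # -> none
    print(required_challenge(LoginSignals(False, False, True)))  # -> phishing_resistant_factor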

Now what about deepfakes?

We’ve all seen the Tom Cruise deepfake videos. But the threat of deepfakes is a lot worse than some social media hoax videos.


 

As deepfakes get better, it’s going to get really hard to secure biometric authentication processes (like facial recognition) against them. Add constantly improving AI to the mix, and deepfake spoofing attacks are only going to get easier and cheaper.

 

So how do we stop deepfakes from destroying account security?

MFA is one way. But another important tool for fighting deepfake fraud is device integrity checks.

 

Device integrity checks help you catch suspicious devices before they do damage. Think about a fraudster trying to fool facial recognition technology. To do that, they may have to bypass the device’s camera and inject a fake video feed.

 

How do they do that? By using app tampering tools or emulators. So any device that has programs on it related to tampering or emulating is inherently risky. A device integrity check can detect the presence of these programs on a device and warn you not to trust it.
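
For illustration only, here’s a bare-bones Python sketch of that idea: scan whatever a device reports about itself (here, just installed package names) for indicators tied to tampering frameworks and emulators. The indicator list and function name are hypothetical; a real integrity check leans on platform attestation APIs and a much richer set of signals.

    # Package-name fragments associated with tampering tools and emulators.
    # Purely illustrative -- a production check uses many more signals.
    RISKY_INDICATORS = {
        "frida",        # dynamic instrumentation toolkit
        "xposed",       # app-tampering framework
        "magisk",       # rooting / root-hiding toolkit
        "genymotion",   # emulator
        "bluestacks",   # emulator
    }

    def device_integrity_check(installed_packages: list[str]) -> list[str]:
        """Return the risky indicators found among a device's reported packages."""
        hits = set()
        for package in installed_packages:
            name = package.lower()
            hits.update(ind for ind in RISKY_INDICATORS if ind in name)
        return sorted(hits)

    # This device reports an instrumentation toolkit, so don't trust its
    # camera feed for facial recognition until it has been reviewed:
    packages = ["com.android.chrome", "re.frida.server", "com.bank.app"]
    print(device_integrity_check(packages))  # -> ['frida']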

 

AI is developing at an insane speed, but if you focus on the right things, you can adapt to protect against AI-enabled fraud.

 

Amid all of the noise around generative AI, deepfake spoofing attacks and chatbot phishing are two threats that you can and should prepare for now.

 

 

To go deeper on this topic, check out our CEO André’s recent blog post.


FaaS of the Month: LockBit

Fraud-as-a-Service: When cybercriminals sell their tools, services, and skills to help clients carry out fraud. Each month we highlight a FaaS tool that you should be aware of.  


Fast facts about LockBit:

  • Using an affiliate model, LockBit writes and distributes malware to affiliates who launch attacks and then pay a portion of their earnings to the LockBit core group
  • LockBit was the most prolific ransomware group in the world in 2022, with at least 1,000 machines infected and nearly $100 million extracted in ransoms
  • The ransomware has a simple point-and-click interface, making it easy to operate even for those with less technical savvy

Deep dive on LockBit:

Think about the data and files you’d stand to lose right now if someone stole your computer and launched it to the moon. That’s a bit like how it feels to have your files encrypted by a ransomware program like LockBit. Ransomware is malware that encrypts all of a computer’s files and demands a ransom payment in exchange for a decryption key. For victims, it’s a digitally encoded nightmare. For ransomware gangs like LockBit, it’s good business.


In 2022, LockBit was the most prolific ransomware group in the world. Their malware has infected over 1,000 machines globally, and they’ve extracted at least $91 million in ransoms from American victims alone. LockBit’s business model involves licensing their malware to recruited affiliates, who then use it to conduct attacks and pay a cut of any ransoms they collect back to the core LockBit group.


Losing business or personal files to malicious encryption is bad enough, but the stakes can get even worse: in the last week, hospitals across four US states had operations disrupted by ransomware attacks. 


Aside from typical security hygiene measures like not downloading suspicious files, installing patches and updates promptly, and running antivirus software, the best way to defend a computer against ransomware is to make frequent backups to the cloud or an external hard drive. 
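
As one small, hedged example of that last point, the Python sketch below writes a timestamped archive of a folder to a backup location using only the standard library. The paths are placeholders; the idea is to keep dated restore points rather than overwriting a single backup, so there’s always a copy from before any encryption event.

    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    def backup_folder(source_dir: str, backup_root: str) -> Path:
        """Write a timestamped zip archive of source_dir under backup_root."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
        archive_base = Path(backup_root) / f"backup-{stamp}"
        # shutil.make_archive returns the path of the archive it created
        return Path(shutil.make_archive(str(archive_base), "zip", source_dir))

    # Placeholder paths -- point these at a real folder and an external drive,
    # then run on a schedule (e.g., daily via cron or Task Scheduler):
    # backup_folder("/home/me/documents", "/mnt/external-drive/backups")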


Lead the Fight

Insights from fraud-fighting visionaries


André Ferraz

CEO and Co-Founder of Incognia

AI will supercharge conversion rates in social engineering

 

Today, social engineering is responsible for over 70% of data breaches and successful fraud attacks, making it the primary attack vector in use.


Why has social engineering become the go-to tactic? As the use of multi-factor authentication has grown more widespread, human error has become the weakest link in account security. At some point it just became easier to attack individual users than to attack the technological infrastructure.

So far, social engineering has been time-intensive and hard to scale. But in this new age of AI, social engineering is much more scalable.


It’s now easy for fraudsters to develop highly personalized messages at scale. For example, instead of sending essentially one standard phishing message to 1,000 users and having a very low conversion rate, they can now create a tailored message for each user in nearly the same amount of time. Guess what happens to that conversion rate? It skyrockets.

 

Higher conversion rates with less work. Sales teams would do anything for that. And so will fraudsters.

 

For fraud businesses, AI-empowered social engineering is like a cheat code. Without a doubt, social engineering attacks will get way more effective as they leverage AI.

 

So what does this mean for us as fraud fighters?

 

This has always been true, but it’s more true than ever in this new age of AI: We can’t expect users to protect themselves. It's not their job to become security specialists.

 

That's our job. It’s time to double down and make sure fraudsters using AI-enabled social engineering tactics aren’t going to have the upper hand on your users.

 

Invest in finding data points that allow you to more uniquely identify your users without added friction. Keep them safe.


Other links you should check out:

 

Deepfakes and chatbots

AI21 Labs concludes largest Turing Test experiment to date | AI21 Labs

Why more people don't use simple two-factor authentication | CNET

How I Broke Into a Bank Account With an AI-Generated Voice | Vice

Study warns deepfakes can fool facial recognition | VentureBeat

 

LockBit ransomware

Understanding Ransomware Threat Actors: LockBit | CISA

The Unrelenting Menace of the LockBit Ransomware Gang | Wired

 


Incognia, a digital identity company, detects fake account creation and account takeover attempts for gig economy, marketplace, and financial technology applications. Benefits of using Incognia’s location-based digital identity include reduced false positives and a low friction user experience.

Sign up for a demo →

Incognia, 555 Bryant St, Box 423, Palo Alto, CA 94301, USA
