2 generative AI threats you should prepare for now
Most people fall into one of two camps when it comes to generative AI.
They either feel that “The sky is falling!” or “The sky’s the limit!”
In cybersecurity, most of us are probably somewhere in the middle. And in our industry, like in many others, there’s endless buzz on this topic right now.
We’ve all heard the alarms being sounded about potential future threats, like:
- Generative AI being used to write malware from scratch
- Chatbots staging social engineering attacks against everyday consumers
- The use of prompt injection to convince AI to divulge confidential information
But generative AI is not just a threat for the future. It’s a threat we have to face now.
There are two specific generative AI threats we think your team should be preparing for now:
Chatbots and deepfakes.
Let’s start with chatbots. Of course, they can do lots of helpful things like generating ideas, writing code, or helping you plan an event. But bad actors can use chatbots to scale fraud and sharpen their attacks.
Take phishing as an example. It’s a numbers game for fraudsters. The more messages they send, the wider the net they can cast and the higher their chances of finding a victim. When fraudsters ask chatbots to draft phishing messages for them, they can get scams off the ground with fewer resources, making the operation easier to scale. And when a chatbot drafts the messages, the AI writes in polished, convincing language, free of the typos and grammatical errors that often give scams away.
So how can we guard against attacks that leverage AI chatbots?
A few key takeaways:
- Have a phishing-resistant authentication factor in place
- Reduce the user friction of MFA as much as possible
- Leverage passive authentication factors that don’t introduce friction
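Why does a phishing-resistant factor such as WebAuthn/FIDO2 hold up where one-time codes fail? Because the credential is cryptographically bound to the site’s origin: a signature captured on a look-alike domain simply won’t verify on the real one. Here’s a minimal sketch of that origin-binding idea; the function names and the HMAC stand-in for a hardware-bound key are illustrative, not a real WebAuthn API:

```python
import hmac
import hashlib

SECRET_KEY = b"device-private-key"  # stands in for a hardware-bound key

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    """The authenticator signs the server's challenge together with the
    origin the browser reports -- the user can't override this."""
    return hmac.new(SECRET_KEY, origin.encode() + challenge, hashlib.sha256).digest()

def verify_assertion(expected_origin: str, challenge: bytes, signature: bytes) -> bool:
    """The server only accepts signatures bound to its own origin."""
    expected = hmac.new(SECRET_KEY, expected_origin.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-challenge"

# Legitimate login: the browser reports the real origin.
good = sign_assertion("https://example.com", challenge)

# Phished login: the user is on a look-alike domain, so the browser
# reports that origin instead, and the signature no longer verifies.
bad = sign_assertion("https://examp1e.com", challenge)

print(verify_assertion("https://example.com", challenge, good))  # True
print(verify_assertion("https://example.com", challenge, bad))   # False
```

The point of the sketch: no matter how convincing an AI-drafted phishing message is, the stolen interaction happens on the wrong origin, so the credential is useless to the attacker.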
Now what about deepfakes?
We’ve all seen the Tom Cruise deepfake videos. But the threat of deepfakes is far more serious than a few viral hoax videos on social media.