What You Should Know About Artificial Intelligence’s Risks to Your Data
October is National Cybersecurity Awareness Month, an annual collaboration between government and industry to raise awareness about the importance of cybersecurity.
During the month, the Exchange’s IT directorate will send to the workforce a weekly email newsletter detailing a particular cybersecurity-related topic. The Exchange Post will also publish stories about cybersecurity topics during October.
This week: The role of AI in social engineering
A few weeks ago, as western North Carolina reeled from the effects of Hurricane Helene and another storm that hit simultaneously, a poignant image circulated on social media: a distraught-looking little girl holding a puppy, by herself in a boat atop floodwaters.
It was emotionally moving—and later revealed to be a creation of artificial intelligence, or AI. Is it possible that the image was a cybersecurity risk?
“For something like that, there is some inherent risk,” said Aaron Foechterle, Exchange data security analyst. “It kind of gives a specific and nuanced spin on the information that’s conveyed. So if you were trying to make a point by spreading a false image, the authenticity of the point you’re trying to make could be undermined.
“But the same techniques can be used in more deceptive ways, such as celebrity endorsements of certain products that provide links,” he added. “For example, there was a deepfake video of Jennifer Aniston selling MacBooks along with a link. It was a convincing video, but it wasn’t real—and the link was fraudulent, so it put the data of the people who clicked it at risk.”
Still, the AI photo of the little girl in the boat was pretty convincing. So how can people recognize what’s AI and what’s not? The answers aren’t simple, because AI comes in different forms.
“If it’s something like text generation, it can be a little more difficult to detect,” Foechterle said. “Text generation takes a prompt—for example, you say ‘Write an email about this topic’—and gives you something with perfectly good spelling and grammar.
“But for images, voice and video, there’s a little bit of an ‘uncanny valley’ feel for some of them,” he added, using a term that refers to a human reacting uneasily to a computer-generated figure looking a little too much like an actual human. “With image generation, AI is really good at what it does. But there are some inherent flaws. For example, there are some AI images out there of people, but they’ve been given an extra finger. Or sometimes their facial expressions don’t look exactly natural. AI has a difficult time getting those final human details accurate on some images.”
Other images might appear airbrushed, looking too clean. “Looking for those irregularities, looking at how natural an image looks, would be my best advice for trying to differentiate AI images from real images,” Foechterle said.
Spotting AI-generated voice replication, which can be used to impersonate people during a voice-phishing scam, can also be tricky.
“As with images, there are often irregularities with voices,” Foechterle said. “To make a voice prompt, AI pulls from a sample of data, which could be as little as 15 seconds or could be hours of data. By sampling from that data, it forms the voice. But it’s not perfect. Some words or some intonations sound a little unnatural or robotic. It’s subtle, but if you listen closely, you can hear it.”
A lot of the same principles apply to deepfake AI-generated video, which can even be used to impersonate upper management or other associates. And the scams can be sophisticated. Foechterle cited one that made headlines this year, when an employee of the British engineering firm Arup was duped out of millions in a deepfake fraud.
“A finance worker paid out $25 million to scammers after they deepfaked a Zoom video call with the chief financial officer,” Foechterle said. “Not only was it the CFO, but other recognizable people were on the call as well—and they were also deepfaked. Even though sending all that money on an unprompted call the worker wasn’t expecting was a red flag, the image of the CFO was so convincing that the worker wasn’t alarmed at the time.”
Foechterle said, however, that even deepfake videos include signs that they are AI-generated.
“Videos can be convincing, but again, with things like facial expressions, it can look unnatural or irregular—it almost looks like bad animation at times,” Foechterle said. “That’s one way you can take a close look. Also look at hands or other features to see if there’s something off.”
Along with being AI-aware at work, Foechterle said, associates should be aware of the possibilities of AI-generated scams in their personal lives, especially on social media.
“A couple of months ago, somebody created a deepfake video livestream of a funeral procession,” he said. “Someone’s relative did die, but scammers created a video of the procession. They shared it on social media, and people shared it with friends and family, and there was a link to donate to help with the funeral costs. The link was fraudulent—the money didn’t go to the people whose relative died, it went to the scammers.”
The risk of AI scams is still small—but it’s large enough that the Exchange Information Technology directorate wants associates to know about it. Although there is some government regulation of AI to prevent scams, it’s currently limited.
“All this is relatively new,” Foechterle said. “It’s still difficult to make fraudulent media with the tools that currently exist. Take ChatGPT, which has security policies and rules in place to make sure that it’s not used for fraudulent purposes. However, there are open-source programs with these tools for text, image, voice and video generation that don’t have regulations. They can be used for good purposes—and they can also be used for bad purposes.”
Next: Keeping your mobile devices safe from cyberattacks
Follow ExchangeAssoc on Facebook, Instagram and X throughout October for posts on Cybersecurity Awareness Month topics. Hashtags: #CybersecurityAwarenessMonth and #SecureOurWorld.