AI and Deepfakes
In 1950, computer scientist and philosopher Alan M. Turing published a paper entitled “Computing Machinery and Intelligence”, in which he proposed to consider the question “Can machines think?”. Today, computers can mimic human voices, faces, gestures, and, yes – in some cases – thought processes.
Artificial intelligence (AI) refers to the science of simulating human intelligence in machines. AI machines attempt to determine the best way to achieve an outcome or solve a problem by analyzing enormous amounts of training data and then finding patterns in the data to replicate in their own decision-making. AI has driven huge advances in science, technology, medicine, politics, sports, security, and nearly every other aspect of our lives.
While its possibilities are exciting, AI can also be frightening: cybercriminals have begun launching deepfakes, a new generation of AI-generated cyberattack tools that deliver disinformation and fraudulent messages with unprecedented scale and sophistication.
What are deepfakes?
The term "deepfake" was coined in 2017 as a blend of “deep learning” and “fake”. Deepfakes are AI-created media – typically images, video, and audio – that are usually used to show events that never occurred, people doing or saying things that never happened, or to superimpose one person’s likeness onto another. A complex machine learning technique called Generative Adversarial Networks (GAN) allows users to generate 3D models from 2D photos or scanned images. For instance, in healthcare, GAN combines X-rays and other body scans to create realistic images of organs for surgical planning and simulation. Deepfake creators can use GAN technology to superimpose synthesized content over real ones or create entirely new highly realistic content. These deceptive creations can be used for various malicious purposes, such as spreading misinformation, damaging reputations, or perpetrating fraud.
Why do AI and deepfakes pose such a cybersecurity threat?
The primary cybersecurity threat deepfakes pose is their ability to compromise trust. In a world where information is disseminated at an astonishing rate, individuals and organizations rely on the authenticity of digital media. Deepfakes exploit this trust, causing confusion, doubt, and harm. The ramifications of deepfake attacks can be far-reaching, undermining the very foundation of cybersecurity, which is built upon the principles of authenticity and integrity.
Some of the most common threats AI and deepfakes pose are:
Political Disinformation
Deepfakes are often used to spread false narratives and manipulate public opinion by creating misleading representations of political figures. The first notable example occurred in 2018, when BuzzFeed released a deepfake of President Obama. Since then, many others have come to light, including a deepfake video of Ukrainian President Volodymyr Zelensky that falsely portrayed him as conceding defeat and urging Ukrainians to surrender to Russia. At the 2024 Olympic Games, deepfake image, video, and audio simulations were used in “influence campaigns” to disrupt the events. This content spread false claims about game outcomes and used manipulated audio to damage the reputations of coaches, athletes, teams, and officials with false, inflammatory statements. It also used fabricated images and videos to discredit competitors and potentially bar them from participation.
Corporate Theft
In the corporate world, deepfakes have emerged as tools for fraud, causing substantial financial losses. Earlier this year, a group of fraudsters used deepfake technology to impersonate the CFO of a Hong Kong-based multinational company and trick an employee into transferring money into their bank accounts. The fraudsters invited the employee to join a video conference and, during the call, directed him to make confidential financial transactions totaling $25.57 million to certain bank accounts. Even if a company can prove it was the victim of a deepfake and recoup some of the financial loss, the damage to its reputation and the potential loss of revenue may already be done.
Personal Identity Theft and Harassment
Personal rights and privacy are, of course, highly susceptible to harm from fake media when it is used to commit identity theft and harassment.
Financial Market Manipulation
Deepfakes can disrupt entire financial markets by swaying investor decisions and market sentiment with false narratives. In 2023, a deepfake image depicting an explosion at the Pentagon circulated on Twitter. The Arlington Police Department quickly debunked the image, but not before the stock market dipped by 0.26 percent.
Social Engineering Schemes
AI allows cybercriminals to automate many of the processes used in social-engineering attacks and to create more personalized, sophisticated, and effective messaging to fool unsuspecting victims. This means cybercriminals can generate a greater volume of attacks in less time and achieve a higher success rate. In 2022, $11 million was stolen from individuals through thousands of impostor phone scams.
Password Hacking
Cybercriminals exploit AI to improve the algorithms they use for deciphering passwords. The enhanced algorithms guess passwords faster and more accurately, which makes hackers more efficient and more profitable.
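As a rough illustration of why guess rate matters, the back-of-the-envelope sketch below compares how long it takes to exhaust an 8-character password space at two hypothetical guess rates; the rates are assumptions for the example, not measurements of any real tool.

```python
# Illustrative arithmetic: how guess rate affects worst-case time to
# exhaust a password search space. The rates below are assumptions for
# the example, not measurements of any real cracking tool.
def years_to_exhaust(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time to try every password of the given length."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second / (60 * 60 * 24 * 365)

# 8-character password over lowercase letters + digits (36 symbols).
for rate in (1e9, 1e12):  # hypothetical "baseline" vs. "AI-assisted" guess rates
    print(f"{rate:.0e} guesses/sec -> {years_to_exhaust(36, 8, rate):.6f} years")
```

In practice, AI-assisted cracking gains less from raw speed than from smarter guess ordering, trying the passwords humans are most likely to choose first, which shrinks the effective search space even further.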
Data Poisoning
Cybercriminals can alter the training data used by an AI algorithm to influence the decisions it ultimately makes. In other words, the algorithm is fed deceptive information, and bad input leads to bad output. Data poisoning can be difficult and time-consuming to detect, so by the time it’s discovered, the damage could be severe.
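To see how little tampering it takes, here is a toy, self-contained scikit-learn sketch in which flipping the labels on 30 percent of a synthetic training set visibly degrades the resulting classifier; the dataset and poisoning rate are invented for the demonstration.

```python
# Toy data-poisoning demo: flipping labels on a slice of the training set
# measurably degrades a simple classifier. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model, trained on untouched labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poisoned model: an attacker flips the labels of 30% of training rows.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```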
Some organized threat-actor groups are operating under a "deepfake as a service" model where they generate sophisticated deepfakes for anyone willing to pay their fee. This means the offenders aren’t limited to highly technical cybercriminals anymore. Instead, more content can be created more quickly by more people.
How do we defend ourselves and our organizations against this new technology?
Combating AI and deepfake technology requires a multi-faceted approach, combining training, authentication, and detection. These practices form an integral part of a comprehensive strategy to counter the growing sophistication of AI and deepfakes in the cybersecurity landscape.
Training
Educating individuals and organizations about deepfakes, how to spot them, and their potential impact is essential. Just as you can often identify phishing scams by spelling or grammatical errors, there are certain characteristics you can look for in deepfakes, such as:
Inconsistencies in audio or video quality, such as choppy sentences or varying tone and inflection in speech
Wording or phrasing inconsistent with how the speaker would normally talk
Background sounds that are inconsistent with the speaker’s presumed location
Mismatched lip-syncing or voice synchronization
Unnatural facial or body movements, such as unusual blinking patterns, a lack of natural eye movement, jerky or irregular body motions, and expressions that don’t align with how a person typically moves or reacts
Uncharacteristic behavior or speech patterns
Subtle visual differences or mistakes, such as a hand with more than five fingers, strange lighting or shadows, inconsistencies in skin tones or hair (especially around the edges), inconsistencies in background colors, inconsistent eye spacing, inconsistent blurring (especially when the face is partially obscured), and backgrounds that don’t match the foreground (or vice versa)
Cybersecurity teams should develop and rehearse deepfake-focused prevention and response exercises just as they would for other incidents. An organization that experiences an AI-based social engineering or deepfake attack must be able to respond effectively. To properly prepare, workforce training programs should be established or expanded to educate employees about the risks of AI-driven content manipulation and what they can do to protect the organization.
Authentication
Because AI and deepfake technology is rapidly advancing, individuals and organizations should use tools, processes, and techniques to authenticate media. There are tools available, such as reverse image/video search, social media account verification, and browser extensions, that help authenticate the source of a communication.
It is also important to establish a verification process. For an individual, that might be a secret question and answer that you establish with your mom to ensure it’s really her on the phone. An organization may choose to use a secret passphrase, a word of the day, or rotating watermarks. Biometrics and multi-factor authentication (MFA) can also be effective tools in some circumstances. Make sure all video calls, conference calls, and webinars are password protected so that only trusted individuals have access to them.
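To show what such a verification step could look like in software, here is a hypothetical, minimal challenge-response sketch built on an HMAC over a pre-shared secret; the helper names (`make_challenge`, `respond`, `verify`) and the secret itself are invented for the illustration, and real deployments would use a vetted MFA product instead.

```python
# Minimal challenge-response sketch (hypothetical, for illustration):
# both parties hold a pre-shared secret; the caller answers a random
# challenge with an HMAC, proving identity without speaking the secret.
import hmac
import hashlib
import secrets

SHARED_SECRET = b"exchange-this-in-person-not-over-email"

def make_challenge() -> str:
    """Verifier generates a fresh random challenge for each call."""
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """Caller computes the expected response from the shared secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    """Verifier checks the response in constant time."""
    return hmac.compare_digest(respond(challenge, secret), response)

# Usage: the verifier reads the challenge over the call; the caller
# (or their authenticator app) computes and reads back the response.
challenge = make_challenge()
assert verify(challenge, respond(challenge))
```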
There are simple techniques that anyone can use to authenticate communications as well. When speaking to someone you know, ask them about something you did together. For video calls, ask them to turn their heads (because deepfake software currently isn’t very good at rendering ears).
Detection
Deepfake detection methods, which generally involve monitoring and analyzing corporate data sources to flag patterns, anomalies, or other indicators of compromise, are continuously evolving as detection technologies improve.
Deepfake detection methods include:
Visually inspecting content for signs of manipulation
Analyzing metadata for signs of tampering (see the sketch after this list)
Performing forensic analysis on digital artifacts left behind by deepfake creation tools
Training machine learning algorithms to detect deepfakes by pattern analysis
Audio analysis (e.g., voice recognition, audio forensics)
Assessing and verifying the authenticity and credibility of source content
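As a concrete example of the metadata analysis above, the sketch below uses the Pillow library to read EXIF fields from an image and surface ones that often betray editing; treating a populated Software tag as suspicious is a heuristic assumed for this illustration, not a reliable rule.

```python
# Heuristic EXIF inspection sketch: missing or editor-stamped metadata can
# be one (weak) signal of manipulation. Real forensics goes much deeper.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF data (common for generated or re-encoded images)")
        return
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    for name in ("Make", "Model", "DateTime", "Software"):
        print(f"{name}: {fields.get(name, '<absent>')}")
    if "Software" in fields:
        print("note: a Software tag often means the file passed through an editor")

inspect_exif("suspect.jpg")  # hypothetical file path
```

Absence of metadata proves nothing on its own (many platforms strip EXIF on upload), so signals like these are only useful in combination with the other methods in the list.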
Traditional detection tools typically search for evidence of content alteration or manipulation, and then create an alert so someone can manually review the flagged content. The big problem with traditional tools is that as soon as a new capability is introduced, malicious actors will leverage AI to find ways around it. However, new tools are emerging that “fight fire with fire” by using AI to fight AI. The technologies once primarily used by cybercriminals to create deepfakes are now being harnessed to detect and combat them. These solutions provide both proactive and reactive approaches to protect against the creation and dissemination of deepfakes and include tools such as:
Deepfake Detection Algorithms
AI algorithms are trained to analyze audio and video content, looking for inconsistencies, artifacts, or anomalies that are characteristic of deepfakes. Machine learning (ML) models can detect subtle discrepancies in facial expressions, voice modulation, or other cues that may reveal a deepfake’s true nature.
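As a sketch of the idea, the toy scikit-learn example below trains a classifier to separate "real" from "fake" samples using synthetic feature vectors that stand in for frame-level cues such as blink timing or boundary artifacts; production detectors instead learn features directly from raw audio and video.

```python
# Illustrative detector sketch: a classifier over per-frame feature vectors.
# The synthetic features stand in for cues like blink rate or boundary
# artifacts; a production detector would learn features from raw media.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Pretend feature vectors: "fake" samples drawn from a slightly shifted
# distribution, mimicking the statistical artifacts GANs leave behind.
real_feats = rng.normal(0.0, 1.0, size=(n, 12))
fake_feats = rng.normal(0.4, 1.2, size=(n, 12))

X = np.vstack([real_feats, fake_feats])
y = np.array([0] * n + [1] * n)  # 0 = real, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("P(fake) for one clip:", clf.predict_proba(X_test[:1])[0, 1])
```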
Media Authenticity Verification
AI can be used to create digital signatures or watermarks for media content to verify its authenticity, which helps ensure the integrity of important files and prevent tampering. Blockchain technology, in conjunction with AI, can create immutable records of media content, providing “smart contracts” that can be used to verify the source and authenticity of media files and confirm whether they have been altered. Combined with AI that can flag media content as potentially inauthentic, a smart contract can trigger a review process or alert relevant authorities.
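Here is a minimal sketch of the digital-signature half of that approach, using the `cryptography` package's Ed25519 primitives; the file name is hypothetical, and watermarking, key distribution, and blockchain anchoring are deliberately out of scope.

```python
# Minimal media-signing sketch: a publisher signs a file's bytes at release
# time; anyone holding the public key can later confirm the bytes are
# unchanged. Watermarking and blockchain anchoring are separate layers.
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = open("press_video.mp4", "rb").read()  # hypothetical file

signature = private_key.sign(media_bytes)  # publisher side, at release

# Verifier side: raises InvalidSignature if even one byte was altered.
public_key.verify(signature, media_bytes)
print("media verified: bytes match the publisher's signature")
```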
Real-time Monitoring
AI and ML can be used to continuously monitor social media and other online platforms for the presence of deepfake content. Automated systems can flag potential deepfake content for further manual review, helping to mitigate the spread of disinformation.
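In outline, such a system can be a loop that scores newly posted media and routes high-scoring items to human reviewers; in the sketch below, `fetch_new_posts` and `score_deepfake_probability` are hypothetical placeholders for a platform API client and a trained detector.

```python
# Monitoring-loop sketch: poll for new media, score it with a detector,
# and queue likely deepfakes for human review. The two helpers are
# hypothetical placeholders, not real APIs.
import time

REVIEW_THRESHOLD = 0.8  # assumed operating point; tune against real data

def fetch_new_posts() -> list[dict]:
    """Placeholder for a platform API client returning recent media posts."""
    return []

def score_deepfake_probability(media_url: str) -> float:
    """Placeholder for a trained detector model."""
    return 0.0

def monitor_once(review_queue: list[dict]) -> None:
    for post in fetch_new_posts():
        score = score_deepfake_probability(post["media_url"])
        if score >= REVIEW_THRESHOLD:
            review_queue.append({"post": post, "score": score})

review_queue: list[dict] = []
while True:
    monitor_once(review_queue)
    time.sleep(60)  # poll every minute; real systems use streaming APIs
```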
Training AI to Detect Deepfakes
To stay ahead of evolving deepfake technology, AI and ML models are trained on large datasets of known deepfakes, enabling them to recognize new, previously unseen variations. Ongoing training ensures that the AI remains up to date and can adapt to the ever-changing tactics employed by malicious actors.
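One lightweight version of that ongoing training is online learning, where an existing detector is updated as newly confirmed deepfakes arrive rather than retrained from scratch; the sketch below illustrates the workflow with scikit-learn's `partial_fit` on synthetic stand-in features.

```python
# Online-learning sketch: update an existing detector with freshly labeled
# deepfake samples via partial_fit instead of full retraining. Features
# here are synthetic stand-ins for real detector inputs.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)
clf = SGDClassifier(loss="log_loss")

# Initial training batch: 0 = real, 1 = fake.
X0 = rng.normal(size=(500, 12))
y0 = rng.integers(0, 2, size=500)
clf.partial_fit(X0, y0, classes=np.array([0, 1]))

# Later: a new wave of confirmed deepfakes with a shifted signature.
X_new = rng.normal(0.5, 1.0, size=(100, 12))
y_new = np.ones(100, dtype=int)
clf.partial_fit(X_new, y_new)  # model adapts without a full retrain
print("updated model now reflects the new variant's feature distribution")
```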
It's no longer enough to defend against the threats we know; we must anticipate and neutralize the threats of tomorrow. This proactive approach to cybersecurity can seem daunting, but it's essential in a world where the lines between reality and digital deception are increasingly blurred.