
Islamabad: The evolution of AI is not only affecting various industries; it has also transformed cybercriminals’ tactics. One alarming trend is the use of AI to enhance phishing scams, making them more refined, more precisely targeted, and almost impossible to recognize.
According to a recent Kaspersky study, the number of cyberattacks experienced by organizations in the last 12 months increased by nearly half (49%). The most common threat was phishing, with 49% of those questioned reporting this type of incident. With AI becoming a more prevalent enabler for cybercriminals, half of the respondents (50%) anticipate significant growth in the number of phishing attacks. This article examines how AI is used in phishing and why experience alone is sometimes not enough to avoid becoming a victim.
Previously, phishing attacks relied on generic mass messages sent to thousands of recipients in the hope that someone would take the bait. AI has changed this: attackers can now produce highly personalized phishing emails at scale. Drawing on publicly available information from social media, job boards, and company websites, AI-powered tools generate emails tailored to an individual’s role, interests, and communication style. For example, a CFO might receive a fraudulent email that mirrors the tone and formatting of their CEO’s messages, including accurate references to recent company events. This level of customization makes it exceptionally challenging for employees to distinguish between legitimate and malicious communications.
AI has also introduced deepfakes into the phishing arsenal. Cybercriminals increasingly use them to create fake but highly convincing audio and video messages that mimic the voice and appearance of the executives they seek to impersonate. As deepfake technology continues to advance, such attacks are expected to become more frequent and harder to detect.
Cybercriminals can also use AI to outsmart traditional email filtering systems. By analyzing and mimicking legitimate email patterns, AI-generated phishing emails can slip past security software undetected.
Even experienced employees are falling victim to these advanced phishing attacks. The level of realism and personalization that AI can achieve may override the skepticism that normally keeps seasoned professionals cautious. Moreover, AI-generated attacks often exploit psychological triggers such as urgency, fear, or authority, pressuring employees into acting without double-checking the authenticity of the request.
To defend against AI-driven phishing attacks, Kaspersky recommends that organizations adopt a proactive, multi-layered approach built on comprehensive cybersecurity. Regular, up-to-date, AI-focused security awareness training is critical, helping employees identify the subtle signs of phishing and other malicious tactics; the Kaspersky Automated Security Awareness Platform can support such training. Alongside this, businesses should deploy robust security tools, such as Kaspersky Next and Kaspersky Security for Mail Server, capable of detecting anomalies in emails, such as unusual writing patterns or suspicious metadata.
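For technically minded readers, the short sketch below illustrates the kind of simple metadata check that such tools automate at far greater scale: comparing the From and Reply-To domains and reading the receiving server’s recorded SPF/DKIM verdict. It is a minimal, hypothetical Python example and does not describe how Kaspersky’s products work internally; the sample message and thresholds are invented for illustration only.

```python
# Illustrative only: a toy header check of the kind mail-security tools
# automate at scale (real products combine many more signals, including
# analysis of writing style in the message body).
from email import message_from_string
from email.utils import parseaddr

def basic_metadata_flags(raw_email: str) -> list[str]:
    """Return simple warning flags based on an email's headers."""
    msg = message_from_string(raw_email)
    flags = []

    # Extract the domain part of the From and Reply-To addresses.
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()

    # A Reply-To domain that differs from the From domain is a common phishing tell.
    if reply_domain and reply_domain != from_domain:
        flags.append(f"Reply-To domain ({reply_domain}) differs from From domain ({from_domain})")

    # SPF/DKIM results recorded by the receiving mail server, if present.
    auth_results = msg.get("Authentication-Results", "")
    if "spf=fail" in auth_results or "dkim=fail" in auth_results:
        flags.append("Sender authentication (SPF/DKIM) failed")

    return flags

# Hypothetical sample message for demonstration.
sample = """From: CEO <ceo@example.com>
Reply-To: ceo-office@lookalike-domain.biz
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example.com
Subject: Urgent wire transfer

Please process this immediately."""

print(basic_metadata_flags(sample))
```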
Sohail Majeed is a Special Correspondent at The Diplomatic Insight. He has more than twelve years of experience in journalism and reporting, covering International Affairs, Diplomacy, the UN, Sports, Climate Change, the Economy, Technology, and Health.