Highlights:
- The phishing emails generated by AI were discovered to be nearly as effective as those created by humans.
- While humans narrowly clinched victory in the experiment, the study emphasizes that the emergence of AI in phishing should not be underestimated.
A recent study published by IBM X-Force highlights a significant rise in artificial intelligence-facilitated cyberattacks and their effectiveness compared to human-led attacks. This underscores the critical necessity for organizations to swiftly enhance and fortify their cybersecurity defenses.
At its core, the study involved a pivotal experiment where AI was pitted against seasoned human social engineers to craft phishing emails. Using OpenAI LP’s ChatGPT, the researchers provided five customized prompts to instruct the AI in developing phishing emails tailored for specific industries.
The outcomes were astonishing: generative AI models could compose highly convincing and deceptive phishing emails within five minutes. In stark contrast, expert human social engineers took approximately 16 hours to accomplish the same task.
The phishing emails generated by AI were discovered to be nearly as effective as those created by humans. Human engineers employed open-source intelligence to gather information, which they then utilized to create emails imbued with a personal touch, emotional intelligence, and an authentic feel. The human-crafted emails included elements of urgency, yet despite these advantages, the AI’s performance in the test was remarkably close, highlighting its potential in this domain.
Stephanie Carruthers, the Global Head of Innovation and Delivery at IBM X-Force, noted in the study that the results were so astonishing that some participating organizations withdrew from the project.
Carruthers explained, “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive. In fact, there were three organizations who originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate.”
While humans narrowly achieved victory in the experiment, the study emphasizes that the emergence of AI in phishing should not be underestimated. The emergence of AI tools with phishing capabilities in various forums raises significant concerns about the future cybersecurity landscape.
The study offers several recommendations businesses should consider to enhance their digital defenses against the increasing threat of AI-generated phishing. The first is verification: when employees receive suspicious or unexpected emails, they should not rely solely on digital evidence but instead call the sender to clear up doubts and stop potential breaches.
Another important recommendation is for businesses to revamp their training modules. The notion that identifying phishing emails relies primarily on detecting grammar and spelling errors, as has often been the case in the past, should be replaced with more nuanced training. Including advanced methods such as vishing, or voice-based phishing, in employee training can enhance the overall defense strategy.
The study also recommends that businesses enhance their identity and access management systems, including implementing multifactor authentication mechanisms resistant to phishing attempts for added security.
Carruthers added, “The emergence of AI in phishing attacks challenges us to reevaluate our approaches to cybersecurity. By embracing these recommendations and staying vigilant in the face of evolving threats, we can strengthen our defenses, protect our enterprises, and ensure the security of our data and people in today’s dynamic digital age.”