Large Language Models (LLMs) in the Arsenal of Sophisticated Adversaries
Introduction
In the dynamic landscape of cybersecurity, the emergence of Large Language Models (LLMs) has elicited both anticipation and concern. While some predict that LLMs will unleash a deluge of new malware, others believe these tools hold the key to solving all security challenges. However, amidst the hype, it is crucial to ground our understanding in tangible evidence.
Recent insights from Microsoft and OpenAI offer a sobering perspective, revealing that sophisticated adversaries are leveraging LLMs not to revolutionise their tactics, but to refine and augment their existing methods. Rather than heralding a seismic shift in attacker behaviour, threat actors' use of LLMs largely serves to enhance their operational effectiveness, while also handing defenders valuable threat intelligence.
The Real Use Cases of LLMs
According to Microsoft, various threat actors are actively exploring the capabilities of LLMs to bolster their cyber operations, including:

- APT28 (Fancy Bear, Sofacy, Strontium, Grizzly Steppe, Sednit, SIG40, Group 74, PawnStorm, Snakemackerel, TG-4127, Tsar Team, Blue Athena, IRON TWILIGHT, Swallowtail, Threat Group-4127, Forest Blizzard, FROZENLAKE)
- APT37 (Thallium, Reaper, ScarCruft, InkySquid, Velvet Chollima, Konni Group, Black Banshee, Group 123, RICOCHET CHOLLIMA, NICKEL FOXCROFT, NICKEL KIMBALL, SharpTongue, RedEyes, Emerald Sleet)
- TortoiseShell (Houseblend, CURIUM, TA456, Crimson Sandstorm)
- Charcoal Typhoon (ControlX, CHROMIUM, BRONZE UNIVERSITY, RedHotel)
- APT4 (Maverick Panda, Sykipot Group, Wisp, BRONZE EDISON, TG-0623, Salmon Typhoon)

These adversaries employ LLMs as productivity tools, utilising them for tasks such as:
- LLM-informed reconnaissance: Interacting with LLMs to gain insights into subjects such as satellite communication protocols, radar technologies, and specific technical parameters, enhancing their understanding of potential targets.
- LLM-assisted vulnerability research: Leveraging LLMs to identify and exploit publicly reported vulnerabilities, strengthening their offensive capabilities.
- LLM-supported social engineering: Utilising LLMs to craft persuasive content for spear-phishing campaigns, targeting individuals with regional expertise or specific ideological affiliations.
- LLM-enhanced scripting techniques: Integrating LLMs into scripting tasks to streamline operations, automate tasks, and optimise technical processes (a minimal sketch of this pattern appears below).
- LLM-refined operational command techniques: Utilising LLMs for advanced command execution, enabling deeper system access and control post-compromise.
These insights underscore that while LLMs offer novel capabilities, their current usage by threat actors largely aligns with traditional tactics, albeit with greater efficiency and sophistication.
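To make the "productivity tool" framing concrete, here is a minimal sketch of the LLM-enhanced scripting pattern, using the openai Python client. The model name, prompt, and API key handling are illustrative assumptions on my part, not details from the Microsoft report.

```python
# A minimal sketch of "LLM-enhanced scripting" as a productivity pattern.
# Assumptions: the openai v1 Python client, an OPENAI_API_KEY environment
# variable, and an illustrative model name -- none of this is taken from
# the Microsoft report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[
        {"role": "system", "content": "You are a scripting assistant."},
        {
            "role": "user",
            "content": "Write a PowerShell one-liner that lists listening "
                       "TCP ports together with the owning process names.",
        },
    ],
)

print(response.choices[0].message.content)
```

The specific prompt is beside the point: the same ask-draft-refine loop applies equally to the reconnaissance and social engineering use cases above, which is why this activity looks like ordinary productivity usage rather than a new offensive capability.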
The Status Quo, but Better
Contrary to the notion of LLMs heralding a new era of cyber threats, their integration into the arsenals of sophisticated adversaries represents an evolution rather than a revolution. Threat actors are not fundamentally altering their strategies, but rather leveraging LLMs to refine and amplify their existing methods, and defenders should adapt their security measures accordingly.
For blue teamers, understanding how threat actors utilise LLMs provides valuable insights into potential attack vectors and vulnerabilities. Red teamers, meanwhile, can draw inspiration from these adversaries to refine their own offensive techniques and enhance their simulation exercises.
LLMs as a Source of Threat Intelligence
Furthermore, the specific ways in which threat actors utilise LLMs offer valuable intelligence for defenders. Looking at the Microsoft report:
- APT28 has been using LLMs to study satellite and radar technologies. In December 2022, it was reported that APT28 was hacking satellite communications providers; that interest has clearly persisted, and defenders who look after satellite communications providers should take note.
- APT37’s use of LLMs involved research into think tanks and experts on North Korea, as well as into CVE-2022-30190 (Follina). Think tanks and experts on North Korea should be aware that APT37 is in their threat model, and that the group uses unpatched vulnerabilities for remote code execution (see the hunting sketch after this list).
- TortoiseShell and Charcoal Typhoon are both looking to improve their social engineering. Defenders who know these groups are in their threat model should be prepared for social engineering attempts.
- TortoiseShell was also seen trying to lure a prominent feminist to an attacker-built website, so people advocating for women’s rights, likely within Iran, should include the group in their threat model.
- APT4 has been using LLMs to research U.S. and internal Chinese affairs, which hints at possible targets.
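As flagged in the APT37 item above, CVE-2022-30190 (Follina) has a widely published behavioural indicator: an Office process spawning msdt.exe. The sketch below hunts for that parent-child relationship; the log format and field names (Sysmon-style process-creation events exported as JSON lines) are assumptions for illustration, not a prescribed pipeline.

```python
# A hedged hunting sketch for CVE-2022-30190 (Follina) exploitation.
# Heuristic: an Office parent process spawning msdt.exe. The input format
# (JSON lines of Sysmon-style process-creation events with ParentImage
# and Image fields) is an assumption for illustration.
import json

OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}

def suspicious_follina_events(path: str) -> list[dict]:
    """Return events where an Office process spawned msdt.exe."""
    hits = []
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            # Reduce full Windows paths to bare executable names.
            parent = event.get("ParentImage", "").lower().rsplit("\\", 1)[-1]
            image = event.get("Image", "").lower().rsplit("\\", 1)[-1]
            if parent in OFFICE_PARENTS and image == "msdt.exe":
                hits.append(event)
    return hits
```

Any SIEM query expressing the same parent-child relationship works just as well; the Python form is only used here to keep the example self-contained.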
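More generally, these observations can be operationalised as structured threat intelligence. The sketch below encodes the actor-to-takeaway mapping from the list above in a hypothetical Python structure; the field names are illustrative and not a standard interchange format such as STIX.

```python
# Hypothetical structure for the actor-to-takeaway mapping above.
# Field names are illustrative; a production system would more likely
# exchange this as STIX objects.
from dataclasses import dataclass

@dataclass(frozen=True)
class LlmUsageObservation:
    actor: str
    observed_llm_use: str
    defensive_takeaway: str

OBSERVATIONS = [
    LlmUsageObservation(
        "APT28",
        "studying satellite communication and radar technologies",
        "satellite communications providers should treat APT28 as active",
    ),
    LlmUsageObservation(
        "APT37",
        "researching North Korea think tanks and CVE-2022-30190 (Follina)",
        "North Korea experts should expect phishing and patch promptly",
    ),
    LlmUsageObservation(
        "TortoiseShell",
        "improving social engineering, including lures aimed at a feminist",
        "women's rights advocates, likely within Iran, should expect lures",
    ),
    LlmUsageObservation(
        "Charcoal Typhoon",
        "improving social engineering",
        "organisations with this group in scope should harden against phishing",
    ),
    LlmUsageObservation(
        "APT4",
        "researching U.S. and internal Chinese affairs",
        "hints at possible targeting of related organisations",
    ),
]

def relevant(threat_model: set[str]) -> list[LlmUsageObservation]:
    """Return only the observations for actors in a given threat model."""
    return [o for o in OBSERVATIONS if o.actor in threat_model]

# Example: a satellite communications provider filtering to its own actors.
print(relevant({"APT28"}))
```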
Conclusion
While the integration of LLMs into the arsenal of sophisticated adversaries presents new challenges for defenders, it also offers opportunities for the security community to learn from and adapt to these adversaries. Ultimately, while LLMs may not herald a paradigm shift in cybersecurity, they undoubtedly represent a significant evolution in the tactics and capabilities of threat actors.