TNO and NCSC Explore Future Cyber Threats from Large Language Models

2024-05-24

TNO and NCSC have investigated how large language models like ChatGPT could impact cyber threats over the next three to five years, aiming to make these impacts measurable and monitorable.

Understanding Large Language Models

Large Language Models (LLMs) are advanced AI systems trained on vast amounts of text data to understand, generate, and interpret human language. Originally developed for natural language processing and communication, these models have shown immense potential in various applications, from chatbots to content creation. However, their capabilities also pose significant risks, as cybercriminals can misuse them for sophisticated and hard-to-detect cyberattacks.

The Signposts of Change Method

TNO and NCSC utilized the ‘Signposts of Change’ method, a technique borrowed from the intelligence community, to explore scenarios of future threats. This method involves identifying indicators that suggest whether a particular scenario is becoming a reality. For instance, changes in Command and Control (C2) traffic behavior are expected when exploitation is automated through LLMs. By distinguishing between evolutionary and revolutionary changes, the study aims to determine whether LLMs will only alter existing threats or also introduce fundamentally new ones.
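To make the idea of trackable signposts concrete, below is a minimal Python sketch of how such indicators could be recorded and scored per scenario. The `Signpost` class, the example indicators, and the scoring logic are illustrative assumptions for this article; the TNO/NCSC report does not prescribe an implementation.

```python
from dataclasses import dataclass

@dataclass
class Signpost:
    """A measurable indicator that a threat scenario is materializing.

    Hypothetical structure for illustration only.
    """
    indicator: str   # what to watch for
    scenario: str    # which future-threat scenario it supports
    observed: bool = False

# Example signposts, loosely based on the scenarios in the article.
signposts = [
    Signpost("C2 traffic shifts to machine-speed, templated command sequences",
             "LLM-automated exploitation"),
    Signpost("Sharp rise in grammatically flawless, highly personalized phishing",
             "LLM-generated spearphishing"),
    Signpost("Working exploit code appears within hours of CVE publication",
             "LLM-assisted vulnerability discovery"),
]

def scenario_progress(signposts: list[Signpost], scenario: str) -> str:
    """Summarize how many signposts for a scenario have been observed."""
    relevant = [s for s in signposts if s.scenario == scenario]
    hits = sum(s.observed for s in relevant)
    return f"{scenario}: {hits}/{len(relevant)} signposts observed"

for name in {s.scenario for s in signposts}:
    print(scenario_progress(signposts, name))
```

Tracking signposts as structured data rather than prose makes the method's core promise, measurability over time, directly actionable.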

Capabilities of LLMs in Cyberattacks

To ground their exploration in real-world capabilities, TNO and NCSC created an overview of how LLMs can assist in cyberattack techniques, using the MITRE ATT&CK framework. This review of the LLM literature underpins the scenarios explored and can serve as a starting point for sector-specific or organization-specific analyses of the future impact of LLMs on cybersecurity.
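As an illustration of what such an overview might look like in practice, the sketch below maps a few MITRE ATT&CK technique IDs to the kind of assistance an LLM could provide. The technique IDs are genuine ATT&CK identifiers, but the mapping itself is a simplified assumption for illustration, not the report's actual table.

```python
# Illustrative mapping of MITRE ATT&CK techniques to potential LLM assistance.
# The ATT&CK IDs are real; the assessments are simplified assumptions and do
# not reproduce the TNO/NCSC overview.
LLM_ASSISTED_TECHNIQUES: dict[str, dict[str, str]] = {
    "T1566.002": {
        "name": "Phishing: Spearphishing Link",
        "llm_role": "Drafts personalized, fluent lure text at scale",
    },
    "T1595": {
        "name": "Active Scanning",
        "llm_role": "Helps script scanners and interpret their output",
    },
    "T1059": {
        "name": "Command and Scripting Interpreter",
        "llm_role": "Generates or adapts malicious scripts on request",
    },
    "T1190": {
        "name": "Exploit Public-Facing Application",
        "llm_role": "Suggests candidate weaknesses in exposed services",
    },
}

for tech_id, info in LLM_ASSISTED_TECHNIQUES.items():
    print(f"{tech_id} ({info['name']}): {info['llm_role']}")
```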

Identified Techniques and Threats

The researchers identified several techniques that cybercriminals could enhance using LLMs. These include injecting manipulated messages on servers, scanning online applications for vulnerabilities, and, most concerningly, generating highly personalized and realistic phishing emails, known as spearphishing. The ability of LLMs to quickly and efficiently create such emails increases the effectiveness, scale, and speed of these attacks. Additionally, LLMs can aid in generating malicious code, finding vulnerabilities in code and configurations, and providing misleading or harmful input to less advanced AI systems.
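On the defensive side, one consequence of LLM-scaled spearphishing is that many recipients receive near-identical messages with only personal details swapped in. The sketch below shows one possible heuristic for spotting such template reuse across a batch of emails; the function names, normalization rules, and thresholds are assumptions for illustration, not a vetted detection method.

```python
import re
from difflib import SequenceMatcher

def normalize(body: str) -> str:
    """Strip recipient-specific tokens (here, just email addresses) so
    that template reuse becomes visible across a mail batch."""
    body = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", body)
    return re.sub(r"\s+", " ", body).strip().lower()

def looks_like_campaign(bodies: list[str], threshold: float = 0.9) -> bool:
    """Flag a batch whose normalized bodies are near-duplicates, one
    possible fingerprint of LLM-templated spearphishing.

    Illustrative heuristic only; real detection would combine many signals.
    """
    norm = [normalize(b) for b in bodies]
    pairs = [(a, b) for i, a in enumerate(norm) for b in norm[i + 1:]]
    if not pairs:
        return False
    similar = sum(SequenceMatcher(None, a, b).ratio() >= threshold
                  for a, b in pairs)
    return similar / len(pairs) >= 0.5
```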

Implications for Cybersecurity

The deployment of LLMs poses a significant challenge to cybersecurity. The study identifies six key trends in the use of LLMs that attackers might exploit in the near future:

- imitation and personalization for phishing and social engineering;
- generation of malicious code by non-technical individuals;
- use of stored information to influence decision-making;
- automated exploitation of vulnerabilities;
- identification of vulnerabilities in various data sources;
- misleading inputs to other AI systems.

Conclusion: A Call for Vigilance

As cyber threats evolve with the advent of LLMs, it is crucial for organizations and security professionals to stay vigilant and adapt their defenses accordingly. The insights from TNO and NCSC’s study highlight the need for continued research, monitoring, and development of innovative cybersecurity measures to mitigate the risks posed by these advanced AI systems.
