Highlights:

  • The Big Sleep model makes use of sophisticated variant-analysis techniques, which involve applying knowledge from vulnerabilities that have already been found to find comparable, possibly exploitable problems in adjacent code portions.
  • Big Sleep carried out root-cause analysis, which entails not only locating vulnerabilities but also comprehending the fundamental problems that give rise to them.

Google LLC has uncovered a previously unknown vulnerability using AI, in what it claims is a world first that marks the beginning of AI being used at the forefront of security vulnerability detection.

A large language model known as “Big Sleep,” developed in a collaboration between Google Project Zero and Google DeepMind, was used to discover the vulnerability, a memory-safety flaw (a stack buffer underflow) in SQLite.

The Big Sleep model makes use of sophisticated variant-analysis techniques, which involve applying knowledge from vulnerabilities that have already been found to find comparable, possibly exploitable problems in adjacent code portions. Using this technique, Big Sleep identified a problem that had escaped conventional fuzzing, which finds bugs by automatically generating and testing huge numbers of random or semi-random inputs to a program and watching for unexpected crashes or behavior.

To find areas of possible concern, the system first examines particular changes made to the codebase, such as commit messages and diffs. The model then analyzes these parts using its pre-trained understanding of code patterns and known vulnerabilities, enabling it to spot subtle defects that traditional testing methods might overlook.

Big Sleep found that SQLite’s “seriesBestIndex” function failed to handle an edge case involving a negative column index correctly. This flaw could have resulted in a write operation outside the permitted memory bounds, opening the door to a potential exploit. By mimicking real-world usage scenarios and closely examining how various inputs interacted with the vulnerable code, the AI was able to identify the issue.

Furthermore, Big Sleep carried out root-cause analysis, which entails not only locating vulnerabilities but also comprehending the fundamental problems that give rise to them. Google says the capability will allow developers to fix the underlying issue and reduce the likelihood of future vulnerabilities of the same kind.

It’s interesting to note that the vulnerability was found before it could be exploited, which may demonstrate how effective AI can be at proactive defense.

“We hope that in the future this effort will lead to a significant advantage to defenders — with the potential not only to find crashing test cases but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future,” the Big Sleep team posted.