“AI-driven 0-day detection is here,” argues a new blog post from ZeroPath, makers of a GitHub app that “detects, verifies, and issues pull requests for security vulnerabilities in your code.”
They write that AI-assisted security research “has been quietly advancing” since early 2023, when researchers at DARPA and ARPA-H’s Artificial Intelligence Cyber Challenge demonstrated the first practical applications of LLM-powered vulnerability detection — with new advances continuing. “Since July 2024, ZeroPath’s tool has uncovered critical zero-day vulnerabilities — including remote code execution, authentication bypasses, and insecure direct object references — in popular AI platforms and open-source projects.” And they ultimately identified security flaws in projects owned by Netflix, Salesforce, and Hulu by “taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing tools were ill-equipped to find…”
TL;DR — most of these bugs are simple and could have been found with a code review from a security researcher or, in some cases, scanners. Historically, though, the problem with automating the discovery of these bugs is that traditional SAST tools rely on pattern matching and predefined rules, and miss complex vulnerabilities that do not fit known patterns (e.g. business logic problems, broken authentication flaws, or non-traditional sinks such as those introduced by dependencies). They also generate a high rate of false positives.
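To see why pattern matching falls short, consider a minimal, hypothetical example (invented for illustration, not drawn from any of the disclosed bugs): an insecure direct object reference in a Flask endpoint. There is no dangerous sink for a rule to match on; the flaw is a missing authorization check, exactly the kind of business-logic issue described above.

```python
# Hypothetical IDOR example. Rule-based SAST has nothing to match here:
# no eval(), no os.system(), no raw SQL -- the bug is absent logic.
from flask import Flask, jsonify

app = Flask(__name__)

INVOICES = {1: {"owner": "alice", "total": 120},
            2: {"owner": "bob", "total": 340}}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return jsonify(error="not found"), 404
    # BUG: any authenticated user can read any invoice by guessing IDs.
    # The fix is an ownership check comparing invoice["owner"] against
    # the logged-in user (e.g. from Flask's session) and returning 403
    # on mismatch.
    return jsonify(invoice)
```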
The beauty of LLMs is that they can reduce ambiguity in most of the situations that caused scanners to be either unusable or produce few findings when mass-scanning open source repositories… To do this well, you need to combine deep program analysis with adversarial agents that test the plausibility of vulnerabilities at each step. The solution ends up mirroring the traditional phases of a pentest — recon, analysis, exploitation (and remediation, which is not mentioned in this post)…
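The post doesn't publish the pipeline itself, but the recon/analysis/exploitation loop it describes might be structured roughly like the sketch below. Everything here is an assumption for illustration: `llm()` is a placeholder for any chat-completion API, and none of it reflects ZeroPath's actual implementation.

```python
# Hypothetical sketch of a recon -> analysis -> adversarial-validation
# loop. `llm()` stands in for any chat-completion client; this is NOT
# ZeroPath's method, just one plausible shape for it.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    plausible: bool = False

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g. an OpenAI/Anthropic client)."""
    raise NotImplementedError

def recon(repo_files: dict[str, str]) -> list[str]:
    # Recon: ask the model which files touch auth, file I/O, or exec paths.
    answer = llm("List security-relevant files:\n" + "\n".join(repo_files))
    return [path for path in repo_files if path in answer]

def analyze(path: str, source: str) -> Finding:
    # Analysis: deep-dive one file and propose a candidate vulnerability.
    return Finding(path, llm(f"Find one concrete vulnerability in:\n{source}"))

def adversarial_check(finding: Finding, source: str) -> Finding:
    # Validation: a second "skeptic" agent tries to refute the claim,
    # which is what keeps false positives low enough for mass scanning.
    verdict = llm(
        f"Claim: {finding.description}\nCode:\n{source}\n"
        "Answer EXPLOITABLE or FALSE_POSITIVE with reasoning."
    )
    finding.plausible = verdict.strip().startswith("EXPLOITABLE")
    return finding

def scan(repo_files: dict[str, str]) -> list[Finding]:
    candidates = [analyze(p, repo_files[p]) for p in recon(repo_files)]
    checked = (adversarial_check(c, repo_files[c.file]) for c in candidates)
    return [f for f in checked if f.plausible]
```

The adversarial step is the interesting design choice: rather than trusting a single model's claim, a second pass attempts to disprove each candidate before it is reported, mirroring how a pentester validates a lead before writing it up.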
AI-driven vulnerability detection is moving fast… What’s intriguing is that many of these vulnerabilities are pretty straightforward — they could’ve been spotted with a solid code review or standard scanning tools. But conventional methods often miss them because they don’t fit neatly into known patterns. That’s where AI comes in, helping us catch issues that might slip through the cracks.
“Many vulnerabilities remain undisclosed due to ongoing remediation efforts or pending responsible disclosure processes,” according to the blog post, which includes a pie chart showing the biggest categories of vulnerabilities found:
53%: Authorization flaws, including broken access control in API endpoints and unauthorized Redis access and configuration exposure. (“Impact: Unauthorized access, data leakage, and resource manipulation across tenant boundaries.”)
26%: File operation issues, including directory traversal in configuration loading and unsafe file handling in upload features (see the sketch after this list). (“Impact: Unauthorized file access, sensitive data exposure, and potential system compromise.”)
16%: Code execution vulnerabilities, including command injection in file processing and unsanitized input in system commands. (“Impact: Remote code execution, system command execution, and potential full system compromise.”)
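To make the file-operation category concrete, here is a hedged, self-contained illustration of a directory-traversal bug in configuration loading, together with a safe variant. It is invented for illustration and not taken from any of the affected projects.

```python
# Illustrative only -- not from any disclosed vulnerability. Shows the
# directory-traversal pattern behind the "file operation issues" slice:
# a user-supplied name escapes the intended config directory via "../".
from pathlib import Path

CONFIG_DIR = Path("/etc/myapp/configs")

def load_config_unsafe(name: str) -> str:
    # BUG: name="../../../../etc/passwd" walks out of CONFIG_DIR.
    return (CONFIG_DIR / name).read_text()

def load_config_safe(name: str) -> str:
    path = (CONFIG_DIR / name).resolve()
    # Reject anything that resolves outside the allowed directory.
    if not path.is_relative_to(CONFIG_DIR):
        raise ValueError(f"illegal config path: {name}")
    return path.read_text()
```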
The company’s CIO/co-founder was “former Red Team at Tesla,” according to the startup’s profile at YCombinator, and earned over $100,000 as a bug-bounty hunter. (And another co-founder is a former Google security engineer.)
Thanks to Slashdot reader Mirnotoriety for sharing the article.
Read more of this story at Slashdot.