
AI Agents Find 77% of Software Vulnerabilities in Automated Attacks
Criminal hackers and state actors now deploy AI to discover and exploit security flaws
AI Agents Revolutionize Cyberattacks
Criminal hacking groups and state-sponsored actors have operationalized AI agents that automatically discover and exploit software vulnerabilities. A new scientific report documents that artificial intelligence can now identify 77% of real security flaws without human involvement.
The research report, published in February 2026, shows that AI systems that independently scan software, identify weaknesses, and develop exploits to abuse them are no longer science fiction but operational reality. The technology represents a paradigm shift in cybercrime, dramatically shortening the time from vulnerability discovery to exploitation.
From Manual Hacking to Automated Exploitation
Traditionally, identifying software vulnerabilities has required highly specialized security experts with deep technical knowledge. The process has been time-consuming and resource-intensive. With AI agents, this process has been automated and scaled to a level where attackers can scan and exploit systems at an unprecedented pace.
The AI agents work by analyzing source code, testing various input combinations, and identifying patterns that indicate potential security flaws. Once a vulnerability is found, the system can automatically develop and test exploits: programs designed to abuse the security weakness.
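The workflow described above resembles mutation-based fuzz testing: mutate known-good inputs, feed them to the target, and flag inputs that trigger unexpected failures. A minimal sketch in Python, where the toy parser `parse_record` (with a deliberately planted length-byte bug) stands in for the target software; all names and the bug itself are illustrative assumptions, not details from the report:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Apply a few random byte flips, insertions, or deletions to a seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        choice = rng.randrange(3)
        if choice == 0 and data:          # flip one bit of a random byte
            i = rng.randrange(len(data))
            data[i] ^= 1 << rng.randrange(8)
        elif choice == 1:                 # insert a random byte
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif data:                        # delete a random byte
            del data[rng.randrange(len(data))]
    return bytes(data)

def parse_record(buf: bytes) -> bytes:
    """Toy target: a parser with a planted flaw, it trusts the length byte."""
    if len(buf) < 2:
        raise ValueError("too short")
    length = buf[0]
    body = buf[1:1 + length]
    checksum = buf[1 + length]  # IndexError when the length byte overstates the payload
    if sum(body) % 256 != checksum:
        raise ValueError("bad checksum")
    return body

def fuzz(target, seed: bytes, iterations: int = 2000, rng_seed: int = 0):
    """Feed mutated inputs to the target and collect unexpected crashes."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except ValueError:
            pass                      # graceful rejection: the parser behaved correctly
        except Exception as exc:      # anything else hints at an exploitable flaw
            crashes.append((candidate, exc))
    return crashes

# Seed with one valid record: length byte, payload, checksum.
seed = bytes([3]) + b"abc" + bytes([sum(b"abc") % 256])
crashes = fuzz(parse_record, seed)
```

Real attack tooling adds coverage feedback and automatic exploit generation on top of this loop; the sketch only shows the discovery step, where crashing inputs mark candidate vulnerabilities.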
Criminal Groups and State Actors Lead the Way
Both organized criminal networks and state-sponsored hacking units have already implemented this technology, according to the report. For ransomware groups, this means faster victim identification and more efficient attacks. For state actors, it represents a significant upgrade to cyber espionage and sabotage capabilities.
The 77% success rate for real vulnerabilities is alarmingly high and indicates that AI systems are already mature enough for operational use. This contrasts with previous attempts at AI-driven hacking, which have primarily been experimental.