A security researcher has systematically audited several core MCP (Model Context Protocol) servers and disclosed multiple vulnerabilities in quick succession. The findings span SQL injection in the SQLite server, path traversal concerns in the filesystem server, and memory handling issues — raising important questions about security practices in the growing MCP ecosystem.
These issues are currently open. If you're running affected MCP servers, review the linked issues for mitigation guidance. Patches are expected soon.
Issue #3314 — The SQLite server, designed to let AI agents query local databases, contains SQL injection vulnerabilities that could allow malicious queries to bypass intended restrictions.
Issue #3317 — The filesystem server, which provides file access to AI agents, has two security findings. Path traversal is a common concern in file-serving code.
Issue #3315 — The memory server, used for persistent agent state, has a security finding that could affect data integrity or confidentiality.
MCP has exploded in popularity since Anthropic open-sourced it. With over 78,000 stars on GitHub, the reference servers are now running in thousands of development and production environments. These aren't theoretical vulnerabilities in obscure code — they're in tools that developers are actively using to build AI-powered applications.
The attack surface is particularly concerning because MCP servers are designed to give AI agents access to external resources: databases (the SQLite server), the local filesystem, persistent agent memory, and browsers (the Puppeteer server).
If an AI agent can be manipulated (through prompt injection or other means), vulnerabilities in these servers could escalate that access beyond intended boundaries.
SQL injection vulnerabilities typically occur when user input (in this case, AI agent input) is concatenated directly into SQL queries rather than using parameterized queries. For example:
// Vulnerable pattern: agent input is interpolated directly into the SQL string
const query = `SELECT * FROM users WHERE name = '${userInput}'`;

// Safe pattern: a placeholder keeps the input as data, not executable SQL
const safeQuery = `SELECT * FROM users WHERE name = ?`;
db.run(safeQuery, [userInput]);
The SQLite server's exposure is particularly noteworthy because it's designed to let AI agents run arbitrary SQL queries. The question isn't whether agents can run queries, but whether the server validates them well enough to prevent escapes from intended constraints.
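As a sketch of what such a constraint could look like, a hypothetical allow-list check might permit only single SELECT statements before handing a query to the database. The function name and the string-level approach are illustrative assumptions, not the SQLite server's actual code; a production server would lean on SQLite's authorizer hooks or a real SQL parser rather than string matching.

```javascript
// Hypothetical guard: accept only a single read-only SELECT statement.
// This is a sketch of the idea, not a complete or bypass-proof filter.
function isReadOnlyQuery(sql) {
  // Drop surrounding whitespace and one trailing semicolon
  const trimmed = sql.trim().replace(/;\s*$/, "");
  // Reject stacked statements like "SELECT 1; DROP TABLE users"
  if (trimmed.includes(";")) return false;
  // Require the statement to start with SELECT
  return /^select\b/i.test(trimmed);
}
```

A server using such a guard would refuse anything the check rejects before the query ever reaches the database, which is a meaningfully smaller attack surface than sanitizing after the fact.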
Filesystem servers are classically vulnerable to path traversal attacks, where input like ../../etc/passwd can reach files outside the intended directory. Separately, the npm ecosystem has been dealing with a vulnerability in the qs query-string parsing library; a separate commit on Feb 8 ran npm audit fix to address this.
This isn't the first round of security concerns for MCP. Last week, Issue #3313 reported 3 security findings in the Puppeteer server (browser automation). The pattern suggests that as MCP gains enterprise adoption, security researchers are paying closer attention — which is healthy.
The maintainers have been responsive: the qs vulnerability was patched within days. We expect similar turnaround for these new findings.
The MCP security findings highlight a broader challenge: AI infrastructure is growing faster than security practices can keep up. When a reference implementation has SQL injection in 2026, it's a reminder that even well-intentioned open-source projects need dedicated security investment.
For organizations building on MCP, this is a good moment to ask: what's our security posture around AI tool use? If an agent can call external APIs, run database queries, and access the filesystem, what's the blast radius of a compromised agent?
The good news is that transparency and responsible disclosure are working. These vulnerabilities were found, reported, and will be patched — before (as far as we know) being exploited in the wild. That's the ecosystem working as intended.