StructuredPrefill: prefill, but better
https://github.com/mia13165/StructuredPrefill
WHAT ARE STRUCTURED OUTPUTS?
Structured outputs are a response format where the model is forced to reply as valid JSON that matches a JSON Schema you provide.
Normally, this is used for "app stuff":
- extraction (turn messy text into clean fields)
- classification (pick labels / enums)
- tool inputs (guaranteed types)
StructuredPrefill repurposes this for roleplay/control by putting the actual assistant text into a schema field (ex: "value") and constraining it with a regex pattern, so the model's text must start with your prefill and then continue normally.
Docs:
- OpenAI Structured Outputs
- Google Gemini Structured Output
- OpenRouter Structured Outputs
- Anthropic (Claude) Structured Outputs

ABOUT
StructuredPrefill is a SillyTavern extension that recreates assistant-prefill behavior using Structured Outputs.
Structured Outputs = the model replies as JSON that must match a JSON Schema.
JSON schemas can include a regex pattern, which means we can force the model's output string to start with a specific prefix.
That prefix is your "prefill".
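As a rough sketch (illustrative only; the extension's actual schema may differ in details like the field name), the constraint looks something like this:

```ts
// Illustrative sketch, not StructuredPrefill's literal code.
// A JSON Schema whose regex `pattern` pins the start of the output string.
const prefill = "Sure! Here is the scene:";

// Escape regex metacharacters so the prefill is matched literally.
const escaped = prefill.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

const schema = {
  type: "object",
  properties: {
    value: {
      type: "string",
      // Must begin with the prefill; anything may follow.
      // [\s\S] matches any character without needing a literal newline.
      pattern: `^${escaped}[\\s\\S]*$`,
    },
  },
  required: ["value"],
  additionalProperties: false,
};
```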
TLDR?
- install the extension
- add an assistant-role message at the very bottom (your prefill)
- send a message like normal
- StructuredPrefill auto-activates (when supported) and the reply is forced to begin with your prefill
USE CASE?
Models like Opus 4.6 and many more to come are REMOVING prefill. We can NOT have this, so we have to find an alternative way to get prefill functionality back, in a way that may even be better than regular prefilling.
Also, for models like GPT 5.2 and GPT 5.1 this makes it monumentally easier to jailbreak, as the model genuinely thinks it's writing whatever is in your prefill.
ALL SUPPORTED MODELS: https://openrouter.ai/models?fmt=cards&supported_parameters=structured_outputs
REGULAR PREFILL VS STRUCTUREDPREFILL
Regular prefill
- ST appends an assistant message like: `Here is my response:`
- the model continues from it (when the API/model allows it)
StructuredPrefill
- your assistant prefill message is converted into a schema constraint
- the model is forced to output JSON like:
{ "value": "<your prefill>...<the real reply continues here>" }
- the model generates the prefix itself, so it's "real" output (not an injected assistant message)
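To make the contrast concrete, here is a hedged sketch of the two outgoing request shapes (assuming an OpenAI-style chat-completions API; model names and schema name are illustrative):

```ts
// Regular prefill: a trailing assistant message is injected for the model to continue.
const regularPrefillRequest = {
  model: "some-model",
  messages: [
    { role: "user", content: "Describe the tavern." },
    { role: "assistant", content: "Here is my response:" }, // the injected prefill
  ],
};

// StructuredPrefill: the trailing assistant message is removed and re-expressed
// as a structured-output constraint instead.
const structuredPrefillRequest = {
  model: "some-model",
  messages: [{ role: "user", content: "Describe the tavern." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "structured_prefill", // illustrative name
      strict: true,
      schema: {
        type: "object",
        properties: {
          value: {
            type: "string",
            pattern: "^Here is my response:[\\s\\S]*$",
          },
        },
        required: ["value"],
        additionalProperties: false,
      },
    },
  },
};
```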
HOW IT WORKS (SHORT)
- you add a final assistant message (prefill)
- StructuredPrefill removes it from the outgoing request
- it injects a structured-output schema that requires the returned string to start with that prefill (regex)
- the extension unwraps the JSON back into normal text so the chat looks/streams normally
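A minimal sketch of that unwrap step (assumed logic, not the extension's actual code):

```ts
// The provider returns JSON like {"value": "<prefill><continuation>"}.
// Unwrap it back into plain chat text; fall back to the raw string if the
// JSON is malformed (e.g. an interrupted stream).
function unwrapStructuredReply(raw: string): string {
  try {
    const parsed = JSON.parse(raw) as { value?: unknown };
    return typeof parsed.value === "string" ? parsed.value : raw;
  } catch {
    return raw;
  }
}
```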
EXTENSION SETTINGS
These are the settings you will see in SillyTavern > Extensions > StructuredPrefill.
Enable StructuredPrefill
- turns the extension on/off
- when OFF: nothing is changed
- when ON: it still only activates if the current provider/backend supports OpenAI-style JSON-schema structured outputs (otherwise it behaves like a no-op)
Hide the prefill text in the final message
- display-only: it changes what you see in ST, not what the model is forced to output
- when ON: ST will show only the continuation (the text after your prefill)
- if you want to hide only part of your prefill: put `[[keep]]` inside your prefill
  - everything before the first `[[keep]]` is hidden
  - everything after stays visible
Advanced
Minimum characters after prefix (example: 900)
- this is a hard constraint that prevents "prefix-only" replies
- higher = the model must continue longer after the prefill before it is allowed to stop
- too high can:
- increase token usage / cost
- make the model ramble to satisfy length
- make some providers reject the schema or hit output limits earlier
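One plausible way to encode this (an assumption for illustration; the extension may express the constraint differently) is a quantifier in the same regex pattern:

```ts
// Require at least `minChars` characters after the prefill before the
// string may end, i.e. the prefix followed by at least 900 of any character.
const minChars = 900;
const escapedPrefill = "Here is my response:"; // already regex-safe in this example
const pattern = `^${escapedPrefill}[\\s\\S]{${minChars},}$`;
```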
Newline token (encoded in schema) (example: <NL>)
- some providers reject schemas that contain literal newlines in strict structured outputs
- StructuredPrefill replaces real newlines in your prefill with this token when building the schema
- then it converts the token back into real newlines for display
- pick a token that does not already appear in your prefill text (the default `<NL>` is usually fine)
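A hedged sketch of the swap (function names are illustrative):

```ts
// Replace literal newlines before the prefill goes into the schema pattern,
// then restore them when rendering the reply.
const NL_TOKEN = "<NL>"; // configurable; pick something absent from your prefill

const encodeForSchema = (prefill: string): string =>
  prefill.replaceAll("\n", NL_TOKEN);

const decodeForDisplay = (text: string): string =>
  text.replaceAll(NL_TOKEN, "\n");
```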
SLOTS / STUBS ([[...]])
You can put [[...]] markers inside your prefill. These are not "prompting". These become regex constraints.
Supported slots:
- `[[w:2]]` / `[[words:2]]` -> exactly 2 words here
- `[[w:2-5]]` -> between 2 and 5 words here
- `[[opt:yes|no|maybe]]` -> choose one option
- `[[re:<regex>]]` -> custom regex (no literal newlines; `/.../flags` ok, flags ignored)
- `[[free]]` -> any non-empty text (lazy match)
Use-case: make the model "fill in" parts of your prefill template before it continues with the actual reply.
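For intuition, here is roughly how such slots could compile down to regex fragments (illustrative patterns; the extension's actual ones may differ):

```ts
// Each slot marker becomes a regex fragment spliced into the overall pattern.
const slotPatterns: Record<string, string> = {
  "[[w:2]]":   "\\S+(?:\\s+\\S+){1}",   // exactly 2 whitespace-separated words
  "[[w:2-5]]": "\\S+(?:\\s+\\S+){1,4}", // between 2 and 5 words
  "[[opt:yes|no|maybe]]": "(?:yes|no|maybe)", // exactly one of the options
  "[[free]]":  "[\\s\\S]+?",            // any non-empty text, matched lazily
};
```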
HIDE PREFILL TEXT (OPTIONAL)
If you don't want to see the prefill in the final assistant message, enable the "Hide prefill text" toggle.
Display marker:
`[[keep]]` -> when hide-prefill is enabled: hide everything before the first `[[keep]]`, keep everything after
PREFILL EXAMPLES
Basic
When StructuredPrefill is active, the model's output is constrained to start with your prefill text. If the provider enforces JSON Schema patterns, the model cannot "start somewhere else" on successful completions.
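For example (an illustrative prefill, not one shipped with the extension), adding this as the final assistant message forces every reply to open with it:

```
Understood. Continuing the story exactly where it left off:
```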
Hide prefill + [[keep]] + optional ST variables
- With "Hide the prefill text in the final message" enabled, everything before
[[keep]]is hidden in the displayed message. - If your SillyTavern setup substitutes variables like
{{char}}in prompts, the model will be constrained to start with that substituted value.
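An illustrative prefill using both (hypothetical example):

```
I will answer in character, with no meta commentary.[[keep]]{{char}}:
```

With hide-prefill on, only `{{char}}:` and whatever the model writes after it appear in the chat.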
Template + stubs (gives structure but still lets the model fill the blanks)
The [[...]] stubs are regex constraints, not "instructions". They let you force a shape/template while still giving the model freedom to fill in details.
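For instance (illustrative, using only the slot syntax documented above):

```
*{{char}} [[opt:smiles|frowns|hesitates]] and [[w:2-5]] before answering.*
```

The model has to fill each blank within its constraint, then continues with the normal reply after the template.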
RPG status + continuation (position / emotion / money / weather)
This forces the model to start every reply with a small "status block" (so the scene stays grounded), then it can continue with normal roleplay under [NOW].
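A hedged example of such a prefill (the structure follows this description; the exact wording is illustrative):

```
[STATUS]
Position: [[w:1-4]]
Emotion: [[opt:calm|tense|afraid|angry|happy]]
Money: [[re:\d+ gold]]
Weather: [[w:1-3]]
[NOW]
```

Remember that literal newlines in the prefill are handled via the newline token setting described above.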
Jailbreak prompt inside prefill:
As you can see, you can include instructions like "keep it to 6 paragraphs" and the AI will follow them. You can also include your main prompt/jailbreak DIRECTLY and the AI will repeat it VERBATIM in the response.
COMPATIBILITY
StructuredPrefill only works on providers/backends that support OpenAI-style JSON Schema structured outputs for chat completions.
If your provider doesn't support that format, StructuredPrefill is a no-op (it won't break your prompt - it just won't activate).
LIMITATIONS
- Extension-only: StructuredPrefill cannot change how SillyTavern's server talks to providers. It can only modify the outgoing request that ST already supports.
- Claude structured outputs: Anthropic supports JSON-schema outputs, but the request shape is different (`output_config.format`) and SillyTavern's current chat-completions path does not expose a compatible hook to extensions. This means StructuredPrefill cannot enable "real Claude structured outputs" without Cohee updating SillyTavern's source code. Docs: Anthropic Structured Outputs
- Provider support varies: some "OpenAI-compatible" providers accept `json_schema` but do not actually enforce regex `pattern` constraints reliably. In those cases StructuredPrefill may partially work or behave like a no-op.
- Regex is not a full engine: JSON-schema regex support differs by provider. Keep slot patterns simple.
- Very large prefills: the schema pattern can become huge. Some providers reject big schemas, or performance/latency gets worse.
- Safety-first models: some models/providers will refuse, truncate, or return a refusal-style response even when structured outputs are requested. If you push too hard against safety, you may see strange failures or "garbage" output; don't rely on structured outputs to override safety.
- Streaming edge cases: if generation is interrupted mid-stream, you might briefly see raw JSON or partially formatted output depending on the provider + ST streaming behavior.