Phinlorian's Proxy Guide

You want to use a proxy for Janitor AI or other AI websites that allow it. Here is how.

IF YOU ARE LOOKING FOR FREE PROXIES/MODELS, THIS GUIDE IS NOT FOR YOU. THIS GUIDE ASSUMES THAT YOU HAVE MONEY THAT YOU ARE WILLING TO SPEND.


What Is a Proxy?

It's very simple. A proxy (or provider) is a middleman between you and an AI model. Here's what happens step by step:

  1. You type a message in Janitor AI (or whatever frontend/website you're using).
  2. Janitor AI sends that message, along with your system prompt, bot personality + scenario, character def, and conversation history, to the proxy service.
  3. The proxy service takes all of that and forwards it to the actual LLM (Large Language Model). For example, DeepSeek, Claude, GPT, GLM, etc.
  4. The model generates a response and sends it back to the proxy.
  5. The proxy forwards that response back to Janitor AI, and it appears in your chat.

That's it. The proxy is just a middleman that routes your messages to the model and brings the response back.
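Most proxies and providers speak the same OpenAI-style "chat completions" format, so the bundle that gets sent on every message looks roughly like this. This is a sketch only: the field names follow that common convention, and the model name and contents are made up for illustration.

```python
# A rough sketch of the payload a frontend assembles on every message.
# Field names follow the common OpenAI-style chat completions convention;
# the model name and example content are made up.
payload = {
    "model": "deepseek-chat",
    "messages": [
        # System prompt, bot personality + scenario, character def
        {"role": "system", "content": "You are Alia, a sarcastic knight. <personality, scenario...>"},
        # As much conversation history as fits in the context window
        {"role": "user", "content": "Are you guarding this gate?"},
        {"role": "assistant", "content": "*sighs* Obviously. Why else would I stand here?"},
        # The message you just typed
        {"role": "user", "content": "Mind if I pass through?"},
    ],
}
# The proxy forwards this bundle to the model, then sends the generated
# reply back to the frontend, where it appears in your chat.
```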

What Is a Provider?

The terms "proxy" and "provider" get used interchangeably, but there's a subtle difference worth understanding.

A provider is any service that gives you API access to one or more AI models. This can mean two things:

  • First-party providers: These are the companies that actually made the model. If you go to DeepSeek's website, create an account, deposit money, and get an API key, you are using DeepSeek as a first-party provider. You're talking directly to them. There's no middleman. The same goes for OpenAI (for GPT models), Anthropic (for Claude), Google (for Gemini), Mistral, and so on.
  • Third-party providers (proxies/aggregators): These are services like OpenRouter, NanoGPT, Chutes, Fireworks AI, and others. They don't make the models themselves. Instead, they have arrangements to host or route to many different models from many different companies, all under one roof. You get one API key, one account, and access to potentially hundreds of models.

Why Use a Third-Party Proxy Instead of Going Direct?

There are several reasons:

  • Convenience. One account, one API key, one billing method for access to dozens or hundreds of models. Instead of making separate accounts on DeepSeek, OpenAI, Anthropic, Google, and Mistral, you just use OpenRouter or NanoGPT and pick whichever model you want.
  • Payment flexibility. Some first-party providers require minimum deposits, have complicated billing, or don't accept certain payment methods. Proxies often have simpler payment options, subscriptions, or low minimums.
  • Model discovery. Proxies let you easily browse and try different models without committing to any single company.
  • Censorship differences. Some proxies run models with fewer filters or different system-level configurations than the first-party provider does. This varies and is not guaranteed.

Why Would You Go Direct Instead?

  • Price. First-party is often cheaper since there's no middleman markup.
  • Reliability. You're not depending on a third party's uptime and infrastructure on top of the model provider's.
  • Speed. One fewer hop in the chain can mean lower latency.

It's all different flavors of the same thing. Whether you use OpenRouter, NanoGPT, or go directly to DeepSeek's API, the model you're talking to is the same model. The difference is just how you get to it.


Choosing a Proxy

There are many proxy providers to choose from. Here are some popular ones:

  • OpenRouter — Large selection of models, well-known, widely supported. Pay as you go. NO SUBSCRIPTION AND DOES NOT TAKE PAYPAL.
  • NanoGPT — Cheap subscription option with pay-as-you-go available. Good value.
  • Lite Router — I've heard this one is really good, but I haven't used it myself.
  • Chutes — Has a subscription, but uptime and speed (tokens per second) can be poor. I would personally recommend avoiding it.
  • Fireworks AI
  • Novita AI
  • DeepInfra

There are many, many more. I personally use NanoGPT. The reason is that NanoGPT has a very cheap subscription with the option of doing pay as you go. The subscription gives you access to a limited but good set of models, and the pricing is hard to beat for casual to moderate use.


What You Need

There are three things you will need when setting up a proxy on Janitor AI:

  1. The Endpoint URL: This is the web address that Janitor AI sends your messages to. Every proxy has its own endpoint.
  2. The API Key: This is a secret key tied to your account that tells the proxy "this person is allowed to use my service and has paid." Think of it like a password. Do not share your API key with anyone.
  3. The Model Name: This is the specific model you want to use (e.g., deepseek-chat, glm-4). Each proxy has its own naming convention for models.

The process of getting these three things is broadly the same across every provider. You make an account, you find your API key (usually in settings or an API section), you pick a model from their model list, and you use their documented endpoint URL.
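To make that concrete, here is a minimal sketch of the request a frontend makes with those three things. Everything here is a placeholder (hypothetical endpoint, key, and model name); swap in your own provider's values:

```python
import requests  # pip install requests

ENDPOINT = "https://example-proxy.com/api/v1/chat/completions"  # 1. the endpoint URL
API_KEY = "sk-your-secret-key-here"                             # 2. your API key (never share it)
MODEL = "deepseek-chat"                                         # 3. the model name

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},  # the key proves your account is allowed
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hi in five words."}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```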

Below I'll walk through the exact steps for OpenRouter and NanoGPT.


Setting Up OpenRouter

  1. Set up an account at OpenRouter.
  2. Go to Settings.
  3. Look for the button that says "API Keys".
  4. Name your key and click Create.
  5. Copy the API key that it gives you, and put it down somewhere safe.
    • Explore the models that OpenRouter provides and find one you want to use. I recommend using DeepSeek V3.2 for starters and light roleplayers. It's very cheap and will last you a long time, but you are sacrificing roleplay quality. If you aren't worried about price too much, I recommend GLM 5. It's pricier but significantly better.
  6. Go to your Janitor AI chat.
  7. Click the three lines (hamburger menu at the top) and then click API Settings.
  8. Click the "+ New" button.
  9. Put in the name, your model, and your key.
  10. For the OpenRouter endpoint URL, paste in:
https://openrouter.ai/api/v1/chat/completions
  11. (Optional) If you want, you can look for a custom prompt or advanced prompt somewhere and put it in the appropriate section.

And you are done!

A note for OpenRouter users: While there are some free models, I believe there is a cap of around 50 requests before you have to put some money in. Once you've deposited something, you should be able to use more.
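Another sidenote: if you're ever unsure of the exact model name OpenRouter expects, you can pull the full list from their public models endpoint. A quick sketch (this particular call doesn't need a key, as far as I know):

```python
import requests

# OpenRouter publishes its model catalog at this endpoint.
models = requests.get("https://openrouter.ai/api/v1/models", timeout=30).json()
for entry in models["data"][:10]:  # show the first ten
    print(entry["id"])  # this ID is exactly what goes in the Model field
```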


Setting Up NanoGPT

  1. Set up an account at NanoGPT.
  2. Either get the subscription or put money in to pay as you go.
    • Sidenote: The subscription has a limited set of models you are allowed to use.
  3. Click the API section.
  4. Under "API Keys", copy your API key. We will use this soon.
  5. The endpoint URL for NanoGPT is:
https://nano-gpt.com/api/v1/chat/completions
  1. Click "Models", go to "Text", and select a model you wish to use.
  2. Go to you Janitor AI chat.
  3. Click the three lines (hamburger menu at the top) and then click API Settings.
  4. Click the "+ New" button.
  5. Put in the name, your model, and your key.

And you are done!


Understanding Model Settings and Parameters

Now that you have your proxy set up, you'll see a bunch of settings and sliders in your API configuration. These control how the model generates its responses. Understanding them will help you fine-tune the output to your liking.

What Are Tokens?

Before anything else, you need to understand tokens, because almost every setting revolves around them.

A token is the basic unit of text that an AI model reads and writes. Models don't see words the way you do. They break text down into tokens, which are chunks of characters. Roughly speaking:

  • 1 token ≈ 3–4 characters of English text
  • 1 token ≈ ¾ of a word
  • 100 tokens ≈ 75 words
  • 1,000 tokens ≈ 750 words

For example, the word "hamburger" might get split into three tokens: ham, bur, ger. The word "the" is typically one token. A short sentence like "The cat sat on the mat" is about 6–7 tokens.

This matters because you are billed per token. When a proxy says a model costs X per million input tokens and Y per million output tokens, this is what they mean. Every message you send costs input tokens, and every response the model writes costs output tokens.
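To make the billing concrete, here's the arithmetic with made-up prices (check your provider's model page for the real numbers):

```python
# Hypothetical prices -- check your provider's model page for real ones.
INPUT_PRICE = 0.30   # dollars per million input tokens
OUTPUT_PRICE = 1.20  # dollars per million output tokens

# One message: your whole context goes in, one reply comes out.
input_tokens = 8_000   # roughly 6,000 words of prompt + history
output_tokens = 400    # roughly 300 words of reply

cost = (input_tokens / 1_000_000) * INPUT_PRICE + (output_tokens / 1_000_000) * OUTPUT_PRICE
print(f"${cost:.5f} per message")  # $0.00288 here -- fractions of a cent
```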

What Is Context Size (Context Window)?

The context window is the total number of tokens the model can "see" at one time. Think of it as the model's short-term memory. The context window includes everything:

  • Your system prompt / bot personality + scenario / character card / instructions
  • The entire conversation history that gets sent
  • The model's response that it's currently generating

Different models have different context sizes:

  • Some models have 8K tokens (roughly 6,000 words)
  • Many modern models have 128K tokens (roughly 96,000 words)
  • Some go up to 200K or even 1M tokens

Here's the critical thing to understand: the model has no memory outside of the context window. Every single time you send a message, Janitor AI bundles up your system prompt, the bot personality and scenario, your character card, and as much of the conversation history as it can fit into the context window, and sends it all to the model. The model reads the whole thing from scratch every time and generates a response.

How the Context Window Fills Up and What Happens

As your conversation gets longer, it takes up more and more tokens. Eventually, the conversation history will be too large to fit inside the context window along with your system prompt and character card.

When this happens, the oldest messages get dropped. They are simply cut off from the beginning of the conversation. The model literally cannot see them anymore. They are gone from its perspective. This is why, in very long conversations, the AI might "forget" things that happened early on — those messages fell out of the context window.
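Here's a toy sketch of the kind of trimming a frontend has to do, using the rough 4-characters-per-token estimate from earlier. Real frontends use a proper tokenizer, and the exact logic Janitor AI uses is internal to the site:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: 1 token is about 4 characters of English."""
    return max(1, len(text) // 4)

def fit_to_context(system_prompt: str, history: list[str], budget: int = 128_000) -> list[str]:
    """Keep the newest messages that fit; drop the oldest ones."""
    used = estimate_tokens(system_prompt)  # the system prompt always goes in
    kept = []
    for message in reversed(history):      # walk from newest to oldest
        cost = estimate_tokens(message)
        if used + cost > budget:
            break                          # this message and everything older is dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))            # oldest surviving message first
```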

This is not a flaw in any particular model. It is a fundamental limitation of how all current LLMs work. The context window is a hard boundary.

A larger context window means the model can "remember" more of the conversation before old messages start getting dropped. This is one reason why models with large context sizes (128K+) are desirable for long roleplay sessions.

However, be aware that most models degrade in quality when the context is very full. Just because a model supports 128K tokens doesn't mean it performs equally well at 128K as it does at 16K. Performance, coherence, and instruction-following tend to decline as you approach the upper limits of the context window.

Max Tokens (Max Output Tokens)

This setting controls the maximum number of tokens the model is allowed to generate in a single response. If you set it to 500, the model will stop writing after 500 tokens (~375 words) even if it wasn't done. Setting it to 0 removes the cap entirely. You will almost always want to keep this setting at 0.

  • Lower values = shorter responses, faster generation, cheaper per message
  • Higher values = the model can write longer responses, slower, costs more

Important: this is a maximum, not a target. Setting max tokens to 1000 doesn't mean the model will write 1000 tokens. It might naturally finish its response in 800 tokens. The setting just determines where the hard cutoff is.

If you find the model's responses are getting cut off mid-sentence, set this value to 0.


Temperature

Temperature controls how random or creative the model's word choices are.

When a model generates text, it predicts the next token by calculating a probability for every possible token. Temperature adjusts how those probabilities are used:

  • Temperature = 0 — The model almost always picks the single most probable token. Output is very deterministic, repetitive, and "safe." If you ran the same prompt twice, you'd get nearly identical responses.
  • Temperature around 0.8–1.0 — A balanced range. The model is more willing to pick less obvious tokens, leading to more varied and natural-sounding output. This is where most people land for roleplay.
  • Temperature above 1.0 (1.2–2.0) — The model gets increasingly wild and unpredictable. It starts picking low-probability tokens more often. Output can become creative and surprising, but also incoherent, nonsensical, or full of gibberish if pushed too high.

For roleplay, most people find a temperature between 0.8 and 1.2 works best. Start around 0.8–1.0 and adjust from there.
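Under the hood, temperature just rescales the model's raw scores before they become probabilities. A toy sketch of the math (four made-up candidate tokens; a real model ranks tens of thousands):

```python
import math

def apply_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert raw scores into probabilities, scaled by temperature (softmax)."""
    scaled = [score / temperature for score in logits]
    total = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / total for s in scaled]

scores = [4.0, 3.0, 2.0, 1.0]  # made-up scores for four candidate tokens
print(apply_temperature(scores, 0.2))  # ~[0.99, 0.01, ...] near-deterministic
print(apply_temperature(scores, 1.0))  # ~[0.64, 0.24, 0.09, 0.03] balanced
print(apply_temperature(scores, 2.0))  # ~[0.46, 0.28, 0.17, 0.10] flatter = wilder picks
```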

Top P (Nucleus Sampling). ADVANCED ONLY

Top P is another way to control randomness, but it works differently from temperature.

When the model predicts the next token, it has a ranked list of candidates with probabilities. Top P says: "Only consider tokens whose cumulative probability adds up to P."

  • Top P = 0 — Default / off.
  • Top P = 0.9 — Only consider the smallest set of tokens that together account for 90% of the total probability. The bottom 10% (the extremely unlikely tokens) are excluded.
  • Top P = 1.0 — Consider all possible tokens. No filtering.

Top K. ADVANCED ONLY

Top K is the simplest form of randomness filtering. Like Top P, it's just a filter over the model's ranked candidate list; a sketch of both follows the list below.

  • Top K = 0 — Default / off. Don't filter at all.
  • Top K = 50 — When choosing the next token, only consider the top 50 most likely tokens. Ignore everything else.
  • Top K = 1 — Only consider the single most likely token (functionally the same as temperature = 0).
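A toy sketch of both filters, since they work on the same ranked candidate list (the probabilities are made up):

```python
def top_k_filter(candidates: dict[str, float], k: int) -> dict[str, float]:
    """Keep only the k most likely tokens."""
    ranked = sorted(candidates.items(), key=lambda item: item[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(candidates: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of tokens whose probabilities add up to at least p."""
    ranked = sorted(candidates.items(), key=lambda item: item[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    return kept

probs = {"the": 0.50, "a": 0.30, "banana": 0.15, "xylophone": 0.05}  # made-up
print(top_k_filter(probs, 2))    # {'the': 0.5, 'a': 0.3}
print(top_p_filter(probs, 0.9))  # keeps 'the', 'a', 'banana'; 'xylophone' is cut
```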

Repetition Penalty. ADVANCED ONLY

Repetition penalty penalizes the model for repeating tokens it has already used. When the model is choosing the next token, the repetition penalty looks at what tokens have already appeared in the output (and sometimes the input). For any token that has appeared before, its probability gets reduced.

  • Rep. Penalty = 0 — Default / off. The model can repeat freely.
  • Rep. Penalty around 1.0–1.2 — Mild penalty. Reduces the tendency to repeat the same words and phrases.
  • Rep. Penalty above 1.3 — Strong penalty. The model will actively avoid reusing words, which can make output sound strained and unnatural as it scrambles for synonyms.

Frequency Penalty. ADVANCED ONLY

Frequency penalty is closely related to repetition penalty, but with an important difference: it scales based on how many times a token has been used.

  • Repetition penalty applies a flat penalty to any token that has appeared at all, regardless of whether it appeared once or fifty times.
  • Frequency penalty applies a penalty that increases the more times a token has been used. A word used once gets a small penalty. A word used ten times gets a much larger penalty.
  • Freq. Penalty = 0 — Default / off.
  • Freq. Penalty = 0.1–0.5 — Light to moderate. Gently discourages overuse of the same words.
  • Freq. Penalty above 1.0 — Aggressive. Strongly punishes repeated words.
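The flat-versus-scaling difference is easiest to see in a toy sketch. This follows the descriptions above; the exact formulas vary between backends, so treat it as the shape of the idea, not any provider's actual implementation:

```python
from collections import Counter

def apply_penalties(logits: dict[str, float], generated: list[str],
                    rep_penalty: float = 0.0, freq_penalty: float = 0.0) -> dict[str, float]:
    """Lower the scores of tokens the model has already used."""
    counts = Counter(generated)
    adjusted = dict(logits)
    for token, count in counts.items():
        if token in adjusted:
            adjusted[token] -= rep_penalty           # flat: same hit whether used once or 50 times
            adjusted[token] -= freq_penalty * count  # scaled: ten uses = ten times the hit
    return adjusted

scores = {"shivers": 2.0, "warmth": 1.8}
history = ["shivers"] * 5  # the model has already written "shivers" five times
print(apply_penalties(scores, history, rep_penalty=0.5))   # shivers: 1.5 -- one flat hit
print(apply_penalties(scores, history, freq_penalty=0.5))  # shivers: -0.5 -- hit stacked five times
```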

Leave These Alone Unless You Know What You're Doing

Top K, Top P, Repetition Penalty, and Frequency Penalty all default to 0 (off). I recommend leaving all four of them alone unless you have a specific problem you're trying to solve and you understand what the setting does. Temperature and Max Tokens are the two settings most people will actually want to adjust. The rest are advanced tuning knobs that are easy to misuse, and bad values will make your output noticeably worse.

Prefill

Prefill is text that gets automatically inserted before every message you send. It's not something the AI writes; it's text that gets prepended to your input before the model ever sees it.

For example, if you set your prefill to "Hello." and then you type "My name is Bob.", the model would actually receive:

Hello. My name is Bob.

The model has no idea the prefill exists as a separate thing. It just sees one combined message.

Prefill is primarily used for jailbreaking. People use it to prepend instructions or phrases that attempt to bypass a model's content filters or safety guardrails. The idea is that by injecting certain text before every message, you can steer the model into behaving differently than it normally would.

I recommend not using prefill. It frequently leads to incoherent, awkward, or broken responses because the model is receiving text that doesn't naturally flow with what you actually typed. The prepended text can confuse the model, conflict with your system prompt, or cause it to misinterpret the context of your message. Unless you have a very specific and well-tested reason to use it, leave it empty.

Forbidden Words

Self-explanatory: forbidden words let you tell the model "never use these specific words or phrases."

If you add a word to the forbidden words list, the model will be prevented from generating that word in its output. When the model would have chosen that token, it is forced to pick a different one instead.
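Mechanically, a ban like this is usually enforced by making the banned tokens impossible to pick before the model samples its next token. A toy sketch of that idea (not Janitor AI's actual implementation):

```python
def ban_tokens(logits: dict[str, float], banned: set[str]) -> dict[str, float]:
    """Give banned tokens an impossibly low score so they can never be chosen."""
    return {token: (float("-inf") if token in banned else score)
            for token, score in logits.items()}

scores = {"ministrations": 3.1, "touch": 2.9, "care": 2.5}
print(ban_tokens(scores, banned={"ministrations"}))
# "ministrations" is now -inf, so the sampler falls back to "touch".
```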

Why Is This Useful?

Every model has certain words and phrases it tends to overuse. This is called "slop": cliché, repetitive language that the model defaults to. Common examples include:

  • "delicate" / "lithe" / "supple"
  • "sends shivers down"
  • "a mix of" / "a mixture of"
  • "the air crackles with"
  • "ozone"
  • "ministrations"
  • "oh god" (when overused)
  • "tears streaming down"
  • "audible pop"

If you notice the model using a particular word or phrase in every single response and it's driving you crazy, you can add it to the forbidden words list and the model will be forced to find an alternative.

Things to Keep in Mind

  • Be specific. Banning common words like "the" or "and" will destroy the output quality because the model literally cannot use them. Only ban words that are genuinely overused and annoying.
  • Banning works at the token level (I think; don't quote me on this). This means banning the word "delicate" might not catch "Delicate" (capitalized) or "delicately", depending on how the tokenizer splits them. You may need to add multiple variations.
  • Don't go overboard. If you ban 200 words, the model has to constantly route around all of them, and the output will start to feel strained and unnatural, similar to cranking repetition penalty too high. A focused list of your biggest annoyances (5–20 words) is much more effective than a massive ban list.

How to Use It Effectively

Start with no forbidden words. Roleplay for a while. When you notice the same word or phrase appearing over and over across multiple responses, add it to the list. This is a reactive tool: use it to solve specific annoyances, not as a preventative measure.

Final Advice on Settings

Don't change everything at once. Start with the defaults, and if something bothers you about the output — it's too repetitive, too random, too short, too boring — adjust one setting at a time so you can tell what's actually making a difference. Small changes go a long way. Moving temperature from 1.0 to 1.05 is a nudge. Moving it from 1.0 to 2.0 is chaos.

Additional

I made this proxy guide for Janitor AI users specifically. If you want to follow me, this is my profile:
https://janitorai.com/profiles/1cfcfd1b-c68f-4d2f-9f4b-56cd3e64182f_profile-of-phinlorian
I make bots occasionally. I think they're pretty alright.
