A Guide to Choosing an LLM Model (ChatGPT, Mars/Mercury, Running Locally)

Before you Read further

Hi, I'm Statuo. This rentry exists because I'm honestly tired of repeating myself in Chub's Discord on 'what model should I use.' This isn't a comprehensive list, but it's meant to be a helpful guide for those of us who just want to get started using some bots and aren't sure which model to use or why.

Long story short, you should still be doing your own research. I am not responsible for you dumping money into something you then do not enjoy.

That being said, this is the general recommended advice I and others spew out regarding model choices.

A Quick Explanation about Model Sizes.

If you've spent any time in a bot Discord, you might have heard people throw around things like "13b," "33b," or "70b" and wondered what that means. Those numbers are model sizes: the number before the "b" is how many billions of parameters the model has, and larger is generally better. The larger the number, the better the model usually is at making logical leaps. This isn't to say a 13b model is bad, far from it; most people running locally use a 13b model and it's more than serviceable. It's just that the bigger the model, the less work you generally have to do when it comes to prompting.
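To put rough numbers on "bigger model, bigger hardware": memory use scales with parameter count. Here's a quick back-of-the-envelope sketch in Python (the bytes-per-parameter figures are ballpark approximations for common quantization formats, not exact values):

```python
# Rough VRAM/RAM estimate: parameters x bytes per parameter.
# Bytes-per-parameter values are approximations, not exact figures.
BYTES_PER_PARAM = {
    "fp16": 2.0,    # full half-precision weights
    "q8":   1.0,    # ~8-bit quantization
    "q4":   0.55,   # ~4-bit quantization (GGUF Q4_K_M is in this ballpark)
}

def rough_memory_gb(billions_of_params: float, fmt: str = "q4") -> float:
    """Very rough memory needed just for the weights, in GB."""
    return billions_of_params * 1e9 * BYTES_PER_PARAM[fmt] / 1024**3

for size in (7, 13, 33, 70):
    print(f"{size}b @ 4-bit: ~{rough_memory_gb(size):.1f} GB of VRAM/RAM")
# 7b ~3.6 GB, 13b ~6.7 GB, 33b ~16.9 GB, 70b ~35.9 GB -- which is why
# a 13b model fits on a mid-range GPU while 70b needs serious hardware.
```

This is also why, as covered in the Running Locally section later, most consumer GPUs top out around 13b.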

A Quick Explanation about Context Size/Tokens.

Tokens - in the simplest terms - are just pieces of words that the model works with. Context size refers to the maximum number of tokens a model can process at any given time. Trying to go past the context size results in errors, so if you're in a particularly long-running chat, the oldest posts get dropped from the chat to make room for new ones. For example, Mars/Mercury both have an 8k context size, meaning they can handle 8192 tokens at once.
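If you want to see what tokens actually look like, OpenAI's tiktoken library will count them for you, and the "oldest posts fall out" behavior can be sketched in a few lines (this is a simplification; real frontends like SillyTavern trim more carefully, keeping the character definition pinned):

```python
import tiktoken  # pip install tiktoken; token counts vary slightly per model

enc = tiktoken.get_encoding("cl100k_base")
print(enc.encode("Hello there!"))  # common words are ~1 token, rare words several

def fit_to_context(messages: list[str], max_tokens: int = 8192) -> list[str]:
    """Drop the OLDEST messages until the chat fits in the context window."""
    kept = list(messages)
    while kept and sum(len(enc.encode(m)) for m in kept) > max_tokens:
        kept.pop(0)  # the oldest message falls out of the model's "memory"
    return kept
```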

ChatGPT (GPT)

Starting with the big dog in the house. Let's get this out of the way: ChatGPT offers the best quality currently on the market. As of the time of writing, its owning company has stated that it costs around $700k a day just to keep ChatGPT running. At a reported 220b parameters, it is incomparable to other models currently on the market. Anyone trying to tell you that their model can 'beat ChatGPT' is absolutely lying to your face or doesn't know what they're talking about.

That being said, using ChatGPT comes with negatives. So let's break this down:

Pros

  • Best quality in the market.
  • Has some of the largest Context Sizes available - up to 128k
  • The quality is high enough that, per users, once you've tried the high-level GPT models such as GPT-4 or GPT-3.5 Turbo, everything else feels like a downgrade.
  • You're basically guaranteed not to run into compatibility issues with cards that were designed for GPT.

Cons

  • It is not NSFW by default. So if you want NSFW content such as sexual scenes, violence, etc., you'll need a proper jailbreak.
  • Since the system changes so often, your jailbreak may need to change from time to time, necessitating maintenance and frustration.
  • You can get banned for generating NSFW content. You get a refund, but it means setting up a new account. "Trial-scumming," where you set up a large number of accounts using paid-for phone numbers, is something you'd have to learn how to do.
  • The flip side of the quality: once you're used to GPT-4 or GPT-3.5 Turbo, downgrading to other models is a hard adjustment if your circumstances change.
  • ChatGPT usage can be very expensive, with users reporting that they sometimes spend hundreds of dollars a month on it. Your own usage may vary, but it's worth mentioning that you can end up spending more money than you want to if you're not careful.
  • You have a token-per-day limit, meaning that once you hit the maximum number of generations, you can't generate any more unless you upgrade your plan.
  • ChatGPT does not natively cater to roleplayers and requires a good System/Main Prompt to get it to do what you want. But that's generally a one-time setup (see the sketch below).
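For illustration, here's roughly what that one-time System/Main Prompt setup looks like if you're hitting the API directly with OpenAI's Python client (the prompt wording is purely an example, not an official or recommended preset):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# The "system" message is the Main/System Prompt: it frames every reply.
# This wording is purely illustrative -- tune it to your own cards.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You are roleplaying as the character described below. "
            "Stay in character, write in third person, and never speak for the user."
        )},
        {"role": "user", "content": "*waves* Hi! What are you up to?"},
    ],
)
print(response.choices[0].message.content)
```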

Final Thoughts

ChatGPT offers the best quality on the market, but its expense means it's something you only want to use sparingly, or if you have the funds to use it continually. Using it can also make switching to other models much more difficult if your financial situation changes: if you can't afford ChatGPT anymore, you may be stuck until other models catch up to ChatGPT's quality, which has no timeframe and may well be years down the line.
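To see how those "hundreds of dollars a month" happen, here's some back-of-the-envelope math. The per-token prices are what OpenAI listed for the 8k GPT-4 API around the time of writing; treat them and the usage numbers as assumptions and check current rates:

```python
# Assumed GPT-4 (8k) API pricing circa late 2023 -- check current rates.
PRICE_IN  = 0.03 / 1000   # $ per input token
PRICE_OUT = 0.06 / 1000   # $ per output token

# A roleplay reply re-sends most of the context window on every turn.
tokens_in_per_msg  = 6000   # card + chat history sent with each message
tokens_out_per_msg = 300    # the bot's reply
msgs_per_day       = 100

cost_per_msg = tokens_in_per_msg * PRICE_IN + tokens_out_per_msg * PRICE_OUT
print(f"~${cost_per_msg:.2f}/message, ~${cost_per_msg * msgs_per_day * 30:.0f}/month")
# ~$0.20/message, ~$594/month -- re-sending the whole context adds up fast.
```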

Banwaves occur frequently as well, meaning you could have an account one day and none the next. Filter changes mean you could end up with a jailbreak that doesn't work, and instead of being able to just sit down and chat, you have to spend the day fighting the filters instead. This can be frustrating, especially since ChatGPT otherwise demands so little technical knowledge that most users never learn how the models process information, which is exactly what you need to know to get around the filters.

Some of you might be wondering why ChatGPT would do banwaves for NSFW content even if you're paying for it. Short answer: Investors. No matter how much money you put in, it's not worth it to scare off investors who don't want to be associated with NSFW content.

Mars and Mercury

Mars and Mercury are hosted on Chub Venus. Both are uncensored, have an 8k context size as of their most recent update, and have no limit on token generation. The biggest draw is that they come uncensored, meaning you don't have to mess with any jailbreaks. Lore (the owner of Chub Venus) has also said that they can't access any chatlogs you don't share publicly. Mars is a 70b model based on a Llama v2 finetune, while Mercury is a 13b model that uses MythoMax. Both are geared towards users who engage in written roleplays.

Pros

  • Comes uncensored, meaning you don't need a jailbreak to engage in NSFW content such as violence, sexual situations, etc.
  • 8k context is more than enough for most roleplays.
  • Unlimited token usage means you don't have to worry about pay-as-you-go billing, which you do have to worry about on other sites.
  • Subscriptions are relatively cheap for the value. $5/month for Mercury (13b MythoMax) and $20/month for Mars (70b Llama v2 finetune) is very good value if you chat with bots a lot.
  • Two-person dev team that engages with the Discord frequently.
  • The devs have reaffirmed their stance that they will continue to host uncensored content.
  • Users have stated that the quality of Mars/Mercury's writing can easily mirror that of ChatGPT. To be clear, that's not about its logic capacity, but its ability to output good-quality writing for roleplayers.
  • Since Mars/Mercury run on Chub Venus, you can chat with your bots on your phone on the go, which reportedly works very well.
  • Mars has two models to choose from - Asha and Mixtral - which gives you some variety and can keep things from getting stale.

Cons

  • A two-person dev team means that if one is sick or away, bugs or issues can go unresolved until they're available.
  • Chub Venus, and Mars/Mercury as a result, are beholden to how the server hardware is feeling that day. If the service is down or overloaded, you may not be able to access the service you paid for.
  • You do have to learn how to get the most out of the models. There are guides in the Chub Discord that make this easier, but it does mean you'll have to spend time reading.
  • Cards designed specifically for GPT may well not perform well on these models, limiting some options for card usage, especially if you're using Mercury.
  • The only two subscription options currently are PayPal and crypto. Depending on your country, one or the other may not be available.
  • Servers can hit subscriber capacity. If too many people are subbed, you'll have to wait until someone else's sub drops off.
  • You can only use whatever models the devs decide to put out, which may get 'same-y' over a long period of time.
  • Compatibility issues with cards designed for GPT-4, especially ones with heavy stat tracking.

So... Should I get Mercury or Mars?

We get asked this question a lot and my answer is always the same:

How much work do you want to put in?

You're going to have to put in work no matter what, and the payoff is worth it. But Mercury requires more wrangling than Mars does: you'll have to read guides, use the settings (at least the default settings provided in the server FAQs, if nothing else), and get used to some jank. With Mars, there's less jank to deal with. But some people still prefer Mercury because of its writing style.
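For a taste of what "using the settings" means in practice, here are the kinds of sampler knobs every frontend exposes. These values are a generic illustrative starting point, not the official presets from the server FAQs:

```python
# Illustrative sampler settings -- every frontend exposes some variant of these.
# Values are a generic starting point, NOT the official server FAQ presets.
sampler_settings = {
    "temperature": 0.9,         # higher = more creative, lower = more predictable
    "top_p": 0.95,              # nucleus sampling: trim the unlikely-token tail
    "top_k": 40,                # only sample from the 40 most likely tokens
    "repetition_penalty": 1.1,  # discourage the model from looping phrases
    "max_new_tokens": 300,      # cap on reply length
}
```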

Remember, if you sub to Mercury you can't upgrade to Mars until your sub runs out. But if you sub to Mars you get both Mars and Mercury included.

Final Thoughts

Mars/Mercury offer great value if you lack the hardware to run a 13b model yourself, or want better-quality output from something like Mars. The relatively cheap subscription (other sites charge $15/month for 13b models, whereas Chub Venus charges $5/month for MythoMax 13b) along with unlimited token generation is top of its class currently. If you're a heavy chatbot user, this is where I'd start.

That being said, the site's wonkiness due to site upgrades, constant development updates, and issues arising from having only a two-person dev team can result in downtime. That can suck and usually means you're just SOL for the day. The devs do generally know what they're doing, but Chub Venus has also been known to be the target of DDoS attacks in the past.

Having to learn how to work around Mars/Mercury's quirks is also a necessity. Since they're not ChatGPT and can't cover for your mistakes as easily, you have to learn how to respond to the bot, what settings to use, and what those settings do. It's a one-time learning cost, but it can take a few hours to read through the available docs and actually get up to speed. If you're not sure how to change the display settings on your PC without calling tech support, this may be prohibitively difficult for you.

OpenRouter

Oof. OpenRouter. It's basically everything we said about ChatGPT with one big difference: it has a free model, a 7b Zephyr model. So I'm not going to rehash everything I said about ChatGPT here. But the 7b model is - of course - a 7b model. Models that size tend to struggle with logical leaps, and you will find yourself fighting it.
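One genuinely convenient thing about OpenRouter is that it speaks the same API format as OpenAI, so trying the free model costs a few lines of code. The model ID below is how I recall the free Zephyr listing being named and may have changed, so check openrouter.ai/models:

```python
from openai import OpenAI  # OpenRouter is OpenAI-API-compatible

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key
)

# Model ID as I remember the free Zephyr listing; verify before relying on it.
response = client.chat.completions.create(
    model="huggingfaceh4/zephyr-7b-beta:free",
    messages=[{"role": "user", "content": "Describe a cozy tavern in two sentences."}],
)
print(response.choices[0].message.content)
```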

Kobold Horde

Kobold Horde is a community-run project that lets people use models of (generally) up to 33b. The way it works is covered in their docs, but at a baseline, Kobold Horde lets other users like yourself host a model on their own machine and then post it up for others to use. It's free, and people generally host uncensored models. The Kobold devs have stated that using the service does not expose things such as your Kobold Horde user ID or your IP address to the host machine.
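To make the "post a job, a volunteer picks it up" flow concrete, here's a rough sketch against the Horde's public REST API. The endpoint paths, payload fields, and anonymous key are from my reading of the Horde docs and may have drifted, so treat every detail here as an assumption:

```python
import time
import requests  # pip install requests

API = "https://aihorde.net/api/v2"       # Horde base URL as of this writing
HEADERS = {"apikey": "0000000000"}       # documented anonymous key (lowest priority)

# Submit a text generation job to whatever volunteer workers are online.
job = requests.post(f"{API}/generate/text/async", headers=HEADERS, json={
    "prompt": "You are a friendly innkeeper. A traveler walks in.\nInnkeeper:",
    "params": {"max_length": 120},
}).json()

# Poll until a worker picks up and finishes the job.
while True:
    status = requests.get(f"{API}/generate/text/status/{job['id']}").json()
    if status.get("done"):
        print(status["generations"][0]["text"])
        break
    time.sleep(3)  # popular models can queue for a while
```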

Pros

  • Free. Hard to beat the price of free.
  • Not hosted using your own computer.
  • A wide variety of models, usually up to about 33b.

Cons

  • You ARE sending your data to someone else's server/computer, meaning anything you send can technically be accessed by that user. While this may not include your IP address, there is always a risk in having someone you don't know process your prompts.
  • Popular models can be flooded, and with a lack of extra workers, prompts can sometimes take a long time to actually process.
  • You can only use whatever models the workers decide to put up, which may get 'same-y' over a long period of time.
  • You need SillyTavern or Kobold's own frontend, KoboldAI, to actually use the Horde.
  • Compatibility issues with cards designed for GPT-4, especially ones with heavy stat tracking.

Final Thoughts

Personally, I can't recommend using Kobold Horde, if only because I don't like the idea of sending my prompts to another person's computer to process, even if the Kobold devs have stated that hosts can't see things like my IP address. However, if you're desperately jonesing for a fix, Kobold Horde is free, and if you're fine with the issues stated above, it does let you gen stuff for free.

NovelAI

NovelAI is primarily an AI story-writing service. It has text-adventure features, and people use it for roleplays as well. It requires a subscription starting at $10 a month for 3k context, so most people will want to opt for the $15 tier with 6k context; there's also a $25 tier with 8k context. Its writing quality is said to be good, but it does require you to set up SillyTavern to use it with bot cards, and you'll need to mess with the settings for optimal usage. People report being exceedingly surprised by NovelAI's ability to generate stories and roleplays while staying true to bot cards' personalities. It is, however, a 13b model at the end of the day, meaning for $25 you're getting a model the same size as the Mercury you'd get for $5.

Pros

  • Great writing quality due to the fact that it's built on writing stories to begin with.
  • A wide variety of responses allows roleplays to feel fresh, even if using the same card repeatedly.
  • Users report being extremely happy with roleplay outputs if you're the kind of person who likes to tell a story with your roleplays.
  • They're developing a model specifically for chatbots.
  • Comes with its own image generation service. By subbing, you get a large number of credits to use it as well, letting you generate images that include NSFW content. NovelAI's Anime v3 model is said to have some really great outputs that don't require a lot of user knowledge to get into. If you don't have the hardware to generate images on your own, this can be a huge benefit, especially if you don't want to learn a lot about image generation services in general.
  • Unlimited Text Generations once you sub.

Cons

  • It's a 13b model, meaning you get all the quirks of a 13b model for three times (or more) what you'd pay for Mercury on Chub Venus.
  • Requires a specific setup using SillyTavern to properly work, which can be intimidating for users who don't want to put in the effort to experiment.
  • Documentation on setting it up for chatbots is surprisingly sparse, meaning you will have to rely on the community to help you if you have issues.
  • Compatibility issues with cards designed for GPT4, especially ones with heavy stat tracking.
  • It's pretty expensive if you're just looking for short-term stories that only last a couple dozen messages.
  • It's a website, so if NovelAI has issues you can't use the service.

Final Thoughts

NovelAI is a pretty good service if you're willing to put in the time to set it up and learn it. Users report being happy with its ability to roleplay despite not being designed for it, but its cost compared to other services can be off-putting. Still, the image generation service is a great bonus incentive.

Running Locally

Running locally: the final boss of PC development. As someone who runs locally myself, I can say that it works, but it does require a decent mid-to-high-end PC. Nvidia GPUs (such as a 3060) work best, while AMD GPUs suffer and lag behind until AMD's support for running AI on their hardware catches up. The biggest boon is that running locally is always free, since you're using your own hardware. You're immune to the banwaves that other services can run into, and you don't have to worry about sites going down because - again - this all depends on your own hardware.

Running locally also means you can test out new models the moment they release, rather than being limited to whatever models other people decide to host.
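For a rough idea of what running locally boils down to once everything is installed, here's a minimal sketch using the llama-cpp-python bindings with a quantized GGUF model. The file path is a placeholder; any 13b GGUF you've downloaded works the same way:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path is a placeholder -- point it at any quantized GGUF you've downloaded.
llm = Llama(
    model_path="./models/mythomax-l2-13b.Q4_K_M.gguf",
    n_ctx=4096,       # context window to run with
    n_gpu_layers=35,  # layers to offload to the GPU; 0 = pure CPU (slow)
)

out = llm(
    "You are a grizzled sea captain. A sailor asks about the storm ahead.\nCaptain:",
    max_tokens=200,
    temperature=0.9,
    stop=["Sailor:"],  # stop before the model starts speaking for the user
)
print(out["choices"][0]["text"])
```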

Pros

  • You can host any number of models you like, switching things up, adding LoRAs, and customizing your outputs to a higher degree.
  • Immune to site outages and banwaves.
  • Free is a hard price to beat.
  • Since your output stays local, no one is going to see whatever degeneracy you get up to, so long as they don't get access to your computer.
  • Most local models come uncensored. No jailbreak required.

Cons

  • You're limited by your hardware. If your hardware isn't up to running at least a 7b model, it's not going to be worth it, and especially if you're coming off ChatGPT it will feel like a huge step back.
  • AMD GPU users are left out in the cold until AMD comes up with a solution for running AI on their GPUs, and until the devs of frontends like SillyTavern and backends like Oobabooga and KoboldAI support it.
  • You're pretty much always going to be running quantized models, which leads to some degradation in quality. Usually not enough to matter, but worth noting.
  • Most users will be able to run a 13b model at most. Those with an Nvidia 3090/4090 can hope to run 33b models.
  • You have to set everything up yourself. If you're not tech-savvy, this can be a time sink. That said, SillyTavern and Oobabooga/KoboldCPP all have easy installers and setup guides.
  • Compatibility issues with cards designed for GPT-4, especially ones with heavy stat tracking.
  • Running a local model has been compared to putting a mid-to-high-tier gaming load on your GPU, with the heat and power draw that implies.

Final Thoughts

I'm pretty biased here since I run 13b models locally myself. But running locally is the way to go if you're at all worried about privacy. It being free, plus your ability to experiment, is unparalleled compared to the other options on this list. But if you can't stand technical stuff or just don't have the hardware to run it, this is going to be off-limits to you. Others have reported using things like KoboldCPP to run higher-tier models (up to 70b), but those run purely on your CPU and tend to take a very long time to process responses, usually upwards of 10 minutes the higher you go. Still, complete freedom from site issues is a huge bonus. You'll never get on your computer one day and find yourself unable to use chatbots.

Pub: 01 Dec 2023 15:51 UTC
Last edit: 28 Jan 2024 14:21 UTC