KoboldAI 1.19 has now been officially released. This is the last update before our new UI work makes an appearance. With this release you can enjoy the following improvements.

Brand new API by VE Forbryderne

With this brand new API you can now use the power of KoboldAI within your own software. It's a JSON-based REST API, accessible by adding /api to any KoboldAI-generated URL (the same applies to the documentation).

Not only can you use this as the generator behind your own projects, you can also use it to automate things like world info management through the various options available.
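To give you an idea, here is a minimal sketch of a generation request in Python. It assumes a locally hosted instance and the commonly used /api/v1/generate route; check the documentation served at /api on your own KoboldAI URL for the exact routes and fields.

```python
import requests

# Minimal sketch: a locally hosted KoboldAI instance on port 5000 is assumed.
# Consult the documentation at /api on your own URL for the authoritative API.
BASE_URL = "http://127.0.0.1:5000/api/v1"

payload = {
    "prompt": "You enter the dark cave and",
    "max_length": 80,  # number of tokens to generate
}

response = requests.post(f"{BASE_URL}/generate", json=payload)
response.raise_for_status()
print(response.json()["results"][0]["text"])
```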

We have already seen some cool community projects come out of this, like the pay-what-you-want card game Hypnagonia using it to generate new dreams for the players, the KoboldAI Horde that allows you to share your KoboldAI model with other people, and the Adventure Bot that AIPD uses for his streams.

This also marks the end of the old server API from the beginning of KoboldAI that was used to hook a local KoboldAI up to the old server Colabs. Instead, you can now use this new API to do the same thing by connecting KoboldAI to KoboldAI.

KoboldAI Horde by DB0 and MrSeeker

One of the first projects to be built on our brand new API is the KoboldAI Horde, a system where you can share your KoboldAI instances with other people. The Horde has its own API and allows Kobold (and other programs) to be connected to random KoboldAI servers hosted by the community. It's an exciting alternative to Google Colab and can power things like chatbots, your own local KoboldAI and more!

For more information and instructions on how to both use and join the Horde, you can visit https://koboldai.net. With this release you no longer need United to use it.

New AI model menu and File Management by Ebolam (Not on Colab)

You now have the freedom to change the AI model at any time with this brand new AI model menu. Instead of having to pick a model in the console, you can now load one at any time, even if you are not near your device (with remote mode).

Both the Softprompts and Userscripts menus now also have additional file management dialogs, so even if you do not have direct access to the machine (for example because you are using a Docker instance hosted elsewhere), you can manage the relevant KoboldAI files without having to leave KoboldAI.

Probability Viewer by one-some

We already had the Logit Viewer userscript to show you the chances of each token in your generations; this is now built into the UI. Once turned on, you see an extra panel with all the statistics.

Token Streaming (GPU/CPU only) by one-some

With Token Streaming enabled you can now get a real-time view of what the AI is generating. Don't like where it is going? You can abort the generation early so you do not have to wait for the full generation to complete. This is especially helpful for those of you generating large amounts of text at a time, particularly on slower devices such as the CPU.

Full Determinism and Static Seeds by VE Forbryderne

Let's face it, the AI is random, so it can be very hard to actually understand the changes you are making. You never quite know if you had a streak of luck or if the changes you made had an impact.

Now with Full Determinism you will get the same generations for the same actions, so you can easily see what impact the changes you make are having. This helps a lot when trying to find the right settings for a model, and also helps us during testing.

There is also a hidden feature to define a static seed: open the .settings file for the specific model while KoboldAI (or the model in question) is not running and change the seed from null to a number. If you have ever had moments where the AI was perfectly coherent one day and suddenly worse the next, that might be because of the seed difference. With this option you can pin it to a seed you know works for you. This is especially useful for us when testing, because developers can now try out the same seed and settings across different versions of KoboldAI to make sure that the AI quality either improved or stayed the same. This makes it much easier for us to diagnose reports related to AI coherency.
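As a rough sketch of what that edit looks like (the file name below is just an example, and you should only do this while KoboldAI is closed):

```python
import json

# Example only: the file name depends on the model you use, so adjust the
# path to your own KoboldAI install. Edit this while KoboldAI is not running.
path = "settings/your_model.settings"  # hypothetical file name

with open(path) as f:
    settings = json.load(f)

settings["seed"] = 1234    # replace null with any fixed integer you like
# settings["seed"] = None  # set it back to null for random behaviour

with open(path, "w") as f:
    json.dump(settings, f, indent=4)
```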

Show Field Budget by one-some

Show Field Budget turns on a token counter for each input, which allows you to see how many tokens you have left before overloading the AI. This does rely on a lot of communication between the client and the server, so it's not recommended to use this feature on devices with lower browser performance.

Automatic Spacing by Henk717

This is an overhaul of the Automatic Sentence Spacing we had prior: you no longer have to manually manage spaces between your words in Novel mode. The old version of KoboldAI would often fuse words together, which could make new submissions frustrating to do. In this new mode you automatically get the relevant spaces, even if your text is not at the end of a sentence.

Movable Repetition Penalty by VE Forbryderne

In the last update we introduced the Sampler options; Repetition Penalty has now been added to them, so you can adjust where in the chain the repetition penalty is applied.
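To illustrate why the position matters, here is a generic sketch (not KoboldAI's actual code) of samplers applied in a configurable order; the function names and values are made up for the example.

```python
import numpy as np

def repetition_penalty(logits, generated_ids, penalty=1.2):
    # Penalise tokens that already appeared in the output.
    out = logits.copy()
    for token in set(generated_ids):
        out[token] = out[token] / penalty if out[token] > 0 else out[token] * penalty
    return out

def top_k(logits, k=40):
    # Keep only the k highest-scoring tokens, mask out the rest.
    out = np.full_like(logits, -np.inf)
    keep = np.argpartition(logits, -k)[-k:]
    out[keep] = logits[keep]
    return out

# Moving "rep_pen" earlier or later in this list changes whether the penalty
# acts before or after the other samplers have already filtered the logits.
sampler_order = ["rep_pen", "top_k"]

logits = np.random.randn(50000).astype(np.float32)  # fake vocabulary scores
generated = [11, 42, 42, 7]                          # fake history of tokens
for name in sampler_order:
    logits = repetition_penalty(logits, generated) if name == "rep_pen" else top_k(logits)
```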

Accelerate Support by VE Forbryderne

This is not a feature you are going to notice much, but it's a massive improvement behind the scenes. The old breakmodel implementation has been replaced with Accelerate integration (it still falls back to breakmodel when needed). This means that in most cases we no longer have to manually add support for softprompts, multiple GPUs, or CPU & GPU splitting. It also adds the ability to cache models to disk; this is much slower but can help when you lack the RAM entirely.
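For those curious, this is roughly the Hugging Face/Accelerate pattern the new integration builds on; the snippet below is an illustration rather than KoboldAI's internal loading code, and the model name and offload folder are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any Hugging Face text generation model works the same way.
model_name = "EleutherAI/gpt-neo-2.7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",         # Accelerate splits layers across GPUs/CPU automatically
    offload_folder="offload",  # spill layers to disk when memory runs out (slow)
)
```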

With a bit of luck this is going to mean day-one support on the GPU side for newly released Hugging Face models (limited to text generation models). This has already proven itself with NeoX 20B models, which are now fully supported outside of Colab in this release.

Logging overhaul by DB0

DB0 overhauled all the console logging messages: you now have more options for how verbose you wish KoboldAI to be, and most messages have been categorised.

New Official Docker by Henk717

KoboldAI now has an official standalone Docker image in the form of koboldai/koboldai:latest, which can be used on hosting services such as Runpod, Vast.AI and more.

When designing this we collaborated with Runpod to make it as easy as possible on their service (the link above is automatically configured to use KoboldAI). You can pick the desired GPU and follow the steps (mostly next, next, finish); once loaded you can connect directly to KoboldAI. Be sure to turn off or even delete your instances if you no longer wish to pay per second.

The Docker image will also automatically start KoboldAI with a Cloudflare link. On providers where you have access to the Docker logs, you can grab the link from there. In other cases you may need to unblock port 5000, or you might be able to use a built-in proxy like the one Runpod offers.

Because Docker setups and providers are so diverse, we made sure to add a lot of flexibility. By default the Docker image expects the /content folder to be mounted to your persistent volume. You only need one volume for this; everything else is automatically mapped. Because persistent volumes can cost a lot of money, we do not store models persistently by default. Only your stories, settings, userscripts and softprompts are brought over.

With KOBOLDAI_DATADIR you can specify an alternative location for this directory if for some reason your solution does not allow volumes mapped to /content. Alternatively, you can specify KOBOLDAI_MODELDIR if you do wish to have your models stored on your persistent volume, for example KOBOLDAI_MODELDIR=/content.

Lastly, with KOBOLDAI_ARGS you can specify startup arguments for KoboldAI if any are needed, for example to change the port or immediately load a model.
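Putting those pieces together, a launch on a generic Docker host could look roughly like the sketch below; the volume path, model argument and port mapping are illustrative placeholders, and managed hosts like Runpod let you enter the same settings in their template instead.

```python
import subprocess

# Rough illustration only: adjust the volume path and arguments to your setup.
command = [
    "docker", "run", "-d",
    "-p", "5000:5000",                     # expose the KoboldAI port
    "-v", "/my/volume:/content",           # persistent stories, settings, userscripts, softprompts
    "-e", "KOBOLDAI_MODELDIR=/content",    # optional: also keep models on the volume
    "-e", "KOBOLDAI_ARGS=--model KoboldAI/fairseq-dense-13B",  # hypothetical startup arguments
    "koboldai/koboldai:latest",
]
subprocess.run(command, check=True)
```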

What is next for KoboldAI?

You may have noticed that most of what is announced in this post is backend related, ensuring that KoboldAI is a better experience when hosted on remote hardware. There is a good reason for this: the bigger overhaul of the UI that has been shown before was being developed in parallel.

With this release out of the way, KoboldAI is now ready to be used across the entire landscape we wish to support: Colab, local usage and GPU rental services, allowing everyone who wishes to use KoboldAI to use the official version on their platform of choice.

With these changes in the stable version, the next focus of our United branch will be to bring in the new UI features, so we can continue our work on getting those ready for the next release of KoboldAI.

KoboldAI United testers can expect to see these changes in the upcoming weeks. I do want to give extra caution that these upcoming United changes break compatibility with the official version of KoboldAI released today. If you decide to test United, expect that your settings and saves will soon no longer work on the official version. Make backups before testing or switching between them.

Closing Notes & Links

I hope everyone enjoys this new release as much as we enjoyed making it.
You can find the online Google Colab version at https://koboldai.org/colab
The latest Windows installer can be downloaded from https://koboldai.org/windows
And lastly the code, extra information and versions for other platforms can be found on https://koboldai.org

If you would like to join our Discord community, you can use https://koboldai.org/discord to talk with us directly. This is also the place where the community communicates which Horde models they would like to see hosted at any given time, and where we collaborate on ideas for future updates.

  • KoboldHenk
Pub: 05 Oct 2022 18:05 UTC