SD RESOURCE GOLDMINE

character limit hit, move here for the next version, this will no longer be updated: https://rentry.org/sdupdates2

Warnings:

  1. Ckpts/hypernetworks/embeddings are not inherently safe as of right now. They can be pickled/contain malicious code. Use your common sense and protect yourself as you would with any random download link you see on the internet.
  2. Monitor your GPU temps and increase cooling and/or undervolt them if you need to. There have been claims of GPU issues due to high temps.

Links are dying. If you happen to have a file listed in https://rentry.org/sdupdates#deadmissing or that's not on this list, please get it to me.

There is now a github for this rentry: https://github.com/questianon/sdupdates. This should allow you to see changes across the different updates. There is also a WIP embedding directory here: https://github.com/questianon/sdupdates/wiki

If you know how to do stuff in markdown and html/can make a webpage easily/want to contribute in any way, contact me

NEWSFEED

Don't forget to git pull to get a lot of new optimizations + updates. If SD breaks, go backward in commits until it starts working again (there's a short rollback sketch after the instructions below).

Instructions:

  • If on Windows:
    1. go to the webui directory
    2. git pull
    3. pip install -r requirements.txt
  • If on Linux:
    1. go to the webui directory
    2. source ./venv/bin/activate
      a. if this doesn't work, run python -m venv venv beforehand
    3. git pull
    4. pip install -r requirements.txt
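
If an update breaks generation, you can step back through recent commits until things work again. A minimal sketch of that loop with Python's subprocess (run from the webui directory; it only wraps the plain git commands, nothing webui-specific):

```python
import subprocess

# Step the webui back one commit at a time; re-test generation after each call.
def step_back_one_commit() -> None:
    subprocess.run(["git", "log", "--oneline", "-1"], check=True)  # show current commit
    subprocess.run(["git", "checkout", "HEAD~1"], check=True)      # go one commit back

# Once you find the last working commit, pin to it with: git checkout <commit id>
# Return to the latest code later with: git checkout master, then git pull
```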

10/31

10/30

10/29

10/28

10/27

10/26

10/21 - 10/25 (big news bolded, big thanks to asuka-test-imgur-anon-who-also-made-the-speedrun-tutorial for some info)

10/20

10/19

10/18

  • Clarification on censoring SD's next model by the question asker

10/17

10/16

10/15

  • Embeddings now shareable via images
  • Stability AI update pipeline (https://www.reddit.com/r/StableDiffusion/comments/y2x51n/the_stability_ai_pipeline_summarized_including/)
    • This week:
      • Updates to CLIP (not sure about the specifics, I assume the output will be closer to the prompt)
      • Clip-guidance comes out open source (supposedly)
    • Next week:
      • DNA Diffusion (applying generative diffusion models to genetics)
      • A diffusion based upscaler ("quite snazzy")
      • A new decoding architecture for better human faces ("and other elements")
      • DreamStudio credit pricing adjustment (cheaper, i.e. more options with credits)
      • Discord bot open sourcing
    • Before the end of the year:
      • Text to Video ("better" than Meta's recent work)
      • LibreFold (most advanced protein folding prediction in the world, better than AlphaFold, with Harvard and UCL teams)
      • "A ton" of partnerships to be announced for "converting closed source AI companies into open source AI companies"
      • (Potentially) CodeCARP, Code generation model from Stability umbrella team Carper AI (currently training)
      • (Potentially) Gyarados (Refined user preference prediction for generated content by Carper AI, currently training)
      • (Potentially) CHEESE (some sort of platform for user preference prediction for generated content)
      • (Potentially) Dance Diffusion, generative audio architecture from Stability umbrella project HarmonAI (there is already a colab for it and some training going on i think)
  • Animation Stable Diffusion:
  • Stable Diffusion in Blender
  • DreamStudio will now use CLIP guidance
  • Stable Diffusion running on iPhone
  • Cycle Diffusion: https://github.com/ChenWu98/cycle-diffusion
    • txt2img > img2img editors, look at github to see examples
  • Information about difference merging added to FAQ
  • Distributed model training planned
    • SD Training Labs server
  • Gradio updated
    • Optimized, increased speeds
    • Git pulling should be safe

10/14

10/13

10/12

10/11

10/10

  • New unpickler for new ckpts: https://rentry.org/safeunpickle2
  • HENTAI DIFFUSION MIGHT HAVE A VIRUS (since confirmed to be safe by some kind people)
    • github taken down because of nude preview images, hf files taken down because of complaints, windows defender false positive, some kind anons scanned the files with a pickle scanner and it came back safe
    • automatic's repo has security checks for pickles
    • anon scanned with a "straced-container", safe
  • NAI's euler A is now implemented in AUTOMATIC1111's build
    • git pull to access
  • New open-source (?) generation method revealed making good images in 4 steps
    • Supposedly only 64x64, might be wrong
  • Discovered that hypernetworks were meant to create anime using the default SD model

10/9

Prompting

Google Docs with a prompt list/ranking/general info for waifu creation:
https://docs.google.com/document/d/1Vw-OCUKNJHKZi7chUtjpDEIus112XBVSYHIATKi1q7s/edit?usp=sharing
Anon's prompt collection: https://mega.nz/folder/VHwF1Yga#sJhxeTuPKODgpN5h1ALTQg
Tag effects on img: https://pastebin.com/GurXf9a4

  • Anon says that "8k, 4k, (highres:1.1), best quality, (masterpiece:1.3)" leads to nice details

Japanese prompt collection: http://yaraon-blog.com/archives/225884
GREAT CHINESE TOME OF PROMPTING KNOWLEDGE AND WISDOM 101 GUIDE: https://docs.qq.com/doc/DWHl3am5Zb05QbGVs

GREAT CHINESE SCROLLS OF PROMPTING ON 1.5: HEIGHTENED LEVELS OF KNOWLEDGE AND WISDOM 101: https://docs.qq.com/doc/DWGh4QnZBVlJYRkly
GREAT CHINESE ENCYCLOPEDIA OF PROMPTING ON GENERAL KNOWLEDGE: SPOOKY EDITION: https://docs.qq.com/doc/DWEpNdERNbnBRZWNL
GREAT JAPANESE TOME OF MASTERMINDING ANIME PROMPTS AND IMAGINATIVE AI MACHINATIONS 101 GUIDE https://p1atdev.notion.site/021f27001f37435aacf3c84f2bc093b5?p=f9d8c61c4ed8471a9ca0d701d80f9e28

Database of prompts: https://publicprompts.art/

Krea AI prompt database: https://github.com/krea-ai/open-prompts
Prompt search: https://www.ptsearch.info/home/
Another search: http://novelai.io/
4chan prompt search: https://desuarchive.org/g/search/text/masterpiece%20high%20quality/

Japanese prompt generator: https://magic-generator.herokuapp.com/
Build your prompt (chinese): https://tags.novelai.dev/
NAI Prompts: https://seesaawiki.jp/nai_ch/d/%c8%c7%b8%a2%a5%ad%a5%e3%a5%e9%ba%c6%b8%bd/%a5%a2%a5%cb%a5%e1%b7%cf

Japanese wiki: https://seesaawiki.jp/nai_ch/
Korean wiki: https://arca.live/b/aiart/60392904
Korean wiki 2: https://arca.live/b/aiart/60466181

NAI to webui translator (not 100% accurate): https://seesaawiki.jp/nai_ch/d/%a5%d7%a5%ed%a5%f3%a5%d7%a5%c8%ca%d1%b4%b9

Tip Dump: https://rentry.org/robs-novel-ai-tips
Tips: https://github.com/TravelingRobot/NAI_Community_Research/wiki/NAI-Diffusion:-Various-Tips-&-Tricks
Info dump of tips: https://rentry.org/Learnings
Outdated guide: https://rentry.co/8vaaa
Tip for more photorealism: https://www.reddit.com/r/StableDiffusion/comments/yhn6xx/comment/iuf1uxl/

  • TLDR: add noise to your img before img2img

SD 1.4 vs 1.5: https://postimg.cc/gallery/mhvWsnx
Model merge comparisons: https://files.catbox.moe/rcxqsi.png

Deep Danbooru: https://github.com/KichangKim/DeepDanbooru
Demo: https://huggingface.co/spaces/hysts/DeepDanbooru

Embedding tester: https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Euler vs. Euler A: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2017#discussioncomment-4021588

Seed hunting:

  • By nai speedrun asuka imgur anon:
    >made something that might help the highres seed/prompt hunters out there. this mimics the "0x0" firstpass calculation and suggests lowres dimensions based on target highres size. it also shows data about firstpass cropping as well. it's a single file so you can download and use offline. picrel.
    >https://preyx.github.io/sd-scale-calc/
    >view code and download from
    >https://files.catbox.moe/8ml5et.html
    >for example you can run "firstpass" lowres batches for seed/prompt hunting, then use them in firstpass size to preserve composition when making highres.
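
For reference, a small Python sketch of the same "0x0" firstpass math (based on how AUTOMATIC1111's hires fix picked first-pass dimensions at the time; treat the exact rounding as approximate):

```python
import math

# Approximate the "0x0" firstpass calculation: pick lowres dimensions
# (multiples of 64) whose pixel count is close to 512x512 while preserving
# the aspect ratio of the target highres size.
def firstpass_size(target_w: int, target_h: int, base: int = 512):
    scale = math.sqrt((base * base) / (target_w * target_h))
    return (math.ceil(scale * target_w / 64) * 64,
            math.ceil(scale * target_h / 64) * 64)

print(firstpass_size(1024, 1536))  # -> (448, 640): seed-hunt at this size first
```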

Script for tagging (like in NAI) in AUTOMATIC's webui: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Danbooru Tag Exporter: https://sleazyfork.org/en/scripts/452976-danbooru-tags-select-to-export
Another: https://sleazyfork.org/en/scripts/453380-danbooru-tags-select-to-export-edited
Tags (latest vers): https://sleazyfork.org/en/scripts/453304-get-booru-tags-edited
Basic gelbooru scraper: https://pastebin.com/0yB9s338
UMI AI:

Random Prompts: https://rentry.org/randomprompts
Python script of generating random NSFW prompts: https://rentry.org/nsfw-random-prompt-gen
Prompt randomizer: https://github.com/adieyal/sd-dynamic-prompting
Prompt generator: https://github.com/h-a-te/prompt_generator

  • apparently UMI uses these?

http://dalle2-prompt-generator.s3-website-us-west-2.amazonaws.com/
https://randomwordgenerator.com/
funny prompt gen that surprisingly works: https://www.grc.com/passwords.htm
Unprompted extension released: https://github.com/ThereforeGames/unprompted

  • Wildcards on steroids
  • Powerful scripting language
  • Can create templates out of booru tags
  • Can make shortcodes
  • "You can pull text from files, set up your own variables, process text through conditional functions, and so much more "

Ideas for when you have none: https://pentoprint.org/first-line-generator/

PaintHua.com - New GUI focusing on Inpainting and Outpainting

I didn't check the safety of these plugins, but they're open source, so you can check them yourself
Photoshop/Krita plugin (free): https://internationaltd.github.io/defuser/ (kinda new and currently only 2 stars on github)

Photoshop: https://github.com/Invary/IvyPhotoshopDiffusion
Photoshop plugin (paid, not open source): https://www.flyingdog.de/sd/
Krita plugins (free):

GIMP:
https://github.com/blueturtleai/gimp-stable-diffusion

Blender:
https://github.com/carson-katri/dream-textures
https://github.com/benrugg/AI-Render

Script collection: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts
Prompt matrix tutorial: https://gigazine.net/gsc_news/en/20220909-automatic1111-stable-diffusion-webui-prompt-matrix/
Animation Script: https://github.com/amotile/stable-diffusion-studio
Animation script 2: https://github.com/Animator-Anon/Animator
Video Script: https://github.com/memes-forever/Stable-diffusion-webui-video
Masking Script: https://github.com/dfaker/stable-diffusion-webui-cv2-external-masking-script
XYZ Grid Script: https://github.com/xrpgame/xyz_plot_script
Vector Graphics: https://github.com/GeorgLegato/Txt2Vectorgraphics/blob/main/txt2vectorgfx.py
Txt2mask: https://github.com/ThereforeGames/txt2mask
Prompt changing scripts:

Interpolation script (img2img + txt2img mix): https://github.com/DiceOwl/StableDiffusionStuff

img2tiles script: https://github.com/arcanite24/img2tiles
Script for outpainting: https://github.com/TKoestlerx/sdexperiments
Img2img animation script: https://github.com/Animator-Anon/Animator/blob/main/animation_v6.py

Giffusion tutorial (RIFE frame interpolation):

>git clone https://github.com/megvii-research/ECCV2022-RIFE
This is my git diff on requirements.txt to work alongside the webui Python environment:
>-torch==1.6.0
>+torch==1.11.0
>-torchvision==0.7.0
>+torchvision==0.12.0
pip3 install -r requirements.txt
The most important part:
>download the pretrained HD models and copy them into the same folder as inference_video.py
Get ffmpeg for your OS (if you don't have ffmpeg, it's good to have anyway):
>https://ffmpeg.org/download.html
After this, make sure ffmpeg.exe is in your PATH variable. Then I typed:
>python inference_video.py --exp=1 --video=1666410530347641.mp4 --fps=60
and it created the mp4 you see (I converted it into webm with this command):
>ffmpeg.exe -i 1666410530347641.mp4 1666410530347641.webm
Example: https://i.4cdn.org/h/1666414810239191.webm

Img2img megalist + implementations: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/2940

Runway inpaint model: https://huggingface.co/runwayml/stable-diffusion-inpainting
Inpainting Tips: https://www.pixiv.net/en/artworks/102083584
Rentry version: https://rentry.org/inpainting-guide-SD

Extensions:
Artist inspiration: https://github.com/yfszzx/stable-diffusion-webui-inspiration

History: https://github.com/yfszzx/stable-diffusion-webui-images-browser
Collection + Info: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Extensions
Deforum (video animation): https://github.com/deforum-art/deforum-for-automatic1111-webui

Aesthetic Gradients: https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients
Aesthetic Scorer: https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer
Autocomplete Tags: https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
Prompt Randomizer: https://github.com/adieyal/sd-dynamic-prompting
Wildcards: https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards/

Clip interrogator: https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb
2: https://github.com/pharmapsychotic/clip-interrogator

Inpaint guide: https://archived.moe/h/thread/6930399/#6930453

Anon:
By request, a very quick inpainting guide:

The key to good inpainting is understanding how "Inpaint at full resolution" actually works. The linked guides are old and obsolete on this point, so I will tell you.

Inpaint at full resolution first determines the minimum rectangular box that fits all your mask. Then, it resizes the base image within that box into whatever your setting is for resolution. Note that when inpainting at full resolution, the resolution sliders determine THIS INPAINTING SPACE. In other words, you can inpaint for a 2048 by 2048 pixel image while having your sliders set for 256 by 256 pixels. There are some bugs with inpainting at full resolution with height > width https://pythontechworld.com/issue/automatic1111/stable-diffusion-webui/2524 so my recommendation is to just set it to 512 by 512 always.

Next, whatever is in your base image that would fit into the bounding box is rescaled and put into the inpainting space whose size is determined by your height and width sliders.

The padding for full resolution inpainting option ADDS ADDITIONAL PIXELS FROM THE BASE IMAGE TO YOUR BOUNDING BOX. It is extremely important to set this correctly. Essentially, it adds surrounding context from BEYOND your bounding box to the inpainting space. It MUST be set to a nonzero value if you want to match anything not interior to your mask. Set it very high if you want high context. The total input to the inpainting space is your window.

Next, what happens is essentially an img2img transformation on your window: which is the scaled image taken from the mask + original image bounding box. Set your prompts accordingly! Close-up is a very valuable tag to use in inpainting. Don't include prompts that are only relevant outside your window. DO include prompts that can determine composition within your window EVEN IF THEY AREN'T IN YOUR INPAINTING MASK.
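
A minimal Python sketch of the geometry described above (illustrative only, not webui's actual code; assumes a non-empty mask):

```python
from PIL import Image

# "Inpaint at full resolution": crop the padded bounding box of the mask,
# resize that window to the slider size (the inpainting space), inpaint
# there, then scale and paste the result back into the base image.
def inpaint_window(image: Image.Image, mask: Image.Image,
                   padding: int, slider_w: int = 512, slider_h: int = 512):
    left, top, right, bottom = mask.getbbox()  # minimal box around all mask pixels
    # Padding pulls in surrounding context from BEYOND the bounding box:
    left, top = max(0, left - padding), max(0, top - padding)
    right = min(image.width, right + padding)
    bottom = min(image.height, bottom + padding)
    window = image.crop((left, top, right, bottom))
    return window.resize((slider_w, slider_h)), (left, top, right, bottom)
```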

Krita guide by anon:

  1. Get
    https://krita.org/en/
    https://github.com/Interpause/auto-sd-krita/wiki/Quick-Switch-Using-Existing-AUTOMATIC1111-Install
    https://github.com/Interpause/auto-sd-krita/wiki/Install-Guide#plugin-installation
  2. then you can run prompts in the app, or pull an image in; to inpaint like a boss, you add a new layer
    https://files.catbox.moe/xy6z32.png
  3. then use a white brush to paint the bits you want to change
    https://files.catbox.moe/esdqk7.png
  4. turn the layer off by hitting the eye icon, but leave it selected
    https://files.catbox.moe/wzaiw9.png
  5. if you set everything up right, you have this section
    https://files.catbox.moe/n43yrh.png
  6. type what you're after and hit inpaint

Positive:
Biggest tip: just write what you want. The AI will generally understand and create it.

  • NAI's default (generally good) positive prompts to add at the beginning of all prompts: masterpiece, best quality
    • can swap best for highest, high, etc.
  • Group the things that you want that are similar together (e.g. things relating to body type, things relating to clothing, etc.), and put these groups in order of most important to least important
    • Anon's order:

      the picture's quality
      the picture's subject
      their physical appearance
      their emotion
      their clothing
      their pose
      the picture's setting

  • "Anime screencap" creates scenes from an anime
  • from anon: to use character (franchise/series/show/etc.), you have to format it as character \(franchise\)
  • the tokenizer struggles to parse underscores, ymmv
  • img2img -> prompt gets you more consistency

Negative:

  • NAI's default (remove "nsfw" if you want nsfw outputs): nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry

Tags: https://danbooru.donmai.us/tags
Tag Groups: https://danbooru.donmai.us/wiki_pages/tag_groups
Most Popular Tags: https://danbooru.donmai.us/tags?commit=Search&search%5Bhide_empty%5D=yes&search%5Border%5D=count

Faces and heads:

Expressions:

Camera Angles:

Hair Styles:

Hands

  • Hands: writing "in the style of Serpieri" increased hand quality in SD v1.4

Colors:

Posture

Posing

Locations

Clothes

VAE:

Sex pose
https://litter.catbox.moe/las83s.txt

Booru tag scraping:

Wildcards:

Wildcard extension: https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards/

Some artists (may or may not work with NAI):

Anon's list of comparisons:

Creating fake animes:

1:1 NAI/Novel AI Cheatsheet:

  • 1:1 NAI cheatsheet by anon:

    • Use unpruned/full model
    • Load with ema weights (use .yaml config from base stable-diffusion, set use_ema to true) (minor)
      • Doubles RAM usage
      • Anon: "I copied the one from this path (which is what voldy defaults to if one isn't specified):
        /repositories/stable-diffusion/configs/stable-diffusion/v1-inference.yaml

        And then on line 18 I set use_ema to True, and put that copy into the models folder with the correct name (name of model.yaml)."

    • CLIP layer = 2
    • Reset sigma noise / strength to the default value of 1 (no need to use 0.69 / 0.67)
    • Set eta noise seed delta (ENSD) to 31337 (this is what makes Euler a match)
    • If prompt has weights, manually adjust the weight accordingly (voldy uses 1.1, NAI uses 1.05)
    • Use --no-half argument (minor)

Adding Variety
https://www.reddit.com/r/StableDiffusion/comments/yaziws/how_to_get_more_variety_in_your_ai_images_tutorial/

Photoshop Workflow Example
https://www.reddit.com/r/StableDiffusion/comments/wyduk1/show_rstablediffusion_integrating_sd_in_photoshop/

Tips to find a good img

anon: like I mentioned before, if using a non-ancestral sampler such as Euler, which tends to converge on the same image after a bunch of steps, you can roll seeds with 20 or fewer steps and a very small resolution, then stop when you find a good seed and generate again at higher quality. Saves scads of time when trying to make some really complicated 125-token prompt.

So get a good image first with 512 rerolls, then use the same seed with highres fix enabled.

Anon's best output settings
[txt2img]

Positive: none
Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, 3 legs, 3 arms
Sampling Steps: 47/51
Euler a
Width: 512 normal / 768 for 2 characters or better landscapes
Height: 512 normal / 768 for full body
CFG Scale: 12 / 12.5 / 18

[img2img]

Positive: complete image/none
Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, 3 legs, weird
Sampling Steps: 51
Euler a
Width: 512 normal / 768 for 2 characters or better landscapes
Height: 512 normal / 768 for full body
CFG Scale: 18
Denoising strength: 0.5 / 0.68

Anon's workflow:
Artist list: https://rentry.org/anime_and_titties
Expressions and (STYLE): https://rentry.org/faces-faces-faces

Anon's order:
the picture's quality
the picture's subject
their physical appearance
their emotion
their clothing
their pose
the picture's setting

Lazy Prompting

go to *booru of your choice
pick an image you really like
click tags -> plain editor
copypaste to prompt
profit

Anon's Refinement Technique:

  1. generate a picture with the prompt that you want, be very precise. I personally generate pictures that are 512x512 initially.
  2. once you get a decent picture to come out of the generation, it will be used as the base "sketch" to feed to img2img.
  3. if you want to, increase the resolution, but if you do so, set the denoising to about 0.60
  4. once you have the resolution you want and everything, keep reprocessing the image with a denoising of about 0.2 - 0.3.
  5. if something on the image bothers you, work on it in an image editor, for example using a brush of the same colour as what's adjacent to the detail you want to remove; or if you want to add something (like refining fingers), use the pencil with a contrasting colour (I generally use black).
  6. after editing, always reprocess the image with a denoising of about 0.3
  7. once the result satisfies you enough, use the "R-ESRGAN 4x+ Anime6B" upscaler if you want the image upscaled.
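
The same refine loop can also be scripted outside the webui; a rough sketch with the diffusers library (assumes a recent diffusers install and a CUDA card; the model id, filenames, and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

image = Image.open("crude_sketch.png").convert("RGB").resize((512, 512))
# Big change first, then progressively gentler reprocessing passes:
for strength in (0.6, 0.3, 0.25, 0.2):
    image = pipe(prompt="masterpiece, best quality, 1girl",
                 image=image, strength=strength).images[0]
image.save("refined.png")
```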

Models, Embeddings, and Hypernetworks

Models, embeddings, and hypernetworks can be pickled. Download at your own risk. Use https://rentry.org/safeunpickle2 to unpickle

Models (WIP)

Organized list: https://rentry.org/sdmodels
RISKY (MAJOR PICKLE WARNING) BUT A LOT OF MODELS HERE: https://bt4g.org/search/.ckpt/1
Collection: https://docs.google.com/spreadsheets/d/1fzsjGDlmbbPEnJWY9V2q5a99ySrK1Rj37sk7YJz8fCc/edit#gid=0
Organized list: https://cyberes.github.io/stable-diffusion-models/

Groups (add more later):

Upcoming models:

Stable Diffusion v1.4

Waifu Diffusion VAE (250k images)

Waifu Diffusion v1.3

Waifu Diffusion v1.2
Pruned Torrent:

magnet:?xt=urn:btih:153590fd7e93ee11d8db951451056c362e3a9150&dn=wd-v1-2-full-ema-pruned.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce

Full EMA Torrent:

magnet:?xt=urn:btih:f45cecf4e9de86da83a78dd2cccd7f27d5557a52&dn=wd-v1-2-full-ema.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce

Trinart2

Trinart1

gg1342_testrun1_pruned

  • magnet:?xt=urn:btih:c95e266e15e13cf0e2d69b29338a89a94d736546&dn=gg1342_testrun1_pruned.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
    
  • 280 NSFW nude solo women + 80 SFW fiction characters

Hentai Diffusion

RD1412

  • Pruned FP16
    magnet:?xt=urn:btih:da8986f9059ce4f64f84e7390eb542558b2cd466&dn=RD1412-pruned-fp16.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2fopen.demonii.com%3a1337%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fipv4.tracker.harry.lu%3a80%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2fexodus.desync.com%3a6969%2fannounce&tr=udp%3a%2f%2fbt.oiyo.tk%3a6969%2fannounce&tr=https%3a%2f%2ftracker.nanoha.org%3a443%2fannounce&tr=https%3a%2f%2ftracker.lilithraws.org%3a443%2fannounce
    
  • Pruned FP32
    magnet:?xt=urn:btih:ab4c2d7308a3fa694f7409407399a1cc5d4c7ed9&dn=RD1412-pruned-fp32.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2fopen.demonii.com%3a1337%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fipv4.tracker.harry.lu%3a80%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2fexodus.desync.com%3a6969%2fannounce&tr=udp%3a%2f%2fbt.oiyo.tk%3a6969%2fannounce&tr=https%3a%2f%2ftracker.nanoha.org%3a443%2fannounce&tr=https%3a%2f%2ftracker.lilithraws.org%3a443%2fannounce 
    

RD1212

  • Pruned FP16
    magnet:?xt=urn:btih:f4e78d085169d2077a316bd9b75723812c1ab429&dn=HenDiff_RD1212-pruned-fp16.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2fopen.demonii.com%3a1337%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fipv4.tracker.harry.lu%3a80%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2fexodus.desync.com%3a6969%2fannounce&tr=udp%3a%2f%2fbt.oiyo.tk%3a6969%2fannounce&tr=https%3a%2f%2ftracker.nanoha.org%3a443%2fannounce&tr=https%3a%2f%2ftracker.lilithraws.org%3a443%2fannounce
    
  • Pruned FP32
    magnet:?xt=urn:btih:2a6b60f454dcf89b81e7db034fcb1536b774628c&dn=HenDiff_RD1212-pruned-fp32.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2fopen.demonii.com%3a1337%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fipv4.tracker.harry.lu%3a80%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2fexodus.desync.com%3a6969%2fannounce&tr=udp%3a%2f%2fbt.oiyo.tk%3a6969%2fannounce&tr=https%3a%2f%2ftracker.nanoha.org%3a443%2fannounce&tr=https%3a%2f%2ftracker.lilithraws.org%3a443%2fannounce
    
  • Full EMA
    magnet:?xt=urn:btih:D0B89A0516205157EA0CBDDBBB49BC60C611A3B7&dn=RD1212.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce
    

Bare Feet / Full Body b4_t16_noadd

  • Focused on bare feet and full body nude female images, good for genitalia and photorealistic feet
  • Pruned FP16 v3
    magnet:?xt=urn:btih:9530a8a0b43f83366216ab853b4419aa2056da58&dn=bf_fb_v3_t4_b16_noadd-ema-pruned-fp16.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.skyts.net%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.pomf.se%3a80%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.dler.org%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2fopen.demonii.com%3a1337%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2fexodus.desync.com%3a6969%2fannounce&tr=udp%3a%2f%2fbt.oiyo.tk%3a6969%2fannounce
    
  • Pruned FP32 v3
    magnet:?xt=urn:btih:1f6bab17c548e35ac2a412e3e9119e5f4e00bb50&dn=bf_fb_v3_t4_b16_noadd-ema-pruned-fp32.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2810%2fannounce&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=http%3a%2f%2ftracker.openbittorrent.com%3a80%2fannounce&tr=udp%3a%2f%2fopentracker.i2p.rocks%3a6969%2fannounce&tr=https%3a%2f%2fopentracker.i2p.rocks%3a443%2fannounce&tr=udp%3a%2f%2ftracker.torrent.eu.org%3a451%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.skyts.net%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.pomf.se%3a80%2fannounce&tr=udp%3a%2f%2ftracker.moeking.me%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.dler.org%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.0x.tf%3a6969%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2fopen.stealth.si%3a80%2fannounce&tr=udp%3a%2f%2fopen.demonii.com%3a1337%2fannounce&tr=udp%3a%2f%2fmovies.zsw.ca%3a6969%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2fexodus.desync.com%3a6969%2fannounce&tr=udp%3a%2f%2fbt.oiyo.tk%3a6969%2fannounce
    

Lewd Diffusion

  • 70k images from Danbooru, based on Waifu Diffusion 1.2
  • Dataset: https://drive.google.com/drive/folders/1f_BYi88LLTZUzBHkUz8PDgw6l7M7swkd?usp=sharing
  • Dataset stats: https://docs.google.com/spreadsheets/d/1BzNSXyT4fhiM64DwIJSCyAXuhRQ9fkxqcr-t1frIYkc/edit
  • 2 epochs
    magnet:?xt=urn:btih:U5RICVYDEJL6LIJJWFKQOIVO5GMGCJNW&dn=last-pruned.ckpt&xl=3852165809&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
    
  • 1 epoch
    magnet:?xt=urn:btih:fca8782a5a9861a6beb1aa3b48938bd1da1a665e&dn=LD-70k-1e-pruned.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
    
  • 0 epochs, 40k images
    magnet:?xt=urn:btih:f6976fbe3b9f93469bb62eb0c4950643b09f1f83&dn=Lewd-diffusion-pruned.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
    

Yiffy

  • During training, "explicit" was misspelled as "explict"
  • Tags:
  • 18 epochs (210k images from e621):
  • 15 epochs (210k images from e621):
    • https://sexy.canine.wf/file/yiffy-ckpt/yiffy-e15.ckpt
    • https://pixeldrain.com/u/qkRKKpqg
    • https://iwiftp.yerf.org/Furry/Software/Stable%20Diffusion%20Furry%20Finetune%20Models/Finetune%20models/yiffy-e15.ckpt
      magnet:?xt=urn:btih:2b8d5f308244eddf56d4a350df84d63045e65dd6&dn=yiffy-e15.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
      
  • 13 epochs

    • first 4 epochs were trained on ~70k images with lama infilling (the cause of all of our headaches, because the network found a pattern in the edges and started replicating it everywhere)
    • next 6 epochs were trained on ~120k images with random cropping and a lower LR
    • last epochs were done on a different dataset, not bigger than 150k
    • https://iwiftp.yerf.org/Furry/Software/Stable%20Diffusion%20Furry%20Finetune%20Models/Finetune%20models/yiffy-e13.ckpt
      magnet:?xt=urn:btih:6d749325cbdcf1fc044483fb0d53c233b60735dc&dn=yiffy-e13.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
      

Furry

  • 300k images from e621
  • Tags:
  • Epoch 4
    • https://pixeldrain.com/u/dtYiYN7g
    • https://iwiftp.yerf.org/Furry/Software/Stable%20Diffusion%20Furry%20Finetune%20Models/Finetune%20models/furry_epoch4.ckpt
      magnet:?xt=urn:btih:a9635389ae4c5583b0cc76ec8f6dce35438b3016&dn=furry_epoch4.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
      
  • Epoch 1
    • https://iwiftp.yerf.org/Furry/Software/Stable%20Diffusion%20Furry%20Finetune%20Models/Finetune%20models/furry_epoch1.ckpt
      magnet:?xt=urn:btih:d62bc9a088b206565005cab915a58fd26da1802e&dn=furry_epoch1.ckpt&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fzecircle.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fyahor.ftp.sh%3A6969%2Fannounce&tr=udp%3A%2F%2Fvibe.sleepyinternetfun.xyz%3A1738%2Fannounce&tr=udp%3A%2F%2Fv2.iperson.xyz%3A6969%2Fannounce&tr=udp%3A%2F%2Fuploads.gamecoast.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker2.dler.org%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker1.bt.moack.co.kr%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.theoks.net%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.tcp.exchange%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.swateam.org.uk%3A2710%2Fannounce&tr=udp%3A%2F%2Ftracker.publictracker.xyz%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.bt4g.com%3A2095%2Fannounce
      
  • Epoch 0

Zack3D_Kinky-v1

  • Over 100k images, filtered aesthetics, NSFW, trained on SD v1.4, good for furry, specializes in kinks like transformation, latex, tentacles, goo, ferals, bondage, etc.
  • Uses e621 tags with underscores
    • https://pixeldrain.com/u/DEocAHsx
      magnet:?xt=urn:btih:807a71d3ed3f887e41c492cf24fbd3c6f5a81534&dn=Zack3D_Kinky-v1.ckpt&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2fopen.tracker.cl%3a1337%2fannounce
      

Anal Vore AVHumanFurryPony7

Gape Model

Pony Models

Purplesmart.AI's Pony v1:

https://mega.nz/file/ZT1xEKgC#Xxir5udMmU_mKaRZAbBkF247Yk7DqCr01V0pDzSlYI0

CookieSD's sfw/nsfw model:

https://drive.google.com/drive/folders/14JyQE36wYABH-0TSV_HBEsBJ3r8ZITrS

Pussy Diffusion 10/14 (only use for inpainting)

an 80/20 merge of NAI and TRIN120k

  • https://mega.nz/file/jB5lwa6J#ciSArZnJQLszvhatiMK2NTKFNjKYUhHJlXt9At3WRss

Berrymix Recipe
Rentry: https://rentry.org/berrymix

  1. Make sure you have all the models needed: NovelAI, Stable Diffusion 1.4, Zeipher F111, and r34_e4. All but NovelAI can be downloaded from HERE.
  2. Open the Checkpoint Merger tab in the web UI.
  3. Set the Primary Model (A) to NovelAI.
  4. Set the Secondary Model (B) to Zeipher F111.
  5. Set the Tertiary Model (C) to Stable Diffusion 1.4.
  6. Enter a name that you will recognize.
  7. Set the Multiplier (M) slider all the way to the right, at "1".
  8. Select "Add Difference" (the underlying arithmetic is sketched below).
  9. Click "Run" and wait for the process to complete.
  10. Now set the Primary Model (A) to the new checkpoint you just made (close the cmd window and restart the webui, then refresh the web page if the new checkpoint doesn't show up in the dropdown).
  11. Set the Secondary Model (B) to r34_e4.
  12. Ignore the Tertiary Model (C) (I've tested it; it won't change anything).
  13. Enter the name of the final mix, something like "Berry's Mix" ;)
  14. Set the Multiplier (M) to "0.25".
  15. Select "Weighted Sum".
  16. Click "Run" and wait for the process to complete.
  17. Restart the web UI and reload the page just to be safe.
  18. At the top left of the web page, click the "Stable Diffusion Checkpoint" dropdown and select Berry's Mix.ckpt (or whatever you named it); it should have the hash "[c7d3154b]".
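
For the curious, the arithmetic behind those two merge steps is simple tensor math. A toy sketch over plain state dicts (not webui's actual merger code):

```python
# Each model is treated as a dict of tensors/floats with identical keys.
def add_difference(a, b, c, m):
    # Transplant what B learned relative to C onto A (step 8 above, m = 1).
    return {k: a[k] + (b[k] - c[k]) * m for k in a}

def weighted_sum(a, b, m):
    # Plain interpolation between two models (step 15 above, m = 0.25).
    return {k: a[k] * (1 - m) + b[k] * m for k in a}

# Berrymix, schematically:
#   step1 = add_difference(novelai, f111, sd14, m=1.0)
#   berry = weighted_sum(step1, r34_e4, m=0.25)
```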

Fruit Salad Mix (might not be worth it to make)

Fruit Salad Guide

Recipe for the "Fruit Salad" checkpoint:
  1. Make sure you have all the models needed: NovelAI, Stable Diffusion 1.5, Trinart-11500, Zeipher F111, r34_e4, Gape_60, and Yiffy.
  2. Open the Checkpoint Merger tab in the web UI.
  3. Set the Primary Model (A) to NovelAI.
  4. Set the Secondary Model (B) to Yiffy e18.
  5. Set the Tertiary Model (C) to Stable Diffusion 1.4.
  6. Enter a name that you will recognize.
  7. Set the Multiplier (M) slider to the left, at "0.1698765".
  8. Select "Add Difference".
  9. Click "Run" and wait for the process to complete.
  10. Now set the Primary Model (A) to the new checkpoint you just made (close the cmd window and restart the webui, then refresh the web page if the new checkpoint doesn't show up in the dropdown).
  11. Set the Secondary Model (B) to r34_e4.
  12. Set the Tertiary Model (C) to Zeipher F111 (I've tested it, it changes EVERYTHING).
  13. Set the Multiplier (M) to "0.56565656".
  14. Select "Weighted Sum".
  15. Click "Run" and wait for the process to complete.
  16. Restart the web UI and reload the page just to be safe.
  17. Now download a previous version of the webui, which still contains the "Inverse Sigmoid" option for the checkpoint merger.
  18. Set the Primary Model (A) to the new checkpoint you just made.
  19. Set the Secondary Model (B) to Trinart-11500.
  20. Set the Multiplier (M) to "0.768932".
  21. Select "Inverse Sigmoid" (this is kind of like Sigmoid, but inverted).
  22. Click "Run" and wait for the process to complete.
  23. Restart the web UI and reload the page just to be safe.
  24. Now set the Primary Model (A) to the new checkpoint you just made.
  25. Set the Secondary Model (B) to SD 1.5.
  26. Set the Tertiary Model (C) to Gape_60.
  27. Set the name of the final mix to something you will remember, like "Fruit's Salad" ;)
  28. Set the Multiplier (M) to "1".
  29. Select "Weighted Sum".
  30. Click "Run" and wait for the process to complete.
  31. Restart the web UI and reload the page just to be safe.
  32. At the top left of the web page, click the "Stable Diffusion Checkpoint" dropdown and select Fruit's Salad.ckpt (or whatever you named it).

Dreambooth Models:

Links:

Embeddings

If an embedding is >80mb, I mislabeled it and it's a hypernetwork

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

Found on 4chan:

Found on Discord:

Found on Reddit:

Hypernetworks:

If a hypernetwork is <80mb, I mislabeled it and it's an embedding

Use a download manager to download these. It saves a lot of time + good download managers will tell you if you have already downloaded one

Chinese telegram (uploaded by telegram anon): magnet:?xt=urn:btih:8cea1f404acfa11b5996d1f1a4af9e3ef2946be0&dn=ChatExport%5F2022-10-30&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

I've made a full export of the Chinese Telegram channel.

It's 37 GB (~160 hypernetworks and a bunch of full models).
If you don't want all that, I would recommend downloading everything but the 'files' folder first (like 26 MB), then opening the html file to decide what you want.

Big collection: https://drive.google.com/drive/folders/1-itk7b_UTrxVdWJcp6D0h4ak6kFDKsce?usp=sharing

Found on 4chan:

Found on Korean Site of Wisdom (WIP):

Found on Discord:

Aesthetic Gradients

Collection of Aesthetic Gradients: https://github.com/vicgalle/stable-diffusion-aesthetic-gradients/tree/main/aesthetic_embeddings

Polar Resources

DEAD/MISSING

If you have one of these, please get it to me

Embed:

Hypernetworks:

Datasets:

Training

Training:

Rate: 5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000
Max Steps: 100,000

  • Anon's Guide:
  1. Having good text tags on the images is rather important. This means laboriously going through and adding tags to the BLIP tags and editing the BLIP tags as well, and often manually describing the image. Fortunately my dataset had only like...30 images total, so I was able to knock it out pretty quick, but I can imagine it being completely obnoxious for a 500 image gallery. Although I guess you could argue that strict prompt accuracy becomes less important as you have more training examples. Again, if they would just add an automatic deepdanbooru option alongside the BLIP for preprocessing that would take away 99% of the work.
  2. Vectors. Honestly I started out making my embedding at 8, it was shit. 16, still shit but better. 20, pretty good, and I was like fuck it, let's go to 50, and that was even better still. IDK. I don't think you can go much higher though if you want to use your tag anywhere but the very beginning of a 75 token block. I had heard that having more tokens = needing more images and also overfitting, but I did not find this to be the case.
  3. The other major thing I needed to do is make a character.txt for textual inversion. For whatever reason, the textual inversion templates literally have NO character/costume template. The closest thing is subject which is very far off and very bad. Thus, I had to write my own: https://files.catbox.moe/wbat5x.txt
  4. Yeah for whatever reason the VAE completely fries and fucks up any embedding training, and you can only find this out from reading comments on 4chan or in the issues list of the github. The unload-VAE-when-training option DOES NOT WORK for textual embedding. Again, I don't know why. Thus it is absolutely 100% stone cold essential to rename or move your VAE, then relaunch everything before you do any textual inversion training. Don't forget to put it back afterwards (and relaunch again), because without the VAE everything is a blurry mess and faces look like Sloth from the Goonies.

So all told, this is the process:

  1. Get a dataset of images together. Use the preprocess tab and the BLIP and the split and flip and all that.
  2. Laboriously go through EVERY SINGLE IMAGE YOU JUST MADE while simultaneously looking at their text file BLIP descriptions, updating them with the booru tags or deepdanbooru tags (which you have to have manually gotten ahead of time if you want them), making sure the BLIP caption is at least roughly correct, and deleting any image which doesn't feature your character after the cropping operation if it was too big. EVERY. SINGLE. IMAGE.
  3. Now that the hard part's over, just make your embedding using the make-embedding page. Choose some vector amount (I did well with 50), and set girl as your initialization or whatever's appropriate.
  4. Go to the train page and get training. Everything on the page is pretty self-explanatory. I used 5e-02:2000, 5e-03:4000, 5e-04:6000, 5e-05 for the learning rate schedule (see the sketch below for how this syntax reads), but you can fool around. Make sure the prompt template file is pointed at an appropriate template file for what you're trying to do, like the character one I made, and then just train. Honestly, it shouldn't take more than 10k steps, which goes by pretty quick even with batch size 1.

OH and btw, obviously use https://github.com/mikf/gallery-dl to scrape your image dataset from whichever boorus you like. Don't forget the --write-tags flag!
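
The learning-rate schedule strings above ("5e-5:1000, 5e-6:5000, ...") mean "use this rate until that step". A tiny sketch of how the syntax reads (the parser is illustrative, not webui's exact code):

```python
def parse_schedule(spec: str):
    """Parse '5e-5:1000, 5e-6:5000' into (rate, until_step) pairs;
    a missing step means 'until the end of training'."""
    out = []
    for part in spec.split(","):
        rate, _, until = part.strip().partition(":")
        out.append((float(rate), int(until) if until else None))
    return out

print(parse_schedule("5e-5:1000, 5e-6:5000, 5e-7:20000, 5e-8:100000"))
# [(5e-05, 1000), (5e-06, 5000), (5e-07, 20000), (5e-08, 100000)]
```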

Vector guide by anon:
Think of vectors per token as the number of individual traits the ai will associate with your embedding. For something like "coffee cup", this is going to be pretty low generally, like 1-4. For something more like an artist's style, you're going to want it to be higher, like 12-24. You could go for more, but you're really eating into your token budget on prompts then.

It's also worth noting: the higher the count, the more images (and more varied images) you're going to want.

You want the ai to find things that are consistent thematics in your image. If you use a small sample size, and all your images just happen to have girls in a bikini, or all with blonde hair, that trait might get attributed to your prompt, and suddenly "coffee cup" always turns out blonde girls in bikinis too.

Datasets

Common questions (CTRL/CMD + F):

Questions to add:
What model is the best?
How do I convert img1 to img2?
What is the absolute easiest way to download this?
How do I speed up generation?
What is a tokenizer?
What is parsing?
What is float16?
What is a VAE?

What's all the new stuff?

Check here to see if your question is answered:

How do I set this up?

Refer to https://rentry.org/nai-speedrun (has the "Asuka test")
Easy guide: https://rentry.org/3okso
Standard guide: https://rentry.org/voldy
Paperspace: https://rentry.org/865dy

AMD Guide: https://rentry.org/sdamd

What's the "Hello Asuka" test?

It's a basic test to see if you're able to get a 1:1 recreation with NAI and have everything set up properly. Coined after asuka anon and his efforts to recreate 1:1 NAI before all the updates.

Refer to

What is pickling/getting pickled?

.ckpt files and Python files can execute code. Getting pickled is when these files execute malicious code that infects your computer with malware. It's a memey/funny way of saying you got hacked.

I want to run this, but my computer is too bad. Is there any other way?
Check out one of these:

Are there any alternatives to gradio for sharing my stuff online?

Try ngrok (https://ngrok.com/, recommended by anon)

  • free
  • custom links
  • connection limit of ~60 users
  • anon thinks it gives more control for the host over Gradio

Is there an invisible watermark?

If you're using AUTOMATIC1111's webui and referring to this: https://github.com/ShieldMnt/invisible-watermark#attack-performance, then no. The setting in the settings is never read.

How do I get more of a strong effect on my embedding?

(might be outdated info) Embeddings take your images and find tokens from the current model that match them; when you use the embedding, it calls on those specific tokens. So it really depends on what embedding you're trying to create (how close it is to the default model, how pronounced the images are, etc.). Plus, you can always add more emphasis for more of an effect.

How does the increasing prompt token limit work?

With the token counter for each word, let's say you have the prompt:

girl (1), blue (2),..., apple (74), banana (75), orange (76), kiwi (77)

The prompt would split after banana, because banana consists of 1 token and the weights would reset to start from strong again. So it's basically like:

girl (1), blue (2),..., apple (74), banana (75) AND orange (1), kiwi (2)
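
A rough sketch of that chunking with the actual CLIP tokenizer (via Hugging Face transformers; illustrative, not webui's exact implementation):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def chunk_prompt(prompt: str, size: int = 75):
    # Words can be more than one token, so the split point depends on
    # token counts, not word counts.
    ids = tokenizer(prompt, add_special_tokens=False).input_ids
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# Each 75-token chunk is encoded separately, so emphasis resets at token 76,
# which is why the example above reads like "... banana AND orange, kiwi".
```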

What is a vae?

Check out https://en.wikipedia.org/wiki/Variational_autoencoder
TLDR: a filter that changes output

What is dreambooth?

What is difference merging/why is there a way to merge three models?

The first two models are merged using standard interpolation. The third model is for a difference merge.

  • For models A, B, and C:
    • The formula is A + (B - C) * M, where
      • A is the model the learning is going to be transferred to
      • B is the model that has been finetuned (trained) on a certain object
      • C is the base model that was used to finetune B
      • B - C is the difference between B and C, which gets added to A
      • M is the multiplier: how much of the difference to add to A
        • M is the slider in the UI; with the slider all the way to the right, M = 1 and the full difference is added (as in the Berrymix recipe above)

How would distributed training work?

According to NeonTheCoder: "well a model trained ontop of another would be the same, plus added data, and slided data.
if you subtract the original from that you should be left with the difference, and the new data.
then if you add that ontop of another model, it should only be adding the difference it made from original SD to the target model, and the new data"

Basically: you train a model, subtract the original model so you only have the "trainings", then add the trainings to another model. This means you can delegate training tasks to different people and add up all the results (see the sketch below).
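
In state-dict terms, that's two trivial operations. A toy sketch (real distributed schemes would need to agree on the base model and handle precision):

```python
def extract_trainings(finetuned, base):
    # What the finetune learned on top of the base model.
    return {k: finetuned[k] - base[k] for k in base}

def apply_trainings(target, diff, strength=1.0):
    # Transplant those learnings onto any other model.
    return {k: target[k] + strength * diff[k] for k in target}
```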

Why doesn't my 4090 work?

You need to update your cuDNN or use an updated xformer.

https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

Uploaded cuDNN files by anon (which means it might be pickled):

https://pomf2.lain.la/f/5u34v576.7z
  • Goes into stable-diffusion-webui\venv\Lib\site-packages\torch\lib if you're using venv, otherwise wherever your torch is installed.

What GPU should I buy?

Refer to https://docs.google.com/spreadsheets/d/1Zlv4UFiciSgmJZncCujuXKHwc4BcxbjbSBg71-SdeNk/. Generally, higher VRAM is better.

What does Stability AI's "no nudity" for their SFW model mean for us?

It's only speculation, but probably a loss in accurate anatomy and overall NSFW stuff. We can always train NSFW into the released model, though the quality will probably be worse than if Stability AI did it (thousands/millions of images vs billions of images).

What's the difference between an embedding and a hypernetwork?

By anon: Embeddings add new tokens to the vocabulary but leave the model unchanged, hypernet alters the behavior of the network itself within the layers - hypernet has a lot more capacity to change the behavior but you can obviously only have one active at once
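
A rough conceptual sketch of the contrast (shapes and layout are illustrative, not the webui's exact code):

```python
import torch
import torch.nn as nn

# Embedding: just a few new learned vectors in the text encoder's token space;
# the model itself stays frozen.
new_token = nn.Parameter(torch.randn(1, 768))

# Hypernetwork: small residual MLPs inserted inside the model (applied to the
# context feeding cross-attention), so it can reshape behavior much more.
class HypernetworkModule(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * 2), nn.Mish(), nn.Linear(dim * 2, dim)
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return context + self.net(context)  # residual tweak to attention input
```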

Why does my output differ from someone else's output?

Check to see if your settings match theirs. Also, using an optimization such as lowvram, medvram, and xformers will cause variations in outputs.

Hi-res generations are affected by the video card used

* https://desuarchive.org/g/thread/89259005/#89260871

Why do I keep getting/how do I fix a black output when using .vae with the NAI model?

Using --no-half-vae will fix the random black images when using the .vae

How do you add upscalers?

Put the files into stable-diffusion-webui/models/ESRGAN or GFPGAN; the model database at https://upscale.wiki/wiki/Model_Database should say which type each upscaler is.

How do you add embeddings?

Make a folder in your webui install (next to webui-user.bat) named embeddings > put your .pt and .bin embedding files here

Why are my faces blurry/messed up/ugly/deformed/etc.?

This could be because of a variety of things. Generally, to fix this, try:

  • generating at a higher resolution
  • using hi-res fix
  • inpainting the face normally
  • inpainting the face at full resolution
  • editing a good face on top of the generated one and running img2img
  • adding pretty face (or some variant) to the positive prompt and ugly face (or some variant) to the negative prompt
  • making the AI focus more on the face with a closer shot (e.g. "portrait shot")

What is textual inversion?

Textual inversion has the AI brute-force tokens that it thinks match your images and packs them into an embedding. Training on a subject in many situations (e.g. different environments and poses) generally produces a better embedding

Stuff doesn't work/is outdated!

Git pull or do a fresh install by git cloning into another folder. You can also git reset --hard [commit id], but be careful of overwriting your outputs/embeddings/models

What is ENSD?

ENSD is eta noise seed delta (in the settings). It shifts your seed and does some eta/sigma stuff. NAI uses 31337

Is the leaked NAI model safe?

The risk isn't 0, but there haven't been any reports of anyone getting hacked/pickled yet. Only you can decide if it's safe enough for you to use.

Is anything here safe??

Similarly, the risk isn't 0, but no one has gotten hacked from any links here as far as I know.

Help! I need a prompt and I don't know where to start

Find a prompt someone else used and repurpose it for yourself. Think of what you like and just start writing tags/descriptions. If you need a tag but don't know the Danbooru equivalent, you can usually find it by searching danbooru [write what tag you want here]

How do I make better pictures?

For a general workflow:

  1. paint something really crude, it can literally be blobs of color
  2. img2img with a prompt
  3. edit the original image, the prompt, or the output
  4. reimg2img/inpaint
  5. repeat last few steps until you get what you want

Step 1 can be replaced with just starting with a txt2img if you want the AI to decide for you

What is no-half and full precision?

Most new GPUs will use half precision since it lowers VRAM usage. Only use no-half and/or full precision if you want the absolute best quality (a minor difference) or if you are running a 16xx card (which tends to produce black images at half precision).

How to inpaint?

In the img2img tab, switch to Inpaint, upload (or send over) an image, mask the area you want changed, and generate; the masked content option controls what the masked area starts from.

Why doesn't inpainting work?

  • Try running in incognito/private browsing mode, adblockers and certain plugins/extensions break inpainting
  • Try refreshing/restarting the webui

How do you get NAI (NovelAI) 1:1?

Refer to https://rentry.org/sdupdates#prompting

Does order of prompts matter?

Yes, the order = priority: the AI gives earlier parts of the prompt more weight in your image

How do I setup NAIFU?

Read the text file that tells you how to in the download

By anon (Windows):

  1. Make sure you install Python and check "Add Python to PATH": https://www.python.org/downloads/windows/
  2. Download the naifu torrent from this link: https://rentry.org/sdg_FAQ#naifu-novelai-model-backend-frontend.
  3. Inside of the naifu folder, right click program.zip and click "extract here" with 7zip or WinRAR.
  4. Run setup.bat and let it finish.
  5. Run run.bat and once it's running, open a new browser window/tab and make sure that you type in http://localhost:6969/ into the address bar.
  6. Bada bing bada boom you should have the site running locally on your PC.

What upscaler should I use?

I recommend the SD upscale script, since it adds details as well as upscaling. For a while LDSR was regarded as the best, though this might've changed. Some anons like SwinIR, some like ESRGAN 4x, ymmv

How do I know if X is loaded?

Usually, the console will tell you. It will not tell you if hypernetworks or v2.pt are loaded

How do I update?

open command prompt in auto's folder and type "git pull". Or, right click in the folder, git bash here, git pull. To make sure you have the requirements, run "pip install -r requirements.txt" in the same fashion.

Recommended Settings (need to update this)?

  • default NAI negatives: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
    • Supposedly adding "artist name" into here improves results
  • Prefix all prompts with masterpiece, best quality
    • Apparently NAI adds another hidden "masterpiece" after "best quality", but this might've been debunked already

    For non-NAI models: Clip skip 0, everything else is good (afaik don't use hypernetworks, v2, yaml, VAE)

What are (parentheses), [brackets], {this thing}, <>, and decimals?

() is more emphasis, [] is less emphasis, {} is NAI's "implementation" of (), <> is for embeddings, decimals specify the number of ()'s so you don't need to type in a bunch.

(boobs) would have more weight than [boobs] in the final result, (boobs:1.4) would increase the boobage by ~40% more than what they would've normally been, (boobs:0.6) will decrease it by ~40%.

If using multiple (parentheses) instead of decimals, the weight is multiplied by 1.1 with each additional parenthesis
>(n) = (n:1.1)
>((n)) = (n:1.21)
>(((n))) = (n:1.331)
>((((n)))) = (n:1.4641)
>(((((n))))) = (n:1.61051)
>((((((n)))))) = (n:1.771561)

[n] = (n:0.9090909090909091)
[[n]] = (n:0.8264462809917355)
[[[n]]] = (n:0.7513148009015778)
[[[[n]]]] = (n:0.6830134553650707)
[[[[[n]]]]] = (n:0.6209213230591552)
[[[[[[n]]]]]] = (n:0.5644739300537775)

([prompt]:[number less than 1]) = de-emphasis, the same effect as using [brackets]

* Decimals don't seem to work inside [brackets] themselves; it's undocumented in AUTOMATIC's wiki

2 of {} = 1 of (), accurate with a difference of <1%

by anon: exceeding 3x () or [] is less predictable and can overcook your prompt
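
A tiny sketch of the arithmetic (not AUTOMATIC1111's actual parser): each ( multiplies the weight by 1.1 and each [ divides it by 1.1:

```python
def effective_weight(fragment: str) -> float:
    # Count nesting depth: "(((n)))" -> 1.1**3, "[[n]]" -> 1.1**-2
    weight = 1.0
    for ch in fragment:
        if ch == "(":
            weight *= 1.1
        elif ch == "[":
            weight /= 1.1
    return weight

print(effective_weight("(((n)))"))  # ~1.331, i.e. (n:1.331)
print(effective_weight("[[n]]"))    # ~0.826, i.e. (n:0.826)
```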

How do you escape/use () for series names (or whatever is in () that isn't supposed to be weighted) in prompts?

In AUTOMATIC1111's webui you can escape them with backslashes, e.g. suzuya \(kantai collection\)

What is prompt editing?

  • https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing
  • Follows the format of [P1:P2:[step # to change from P1 to P2]]
    • Good for creating a base with P1 and adding details with P2
  • Loads one prompt for x steps, then runs the next prompt
  • If the number is between 0 and 1, it's a fraction (percentage) of the number of steps after which to make the switch. If it's an integer greater than zero, it's just the step after which to make the switch.
    • Example: The prompt is "a [fantasy:cyberpunk:16] landscape"
      The model will draw a fantasy landscape for 16 steps. After step 16, it will switch to drawing a cyberpunk landscape
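
A toy illustration of the switching rule (hypothetical helper, not webui code):

```python
def active_prompt(step: int, total_steps: int, p1: str, p2: str, when: float) -> str:
    # Fractions of total steps vs. absolute step numbers, as described above.
    boundary = round(when * total_steps) if 0 < when < 1 else int(when)
    return p1 if step <= boundary else p2

# "a [fantasy:cyberpunk:16] landscape" at 20 steps:
for step in (15, 16, 17):
    print(step, active_prompt(step, 20, "fantasy", "cyberpunk", 16))
# 15 fantasy / 16 fantasy / 17 cyberpunk
```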

What is AND?

AND combines multiple prompts into one generation: at every step, each prompt contributes its own guidance, and the guidances are added together. It's good for combining two different prompts, such as a background prompt and a subject prompt.

(Note: the step-by-step alternation sometimes used as a metaphor, P1 on step 1, P2 on step 2, and so on, is actually how the [P1|P2] alternating syntax works, not AND.)

By kind anon: each prompt creates a "guidance" vector saying how to change the image to "match" the prompt, whatever that is, and AND makes you have TWO prompts pulling the image in different directions (that get added together)

Technical: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/

What are negative prompts?

By kind anon: this is also how negative prompts work-- it computes how to change the image to go towards the negative prompt, and SUBTRACTS that to move away from it

What are positive prompts?

  • Positive prompts calculate how much each step should move toward the positive prompt. Adding weights increases how much towards (or past) the positive prompt the AI will travel
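
A conceptual sketch tying the last three answers together (illustrative math only, not the webui's actual code): each positive prompt contributes a guidance vector, while the negative prompt stands in for the baseline prediction and gets subtracted:

```python
import torch

def guided_noise(eps_negative: torch.Tensor,
                 eps_positives: list[torch.Tensor],
                 cfg_scale: float) -> torch.Tensor:
    # Classifier-free guidance with composition: sum how far each positive
    # prompt's prediction pulls away from the negative prompt's prediction.
    guidance = sum(eps - eps_negative for eps in eps_positives)
    # Moving along that summed direction steers toward the positives and away
    # from the negative. With AND, eps_positives has one entry per prompt.
    return eps_negative + cfg_scale * guidance
```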

What's my best option if I want to imitate a specific artstyle?

You could try finding a similar artist, doing textual inversion, or describing the artstyle (e.g. thick strokes, lineart, etc.)

What are hypernetworks, VAE, v2.pt, etc?

Hypernetworks are like styles for your generation

VAEs fix faces and eyes and are generally good, but seem to dull colors

  • Disable if training

v2.pt censors and generally changes the whole composition; a lot of people don't like it

yaml doesn't seem to do anything except double RAM usage

What is Deep Danbooru?

Deep Danbooru is an autotagger: the AI automatically finds Danbooru tags that it thinks match the picture it's given. To activate it:

  1. git pull, then edit webui-user.bat
  2. add the --deepdanbooru argument so it looks like COMMANDLINE_ARGS=--deepdanbooru
  3. use the new DeepDanbooru interrogation option in img2img.

How do I get NAIFU to automatically save images for me?

Edit run.bat. Ctrl + f to "export SAVE_files". Remove the "::" and change the word from export to set.

How do you load the VAE?

Rename it to [your model's name here].vae.pt and put it next to your model in the models/Stable-diffusion folder

How do you load hypernetworks?/How do you use hypernetworks?

create a hypernetworks folder in the models folder and place all the .pt files there

How do you load the v2.pt?

place the v2.pt in the same folder as your webui-user.bat file (the root folder) and add https://rentry.org/nai-prior-v2 into your scripts folder (rename the text file to [anything].py)

How do you load the yaml?

It's not recommended to load the yaml because it doubles ram for no change in output, but if you really want to, rename it to [model name].yaml and place it next to your model.

Asukaimguranon: I will report that the yaml didn't double my vram usage outright, but i experienced something like a memory leak because it oom crashed after a dozen gens at most (compared to non-yaml for hundreds of gens no problem). the gens i did get matched non-yaml 1:1 so there's basically no reason to use yaml ever.

What are wildcards?

by anon: wildcard just lets you create a text file with a list of prompt words, and then you reference that text file to randomly pick from it, so you could randomize hair color, or pose, specific character, etc

Where do you get wildcards?

Got this info from a kind anon in hdg:

  1. Search the archive or git for a wildcard pastebin and copy what you want.
  2. Download wildcards.py from AUTOMATIC1111's wiki.
  3. Place the script + the pastebin text files (in a folder named wildcards, with the files named [wildcard name here].txt) in /scripts.
  4. Activate wildcards in AUTO's gui.
  5. In a prompt, write [wildcard name here] to choose a random line from that txt file. To use weights: ([wildcard name here]:[weight amount])
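
A toy version of the substitution (the __name__ delimiter and folder layout here are assumptions based on common wildcard scripts; check the actual script you download):

```python
import random
from pathlib import Path

def expand_wildcards(prompt: str, wildcard_dir: str = "scripts/wildcards") -> str:
    # Replace each __name__ placeholder with a random line from name.txt
    for txt in Path(wildcard_dir).glob("*.txt"):
        placeholder = f"__{txt.stem}__"
        if placeholder in prompt:
            options = [line for line in txt.read_text().splitlines() if line.strip()]
            prompt = prompt.replace(placeholder, random.choice(options))
    return prompt

# expand_wildcards("1girl, __haircolor__ hair")  # -> e.g. "1girl, silver hair"
```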

What is interpolation when merging models?

It determines how much of each model is in the output model. If the number is 40 for linear: 40% primary, 60% secondary. If the number is 40 for sigmoid or inverse sigmoid, that percentage is reweighted according to the respective curve. Sigmoid: the primary checkpoint gets less weight than with weighted sum. Inverse sigmoid: the opposite of sigmoid.
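
A rough sketch of the idea (the exact sigmoid curve is an assumption for illustration; m is the fraction of the primary model, matching the linear description above):

```python
import math

def interpolate(A: dict, B: dict, m: float, mode: str = "linear") -> dict:
    # A and B are state dicts of tensors; m = 0.4 means 40% primary (linear).
    if mode == "sigmoid":
        m = 1 / (1 + math.exp(-8 * (m - 0.5)))  # reweight m toward 0 or 1
    return {key: A[key] * m + B[key] * (1 - m) for key in A}
```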

Can you make X?

If you're creative enough, probably. If it was trained on that, definitely

TLDR of everything: https://rentry.org/sd-tldr
Current Issues: https://rentry.org/sd-issues

Info:

Boorus:

Upscalers:

Face restoration

Batch resize:

Git pull/revert guide:

Horde: https://stablehorde.net

A prompt dump:
https://pastebin.com/rbrtPCqZ

Part 1 NAI (with all the trackers I can find):

magnet:?xt=urn:btih:5bde442da86265b670a3e5ea3163afad2c6f8ecc&dn=novelaileak&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2F9.rarbg.com%3A2810%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=https%3A%2F%2Fopentracker.i2p.rocks%3A443%2Fannounce&tr=udp%3A%2F%2Ftracker.torrent.eu.org%3A451%2Fannounce&tr=udp%3A%2F%2Ftracker.tiny-vps.com%3A6969%2Fannounce&tr=udp%3A%2F%2Ftracker.pomf.se%3A80%2Fannounce&tr=udp%3A%2F%2Fp4p.arenabg.com%3A1337%2Fannounce&tr=udp%3A%2F%2Fopen.stealth.si%3A80%2Fannounce&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=udp%3A%2F%2Fmovies.zsw.ca%3A6969%2Fannounce&tr=udp%3A%2F%2Fipv4.tracker.harry.lu%3A80%2Fannounce&tr=udp%3A%2F%2Fexplodie.org%3A6969%2Fannounce&tr=udp%3A%2F%2Fexodus.desync.com%3A6969%2Fannounce&tr=udp%3A%2F%2Fbt.oiyo.tk%3A6969%2Fannounce&tr=https%3A%2F%2Ftracker.nanoha.org%3A443%2Fannounce&tr=https%3A%2F%2Ftracker.lilithraws.org%3A443%2Fannounce&tr=https%3A%2F%2Ftr.burnabyhighstar.com%3A443%2Fannounce

Part 2 NovelAI Leak: https://rentry.org/ewahd

not sure what this is: https://rentry.org/naifunya
UNPICKLER: https://rentry.org/safeunpickle2
Prebuilt xformers: https://rentry.org/25i6yn
img search: https://iqdb.org
Chinese NAI: https://ai.nya.la/
Guide to setting up local NAI by chinese NAI:

https://telegra.ph/NovelAI-%E5%8E%9F%E7%89%88%E7%BD%91%E9%A1%B5UI%E5%90%8E%E7%AB%AF%E9%83%A8%E7%BD%B2%E6%95%99%E7%A8%8B-10-10

Rebasin (alternative merging models): https://github.com/samuela/git-re-basin
WD 1.3 Torrents: https://rentry.org/WDtorrents

learn ai (recommended by emad): https://www.fast.ai/
https://jalammar.github.io/illustrated-stable-diffusion/

AMD Ubuntu 20.04

Mac: try using invoke ai https://github.com/invoke-ai
CPU (might be outdated): https://rentry.org/cputard

Danbooru dump: https://www.gwern.net/Danbooru2021

Outdated training: https://rentry.org/informal-training-guide
even more outdated: https://rentry.org/dq6vm

Free browser SD: https://huggingface.co/spaces/stabilityai/stable-diffusion
https://promptart.labml.ai/playground
https://novelai.manana.kr/
https://boards.4channel.org/g/thread/89199040
https://www.mage.space/
https://github.com/TheLastBen/fast-stable-diffusion
https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
visualise.ai

AMD Guide: https://rentry.org/ayymd-stable-diffustion-v1_4-guide

Furry dump from SD Labs server: https://e621.net/db_export/
2.5 mill posts from e621 with source links and more detailed tag info>>
https://mega.nz/file/kxN0zZSC#DaGsogjU_pURYxm1T7ZB8MOvjetuU2tRJBGpJ5m8bK4
Furry model tag counts >> https://mega.nz/file/AgdHDLLD#vERcoYTWGJguTsXmysbLq1NL_xBS8txQhVvPI5E3QKE

some 4chan alt afaik, maybe loli stuff: https://2chen.moe/tech/1909555?#bottom

Japanese chan:
https://www.2chan.net/

Krita: https://github.com/Interpause/auto-sd-krita
cool showcase: https://www.thispersondoesnotexist.com/

Voldy to NAI ui: https://pastebin.com/0NTMWFtb

  • save this to the root directory as user.css

Book: https://github.com/joelparkerhenderson/stable-diffusion-image-prompt-gallery

Celebs: https://docs.google.com/spreadsheets/u/0/d/1IqXkYDXux97aU8Y5kqqBrBvCn3CLRDhMZ7lEWsAtwUc

Multi GPU: https://hastebin.com/raw/labumiyiqo

Barebones SD: https://github.com/AmericanPresidentJimmyCarter/stable-diffusion

Explorer thing: http://cybernetnews.com/find-replace-multiple-files
Messy room: https://twitter.com/8co28/status/1583434494354210817

Video FPS interpolator: https://github.com/megvii-research/ECCV2022-RIFE
Another Video interpolator (Flowframes): https://nmkd.itch.io/flowframes
Another: https://film-net.github.io/

Alternatives: https://github.com/brycedrennan/imaginAIry
https://www.stablecabal.org/

GPU speed comparison
* https://lambdalabs.com/blog/inference-benchmark-stable-diffusion/

Proof of concept:
https://www.cutout.pro/photo-animer-gif-emoji/upload
(then paste part of the original to remove the watermark)
For her voice:
https://www.neural-reader.com/tts/start
https://webmshare.com/play/O95LA

Image manipulation and stuff: https://imagemagick.org/index.php

Cool "is my image ai?" detection test: https://ai.azunyan1111.com/

Tracker list: https://ngosang.github.io/trackerslist/

Sampler comparison paper: https://arxiv.org/pdf/2206.00364.pdf

e621 thing: https://furry.booru.org/

img2img thing: https://huggingface.co/spaces/pharma/sd-prism

classifier guidance info: https://benanne.github.io/2022/05/26/guidance.html

Twitter anons: https://pastebin.com/k00qiJbL

Facebook thing: https://github.com/facebookincubator/AITemplate

Info for quick links: you can put CLIP_stop_at_last_layers in the quick settings list to make it more easily accessible.
(you can do this for any option by going into inspect element and finding the ID, ignoring the setting_ part: https://files.catbox.moe/2lcolb.PNG )

some colab: https://rentry.org/sd-colab-automatic

alt: https://github.com/n00mkrad/text2image-gui
1 click: https://github.com/cmdr2/stable-diffusion-ui

3d: https://github.com/ashawkey/stable-dreamfusion
https://dreamfusion3d.github.io/
3d gens: https://colab.research.google.com/drive/1706ToQrkIZshRSJSHvZ1RuCiM__YX3Bz#scrollTo=i5-MWEjfBjYx
https://colab.research.google.com/drive/1706ToQrkIZshRSJSHvZ1RuCiM__YX3Bz?authuser=2#scrollTo=i5-MWEjfBjYx
It's a bit more effort to set up; make sure you replace line 29 of main.py with

config = yaml.full_load(open(args.config, 'r'))

https://sketchfab.com/3d-models/low-poly-beretta-m9-c79ea90735b248e588d5be49809d7b34

installer, not sure if safe: https://github.com/EmpireMediaScience/A1111-Web-UI-Installer

clip stuff: https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/

AI youtubers/guides:

Slerp info: https://en.wikipedia.org/wiki/Slerp#Geometric_Slerp

AI Music: https://github.com/openai/jukebox/

booru thing: https://github.com/VivaLaPanda/infinibooru

KR thing: https://novelai.manana.kr/

some hosting thing: https://zguide.zeromq.org/

Clipart Studio plugin for lighting: https://nyatabe.booth.pm/items/4196349

AI showcase: AI 3D animation
https://twitter.com/TREE_Industries/status/1578071996033863681
https://www.youtube.com/watch?v=-TS2iLhYP28
https://www.youtube.com/watch?v=8oIQy6fxfCA
AI 3D models
https://www.youtube.com/watch?v=5j8I7V6blqM
https://www.youtube.com/watch?v=uM5NPodZZ1U
AI voice acting
https://www.youtube.com/watch?v=oQx4SyM_iH4
https://www.youtube.com/watch?v=ria6qt7UUN4
AI music
https://www.youtube.com/watch?v=QN0DDD7B3oU
AI coding
https://www.youtube.com/watch?v=pdSfgRYy8Ao&t=878s
https://www.youtube.com/watch?v=_9aN1-0T8hg
AI video
https://www.youtube.com/watch?v=PHRg241NjJA

Dance diffusion:
https://github.com/pollinations/dance-diffusion

fun way to find out if you were used for training: https://haveibeentrained.com/

Cool showcase by anons:
This one is simple but very pretty; it shows how incredibly simple yet powerful this technology is.
https://twitter.com/tori29umai/status/1586367798988587014

This one is a fuller demonstration, converting an older music video into a hand-drawn style, just compare the facial expressions of the old and new versions, especially starting from 2:50
https://www.bilibili.com/video/BV14D4y1r7NJ/

This one is a test of style-transfer, using the famous artist Kurehito Misaki's textual inversion (illustrator of Saekano)
https://www.bilibili.com/video/BV1D14y1j7KL/

This one fuses Genshin and K-ON together
https://www.bilibili.com/video/BV1JR4y1Q73a/

This is the latest tech demo, rendering multiple characters and complex actions
https://www.bilibili.com/video/BV1B14y1575G

twitter anons:
https://twitter.com/PorchedArt
https://twitter.com/FEDERALOFFICER
https://twitter.com/Elf_Anon
https://twitter.com/ElfBreasts
https://twitter.com/BluMeino
https://twitter.com/Lisandra_brave
https://twitter.com/nadanainone
https://twitter.com/Rahmeljackson
https://twitter.com/dproompter
https://twitter.com/Kw0337
https://twitter.com/AICoomer
https://twitter.com/mommyartfactory
https://twitter.com/ai_sneed
https://twitter.com/YoucefN30829772
https://twitter.com/KLaknatullah
https://twitter.com/spee321
https://twitter.com/EyeAI_
https://twitter.com/S37030315
https://twitter.com/ElfieAi
https://twitter.com/Headstacker
https://twitter.com/RaincoatWasted
https://twitter.com/RatmanScott
https://twitter.com/Merkurial_Mika
https://twitter.com/epitaphtoadog
https://twitter.com/lillyaiart
(good stuff) https://twitter.com/LeftGRGR

collection of papers to learn about this research from its inception: https://github.com/prodramp/DeepWorks/tree/main/12-Research-Papers
https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators
https://arxiv.org/abs/2110.13746

something: http://blog.dlprimitives.org/

some unpickle info: https://www.reddit.com/r/sdforall/comments/y5axt7/with_lots_of_models_appearing_due_merging_and/

colab: https://colab.research.google.com/drive/1jUwJ0owjigpG-9m6AI_wEStwimisUE17?pli=1#scrollTo=Ucr5_i21xSjv

webm from imgs: https://ffmpeg.party/webm-from-image-sequence/

Music AIs: https://soundraw.io/
https://boomy.com/
https://www.aiva.ai/
https://huggingface.co/spaces/fffiloni/img-to-music

vocal ai: https://twitter.com/fifteenai

text to speech ai (deepfake): https://fakeyou.com/

Confirmed Drama

10/20 News

TLDR from reddit:

discord

4chan
anon's recap:

we are so open guys, we have a clause in our employee contracts stating that they have the right to release any model they work on
we've never stopped developers from releasing models btw
we didn't issue a takedown request btw
well ok, we did issue a takedown request, but it wasn't a LEGAL request ok?
there was no leak btw guys everything is fine haha folk get so riled up!
one of the lead researchers of the model leaked OUR model, what a pathetic publicity stunt!

Quick Rundown:

emad confirming censorship for 1.5, no more porn

Might cause anatomical issues in the released model. Non-issue for NSFW though, especially with model merging

Debunked: Stability AI will probably release an SFW model before their 1.5 model because of legal issues

drama between emad and automatic1111

Emad issued a private and public apology over it
AUTOMATIC v. StabilityAI:

Emad will supposedly not censor the expected Stability AI model release since SAI are only training their SFW model

  • Conversation from AMA:

    User: is it a risk the new models (v1.X, v2, v3, vX) to be released only on dreamstudio or for B2B(2C)? what can we do to help you on this?

    Emad: basically releasing NSFW models is hard right now

    Emad: SFW models are training

    User: could you also detail in more concrete terms what the "extreme edge cases" are to do with the delay in 1.5? i assume it's not all nudity in that case, just things that might cause legal concern?

    Emad: Sigh, what type of image if created from a vanilla model (ie out of the box) could cause legal troubles for all involved and destroy all this. I do not want to say what it is and will not confirm for Reasons but you should be able to guess.

    User: what is the practical difference between your SFW and NSFW models? just filtering of the dataset? if so, where is the line drawn -- all nudity and violence? as i understand it, the dataset used for 1.4 did not have so much NSFW material to start with, apart from artsy nudes

    Emad: nudity really. Not sure violence is NSFW

By the question asker:

just want to clarify a misunderstanding that seems to have taken hold to do with 1.5 censorship
i was the person asking emad about the clarifications about "extreme edge cases" and the difference between their NSFW and SFW models
the context of that question was emad was speaking about how SFW models are easier to release right now because of potential legal issues with the NSFW models, and about training a separate set of SFW models
the "extreme edge cases" question was about 1.5 specifically; as far as i understood it, 1.5 is one of their NSFW models and the "extreme edge cases" that they want to censor are cp, not all nudity
the "SFW vs NSFW" question was about the distinction between two category of models that emad was referring to, the SFW models are separate and trained with most (all?) nsfw content filtered from the dataset, but not violence
of course i'm not trying to shill for them or anything, and we'll see the true extent of the censorship if/when they release the models, but at the very least this is what was actually said

Unconfirmed Drama

Quick Rundown:

drama around models supposedly trained on CP

Don't download every random model link you come across

Whether the fed bait is an actual fed bait or not cannot be confirmed

Some anons theorize that the fed bait wasn't due to the ckpt but was due to glowie anon's scraper downloading actual CP from a torrent somewhere using a DHT search engine

An anon claims to have tested the fed bait and said that the output was a worse version of another existing model. Whether this is true or not cannot be confirmed

Another anon's theory: "if he truly did get his acct taken down, he mentioned auto-scraping torrents, then it's possible the fed did have a honeypot or tracking on that particular torrent (but they didn't make the content), that seems the most likely to me. if this was all a hoax to spread panic about a particular torrent, he chose a relatively unknown torrent to do it with, for example it would be explosive if he made these accusations about the original novelai torrent. he may be telling the truth, but he hasn't provided strong evidence backing up the full story, so there are a lot of grains of salt to be taken"

Fed Bait Information

Editor's note: I shortened the fed bait PSA because it was not officially confirmed by sources other than the owner of SD Training and the 4channer who got baited, and it took up too much space at the top. The original fed bait info was a huge dump because it was a copy-paste of the announcement from the SD Training server; I got confirmation from the owner of SD Training, and, if the announcement was true, it was a relevant and timely warning to delete/avoid downloading the pickled model. Even if this whole thing turns out to be a huge troll by both the owner and the 4channer, this situation showcases the danger of downloading random files from the internet.

Given the severity of the situation, the fed bait warning will stay a PSA (for now) to keep people wary of random ckpts/pts/vaes they download.

That being said, future drama on this rentry will be confirmed by multiple sources before being posted at the top. For the unconfirmed drama, refer to https://rentry.org/sdupdates#unconfirmed-drama.

TLDR: a fed allegedly uploaded a ckpt of CP as a honeypot; an anon downloaded it and got a warning from their seedbox, which had gotten a warning from the Child Exploitation and Online Protection Command

There are claims of a model trained on CP in the wild

It is alleged that these models were distributed by feds via torrents starting around October 11, 2022.

Similar (?) event last year: https://saucenao.blogspot.com/?view=classic

Original fed post:

Anon's statement: "one of our users was auto-scrapping every CKPT file on every tracker (pseudo) and they stumbled upon it, it had a weird name that literally nobody knew about until he searched on the 4chan archives and found what it was abt (pic related). then ofc he deleted it, and just a few minutes ago he told me he got a letter on his email from his seed box and it was abt "containing materials created and containing with sufficiently sexually suggestive images of minors" and it seems it was a fed bait"

Email from seedbox company:

Hall of Fame

AUTOMATIC1111/Voldy: Best webui, for the people, madlad gigachad
Leaker anon: Leaked NAI's image gen model + text gen
Asuka anon: Large 1:1 NAI efforts before all the updates
Booru anon: Self-hostable, intuitive, easy to setup booru
Asuka Test Imgur anon: easy to follow guides, helping out the rentry
Model anon: writing up https://rentry.org/sdmodels + helping out
Glowie'd anon: first public fed bait
Ixy anon: Good guide
mogubro: A lot of hypernets. also cool name, very nice
koreanon: legendary korean disciple

Misc

(WIP) Precursor to https://rentry.org/sdwiki / https://github.com/questianon/SDwiki

author socials: questianon !!YbTGdICxQOw (malt#6065, u/questianon, https://github.com/questianon, https://twitter.com/questianon), ping or tag if I need to change anything or if you have questions
