UPDATE 01 Dec 2023 : ALL REPOS HAVE BEEN CLEANED/DELETED, THANKS FOR ALL YOUR SUPPORT.

Undi95's list of wrong repos

What happened

Some time ago I got a tool that lets us extract the "difference" between two models and turn it into a LoRA.
I found the idea cool, and my first idea was to apply Mistral data to Llama2.
I then applied the LoRA the script gave me and didn't get any errors.
I quantized the output and didn't get any errors.
I tried the model and didn't get any errors.
Should be fine then, right?

Wrong.
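
For context, the diff-extraction idea boils down to taking the weight delta between a finetuned model and its base, then compressing that delta into a low-rank adapter. A minimal sketch of the math in plain PyTorch; the rank and the function name are my own illustration, not the actual tool:

```python
import torch

def extract_lora(w_tuned: torch.Tensor, w_base: torch.Tensor, r: int = 16):
    """Compress the weight delta between two matching layers into a rank-r adapter."""
    delta = (w_tuned - w_base).float()
    # Low-rank approximation via SVD: delta ~= lora_B @ lora_A,
    # with lora_B shaped (out_features, r) and lora_A shaped (r, in_features)
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    lora_B = U[:, :r] * S[:r]   # fold the singular values into B
    lora_A = Vh[:r, :]
    return lora_A, lora_B
```

Applying the result is just adding lora_B @ lora_A back onto the target layer's weight, and that addition is exactly where the size of the target model matters.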

We discovered that a LoRA extracted from a 7B has no effect on a 13B, for example.
A different architecture gives an unusable LoRA too, so Llama2-Mistral was a placebo. Even I fell for it; I bamboozled myself.
So here's a list of models that are just wrong and actually give false information. Now that I have the tools to correctly check whether a LoRA worked, I can say that the following models aren't what I wanted them to be, and I'm terribly sorry.
All models listed here have a "What it needed to be" and a "What it really is", and I will post this rentry in the "Community" section of each problematic model/LoRA before deleting/modifying them in one month, or just fix the problem by reworking them entirely.
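
To make the failure concrete: Llama-2 7B has a hidden size of 4096 while 13B uses 5120, so adapter matrices extracted from a 7B can never line up with 13B weights. A toy sketch (the model dimensions are real, the rest is my illustration):

```python
import torch

r = 16
# Adapter matrices extracted from a 7B layer (Llama-2 7B hidden_size = 4096)
lora_A = torch.randn(r, 4096)
lora_B = torch.randn(4096, r)
delta = lora_B @ lora_A   # (4096, 4096) weight delta

# A 13B attention projection is (5120, 5120): the delta cannot be added to it
w_13b = torch.randn(5120, 5120)
try:
    w_13b += delta        # raises RuntimeError (shape mismatch)
except RuntimeError as err:
    print("cannot merge:", err)  # a correct merge tool would stop here
```

A merge script that matches tensors by name and silently skips anything with mismatched shapes would produce exactly the symptom above: no errors, and a "merged" model identical to the base.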

UPDATE: After more testing, even a 13B LoRA extracted from a 13B model AND applied to a 13B doesn't work, kek.
The LoRA doesn't throw an error when used, but it doesn't change anything.
The only model on Undi95/ that used one was Lewd-Sydney-20B, and the repo got fixed (I just removed the LoRA from the model card since the model works perfectly; the LoRA had been applied to the two base models).
The only model on NeverSleep/ that used some was Echidna-13B-v0.3, and the repo got fixed too (I just removed the LoRAs from the model card since it works perfectly too; the recipe wasn't the same as v0.2 anyway).
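
For anyone who wants to run the same sanity check, the idea is simply to diff the merged model's tensors against its base: if nothing differs, the LoRA did nothing. A minimal sketch with transformers (the repo names are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder repo names, for illustration only
base = AutoModelForCausalLM.from_pretrained("some/base-model", torch_dtype=torch.float16)
merged = AutoModelForCausalLM.from_pretrained("some/merged-model", torch_dtype=torch.float16)

base_sd = base.state_dict()
changed = sum(
    1
    for name, tensor in merged.state_dict().items()
    if name in base_sd and not torch.equal(tensor, base_sd[name])
)
# changed == 0 means the "merge" was a no-op: the new model is just
# the base model under a different name.
print(f"{changed} tensors differ from the base model")
```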

List of bad repos

  • Undi95/ReML-Mistral-v2.2-13B
    • What it needed to be : ReML-v2.2 with Mistral data added
    • What it really is : ReML-v2.2
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/Llama-2-13b-Mistral
    • What it needed to be : Llama2-13B with Mistral data added
    • What it really is : Llama2-13B
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/Llama-2-13b-Mistral-GGUF
    • What it needed to be : Llama2-13B with Mistral data added
    • What it really is : Llama2-13B
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/ReMM-Mistral-13B
    • What it needed to be : ReMM-v2.2 with Mistral data added
    • What it really is : ReMM-v2.2
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/ReMM-Mistral-13B-GGUF
    • What it needed to be : ReMM-v2.2 with Mistral data added
    • What it really is : ReMM-v2.2
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/CodeLlama-34b-Mistral
    • What it needed to be : CodeLlama-34b with Mistral data added
    • What it really is : CodeLlama-34b
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/CodeLlama-34b-Mistral-GGUF
    • What it needed to be : CodeLlama-34b with Mistral data added
    • What it really is : CodeLlama-34b
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/airoboros-c34b-2.2.1-Mistral
    • What it needed to be : airoboros-c34b-2.2.1 with Mistral data added
    • What it really is : airoboros-c34b-2.2.1
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/airoboros-c34b-2.2.1-Mistral-GGUF
    • What it needed to be : airoboros-c34b-2.2.1 with Mistral data added
    • What it really is : airoboros-c34b-2.2.1
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/Amethyst-13B-Mistral
    • What it needed to be : Amethyst-13B with Mistral data added
    • What it really is : Amethyst-13B
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/Amethyst-13B-Mistral-GGUF
    • What it needed to be : Amethyst-13B with Mistral data added
    • What it really is : Amethyst-13B
    • Cause : Mistral LoRA was made from a 7B, proven not to apply at all
  • Undi95/Xwin-MLewd-7B-V0.2
    • What it needed to be : Xwin-7B-V0.2 with MLewd data merged
    • What it really is : Xwin-7B-V0.2
    • Cause : MLewd LoRA was made from a 13B, proven not to apply at all
  • Undi95/Xwin-MLewd-7B-V0.2-GGUF
    • What it needed to be : Xwin-7B-V0.2 with MLewd data merged
    • What it really is : Xwin-7B-V0.2
    • Cause : MLewd LoRA was made from a 13B, proven not to apply at all
  • Undi95/Dawn-v0.1-70B
    • What it needed to be : Xwin-70B-V0.1 with MLewd, Echidna and LimaRP data
    • What it really is : Xwin-70B-V0.1 with LimaRP data
    • Cause : MLewd and Echidna LoRAs were made from 13Bs, proven not to apply at all
  • Undi95/Dawn-v0.1-70B-GGUF
    • What it needed to be : Xwin-70B-V0.1 with MLewd, Echidna and LimaRP data
    • What it really is : Xwin-70B-V0.1 with LimaRP data
    • Cause : MLewd and Echidna LoRAs were made from 13Bs, proven not to apply at all
  • Undi95/Trismegistus-lora
    • What it needed to be : LoRA applicable to a Llama2 model
    • What it really is : ?
    • Cause : Different architecture
  • Undi95/llama2-to-mistral-diff
    • What it needed to be : LoRA applicable to a Llama2 model
    • What it really is : ?
    • Cause : Different architecture
  • Undi95/llama2-to-mistral-instruct-diff
    • What it needed to be : LoRA applicable to a Llama2 model
    • What it really is : ?
    • Cause : Different architecture
  • Undi95/Llama-2-7b-Mistral
    • What it needed to be : Llama-2-7b with Mistral data added
    • What it really is : Llama-2-7b
    • Cause : Different architecture
  • Undi95/Llama-2-7b-Mistral-GGUF
    • What it needed to be : Llama-2-7b with Mistral data added
    • What it really is : Llama-2-7b
    • Cause : Different architecture

Model count : 9 (GGUF versions not counted separately)
LoRA count : 3

What will I do now?

I know how to test my shit properly now, so I will do that before releasing anything in the wild.
I'm bummed out. I feel like I lied to some of you, but it needed to be said, for future work and simply to be honest.
I will try, if possible, to redo all the work needed for the models people really seem to like (well, the idea of them, since the models weren't really what they were supposed to be).
All my other repos are 100% safe: I didn't use this script or those LoRAs on them, so they should work normally.

Pub: 01 Nov 2023 01:01 UTC
Edit: 01 Dec 2023 18:39 UTC