there's an intriguing clique of AI APIs with shady dealings
involved are the following:
HelixMind - 859600773029953568 (by @faer1x
)
though, note here - the "infraction" by HelixMind mostly relates to being aware of the shenanigans of the other two - and doing nothing about it.
They have, to my awareness, not faked any models themselves or done anything 'really terrible' - it is solely their inaction that makes them referenced here.
Though, this still isn't exactly a good look - https://files.catbox.moe/1khz21.png
--> HelixMind statement/response: https://files.catbox.moe/x2on2e.png
FeathrAI - 883350648853262346 (by @ichate
)
NexeonAI - 1105052387187634279 (by @._.cl0ver_.
)
- The Story of a FeathrAI Ticket:
A request by a user to doublecheck as for why Roo is referencing 'ChatGPT' (https://files.catbox.moe/zekfsq.jpg) on opus-4 leads to the initial drama.
Initial response (https://files.catbox.moe/nl4smm.png) appears to just be "it's all g on my end" - but as the user in question also identifies, there is a bigger issue lurking.
Multiple independent screenshots around the same time corroborate that the model is "not real" at the time, with Hecker 'brought on' to fix the problem:
- https://files.catbox.moe/piqxos.png
- https://files.catbox.moe/5q70gp.png
- https://files.catbox.moe/ta3erf.jpg
- https://files.catbox.moe/scqxre.jpg
- https://files.catbox.moe/n34a6w.jpg
And, to my current awareness & testing - the model's behavior DOES appear to align with proper Anthropic behavior (cross-compared with Zukijourney's behavior).
However, there's two parts that make me not fully confident in this still.
Neither Ichate nor Hecker are exactly trustworthy people:
- Ichate: https://rentry.co/sjlore & Hecker is known to be a ddos-nuisance.
The admission to system-prompt models:
- https://files.catbox.moe/4xfbf1.png
- https://files.catbox.moe/k8rbx9.png
Models should always come in their real state - it does not matter that users may expect the behaviors of 'chatgpt.com' or 'claude.ai' - by "falling into the trap" of thinking to yourself that system-prompting is appropriate, you invite a mentality that is damaging and dangerous to the integrity of your API.
If you are willing to "fake model behavior" to "appease the expectation of how a model behaves..." - this is a pitfall to more serious infractions.
--> Ichate statement/response: https://files.catbox.moe/1yl091.png
Make your own judgements.