After using the new model for over a month, mostly on AI Story Generator, and investigating the old and new AI models used, I've reached a conclusion that, in my opinion, makes sense.
The old model was Llama 2. Llama 2 (and Llama 3) are models fed on books, as in lots of literature; Meta licensed a LOT of them to train the models.
The new model is DeepSeek, or at least it seems to be. We'll assume it is, though to be fair, it doesn't change the argument much. DS has an issue: it is trained on general content, say the internet, some books obviously, interactions, etc.
Now, what's the issue with this?
Llama is a model that knows WAY better how a story works, having hundreds of them in its dataset and having processed them during its training. DS doesn't; DS is a more generalist model, conceived more as an assistant than a story creator.
For the kind of usage done here, essentially either chatting with characters with AI Character Chat or writing a story with AI Story Generator, the improvement in context and general knowledge DS brings is not worth the decrease in narrative quality and understanding of story writing. And that's not mentioning all the hallucinations, total disregard of context and prompting, and similar issues the new model has.
Llama 2 is a way better option for the kind of usage we have. Yes, we would be losing some general knowledge. Yes, it may not be the best AI model out there. But, all things considered, it's a matter of choosing the best option for our use case.
I understand the dev does all this work alone, and I appreciate his effort. That's why, as a really active user of this platform and service, I consider the best choice here is to return to the old model.
If you have any further arguments for it, please add them in the comments. Thanks everyone for your time.
-Lucalis.
Hi! :)
Just commenting to clarify that the 'break bad patterns' bug is unrelated to the new model: this behavior is actually caused by the prompt used by AI character chat when generating the bot reply -- it always contains the phrase "Remember the break bad patterns rule", in reference to an item in the default writing instructions. IIRC, and in case it hasn't yet been fixed, this line is added to the end of the prompt somewhere deep in the getBotReply function; I forked ACC and can confirm that editing it out removed the issue.
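For anyone curious, the patch itself is tiny in spirit. Here's a hedged sketch of the idea; the function and variable names are made up for illustration, and the real getBotReply internals differ:

```javascript
// Hypothetical sketch: strip the injected instruction from the final
// prompt string before it is sent to the model. In the real ACC source
// the line is appended inside getBotReply; here we just show the
// filtering step on its own.
function stripBadPatternsLine(prompt) {
  return prompt
    .split("\n")
    .filter(line => !line.includes("Remember the break bad patterns rule"))
    .join("\n");
}
```

In my fork I simply deleted the line where it's appended, which amounts to the same thing.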
Anyway, there are similar bugs in other generators, and I suspect most of them are also due to the prompt having similar instructions that were originally meant to mitigate quirks of the old model, but now only cause problems.
More on-topic, I've been testing the new model a lot, writing prompts for it from scratch, and the results are amazing: it can consistently understand complex, structured instructions, so one can more reliably make little 'programs' with it, not just narrative stuff. But you have to understand that generators using old prompts will more than likely not work out of the box; you have to tinker with them to get the results you want.
I really, really wish the new model stays. It has opened up a lot of possibilities for making new generators, and a rollback would really suck for me as a developer. I'm specifically hacking away at ACC to put together a new tool for narrative, world-building and roleplay; it's working fantastic, so I get a feeling of absolute dread each time I see posts like this! Please, don't take it away from me, I only need some more time! ;)
Anyway, just wanted to share these bits. Cheers!
Being brutally honest, no. The AI just does whatever it wants. How long are your stories? Because the old model used to handle my 300k-word-long ones with ease (around 2.1 MB as the downloaded JSON), and the new model can't even understand what point of the story it is at. Like, its consistency is horrid; it just becomes idiotic after 50 paragraphs, sometimes even fewer.
The whole point of AI-Story-Generator is to be a model capable of creating a long story, and the situation now is: it can't.
Please, pay close attention:
That is the last sentence in the text you quoted, emphasis mine.
The argument: prompts need to be rewritten to make full use of the new model's capabilities, and that takes time as there's a lot of trial and error involved. After (not before) such a rework is done, the results become much better.
This is not speculation on my part: I've been doing exactly that, tweaking old code, and I'm merely reporting my findings. How do you know the maintainer of AI Story Generator is not in the middle of a similar rework?
In the famous words of the old model: let's not get ahead of ourselves. Patience will be more rewarding than a rollback, this I can assure you.
Really, REALLY doubt it. I'm struggling right now to get it to write alternate history, and not even the entire AH, just single fucking dialogues. It hallucinates that the character is drunk and tired when nothing of the sort was even mentioned; it ignores already-written paragraphs and does whatever it wants. It is not getting better, unless you consider getting dumber an improvement.
https://perchance.org/story-ai#data=uup1%3A7c498bf05802fc5b74f5e9eb85becacf.gz
Here's the story as an example.
Forgive me, but I believe I have explained the situation to you in a rather thorough manner, and fail to see a way to make it any more clear.
I am not arguing that the specific generator you're using is working correctly at the present moment; I am letting you know that this is temporary. Do not take my word for it: click the little 'edit' button to bring up the source code and tweak the prompts yourself. The bulk of the work is fairly straightforward: replacing rules designed to deal with the old model's quirks with rules that work for the new one.
You will have to experiment a fair bit, even writing the entire prompt from scratch, and for this the AI Text Generator is a tool I cannot recommend enough. There are multiple ways to structure a complex prompt, but from my own testing, I've found that a very good approach is to break it into sections: a role for the model, followed by context data, then optionally an input, and then a task followed by a list of constraints.
As an example, here's a prompt I've been using for generating lorebook entries from narration passages:
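(The original prompt didn't survive in this thread, so what follows is a hypothetical reconstruction along the lines of the section structure described above: role, context, input, task, constraints. All wording is illustrative, not the actual template.)

```javascript
// Hypothetical sketch of a sectioned lorebook prompt. Section headers
// and rules are made up to illustrate the structure, not copied from
// the real generator.
function buildLorebookPrompt(passage) {
  return [
    "## Role",
    "You are a lore archivist distilling story narration into lorebook entries.",
    "## Context",
    "The passage below is taken from an ongoing story.",
    "## Input",
    passage,
    "## Task",
    "Extract 2-4 concise lorebook entries (characters, places, facts).",
    "## Constraints",
    "- One entry per line, formatted as 'Name: description'.",
    "- Do not use em dashes.",
    "- Do not invent details that are not in the passage.",
  ].join("\n");
}
```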
The no-em-dash rule doesn't work 100% of the time, but other than that it's actually pretty fun: you can just write away for a few paragraphs, and it'll output some memories/lore, which you can then paste into the Lorebook section, and repeat the process. I've been using variations of this method to generate things like character descriptions, factions, locations, or just to make it rapid-fire minor lore details that "fill in the blanks" between existing entries for realism.

You can take that template and rework it to your liking, even build new generators based off of it. Go ahead: the new model lets you do some extremely cool things; the difference for prompt engineering is simply night and day.
Now, this labor may be entirely outside of your skillset, and that's alright. However, if that's indeed the case, then I'd humbly request you give the maintainer(s) the time to do it for you before calling for a rollback.
That is all.