this post was submitted on 25 Sep 2025
-4 points (30.0% liked)

Perchance - Create a Random Text Generator


After using the new model for over a month, mostly on AI Story Generator, and investigating both the old and new AI models, I've reached a conclusion that, in my opinion, makes sense.

The old model was Llama 2. Llama 2 (and Llama 3) are models trained on books, as in lots of literature; Meta licensed a LOT of them to train the models.

The new model is DeepSeek, or at least it seems to be. We'll assume it is, though to be fair, it doesn't change the argument much. DeepSeek has an issue: it is trained on general content, say: the internet, some books obviously, interactions, etc.

Now, what's the issue with this?

Llama is a model that knows WAY better how a story works, having hundreds of them in its dataset and having processed them during training. DeepSeek doesn't; it is a more generalist model, conceived more as an assistant than a story creator.

For the kind of usage done here, essentially either chatting with characters with AI-Character-Chat or writing a story with AI-Story Generator, the improvement in context and general knowledge DeepSeek brings is not worth the decrease in narrative quality and understanding of story writing. And that's not mentioning all the hallucinations, total disregard of context and prompting, and similar issues the new model has.

Llama 2 is a way better option for the kind of usage we have. Yes, we would be losing some general knowledge. Yes, it may not be the best AI model out there. But all things considered, it's a matter of choosing the best option for our use case.

I understand the dev does all this work alone, and I appreciate his effort. That's why, as a really active user of this platform and service, I consider that the best choice here is to return to the old model.

If you have any further arguments for it, please add them in the comments. Thanks everyone for your time.

-Lucalis.

top 14 comments
[–] Garth01@lemmy.world 2 points 6 days ago (2 children)

I agree. The new model isn't very good. The main issue I have with it is that it tends to break immersion, saying stuff like "BREAKING_BAD_PATTERNS" or talking about "breaking bad patterns" for no given reason whatsoever, with no connection to the roleplay at hand. Characters also tend to speak like this all the time:

John nods, "I think we've got it under control," he said "but watch your back just in case."

Even if the initial message has asterisk roleplay such as this, characters added to that thread will still talk like the above example; asterisks have to be incorporated a LOT into the initial message (at least from what I observed) in order for the new characters to RP like that.

The new model isn't that bad though, as it is able to portray fictional characters almost perfectly. I don't really use the AI Story Generator much, I only use the AI Character Chat and occasionally the image generator. I'm saying this based on what I've observed from there.

[–] Almaumbria@lemmy.world 1 points 5 days ago (2 children)

Hi! :)

Just commenting to clarify that the 'break bad patterns' bug is unrelated to the new model: this behavior is actually caused by the prompt used by AI character chat when generating the bot reply -- it always contains the phrase "Remember the break bad patterns rule", in reference to an item in the default writing instructions. IIRC, and in case it hasn't yet been fixed, this line is added to the end of the prompt somewhere deep in the getBotReply function; I forked ACC and can confirm that editing it out removed the issue.
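For anyone who wants to try the same fix on their own fork, it amounts to filtering that phrase out of the assembled prompt before it's sent to the model. A rough JavaScript sketch (the function name and structure here are illustrative, not the actual ACC source):

```javascript
// Illustrative sketch, not the actual ACC code: drop the stray
// "break bad patterns" reminder from the assembled prompt before
// it is sent to the model.
function stripBadPatternsReminder(prompt) {
  return prompt
    .split("\n")
    .filter(line => !line.includes("Remember the break bad patterns rule"))
    .join("\n");
}
```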

Anyway, there are similar bugs in other generators, and I suspect most of them are also due to the prompt containing similar instructions that were originally meant to mitigate quirks of the old model but now only cause problems.

More on-topic, I've been testing the new model a lot, writing prompts for it from scratch, and the results are amazing: it can consistently understand complex, structured instructions, so one can more reliably make little 'programs' with it, not just narrative stuff. But you have to understand that generators using old prompts will more than likely not work out of the box, you have to tinker with them to get the results you want.

I really, really wish the new model stays. It has opened up a lot of possibilities for making new generators, and a rollback would really suck for me as a developer. I'm specifically hacking away at ACC to put together a new tool for narrative, world-building and roleplay; it's working fantastic, so I get a feeling of absolute dread each time I see posts like this! Please, don't take it away from me, I only need some more time! ;)

Anyway, just wanted to share these bits. Cheers!

[–] Randomize@lemmy.world 1 points 5 days ago (1 children)

Good luck, I hope you're successful! I also like the new model for RP (when it works properly once in a while); it's much smarter and doesn't need me to hold its hand for every little detail. I immediately notice the difference. The old model often doesn't understand that chars can't know what happens in places far away unless there is some kind of stable connection. My user isn't with the char 24/7, so that irked me quite a bit. The new model knew without prompting. <3

[–] Lucalis@lemmy.world 1 points 5 days ago (1 children)

it's much smarter and doesn't need me to hold its hand for every little detail

Maybe I'm the unluckiest guy ever, but on my end the AI just hallucinates whatever it wants when I do something with a character. I MUST actively guide it toward obvious things, and it still just completely ignores them, something that never happened before.

[–] Randomize@lemmy.world 1 points 5 days ago* (last edited 5 days ago) (2 children)

That's why I said "when it works once in a while". There are certain hours when I think the dev is working on the model (like right now), and yes, then it's dumb af. But like... uh... I don't know, maybe 10 hours ago or so, it worked perfectly fine for me. I was able to have a real flow of back-and-forth messages for the 10 mins I used it, without much rerolling or needing to prompt real-life mechanics like "char can't see what user does while texting" (from across the city).

In those 10 mins I got more story done than in two hours yesterday. And this wasn't the first time; that's why I think the dev might work on it at certain more or less fixed times.

[–] Randomize@lemmy.world 1 points 4 days ago

Uhhh... damn, I jinxed it. Maybe because it's the weekend.

[–] Lucalis@lemmy.world 1 points 5 days ago (1 children)

In those 10 mins I got more story done than in two hours yesterday. And this wasn't the first time; that's why I think the dev might work on it at certain more or less fixed times.

The issue is... I use it all the time; it helps with my Asperger's and ADHD.

[–] Randomize@lemmy.world 0 points 5 days ago* (last edited 5 days ago) (1 children)

Maybe see it like your doc being on vacation. If there's a serious issue, you need to find a substitute; otherwise you need to wait. It might not be perfect, but once the doc comes back, they are (hopefully) better than before.

It will probably take more time: days, weeks, maybe even months, idk much about programming. But complaining right now always seems to me like shouting at a surgeon mid-surgery about why there's so much blood.

[–] Lucalis@lemmy.world 1 points 5 days ago

In those 10 mins I got more story done than in two hours yesterday. And this wasn't the first time; that's why I think the dev might work on it at certain more or less fixed times.

Following your own analogy, the problem is: this didn't need any surgery.

[–] Lucalis@lemmy.world 1 points 5 days ago (1 children)

More on-topic, I’ve been testing the new model a lot, writing prompts for it from scratch, and the results are amazing: it can consistently understand complex, structured instructions, so one can more reliably make little ‘programs’ with it, not just narrative stuff. But you have to understand that generators using old prompts will more than likely not work out of the box, you have to tinker with them to get the results you want.

Being brutally honest, no. The AI just does whatever it wants. How long are your stories? Because the old model used to handle my 300k-word-long ones with ease (around 2.1 MB as the downloaded JSON), and the new model can't even understand what point of the story it is at. Its consistency is horrid; it just becomes idiotic after 50 paragraphs, sometimes even fewer.

The whole point of AI-Story-Generator is to be a model capable of creating a long story, and the situation now is: it can't.

[–] Almaumbria@lemmy.world 2 points 5 days ago (1 children)

Please, pay close attention:

But you have to understand that generators using old prompts will more than likely not work out of the box, you have to tinker with them to get the results you want.

That is the last sentence in the text you quoted, emphasis mine.

The argument: prompts need to be rewritten to make full use of the new model's capabilities, and that takes time as there's a lot of trial and error involved. After (not before) such a rework is done, the results become much better.

This is not speculation on my part: I've been doing exactly that, tweaking old code, and I'm merely reporting my findings. How do you know the maintainer of AI Story Generator is not in the middle of a similar rework?

In the famous words of the old model: let's not get ahead of ourselves. Patience will be more rewarding than a rollback, this I can assure you.

[–] Lucalis@lemmy.world 1 points 5 days ago (1 children)

Patience will be more rewarding than a rollback

Really, REALLY doubt it. I'm struggling right now to get it to write about alternate history, but not the entire AH, fucking singular dialogs. It hallucinates that the character is drunk and tired when nothing similar was even mentioned, it ignores already written paragraphs and does whatever it wants. It is not getting better, unless you consider stupidization better.

https://perchance.org/story-ai#data=uup1%3A7c498bf05802fc5b74f5e9eb85becacf.gz

Here's the story as example.

[–] Almaumbria@lemmy.world 1 points 5 days ago

Forgive me, but I believe I have explained the situation to you in a rather thorough manner, and fail to see a way to make it any more clear.

I am not arguing that the specific generator you're using is working correctly at the present moment; I am letting you know that this is temporary. Do not take my word for it: click the little 'edit' button to bring up the source code and tweak the prompts yourself. The bulk of the work is fairly straightforward: replacing rules designed to deal with the old model's quirks with rules that work for the new model.

You will have to experiment a fair bit with writing the entire prompt from scratch, and for doing this, the AI Text Generator is a tool I cannot recommend enough. There are multiple ways to structure a complex prompt, but from my own testing I've found that a very good way is to break it into sections: a role for the model, followed by context data, then optionally an input, and then a task followed by a list of constraints.

As an example, here's a prompt I've been using for generating lorebook entries from narration passages:

# Role:

You are a cultured English linguist, novelist and dramaturge working on a theatrical play. Maintain internal consistency within the story, and prioritize pacing and plot momentum over minor details. Currently, you are writing brief lorebook entries for the play's world and characters. Such an entry is a timeless observation, peculiarity, key fact and/or theme, or an otherwise noteworthy piece of information about the world or its characters.

***

# Lorebook:

<paste existing lore here or leave blank>

***

# INPUT:

<paste some passages here>

***

# Task:

Condense INPUT into compact single-paragraph lorebook entries, extracting solely novel information. Each entry must be self-contained: Provide enough surrounding context such that it would make sense if read on its own, leaving little room for ambiguity. Entries must also be timeless: they must still be true if read later on, so phrase them as referencing a past event. Each entry must be no more than 3 sentences; abridge details as needed. Utilize names rather than pronouns when referencing characters or locations.

Format each entry like this: `[[<Title> (<search keywords/tags>)]]: <content>.`

Output as many entries as needed.

***

# Constraints:

- Do not use the em dash ("–") symbol. Replace the em dash symbol with either of: comma (","), colon (":"), semicolon (";"), double hyphen ("--"), ellipsis ("..."), period ("."), or wrap the text between parenthesis.
- Avoid rehashing phrases and verbal constructs. If a line or sentiment echoes a previous one, either in content or structure, then rephrase or omit it. Minimize repetition to keep the text fluid and interesting.
- Avoid hyperfixating on trivialities; some information is merely there for flavor or as backdrop, and doesn't need over-explaining nor over-description. If a detail doesn’t advance character arcs or stakes, either ignore it or summarize it in under 10 words.

The no-em-dash rule doesn't work 100% of the time, but other than that it's actually pretty fun: you can just write away for a few paragraphs, and it'll output some memories/lore, which you can then paste into the Lorebook section, and repeat the process. I've been using variations of this method to generate things like character descriptions, factions, and locations, or just to make it rapid-fire minor lore details that "fill in the blanks" between existing entries for realism.
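As a side note: since each entry follows the `[[<Title> (<tags>)]]: <content>.` format, you can also pull them back out programmatically, e.g. to deduplicate or index them. A quick JavaScript sketch of my own (not part of any existing generator, and it assumes well-formed entries without nested brackets):

```javascript
// Rough sketch: parse "[[<Title> (<tags>)]]: <content>." entries
// emitted by the lorebook prompt into structured objects.
function parseLorebookEntries(text) {
  const entryPattern = /\[\[(.+?)\s*\((.+?)\)\]\]:\s*([\s\S]+?)(?=\n\[\[|$)/g;
  const entries = [];
  let match;
  while ((match = entryPattern.exec(text)) !== null) {
    entries.push({
      title: match[1].trim(),
      tags: match[2].split(/[,/]/).map(t => t.trim()),
      content: match[3].trim(),
    });
  }
  return entries;
}
```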

You can take that template and rework it to your liking, even build new generators based off of that. Go ahead: the new model lets you do some extremely cool things, the difference for prompt engineering is simply night and day.
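If it helps, the sectioned structure above can also be assembled programmatically, so you can swap the Lorebook and INPUT parts per run while keeping the role, task, and constraints fixed. A hypothetical helper (the function and parameter names are my own invention, not from any generator):

```javascript
// Hypothetical helper: assemble the sectioned prompt (role, context,
// input, task, constraints) separated by "***" dividers, matching the
// template shown above.
function buildPrompt({ role, lorebook, input, task, constraints }) {
  return [
    `# Role:\n\n${role}`,
    `# Lorebook:\n\n${lorebook}`,
    `# INPUT:\n\n${input}`,
    `# Task:\n\n${task}`,
    `# Constraints:\n\n${constraints.map(c => `- ${c}`).join("\n")}`,
  ].join("\n\n***\n\n");
}
```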

Now, this labor may be entirely outside of your skillset, and that's alright. However, if that's indeed the case, then I'd humbly request you give the maintainer(s) the time to do it for you before calling for a rollback.

That is all.

[–] Lucalis@lemmy.world 2 points 6 days ago

I don't have as much experience with character chats, but the old model was perfectly capable of handling fictional characters. I mostly use it for story creation, and the decrease in narration quality is staggering.