

After using the new model for over a month, mostly on AI Story Generator, and looking into the old and new AI models used, I've reached a conclusion that, in my opinion, makes sense.

The old model was Llama 2. Llama 2 (and Llama 3) are models fed on books, as in lots of literature; Meta licensed a LOT of them to train the models.

The new model is DeepSeek, or at least it seems to be. We'll assume it is, though to be fair that doesn't change the argument much. DS has an issue: it is trained on general content, say: the internet, some books obviously, interactions, etc.

Now, what's the issue with this?

Llama is a model that knows WAY better how a story works, having had hundreds of them in its dataset and having processed them during its training. DS doesn't; DS is a more generalist model, conceived more as an assistant than a story writer.

For the kind of usage done here, essentially either chatting with characters in AI-Character-Chat or writing a story with AI-Story Generator, the improvement in context and general knowledge DS gives is not worth the decrease in narrative quality and understanding of story writing. And that's not mentioning all the hallucinations, the total disregard of context and prompting, and similar issues the new model has.

Llama 2 is a way better option for the kind of usage we have. Yes, we would be losing some general knowledge. Yes, it may not be the best AI model out there. But all things considered, it's a matter of choosing the best option for our use case.

I understand the dev does all this work alone, and I appreciate his effort. Even so, as a really active user of this platform and service, I consider the best choice here is to return to the old model.

If you have any further arguments for it, please add them in the comments. Thanks, everyone, for your time.

-Lucalis.

Almaumbria@lemmy.world · 1 point · 5 days ago

Forgive me, but I believe I have explained the situation to you in a rather thorough manner, and fail to see a way to make it any more clear.

I am not arguing that the specific generator you're using is working correctly at the present moment; I am letting you know that this is temporary. Do not take my word for it: click the little 'edit' button to bring up the source code and tweak the prompts yourself. The bulk of the work is fairly straightforward: replacing rules designed to deal with the old model's quirks with rules that work for the new model.

You will have to experiment a fair bit with writing the entire prompt from scratch, and for that, the AI Text Generator is a tool I cannot recommend enough. There are multiple ways to structure a complex prompt, but from my own testing, a very good way is to break it into sections: a role for the model, followed by context data, then optionally an input, and then a task followed by a list of constraints.

As an example, here's a prompt I've been using for generating lorebook entries from narration passages:

# Role:

You are a cultured English linguist, novelist and dramaturge working on a theatrical play. Maintain internal consistency within the story, and prioritize pacing and plot momentum over minor details. Currently, you are writing brief lorebook entries for the play's world and characters. Such an entry is a timeless observation, peculiarity, key fact and/or theme, or an otherwise noteworthy piece of information about the world or its characters.

***

# Lorebook:

<paste existing lore here or leave blank>

***

# INPUT:

<paste some passages here>

***

# Task:

Condense INPUT into compact single-paragraph lorebook entries, extracting solely novel information. Each entry must be self-contained: Provide enough surrounding context such that it would make sense if read on its own, leaving little room for ambiguity. Entries must also be timeless: they must still be true if read later on, so phrase them as referencing a past event. Each entry must be no more than 3 sentences; abridge details as needed. Utilize names rather than pronouns when referencing characters or locations.

Format each entry like this: `[[<Title> (<search keywords/tags>)]]: <content>.`

Output as many entries as needed.

***

# Constraints:

- Do not use the em dash ("–") symbol. Replace the em dash symbol with either of: comma (","), colon (":"), semicolon (";"), double hyphen ("--"), ellipsis ("..."), period ("."), or wrap the text between parenthesis.
- Avoid rehashing phrases and verbal constructs. If a line or sentiment echoes a previous one, either in content or structure, then rephrase or omit it. Minimize repetition to keep the text fluid and interesting.
- Avoid hyperfixating on trivialities; some information is merely there for flavor or as backdrop, and doesn't need over-explaining nor over-description. If a detail doesn’t advance character arcs or stakes, either ignore it or summarize it in under 10 words.

The no-em-dash rule doesn't work 100% of the time, but other than that it's actually pretty fun: you can just write away for a few paragraphs, and it'll output some memories/lore for you, which you can then paste into the Lorebook section, and repeat the process. I've been using variations of this method to generate things like character descriptions, factions, and locations, or just to have it rapid-fire minor lore details that "fill in the blanks" between existing entries for realism.
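Purely as an illustration of the output shape (this is an invented entry, not something the model produced for me), an entry following that format might read: `[[The Drowned Bell (bell, harbor, storm)]]: The old harbor bell was lost to the sea during a great storm, and sailors have since treated its toll on foggy nights as a warning to stay ashore.`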

You can take that template and rework it to your liking, or even build new generators based on it. Go ahead: the new model lets you do some extremely cool things; the difference for prompt engineering is simply night and day.
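To give a concrete (if hypothetical) starting point, here is a minimal TypeScript sketch of how one might assemble a sectioned prompt like the one above from its parts, so the role, lorebook, input, task, and constraints can be swapped independently. The function and field names here are my own invention, not anything from Perchance or its plugins; the result is just a plain string you would paste into the AI Text Generator or a generator's prompt field.

```typescript
// Hypothetical helper: builds a sectioned prompt ("# Role:", "# Lorebook:", ...)
// separated by "***", mirroring the template above. Nothing here is a Perchance
// or plugin API; it only produces a plain text string.

interface PromptSections {
  role: string;          // who the model should be and how it should behave
  lorebook: string;      // existing lore entries (may be empty)
  input: string;         // the narration passages to condense
  task: string;          // what to do with the input
  constraints: string[]; // style rules, one per bullet
}

function buildPrompt(s: PromptSections): string {
  const constraintList = s.constraints.map(c => `- ${c}`).join("\n");
  return [
    `# Role:\n\n${s.role}`,
    `# Lorebook:\n\n${s.lorebook || "<empty>"}`,
    `# INPUT:\n\n${s.input}`,
    `# Task:\n\n${s.task}`,
    `# Constraints:\n\n${constraintList}`,
  ].join("\n\n***\n\n");
}

// Example usage: fill in the sections, then paste the result into the generator.
const prompt = buildPrompt({
  role: "You are a cultured English linguist writing brief lorebook entries.",
  lorebook: "",
  input: "<paste some passages here>",
  task: "Condense INPUT into compact single-paragraph lorebook entries.",
  constraints: ["Avoid rehashing phrases and verbal constructs."],
});
console.log(prompt);
```

Keeping the sections separate like this makes it easy to swap out only the constraints (the part most tied to a given model's quirks) while leaving the role and task untouched.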

Now, this labor may be entirely outside of your skillset, and that's alright. However, if that's indeed the case, then I'd humbly request you give the maintainer(s) the time to do it for you before calling for a rollback.

That is all.