Not to be a downer if you're anti-AI, but you should know that a functional, small 1B-parameter model only needs roughly 85 GB of text if the training data is high quality (the four-year-old Chinchilla paper set out the roughly 20-tokens-per-parameter rule for compute-optimal training, so it may take even less today).
That's basically nothing. If a language has over roughly 130,000 books, or an equivalent amount of writing (1,500 books is about a gigabyte in plain ASCII), a functional text-based AI model could be built for it.
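Rough back-of-envelope math, if you want to check me (the ~4 bytes of text per token and ~1,500 books per GB figures are my own assumptions, so the totals are only approximate):

```python
# Back-of-envelope estimate of training data for a compute-optimal 1B-parameter model.
# Assumptions (mine, not exact): ~20 tokens per parameter (Chinchilla-style),
# ~4 bytes of plain text per token, ~1,500 books per GB of plain ASCII.

params = 1_000_000_000                    # 1B-parameter model
tokens_needed = params * 20               # ~20 tokens per parameter
bytes_needed = tokens_needed * 4          # ~4 bytes of text per token
gigabytes = bytes_needed / 1_000_000_000  # ~80 GB
books = gigabytes * 1_500                 # ~120,000 books

print(f"~{gigabytes:.0f} GB of text, or roughly {books:,.0f} books")
```

Which lands right around the figures above.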
My understanding is that there are next to zero languages in existence today that lack this amount of quality text. Spoken languages with no written form obviously aren't accessible this way, but most endangered languages with few speakers and a historical written tradition could, in theory, have AI models built that communicate effectively in them.
To get a sense of what this means for less-written languages and a website revolving around them, look at WorldCat (which does NOT have anywhere near most of the written text for each listed language available online; it's JUST a resource for libraries): https://www.oclc.org/en/worldcat/inside-worldcat.html
But this gets even harder for a hypothetical website meant to avoid LLMs that can read it, because all of the above assumes building an AI model for the language from scratch. That isn't necessary today, thanks to transfer learning.
Major LLMs covering over 100 diverse languages can be fine-tuned on an insignificant amount of data (even 1 GB could work in theory) and produce results comparable to a 1B-parameter model trained solely on one language. This is because multilingual models develop cross-lingual, vector-based representations of grammar.
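To make that concrete, here's a minimal sketch of what such fine-tuning looks like with the Hugging Face stack. The model name, corpus path, and hyperparameters are placeholders I picked for illustration, not recommendations:

```python
# Minimal sketch: fine-tune a pretrained multilingual causal LM on a small
# monolingual corpus. Model name, corpus file, and hyperparameters are
# illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "bigscience/bloom-560m"   # any multilingual base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# A ~1 GB plain-text corpus in the target language, one document per line.
corpus = load_dataset("text", data_files={"train": "target_language_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=2,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The point is that the heavy lifting (the cross-lingual grammar representations) is already baked into the base model; the fine-tuning pass only has to adapt it to the new language's vocabulary and style.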
In truth, the only remaining major barriers for any language not covered by fine-tuning an AI model today are (1) digitization and (2) character recognition. Digitization will stop being an issue for basically every written language with a unique script within the next ten years. Character recognition (more specifically, the economic viability of building it) will be the only remaining obstacle.
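For the character-recognition piece, the usual open-source starting point is Tesseract, which already ships trained data for a long list of scripts. A minimal sketch, where the image path and language code are placeholders and this only works if a trained data pack exists for the target script:

```python
# Minimal OCR sketch using Tesseract via pytesseract.
# "scanned_page.png" and "script_code" are placeholders; a Tesseract trained
# data pack for the target script must be installed for this to work.
from PIL import Image
import pytesseract

page = Image.open("scanned_page.png")
text = pytesseract.image_to_string(page, lang="script_code")
print(text)
```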
Ironically, in creating such a website, you would be creating more data for a future AI model to train on, especially if whatever you write makes the language more economically important.