Maybe Reddit is paying them to ignore Lemmy.
EGN+ has moved to: !eurographicnovels@piefed.social
“BD” strictly refers to Franco-Belgian comics, but let's open things up to include ALL Euro comics and GNs. Euro-style artistry from around the world is also welcome. ^^
* BD = "Bandes dessinées"
* BDT = Bedetheque
* GN = graphic novel
* LBK = Lambiek
* LC = "Ligne claire"
Please DO: 1) follow good 'netiquette' and 2) observe the four simple rules of lemm.ee (this instance) when posting and commenting. Extracts are fine, but don't link to pirated downloads. Moderation will be based on readers' willingness to follow the above guidelines.
The designated language here is English, with a traditional bias towards French, followed by other Euro languages.
When posting foreign-language content, please DO include helpful context for English-speakers.
---> Here's the community F.A.Q. and our resource page <---
RELATED COMMUNITIES:
- BD on Mastodon
- BD on Tumblr
- BritComics@feddit.uk
- Comics on Lemmy
- GN's on Lemmy
- Heathcliff (w/o HC)
- r/bandedessinee
- r/noDCnoMarvel
- Moebius_Art
- Moomin Valley
SEARCHES:
#MAILBOX #Tintin #Asterix #LuckyLuke #Spirou #Gaston #CortoMaltese #Thorgal #Sillage(Wake) #Smurfs #Trondheim #Moebius #Jodorowsky
Maybe Google reps are reading the privacy channel.
Google is indexing Lemmy, but the way it's distributed makes it really difficult to reach high rankings.
If a post gets really popular on Lemmy, that popularity is spread across 50+ instances, so no single link accumulates enough weight to rank.
But I've seen small forum and blog posts indexed within a day of posting, and those are places that draw a fraction of the traffic the big instances do.
Yes, and I get that Lemmy is distributed, but any particular thread is still discretely available at its home (originating) instance. That's not hard to understand, and I don't see why a cutting-edge corp like Google couldn't figure out the best means of tracking such situations.
I mean, all they'd need to do is assign a couple of their standard web crawlers to content native to the big instances, right? Or are you saying that because most content appears as mirrored copies, such bots can't work properly?
Sorry, I don't mean to appear dogmatic or needlessly argumentative; I just don't get it.
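For what it's worth, federated copies of a post do carry a pointer back to the original: every ActivityPub object has a canonical `id`, which Lemmy exposes as `ap_id`, and mirrors on other instances repeat that same value. So a crawler could, in principle, collapse duplicates to the home-instance URL. A minimal sketch of that idea (the sample posts and URLs are made up):

```python
# Collapse mirrored copies of federated posts to one canonical entry.
# In ActivityPub, every object carries a canonical "id"; Lemmy exposes
# it as "ap_id". Mirrors on other instances repeat the same ap_id.

def dedupe_by_canonical(posts):
    """Keep one entry per ap_id, preferring the home-instance copy."""
    seen = {}
    for post in posts:
        key = post["ap_id"]
        # The home-instance copy is the one whose local URL equals its ap_id.
        if key not in seen or post["url"] == key:
            seen[key] = post
    return list(seen.values())

# Hypothetical mirrored copies of a single thread:
mirrors = [
    {"url": "https://lemm.ee/post/123", "ap_id": "https://lemm.ee/post/123"},
    {"url": "https://lemmy.world/post/987", "ap_id": "https://lemm.ee/post/123"},
    {"url": "https://piefed.social/post/55", "ap_id": "https://lemm.ee/post/123"},
]

unique = dedupe_by_canonical(mirrors)
print(len(unique), unique[0]["url"])  # 1 https://lemm.ee/post/123
```

Whether Google's crawlers actually read `ap_id` (or whether instances emit a matching `rel=canonical` tag) is another question, but the information to deduplicate is there.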