Technology
A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
Subcommunities on Beehaw:
This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
Once "multi-reddits" have been defined and implemented in kbin that shouldn't be an issue. I don't know what'll happen with lemmy, but it would probably be in its interest to implement it too.
Fragmentation is certainly a problem if you’re looking for Reddit-style cohesive communities, but how much of a problem remains to be seen, in my opinion. The risk with trying to do things the Reddit way is that one or two large instances become dominant and you’ve just got Reddit all over again.
One potential solution that I’ve been turning over in my mind is the concept of “meta communities” - collections of smaller related communities across the fediverse that can be subscribed to and interacted with as if they were one. Users could potentially vote on a smaller community being admitted into the meta community, or there could be some other requirement. It could even be done locally by the user through a browser extension. It’s not perfect but it’s maybe something to explore.
Alternatively we just get used to more compact communities again. Let’s be honest - do we really have to know everything, all of the time?
Meta communities are 100% the answer. Should be doable too.
IMO this is just a temporary problem - as communities establish themselves one will eventually become dominant. E.g. /c/technology@beehaw.org might become the dominant technology community, while others die out or stay small.
Think about it - it's a positive feedback loop. If one community on a given subject has more users, it will generate more content on average and therefore attract more subscribers, generating even more content.
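That feedback loop is the classic "rich get richer" dynamic. Here's a minimal sketch of it (all community names and numbers are hypothetical) - each new subscriber picks a community with probability proportional to its current size:

```python
import random

def simulate(sizes, steps, seed=1):
    """Each step, one new subscriber joins a community with
    probability proportional to its current size (on the assumption
    that content volume, and thus attractiveness, tracks user count)."""
    random.seed(seed)
    sizes = list(sizes)
    for _ in range(steps):
        i = random.choices(range(len(sizes)), weights=sizes)[0]
        sizes[i] += 1
    return sizes

# Three duplicate communities, one starting slightly larger.
# Over many steps, the early lead tends to compound.
print(simulate([100, 110, 100], steps=5000))
```

Run it a few times with different seeds: the head start isn't a guarantee, but on average the initially larger community pulls further ahead, which is why one duplicate usually ends up dominant.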
You're right on that part. Federation works great with Mastodon and its communities, made of individuals directly interacting with each other's accounts.
But when it comes to interacting through communities that are already spread across instances, not only does it make it hard for people to follow all the duplicates, it also threatens the very principle of federation. Most people will eventually subscribe to the biggest community for each subject (tech, nature, photo), which often turns out to be hosted on the biggest instances... and that is centralization once again.
A solution could be for users to group all the communities they subscribe to by topic. Your feed would then be a mix of these topic groups and single /c's. Twitter does something similar with its Lists feature.
This software is so new, and it has lots of potential.
I can see someone building an extension that aggregates many versions of the same sublemmy into one feed seamlessly, and then the feature being added to the main lemmy code.
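A client-side aggregator like that mostly boils down to fetching each duplicate community's posts and merging them into one timeline. A rough sketch of the merging step (the `Post` shape and community names are made up for illustration, not any real Lemmy API):

```python
from dataclasses import dataclass

@dataclass
class Post:
    community: str   # e.g. "technology@beehaw.org" (hypothetical)
    title: str
    timestamp: int   # unix time

def merge_feeds(feeds):
    """Flatten posts from several duplicate communities into a
    single feed, newest first - roughly what a browser extension
    or a future 'meta community' feature would do client-side."""
    merged = [post for feed in feeds for post in feed]
    return sorted(merged, key=lambda p: p.timestamp, reverse=True)

beehaw = [Post("technology@beehaw.org", "A", 300)]
lemmy_ml = [Post("technology@lemmy.ml", "B", 500),
            Post("technology@lemmy.ml", "C", 100)]

for post in merge_feeds([beehaw, lemmy_ml]):
    print(post.timestamp, post.community, post.title)
```

The hard parts in practice are deduplication (the same post crossposted to several duplicates) and ranking across communities of very different sizes, but the core "treat many /c's as one" idea is this simple.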
This will evolve and improve the more we use it.
Ultimately this is a problem that's never going away until we replace URLs. The HTTP approach of finding documents by URL, i.e. server/path, is fundamentally brittle. It doesn't matter how careful you are or how much best practice you follow - that URL is going to be dead in a few years. The problem is made worse by DNS, which makes URLs expensive and liable to expire.
There are approaches like IPFS, which uses content-based addressing (i.e. fancy file hashes), but that's not enough either, as it provides no good way to update a resource.
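The update problem falls straight out of how content addressing works. At its core the address *is* a hash of the bytes (real IPFS CIDs wrap this in multihash/multibase encoding, but this toy version shows the idea), so any edit produces a brand-new address and every link to the old one still points at the old version:

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified content addressing: the address is just a hash
    # of the content itself, so identical bytes always resolve
    # to the same address, from any host.
    return hashlib.sha256(data).hexdigest()

v1 = content_address(b"hello fediverse")
v2 = content_address(b"hello fediverse, revised")

print(v1 != v2)  # editing the content changes the address
```

That immutability is exactly what makes mirroring trivial - but it also means "the latest version of this document" is not a thing you can link to without an extra naming layer on top.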
The best™ solution would be some kind of global blockchain-like system that keeps a record of what people publish, giving each document a unique id, a hash, and some way to update that resource non-destructively (i.e. the version history is preserved). Hosting itself would still need to be done by other parties, but a global log of all the stuff humans have published would make mirroring it much easier and more reliable.
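The idea above can be sketched as a tiny append-only registry: stable ids, content hashes, and updates that append rather than overwrite. This is a hypothetical design, not any real blockchain - the consensus and distribution parts are precisely what's hand-waved here:

```python
import hashlib

class Registry:
    """Toy append-only publish log: each document keeps one stable
    id forever; updates append new versions and old ones stay
    resolvable (hypothetical sketch of the scheme described above)."""

    def __init__(self):
        self.log = []       # global append-only list of (doc_id, hash)
        self.history = {}   # doc_id -> ordered list of content hashes

    def publish(self, doc_id: str, content: bytes) -> str:
        h = hashlib.sha256(content).hexdigest()
        self.log.append((doc_id, h))
        self.history.setdefault(doc_id, []).append(h)
        return h

    def latest(self, doc_id: str) -> str:
        return self.history[doc_id][-1]

reg = Registry()
reg.publish("doc:42", b"first draft")    # "doc:42" is a made-up id
reg.publish("doc:42", b"second draft")
print(len(reg.history["doc:42"]))  # both versions remain in the log
```

Because the log only grows, a mirror just has to replay it; and because ids are stable while hashes pin each version, you get both "link to the latest" and "link to exactly what I read" for free.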
The end result should be "Internet as globally distributed immutable data structure".
Bit frustrating that this whole problem isn't getting the attention it deserves.