Thanks! This is the first time I've prepped with nail polish remover and put a top coat on at the end, and it feels way stronger and hasn't chipped at all yet. I'll have to get a base coat though, the brand I've been ordering from doesn't sell one, but I heard good things online about using one too so I'll look for one. I appreciate all the advice lol, I'm still figuring out proper technique and stuff to prevent bubbling, or getting it all over my fingers, or having the top be textured like the brush, etc. This attempt felt way better than my first couple for sure, but I'm also definitely still learning lol.
WrittenInRed
I definitely will, just need to wait for 786 to have some of the colors I'd need back in stock (or find another brand to buy from lol). I just like that they're vegan/cruelty-free and also have a shop I can order from directly rather than going through Amazon.
One rule I think might be a good idea is that mods aren't allowed to moderate their own posts/comment chains. Not that it's really been an issue on 196 in the past afaik, but there are some communities where the mods will get into an argument with another user and then remove comments for incivility or a similar rule, which obviously has massive potential for abuse. Assuming there are enough mods that it's not an issue to do so (which seems very likely based on the number of people interested in moderating), preventing situations like that entirely seems beneficial.
This is awesome, I love random little trinkets like this. Are you planning on mounting it (idk if that's the right term lol) to some kind of jewelry or just keeping it as a figurine?
Honestly if this gets all the people who have problems respecting "weird" pronouns/gender identities to leave for .world it'll probably be a net positive for 196 and blahaj.zone as a whole.
I posted this in another thread but I also wanted to say it here so it's more likely one of you will see it. I get the intention behind this, and I think it's well-intentioned, but it's also definitely the wrong way to go about things. By lumping opposing viewpoints and misinformation together, all you end up doing is implying that having a difference in opinion on something more subjective is tantamount to spreading a proven lie, and lending credence to misinformation. A common tactic used to try and spread the influence of hate or misinformation is to present it as a "different opinion" and ask people to debate it. Doing so leads to others coming across the misinfo and seeing responses that discuss it, and even if most of those responses are attempting to argue against it, it makes the misinfo seem like a debatable opinion instead of an objective falsehood. Someone posting links to sources that show how being trans isn't a mental health issue for the 1000th time won't convince anyone that they're wrong for believing so, but it will add another example of people arguing about an idea, making those without an opinion see both sides as equally worthy of consideration. Forcing moderators to engage in debate is the exact scenario people who post this sort of disguised hate would love.
Even if the person posting it genuinely believes the statement to be true, there are studies that show presenting someone with sources that refute something they hold as fact doesn't get them to change their mind.
If the thread in question is actually subjective, then preventing moderators from removing comments just because they disagree is great. The goal of preventing overmoderation of dissenting opinions is extremely important. You cannot achieve it by equating those opinions with blatant lies and hate though, as that will run counter to both goals this policy has in mind. Blurring the line between them like this will just make misinformation harder to spot, and disagreements easier to mistake for falsehoods.
Oh also, something I just realized: they basically want to force mods to debate misinformation, which is literally a tactic used to spread disinformation in the first place. By getting people to debunk a ridiculous claim, it lends credence to the idea as something worth discussing and also spreads it to more people. I feel like the intentions behind this are noble, but it's been shown that presenting evidence doesn't really get people to change their opinion all that often. The whole thing is super misguided.
Holy shit this is such a bad policy lol. World is known for being too aggressive at deleting a lot of content they really shouldn't be deleting, but this policy really doesn't seem like it will improve that. The issue is that most of the time, if they want something removed they remove it and then add a policy afterward to justify it. So regardless of this rule people still can't "advocate for violence", but they will be able to post misinformation and hate speech, since apparently "LGBTQ people are mentally ill" hasn't been debunked enough elsewhere and a random comment chain on Lemmy is where it needs to be done. Never mind the actual harm those sorts of statements cause to individuals and the community at large.
All I can see this doing is that content which already gets wrongly over-censored will keep getting removed, since the World admins believe they're justified in removing it, while other provably false information will be required to stay up since the admins believe the mods aren't justified in removing it.
This policy also seems to only apply to actual misinformation, not just subjective debates. So a comment thread about whether violence is justified in protest would likely have one side removed, while I guess someone arguing that every trans person is a pedophile would be allowed to stay up and be debated. It's like the exact opposite of how moderation should work lol.
I've been thinking recently about chain of trust algorithms and decentralized moderation, and am considering making a bot that functions a bit like fediseer but is designed more for individual users, where people can be vouched for by other users. Ideally you end up with a network where trust is generated pseudo-automatically based on interactions between users, and reports could be used to gauge whether a post should be removed based on the trust level of the people making the reports vs the person getting reported. It wouldn't necessarily be a perfect system, but I feel like there would be a lot of upsides, and it could hopefully lead to mods/admins only needing to remove the most egregious stuff while anything more borderline gets handled via community consensus. (The main issue is lurkers would get ignored with this, but idk if there's a great way to avoid that tbh)
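To make the report-weighing part concrete, here's a rough Python sketch of what I mean. Every name and threshold here is made up for illustration, it's just the core idea of comparing the combined trust of the reporters against the trust of the person being reported:

```python
def should_flag(reporter_trust: list[float], reported_trust: float,
                threshold: float = 1.0) -> bool:
    """Flag a post for removal when the combined trust of the
    reporters outweighs the trust of the person being reported.

    `threshold` tunes how much reporter trust it takes, e.g. a
    value > 1.0 makes removal harder for well-trusted authors.
    """
    combined = sum(reporter_trust)
    return combined > reported_trust * threshold

# Three moderately trusted users reporting a low-trust account:
print(should_flag([0.4, 0.5, 0.6], reported_trust=0.8))  # True (1.5 > 0.8)

# One low-trust user reporting a well-established account:
print(should_flag([0.1], reported_trust=2.0))  # False
```

A real version would probably want diminishing returns on pile-on reports and some floor below which posts always go to a human, but the asymmetry is the point: a swarm of zero-trust accounts can't remove anything.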
My main issue atm is how to do vouching without it being too annoying for people to keep up with. Not every instance enables downvotes, plus upvote/downvote totals in general aren't necessarily reflective of someone's trustworthiness. I'm thinking maybe it can be based on interactions, where replies to posts/comments get scored by a sentiment analysis model and that positive/negative number feeds into trust? I still don't think that's a perfect solution or anything, but it would probably be a decent starting point.
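The interaction-based vouching could look something like this sketch. The `sentiment` function here is a toy keyword stand-in just so the example runs, in practice it would be a real sentiment model (e.g. something like VADER), and the update weight is an arbitrary assumption:

```python
def sentiment(text: str) -> float:
    """Toy stand-in for a real sentiment model; returns a score
    in [-1, 1]. A keyword heuristic only, purely for illustration."""
    positive = {"thanks", "great", "helpful"}
    negative = {"wrong", "spam", "liar"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / 3))

def update_trust(current: float, reply_text: str, weight: float = 0.1) -> float:
    """Nudge a user's trust by the sentiment of a reply they received,
    so positive interactions act as implicit vouches."""
    return current + weight * sentiment(reply_text)
```

The nice part is nobody has to explicitly "vouch" for anyone, just interacting normally builds the network, which sidesteps the annoyance problem.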
If trust decays over time as well, then it rewards more active members somewhat, and means it's a lot harder to build up a bot swarm. If you wanted any significant number of accounts, you'd have to have them all posting at around the same time, which would be a much more obvious activity spike.
Idk, this was a wall of text lol, but it's something I've been considering for a while and whenever this sort of drama pops up it makes me want to work on implementing something.