RotaryKeyboard

joined 2 years ago
[–] RotaryKeyboard@lemmy.ninja 3 points 2 years ago (2 children)

Nemesis above Undiscovered Country? That’s a shock!

[–] RotaryKeyboard@lemmy.ninja 15 points 2 years ago (1 children)

Regardless of whether or not any of the titles do or do not contain said content, ChatGPT’s varying responses highlight troubling deficiencies of accuracy, analysis, and consistency. A repeat inquiry regarding The Kite Runner, for example, gives contradictory answers. In one response, ChatGPT deems Khaled Hosseini’s novel to contain “little to no explicit sexual content.” Upon a separate follow-up, the LLM affirms the book “does contain a description of a sexual assault.”

On the one hand, the possibility that ChatGPT will hallucinate that an appropriate book is inappropriate is a big problem. But on the other hand, making high-profile mistakes like this keeps the practice in the news and keeps showing how bad it is to ban books, so maybe it has a silver lining.

[–] RotaryKeyboard@lemmy.ninja -2 points 2 years ago (4 children)

Strictly speaking, both usually happen in an oven: roasting generally means dry-heat cooking at higher temperatures for foods that already have structure (meats, vegetables), while baking refers to foods that set as they cook, like breads and cakes.

[–] RotaryKeyboard@lemmy.ninja 4 points 2 years ago

Oh god that's hilarious

[–] RotaryKeyboard@lemmy.ninja 7 points 2 years ago

Most of the built-in thermostats on barbecue grills are garbage. Before speculating on what might be behind the temperature difference, get an accurate two-probe thermometer and measure again.

[–] RotaryKeyboard@lemmy.ninja 2 points 2 years ago

It really is beautiful. I'd call it the most cinematic show on the air right now. It takes paying attention to follow, but the story has been satisfying so far. The acting is excellent. I recommend it, but you have to invest yourself in keeping track of what's going on.

[–] RotaryKeyboard@lemmy.ninja 1 points 2 years ago (1 children)

TIL! Are there good GUI front-ends for Rsync for when you want to browse the file versions?

[–] RotaryKeyboard@lemmy.ninja 1 points 2 years ago (1 children)

My friend's requirements were that the transfers be encrypted (which ssh does) and that his family have a server that was easy for them to use to upload and download files. The file server also had to be private -- meaning not stored in the cloud. They aren't technically savvy, so we needed an option where they could literally drag and drop a file from their desktop onto a web browser window. It worked well for them. My only regret is that the VPN was so complicated to set up. But on the bright side, Synology unifies the username and password between the VPN server and DSM, which makes it a little easier for my friend (and his family) to maintain.

[–] RotaryKeyboard@lemmy.ninja 1 points 2 years ago* (last edited 2 years ago) (3 children)

Offsite backups are hard

If you build a NAS instead of using Synology stuff, it will be as easy as setting up SSH between the machines and running rsync.

To be fair, you can do this with Synology as well: rsync is built in and even integrated into DSM. The advantage of using Hyper Backup is that you get block-level incremental backups.

[–] RotaryKeyboard@lemmy.ninja 2 points 2 years ago (1 children)

I love that Verizon mounting solution! Velcro is the civilized man's duct tape!

[–] RotaryKeyboard@lemmy.ninja 1 points 2 years ago (3 children)

Just a quick follow-up on how we set up self-hosted cloud storage for my friend:

Synology has an OpenVPN server built-in. We configured that to grant his offsite family members access to his network, and then set up DSM to have a custom URL specifically for Synology Drive. (It's in the Remote Access section of the control panel.) This way users could just visit /drive and get access to a google drive-like interface that was easy for them to use. Setting up the OpenVPN client on their computer was a pain in the butt (as per usual for OpenVPN), but after that was properly configured, they just have a little toggle switch that enables them to access his NAS, which is easy for them to use.

When you share files with someone on Synology Drive, it even sends them an email telling them that you made a file available. Very convenient! They just have to remember how they access the NAS.

[–] RotaryKeyboard@lemmy.ninja 2 points 2 years ago (8 children)

I just got through helping a friend set up a NAS. Even today I recommend people stick with Synology because you get so much with it. Security updates and software upgrades are easy, you get good software packages for free, and the Synology platform is just easier to manage unless you want to be a real power user. Honestly, I would replace your current Synology device with an updated one. The DS423+ I set my friend up with had a reasonable processor that could even do hardware transcoding for Plex. Not a lot, mind you, but plenty for his 1080p and DVD library.

I use my Synology NAS for computer backups, photo storage and display, and occasionally I use Synology Drive (Synology's proprietary take on NextCloud-style file sync and sharing) to host files for people to access from my network. I wouldn't say any self-hosted solution is extremely easy to use, but Synology Drive was really excellent for moderately techy people.

Offsite backups are hard. I just use Synology's HyperBackup to create an archive of the files I can't afford to lose and physically carry those drives to an offsite location. I've had to restore from it from time to time, and it has been a nice experience. I especially like that I can restore only specific files and that it handles versioning. It gets hard when you need an immense amount of space for your backup. But these days you can get drives that are positively huge.

5
submitted 2 years ago* (last edited 2 years ago) by RotaryKeyboard@lemmy.ninja to c/town_square@lemmy.ninja
 

We may not be lemmy.world with its 81,000 users, but we like to think of ourselves as just as important as they are in the fediverse. That's why I'm thrilled to announce that Lemmy.ninja has reached 20 users! (That's legitimate users, by the way. No more bots here!)

To welcome our new users, I'd like to take a second to share the best tips that we've learned since launching on June 13, 2023.

  1. If you are a mobile user and you liked Apollo, go try wefwef.app. It's a web app that is trying to achieve parity of features with both Apollo and Lemmy. It is arguably faster than using a lemmy instance's web UI, and by combining features from Lemmy and Apollo, it creates a better user experience. Try it out! If you are a self-hoster, you can even host your own instance of wefwef, giving you greater privacy and more control.
  2. If you are looking for communities, visit lemmyverse.net and browse.feddit.de to find them. These sites give information about participation levels that is otherwise impossible to see in Lemmy's UI, allowing you to pick the most active communities to participate in.
  3. Talk to other users! The more you comment, the more people you will meet who can share insights on good communities, how to deal with bugs, or whatever else is interesting or troubling you at a given moment. Here in the Ninja Tea Room community, you can introduce yourself to other lemmy.ninja users and get a better picture of who those 20 users are.
  4. Finally, don't sweat it if things don't appear to be working well. Lemmy.world is so big that they're having issues serving up content. Other sites are aggressively updating, but may be running into issues sharing or receiving content from other Lemmy servers. If things don't work, give it a day or two and see if they resolve.

That's all for now, ninjas!

 

cross-posted from: https://lemmy.ninja/post/46230 because the kbin.social proxmox community is still teeny tiny.

I've been wondering why traffic gets through to LXCs and VMs despite the Datacenter firewall being active. My understanding is that the Datacenter firewall has an implicit DROP rule (which I confirmed is set) and that, once active, it drops all traffic for all nodes and for the VMs and LXCs under those nodes.

However, when I port-forward port 32400 from my router to a Plex LXC, traffic gets through. If I forward port 80 from my router to my reverse proxy LXC, traffic gets through on that port.

Right now I have the datacenter, node, and VM/LXC firewalls enabled. Only the Datacenter firewall has any rules at all, which are:

  • Allow traffic to port 8006 from all subnets in my local network
  • Allow ICMP traffic from all subnets in my local network.

I confirmed that the input policy is DROP on both the Datacenter and LXC firewalls.

(I'm using Proxmox 8.0.3.)

Why is traffic forwarded from my gateway router making it into my LXCs?
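One common gotcha worth ruling out (a guess about this setup, not a confirmed diagnosis): Proxmox only applies firewall rules to a guest NIC when that interface's firewall flag is enabled. That can be checked from the node shell; the container ID 101 below is a placeholder:

```shell
# Does the LXC's NIC actually have the firewall flag? (CT ID 101 is a placeholder)
pct config 101 | grep '^net'
# Expected form, roughly: net0: name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,...
# Without firewall=1, traffic to that interface bypasses the guest firewall entirely.

# Confirm the firewall service is actually compiling and applying rules
pve-firewall status
```

These commands require a live Proxmox node, so treat the output shown in the comments as the general shape, not an exact transcript.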

Thanks for any help on this.

 

Fans who played NetherRealm’s Mortal Kombat 1 stress test last week discovered a few new details about the Roomba — which is almost assuredly not an official iRobot Roomba vacuum cleaner — including that it will actually try to clean up the blood splattered across Johnny Cage’s nice marble floors.

 

cross-posted from: https://lemmy.ninja/post/30492

Summary

We started a Lemmy instance on June 13 during the Reddit blackout. While we were configuring the site, we accumulated a few thousand bot accounts, leading some sites to defederate with us. Read on to see how we cleaned up the mess.

Introduction

Like many of you, we came to Lemmy during the Great Reddit Blackout. @MrEUser started Lemmy.ninja on the 13th, and the rest of us got to work populating initial rules and content, learning how Lemmy worked, and finding workarounds for bugs and issues in the software. Unfortunately, one of the challenges in getting the site up turned out to be getting email validation to work. So, assuming we were small and beneath notice, we opened registration for a few days until we could figure out whether the problems we were experiencing were configuration-related or software bugs.

In that brief time, we were discovered by malicious actors and hundreds of new bot users were being created on the site. Of course we had no idea, since Lemmy provides no user management features. We couldn't see them, and the bots didn't participate in any of our local content.

Discovering the Bots

Within a couple of days, we found the third-party tools that gave us our only insight into our user base: Lemmy Explorer and The Federation both showed that a huge number of users had registered here. It took a while, but we eventually tracked down a post describing how to output a list of users from our Lemmy database. Sure enough, thousands of accounts were registered at lemmy.ninja, just as the third-party tools had reported.

Meanwhile...

While we were figuring this out, others in Lemmy had noticed a coordinated bot attack, and some were rightly taking steps to cordon off the sites with bots as they began to interact with federated content. Unfortunately for us, this news never made it to us because our site was still young, and young Lemmy servers don't automatically download all federated content right away. (In fact, despite daily efforts to connect lemmy.ninja to as many communities as possible, I didn't even learn about the lemm.ee mitigation efforts until today.)

We know now that the bots began to interact with other Mastodon and Lemmy instances at some point, because we learned (again, today) that we had been blocked by a few of them. (Again, this required third-party tools to even discover.) At the time, we were completely unaware of the attack, that we had been blocked, or that the bots were doing anything at all.

Cleaning Up

The moment we learned that the bots were in our database, we set out to eliminate them. The first step, of course, was to enable a captcha and activate email validation so that no new bots could sign up. [Note: The captcha feature was eliminated in Lemmy 0.18.0.] Then we had to delete the bot users.

Next we made a backup. Always make a backup! After that, we asked the database to output all the users so we could manually review the data. After logging into the database docker container, we executed the following command:


select
  p.name,
  p.display_name,
  lu.person_id,
  lu.email,
  lu.email_verified,
  lu.accepted_application
from local_user lu
join person p on lu.person_id = p.id;

That showed us that yes, every user after #8 or so was indeed a bot.

Next, we composed a SQL statement to wipe all the bots.


BEGIN;
CREATE TEMP TABLE temp_ids AS
SELECT person_id FROM local_user WHERE person_id > 85347;
DELETE FROM local_user WHERE person_id IN (SELECT person_id FROM temp_ids);
DELETE FROM person WHERE id IN (SELECT person_id FROM temp_ids);
DROP TABLE temp_ids;
COMMIT;

And to finalize the change:


UPDATE site_aggregates SET users = (SELECT count(*) FROM local_user) WHERE site_id = 1;

If you read the code, you'll see that we deleted records whose person_id was > 85347. That's the approach that worked for us. But you could just as easily delete all users who haven't passed email verification, for example. If that's the approach you want to use, try this SQL statement:


BEGIN;
CREATE TEMP TABLE temp_ids AS
SELECT person_id FROM local_user WHERE email_verified = 'f';
DELETE FROM local_user WHERE person_id IN (SELECT person_id FROM temp_ids);
DELETE FROM person WHERE id IN (SELECT person_id FROM temp_ids);
DROP TABLE temp_ids;
COMMIT;

And to finalize the change:


UPDATE site_aggregates SET users = (SELECT count(*) FROM local_user) WHERE site_id = 1;

Even more aggressive mods could put these commands into a nightly cron job, wiping accounts every day if they don't finish their registration process. We chose not to do that (yet). Our user count has remained stable with email verification on.
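For admins who do want that nightly sweep, the job could look something like this crontab fragment (the container name lemmy-db, the lemmy credentials, and the file paths are all assumptions; the email-verification SQL above would need to be saved to the referenced file first):

```shell
# /etc/cron.d/lemmy-purge-unverified  (hypothetical path and names)
# Purge unverified accounts nightly at 03:15 by running the SQL inside the DB container.
15 3 * * * root docker exec lemmy-db psql -U lemmy -d lemmy -f /root/purge_unverified.sql
```

Since `cron.d` fragments need a user field, `root` is specified explicitly here.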

After that, the bots were gone; the third-party tools reflected the change within about 12 hours. We did some testing to make sure we hadn't destroyed the site and found that everything still worked.
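One safe way to do that kind of testing is to rehearse the temp-table purge pattern against a throwaway SQLite database before touching the live Postgres instance. A minimal sketch (the schema is deliberately simplified and only mirrors the two Lemmy tables involved):

```shell
# Dry-run the purge pattern in SQLite (the live Lemmy DB is Postgres; this only
# rehearses the temp-table / double-delete shape, not the real schema).
sqlite3 /tmp/purge-rehearsal.db <<'SQL'
DROP TABLE IF EXISTS person; DROP TABLE IF EXISTS local_user;
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE local_user (person_id INTEGER, email_verified TEXT);
-- two legitimate users, two "bots" that never verified their email
INSERT INTO person VALUES (1,'admin'),(2,'alice'),(3,'bot1'),(4,'bot2');
INSERT INTO local_user VALUES (1,'t'),(2,'t'),(3,'f'),(4,'f');

CREATE TEMP TABLE temp_ids AS
SELECT person_id FROM local_user WHERE email_verified = 'f';
DELETE FROM local_user WHERE person_id IN (SELECT person_id FROM temp_ids);
DELETE FROM person WHERE id IN (SELECT person_id FROM temp_ids);
DROP TABLE temp_ids;

SELECT name FROM person ORDER BY id;
SQL
```

If the rehearsal leaves only the verified users behind, the statement is doing what you expect.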

Wrapping Up

We wrote this up for the rest of the new Lemmy administrators out there who may unwittingly be hosting bots. Hopefully, having all of the details in one place will speed their discovery and elimination. Feel free to ask questions, but understand that we aren't experts; other, more knowledgeable people may be able to answer them in the comments here.

 

This is our first Community Spotlight for a subreddit that officially migrated to Lemmy! !Steamdeck@sopuli.xyz comes to us from r/steamdeck_linux, a subreddit of about 3,000 users. Steamdeck aims to provide guides and support to people who want to experiment with the Linux side of the Steam Deck.

Prior to moving to Lemmy, the goals of the subreddit were:

  • Creating a wiki, with detailed, noob friendly, guides for using Plasma / Arch
  • Planning on how to grow awareness for Linux off the back of the Steam Deck
  • Growing this community, ready for the Steam Deck release

Moving to Lemmy seems to have had a positive effect on the community, as the Lemmy version has over 4,700 subscribers now.

 

cross-posted from: https://lemmy.ninja/post/27359

“In order to provide a higher-than-average, dependable wage, we shifted to a no-tipping model and doubled the hourly rate to more than $30/hr for our service staff. This shift also benefits our guests, who can enjoy Casa Bonita without incurring unexpected costs,” management said.

 

Here we have a community dedicated to Nethack! It's hosted by SDF Chatter, a Lemmy instance that hosts many communities dedicated to retro topics.

As a seminal roguelike video game, NetHack has significantly influenced the genre with its procedurally generated dungeon filled with monsters, pitfalls, and enigmas. Lauded for its replayability and intricate gameplay, it utilizes a simple ASCII aesthetic that fosters imaginative immersion. Its comprehensive design philosophy is reflected in its motto: "The DevTeam thinks of everything."

 

Credit to Neal Agarwal.

 

Oregon's Senate has repealed a 72-year prohibition against self-service gas, with new legislation requiring gas stations to staff half the available pumps, while allowing the rest to be self-service. The bill, responding to industry staffing shortages, also prohibits charging more for full-service than self-service, likely leading to the phasing out of full-service pumps.
