ch00f

joined 2 years ago
[–] ch00f@lemmy.world 3 points 21 hours ago

just get a magnet made that changes the text in a matching font. Nobody will notice.

39
Ann Rule (infosec.pub)
[–] ch00f@lemmy.world 7 points 1 day ago

WARNING

The docker-down.sh script associated with offtiktok runs a docker prune -f and will delete any unused docker containers you have without warning.
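
If you just want the stack stopped without that side effect, running compose directly skips the prune (assuming the project is compose-based, which I haven't verified):

docker compose down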

[–] ch00f@lemmy.world 2 points 1 day ago

Wow, this is exactly what I was looking for!

[–] ch00f@lemmy.world 6 points 2 days ago

Allegedly Rowling was not involved in the game at all. I'd think of it as an act of goodwill from the developers.

You're required to meet the character in the game, so it's not like anything was snuck in. https://www.youtube.com/watch?v=A8k8DseLCl8

[–] ch00f@lemmy.world 2 points 2 days ago

Yes, and as we've seen time and time again, companies are totally cool when operating costs suddenly revert back to what they were years ago.

[–] ch00f@lemmy.world 53 points 3 days ago (8 children)

That's great that they were able to replace people with equipment that they own and control. Oh what's that? The price and capabilities of this AI can change at any time?

Very safe and cool investment.

[–] ch00f@lemmy.world 1 point 3 days ago (2 children)

Legacy literally has an explicitly trans NPC in it.

[–] ch00f@lemmy.world -1 points 3 days ago (8 children)

Jesus, or just use WordPress. Takes like an hour to set up.

[–] ch00f@lemmy.world 55 points 4 days ago (4 children)

Seattle actually had the opposite of this for a while.

After the Great Fire, the city wanted to use the reconstruction as an opportunity to regrade Pioneer Square to make it less steep. Because the city only owned the land under the street (sidewalks belonged to the landowners), they just regraded the streets and left the rest up to the landowners.

This created an awkward period where a landowner's sidewalk dead-ended into the wall supporting the street at its new level, which meant pedestrians had to climb ladders to cross the street.

It also gave property owners double the storefront: they kept the entrances at the original level and built new entrances at the new street level, creating a double-decker sidewalk. Many of those subterranean establishments turned into speakeasies during Prohibition and can still be visited today as part of the Seattle Underground Tour.

Here's an illustration:

 

I'm moving my music library to a Funkwhale instance, but I don't want to have to keep two copies of every song (one imported to Funkwhale, one on a local drive).

It looks like Funkwhale will let you download a single song at a time from your own library, but there doesn't seem to be a similar button for albums or playlists.

The files themselves are obfuscated by whatever indexing system it uses, so pulling them straight off the disk isn't an option.

Anyone know if this is possible?

 

Over the past week, I've been slowly moving from mdadm RAID to ZFS. My process was:

  • Create a ZFS pool on the secondary server
  • rsync all files over to the ZFS server
  • Nuke the mdadm array on the primary and set up a zpool
  • zfs send the dataset over ssh from the secondary server back to the primary (sketch below)
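
That last step was a ZFS send/receive piped over ssh, run from the secondary server; a rough sketch (the primary's hostname and new pool name here are placeholders, not my real ones):

# snapshot the dataset, then stream it to the primary over ssh
sudo zfs snapshot bluepool/monsterdrive@migrate
sudo zfs send bluepool/monsterdrive@migrate | ssh primary "sudo zfs recv tank/monsterdrive"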

This is 15 TB of data, and even over gigabit, it took a day and a half to transfer. It finally finished tonight, and somehow I'm the owner and group of every single file. In addition to being generally weird, it also broke some docker volume binds, and I just don't want it.

It looks like the same is the case for the files on the secondary server too, so it must have happened during the initial rsync.

Fortunately, I also rsynced to some offline drives which kept ownership fine.

Anyway, I'm trying to figure out how the hell this happened. The rsync command I used was:

sudo rsync -ahu --delete --info=progress2 -e ssh /mnt/MONSTERDRIVE/ ch00f@192.168.1.65:/bluepool/monsterdrive/

At least I'm pretty sure this is what I used. I had to reverse-i-search to find it.

This is similar to the command I use when backing up to cold storage, which has worked fine in the past. My understanding is that -a is shorthand for -rlptgoD, where -o is "preserve owner."

So how could this have happened?

Does it matter that the secondary server doesn't have the same users as the primary server?

[SOLUTION]

From what I read online, using rsync over ssh as I did does not give it root permissions on the receiving end. So while I have the rights to read the owners on the local side, the remote rsync runs as the user I ssh'd in as and can only create files owned by that user. Thus, I was the owner of every file.

The solution is twofold. First, I need to specify --rsync-path="sudo rsync". This tells the receiving side to run rsync as the superuser.

Second, because there is no way to enter a sudo password on the receiving side, I added a file to /etc/sudoers.d/ with

ch00f ALL=NOPASSWD:/usr/bin/rsync

This makes it so that the ch00f user doesn't need to enter a password when running rsync as the superuser.
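
Putting it together, the corrected command is the original one plus that single flag:

sudo rsync -ahu --delete --info=progress2 -e ssh --rsync-path="sudo rsync" /mnt/MONSTERDRIVE/ ch00f@192.168.1.65:/bluepool/monsterdrive/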

I don't think this is a security hole, and it got things working.

 

Just noticed this a week or so ago. When I try to scroll the feed on lemmy.world, the page stops and starts even though I'm scrolling steadily on my trackpad. No other website has this problem, to my knowledge.

Info: Framework 13 AMD laptop, 32 GB memory, Firefox 136.0.1 (64-bit)

Any ideas? It's really irritating.

 

I'm hosting a few services using Docker. For something like an OpenStreetMap tile server, I'd like it to remain on my SSD because the high speed improves performance, and the directory is unlikely to grow and fill the drive.

For other services like Nextcloud, speed isn't as important as storage size, so I might want them on a larger HDD RAID.

I know it's trivial to move the whole volumes directory wherever, but can I move some volumes to one directory and others to another?
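
To make it concrete, I'm imagining per-volume bind locations, something like this (the names and paths are just examples):

docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/ssd/tileserver tileserver-data
docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/raid/nextcloud nextcloud-data

Each service would then mount its named volume as usual, and the data would land on whichever disk the volume points at.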

 

You always hear about gun sales in the US, but you never hear about what happens to the guns at the end of their lifecycle. I assume guns wear out eventually, and I assume you can't just chuck them in the garbage when they do. What happens to them?

6
submitted 5 months ago* (last edited 4 months ago) by ch00f@lemmy.world to c/techsupport@lemmy.world
 

I'm working on streamlining the process of ripping my Blu-ray collection. The biggest bottleneck in this process has always been dealing with subtitles and converting from image-based PGS to text-based SRT. I usually use Subtitle Edit, which does okay, with occasional mistakes. My understanding is that it combines Tesseract with a decent library to correct errors.

I'm trying to find something that works on the command line and found pgs-to-srt. It also uses Tesseract, but apparently without that library, and the results are... not good.

Here are the first two minutes of Love Actually:

00:01:13,991 --> 00:01:16,368
DAVID: Whenever | get gloomy
with the state of the world,

2
00:01:16,451 --> 00:01:19,830
| think about
the arrivals gate
alt [Heathrow airport.

3
00:01:20,38 --> 00:01:21,415
General opinion
Started {to make oul

This is just OCR of plain text on a transparent background. How is it this bad? This is using the Tesseract "best" training data.

Edit: I've been playing around with ocr-to-pgs, which also uses Tesseract, and discovered that subtitles having black outlines really mess with it. I made some improvements.

https://github.com/wydengyre/pgs-to-srt/pull/348
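
For anyone hitting the same thing: the kind of preprocessing that helps is flattening and binarizing each subtitle image before OCR, so the dark outline merges into the background instead of confusing Tesseract. A rough sketch with ImageMagick (not the exact change in the PR; filenames are placeholders):

# flatten the transparency onto black, keep only the bright text fill, then invert for OCR
magick sub.png -background black -alpha remove -colorspace Gray -threshold 60% -negate clean.png
tesseract clean.png stdout --psm 6 -l eng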

 

I hate the cloud.

 

This requires either multiple trips or a quick view through your gadget into the new future.

 

Since 2016, I've had a fileserver, mostly just for backups. The system is on one drive, files are on RAID6, and there's a semi-annual cold backup.

I was playing with Photoprism, and their docs say "we recommend placing the storage folder on a local SSD drive for best performance." In this case, the storage folder holds basically everything but the pictures themselves, such as the database files.

Up until now, if I lost any database files, it was just a matter of rebuilding them by re-indexing my photos or whatever, but I'm looking for something more robust since I'll have some friends/family using Pixelfed, Matrix, etc.

So my question is: Is it a valid strategy to keep database files on the SSD with some kind of nightly backup to RAID, or should I just store the whole lot on the RAID from the get go? Or does it even matter if all of these databases can fit in RAM anyway?
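
To make the first option concrete, the nightly job I have in mind would be something like this (the paths and compose file location are made up):

#!/bin/sh
# stop the services so the database files are consistent, copy the SSD state to the array, restart
docker compose -f /opt/photoprism/docker-compose.yml stop
rsync -a --delete /mnt/ssd/appdata/ /mnt/raid/backups/appdata/
docker compose -f /opt/photoprism/docker-compose.yml start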

edit: I'm just now learning of ZFS caching, which might be my answer.
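
If I'm reading it right, that would mean attaching an SSD (or a partition of one) to the pool as an L2ARC read cache, something like this (pool and device names are placeholders):

sudo zpool add tank cache /dev/nvme0n1p4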
