smiletolerantly

joined 2 years ago
[–] smiletolerantly@awful.systems 12 points 12 hours ago

No - because they do not count trains that either never started their journey, or that stopped midway through and turned around. It's a really sorry trick to make the statistic look "better".

"It's not delayed, it just.... Jever got there!"

I have not opened my SMS app in years.

[–] smiletolerantly@awful.systems 3 points 1 day ago* (last edited 1 day ago) (1 children)

Oh, nice! And there's even a nixpkg!

[–] smiletolerantly@awful.systems 7 points 1 day ago (2 children)

But that actually is an asshole move then, because a lot of emotional attachment and memories can be in a chat.

Yeah. I just put the media location on my NAS, and that is being mirrored to Hetzner.

Over the years, we've made a lot of dashboards. In the most recent iteration, I've been using the popular Rounded theme, plus some customizations.

I've also tried to get rid of as many elements as possible. 90% of entities I either don't need to see on a daily basis at all, or only in certain specific situations, so I'm trying to automate that (i.e. only show what I'm actually interested in).

[–] smiletolerantly@awful.systems 2 points 4 days ago (1 children)

Wow, that looks super nice! Great job!

Hm. But more realistically: I think I actually have to force myself to go running right after work. It increases the chance that I do it at all, and afterwards my head is much clearer for other things.

[–] smiletolerantly@awful.systems 10 points 5 days ago (2 children)

I totally believe you.

Really it should be the other way around: first a nice 6h of hobby in the morning, and then 8h of work on autopilot... 😄

[–] smiletolerantly@awful.systems 16 points 5 days ago* (last edited 5 days ago) (4 children)

No idea. I work from home and start quite early; I'm done by 4 pm at the latest every day. Then I do 2h of something hobby-related.

After that? After that I'm done. Cooking, OK, and then the couch. If I go to bed at 10, I have 6h of free time on paper, but energy-wise, more than 2h just isn't possible.

I don't use anything at all

 

Not an ad, I'm not involved with Bento (or Stirling, for that matter). I've been unhappy with Stirling for a while (why do documents need to be uploaded to the server? That makes it really hard to safely host publicly. Why is it so slow? Plus, too many things are put behind a fucking paywall).

Learned about Bento this morning, tried it out, really liked it, and spent an hour today packaging it for nixpkgs. It doesn't quite have feature parity with Stirling yet, but at least for me, everything I need is there, it's fast, and it keeps processing in the browser. Like, not even joking: the output of the build process/nixpkg is just a couple of static HTML files and some WASM. No server-side components at all. Really refreshing to see.

 

A while back I played around with the HASS Voice Assistant, and pretty easily got to a point where STT and TTS were working really well on my local installation. I also got the hardware to build Wyoming satellites with wake-word recognition.

However, what kept me from going through the effort of setting everything up properly (and finally getting fucking Alexa out of my house) was the "all or nothing" approach HASS seemingly takes to intent recognition. You either:

  • use the built-in Assistant conversation agent, which is a pain in the ass because it matches what your STT recognized 1:1, letter by letter, so it's almost impossible to actually get it to do something unless you spoke perfectly (and forget, for example, about putting something on your ToDo list; Todo, todo, To-Do, ... are all not recognized, and have fun getting your STT to reliably generate the ToDo spelling!), or
  • you slap a full-blown LLM behind it, either forcing you to once again rely on a shitty company, or to host the LLM locally; but even in the latter case and on decent hardware (not an H100, of course, but with a GPU at least), the results were slow and shit, and due to context size limitations, you can just forget about exposing all your entities to the LLM agent.
  • You also have the option of combining the two approaches: match exactly first, and if no intent is recognized, forward to the LLM; but in practice, that just means that sometimes you get what you wanted ("all lights off" with a 70% success rate, I'd say), and still, a lot of the time you have to wait for ages for a response from the LLM that may be correct, but often isn't.

What I'd like is a third option: doing fuzzy matching on what the STT generated. Indeed, there seem to have been multiple options for that through rhasspy, but that project appears to be dead? The HASS integration has not been updated in over 4 years, and the rhasspy repos were archived earlier this month.
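To illustrate what I mean by fuzzy matching, here's a minimal sketch using nothing but Python's standard library. The sentence list, intent names, and threshold are just placeholders I made up for the example, not anything HASS actually ships:

```python
from difflib import SequenceMatcher

# Placeholder sentence -> intent mapping; in reality this would be generated
# from the entities/intents you actually expose.
KNOWN_SENTENCES = {
    "turn off all lights": "turn_off_all_lights",
    "add an item to my to do list": "add_todo_item",
    "what is the temperature in the living room": "get_living_room_temperature",
}

def best_intent(transcript: str, threshold: float = 0.75) -> str | None:
    """Return the intent whose known sentence is most similar to the STT
    output, or None if nothing is close enough."""
    transcript = transcript.lower().strip()
    best_score, best = 0.0, None
    for sentence, intent_name in KNOWN_SENTENCES.items():
        score = SequenceMatcher(None, transcript, sentence).ratio()
        if score > best_score:
            best_score, best = score, intent_name
    return best if best_score >= threshold else None

# "ToDo", "To-Do", "todo", ... all land close enough to the known sentence:
print(best_intent("add an item to my ToDo list"))  # -> add_todo_item
print(best_intent("banana banana banana"))         # -> None
```

In practice I'd probably use something like rapidfuzz instead of difflib and normalize punctuation first, but that's the gist of what I'm after.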

Besides, it was not entirely clear to me if you could just use the intent recognition part of rhasspy, forgoing the rest in favor of what HASS already brings to the table.

At this point, I am willing to implement a custom conversation agent, but wanted to make sure first that I haven't simply missed an obvious setting/addon/... for HASS.
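And in case anyone has gone down this road already: my understanding is that a custom conversation agent is "just" a subclass of the conversation integration's agent class plus a registration call. Very rough sketch below, written from memory against the documented API, so class and method names may not match your HA version exactly; it reuses the best_intent() helper from the sketch above:

```python
from homeassistant.components import conversation
from homeassistant.helpers import intent


class FuzzyConversationAgent(conversation.AbstractConversationAgent):
    """Conversation agent that fuzzy-matches the STT transcript instead of
    requiring a letter-perfect sentence."""

    @property
    def supported_languages(self) -> list[str]:
        return ["en"]

    async def async_process(
        self, user_input: conversation.ConversationInput
    ) -> conversation.ConversationResult:
        response = intent.IntentResponse(language=user_input.language)
        matched = best_intent(user_input.text)  # helper from the sketch above
        if matched is None:
            response.async_set_speech("Sorry, I didn't catch that.")
        else:
            # Here you'd map `matched` onto the actual intent handling /
            # service calls; left out of the sketch.
            response.async_set_speech(f"Okay, running {matched}.")
        return conversation.ConversationResult(
            response=response, conversation_id=user_input.conversation_id
        )
```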

My questions are:

  • are you using the HASS Voice Assistant without an LLM?
  • if so, how do you get your intents to be recognized reliably?
  • do you know of any setting/project/addon helping with that?

Cheers! Have a good start to the work week...!

 

If you've been selfhosting conduit or conduwuit, you are probably aware that the conduwuit project was discontinued a couple of months back.

I've been holding out on updating my matrix homeserver until it becomes clear which fork(s) will survive long term.

I feel like I can't put off updating for much longer now, plus the tuwunel nixpkg and NixOS module were merged yesterday, so now the two most promising forks are both options for me.

Still, I'm unsure which route to take. Here are my thoughts:

  • not going through another round of this a couple of months from now would be great, so stability and long-term maintenance promises matter a lot to me
  • I assume incompatibility between the forks, if not now then very soon; this is a "pick an option, then stick with it and pray" situation
  • tuwunel apparently has a full-time paid dev working on it now, which is great; at the same time, that means features will follow the priorities of the (as of now unknown) sponsor of the project
  • it is, however, the officially endorsed successor
  • it also seems like few other people are actively involved, calling into question development practices, reviews, and what happens should the lead dev throw in the towel
  • lastly, while there's been a lot of apparently rapid progress (with releases 1.0.0, 1.1.0, and 1.2.0 at quite a fast pace), the repo itself seems... empty? Few issues, few PRs, and issues deleted without comment
  • on the other hand, continuwuity seems more active by commit/contributors count, but is seemingly 100% volunteer work
  • they do seem to backport tuwunel changes and features, which is great!
  • they are not officially endorsed

In short: I fucking hate community drama. What fork did you go with? Is there anything else to consider? I just want an up-to-date matrix homeserver, and not to have to tell my users "sorry, starting from scratch because we picked the wrong fork..."

Update: there's been some back and forth on the nixpkgs PR, especially from one user who posted a lot of receipts here:

@scvalex @queeek180 @Askhalion you wanted links, here's some links :)

claiming legitimacy over or delegitimising other projects:

https://matrix.to/#/#ping:maunium.net/$V9aN1Wn0pId-JWbxH1WV5I8PAVMajooX7WMFKmDyh6E
https://matrix.to/#/#ping:maunium.net/$IsfOfe8anRYqbRAwj7OdlX_hS-kBbHUJTVhQW-32Etk
https://matrix.to/#/#ping:maunium.net/$-Bswk96jj3ns8xpSISKH0Y24pXZ2Xcd6Rwl8mRZQIaM (ironic)
https://matrix.to/#/#meowlnir:maunium.net/$zOmf7-NIHfQ_f_Ku9Q794GeKyu8n9v2MAvPtYjlGJIE (ironic that he asked https://matrix.to/#/#meowlnir:maunium.net/$nE57Bi_DmvodZJe7JDPS7NxUBlxeDLUBhYIWNzgNk0g despite having cherrypicked a bunch of fixes from continuwuity already)
https://matrix.to/#/#tuwunel:grin.hu/$svIUeuWfm2VWuHGSUMeT5VWWcZclraKcmUaDK3NiYEM ("June and I dealt with another "continuwuity" called "grapevine" last year")

threats against the project:

https://matrix.to/#/#ping:maunium.net/$o27P102ebbFa9U80e-FK-DxGTupy8IJ3TSWFYJm6hIs
https://matrix.to/#/#ping:maunium.net/$priRlTsBuH2YfTo_pb04xHUJpTeU2DKXdJ7tAVrR5w4

personal threats:

https://matrix.to/#/#ping:maunium.net/$5YefXN_uVR5WiGfj32j3Po9Q1JMKuTTfxve_8IHp1J8
https://matrix.to/#/#ping:maunium.net/$L-dXYMXucfJiLkyc5dvv4t7pQqUKMwnLEd9zzLjZlu0

attempting to get security details released early (knowing only he and three other servers have finished implementing):

https://matrix.to/#%2F%21NasysSDfxKxZBzJJoE%3Amatrix.org%2F%24_d2wJk45JtwblMHRVBdfeEV1cAU5flPuRebTAvfOr-s%3Fvia=nexy7574.co.uk&via=matrix.org&via=element.io
https://matrix.to/#/#tuwunel:grin.hu/$mgi2dDGnL-L9Jqjm_YZPhu4NoAx8q3OMF9KIfRiGwFs

other trivia:

Jason getting his server ACL'ed from all foundation rooms:
https://matrix.to/#/!WuBtumawCeOGEieRrp:matrix.org/$u8YRBq_s-OrOpl4IGt15iUHPBKubKa4A_n-u_WbgqAU - zemos.net ban
https://matrix.to/#/!WuBtumawCeOGEieRrp:matrix.org/$l8pKC-mR0tjLFnbnmi_8xSXbHGA3vgew-QTRWAk-kCs - wildcard ban on his domain

If any of these events get redacted, feel free to reach out and I will provide the original events - unredacted. Just as another layer of certainty, when I provide the events, you can verify the server signing keys yourself, fairly trivially, as well as calculate the event ID (which is a hash). Fetching the event from your $CONDUWUIT_DESCENDANT homeserver is as simple as running @conduit debug get-pdu $id in your admin room, as well as checking validity with @conduit debug verify-json or @conduit debug verify-pdu.

UPDATE: I've just been informed that JSON signing is based on the redacted event, not the full input.

Honestly, that first link is all the info I needed. Keep reading for less than 100 messages and it becomes clear that I do not want to put the continuation of my homeserver into Jason's/tuwunel's hands. Going to migrate to continuwuity later today.

82
submitted 8 months ago* (last edited 8 months ago) by smiletolerantly@awful.systems to c/ich_iel@feddit.org
 

Thanks!! Finally someone says something!

 

Schadenfreude 🙂

 

Basically, the title. After years of inactivity, I'll be taking music (cello) lessons again, with my teacher of yesteryear, from whom I've since moved half a country away.

She has suggested Zoom but is open to alternatives. I don't particularly like Zoom, plus I have a feeling better quality can be had through a custom solution - but I'm at a bit of a loss as to what exactly would be a good fit for this project.

Maybe Jitsi? Does anyone here have experience with it and could tell me if it's possible to set something like a "target" audio quality?

For hardware, I basically have two options. Both are already in use, for different things, and have sufficient processing capabilities - albeit no GPU:

  • host everything at home. Plus: lowest possible latency from me to the server. Not sure how much that is worth though.
  • root server in the Hetzner cloud: much faster network speed. Again though, not sure how beneficial that is; the ultimate bottleneck will always be my upload speed (40 Mbit)

OK, I realize that this post is a bit of a random assortment of thoughts. I'd be really happy about suggestions and/or hearing about others' experiences with similar use-cases!

 

Hi,

not sure where else to post this. For a while now, I've unsuccessfully been trying to get WireGuard to work with Crunchyroll.

Setup is as follows:

  • dedicated server hosts a wg-quick instance in [neighboring country]
  • OPNSense acts as peer on a single IP
  • I have a rule for routing the entire traffic of some source device via that IP

This works just fine. Handshake successful, traffic is routed via the server. traceroute shows the server as the hop immediately after my device's local gateway. The connection is stable, and fast.

...except for Crunchyroll. The site / app itself is fine, but I can not, for the life of me, get a video to play. It just keeps loading forever.

I don't think this is an issue with CR recognizing that I'm not where I say I am - looking online, it seems pretty easy to use CR with a VPN. I've also tried from multiple other devices, all with the same symptom.

If anyone has suggestions, I'd love to hear them 😅

EDIT: ~~It was MTU. Had to manually set it to 1500 on both devices.~~

Nope, still the same issues. I was using the fallback interface there briefly.

EDIT: It WAS MTU related, I had to enable MSS clamping on the OPNSense.
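For anyone hitting the same wall, the back-of-the-envelope numbers (assuming the usual 1500-byte WAN MTU) are roughly: WireGuard adds up to 80 bytes of overhead, so the tunnel MTU ends up around 1420, and the TCP MSS that actually fits through it is about 1420 - 20 (IP header) - 20 (TCP header) = 1380. Without clamping, clients negotiate an MSS based on the full 1500, the oversized video segments get dropped inside the tunnel, and you get exactly this "site works, video loads forever" behavior.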
