Perspectivist

joined 2 weeks ago
[–] Perspectivist@feddit.uk 4 points 2 hours ago (1 children)

True! Then instead of spilling my coffee on the counter I could spill it on the counter instead.

[–] Perspectivist@feddit.uk 1 points 3 hours ago

Just pull yourself up by your bootstraps, right?

[–] Perspectivist@feddit.uk 2 points 3 hours ago

LLM chatbots are designed as echo chambers.

They're designed to generate natural sounding language. It's a tool. What you put in is what you get out.

[–] Perspectivist@feddit.uk 5 points 5 hours ago (3 children)

One is 25 €/month and on-demand, and the other costs more than I can afford and would probably be at inconvenient times anyway. Ideal? No, probably not. But it’s better than nothing.

I’m not really looking for advice either - just someone to talk to who at least pretends to be interested.

[–] Perspectivist@feddit.uk 1 points 5 hours ago

I doubt it. They just think others do.

[–] Perspectivist@feddit.uk 13 points 8 hours ago (3 children)

Sure - it's just missing every single one of my friends.

[–] Perspectivist@feddit.uk 24 points 8 hours ago* (last edited 5 hours ago) (6 children)

I wish I had Elon Musk money so I could buy this platform and turn it back to pictures only with the main focus on professional and hobbyist photographers - not pictures of food and selfies. It used to be one of the few social media platforms I actually liked.

[–] Perspectivist@feddit.uk 4 points 8 hours ago

Not really but we occasionally refer to it as "Big Black" for obvious reasons.

[–] Perspectivist@feddit.uk 1 points 11 hours ago (1 children)

The best coffee I've ever drunk was from an AeroPress, but honestly, if you use freshly ground beans in a Moccamaster, the two are quite difficult to tell apart.

[–] Perspectivist@feddit.uk -3 points 13 hours ago

I don't wish to kill anyone and reading these comments makes me sick.

[–] Perspectivist@feddit.uk 10 points 1 day ago (2 children)

It’s not to protect it from cracking - it’s to stop the leftover coffee from burning onto it, since I only rinse it after use.

[–] Perspectivist@feddit.uk 1 points 1 day ago

I don't waste good coffee.

 

Now how am I supposed to get this to my desk without either spilling it all over or burning my lips trying to slurp it down right here? I've been drinking coffee for at least 25 years and I still do this to myself at least three times a week.

138
submitted 2 days ago* (last edited 2 days ago) by Perspectivist@feddit.uk to c/til@lemmy.world
 

A kludge or kluge is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. Its only benefit is that it rapidly solves an important problem using available resources.

 

I’m having a really odd issue with my e‑fatbike (Bafang M400 mid‑drive). When I’m on the two largest cassette cogs (lowest gears), the motor briefly cuts power once per crank revolution. It’s a clean on‑off “tick,” almost like the system thinks I stopped pedaling for a split second.

I first noticed this after switching from a 38T front chainring to a 30T. At that point it only happened on the largest cog, never on the others.

I figured it might be caused by the undersized chainring, so I put the original back in and swapped the original 1x10 drivetrain for a 1x11 and went from a 36T largest cog to a 51T. But no - the issue still persists. Now it happens on the largest two cogs. Whether I’m soft‑pedaling or pedaling hard against the brakes doesn’t seem to make any difference. It still “ticks” once per revolution.

I’m out of ideas at this point. Torque sensor, maybe? I have another identical bike with a 1x12 drivetrain and an 11–50T cassette, and it doesn’t do this, so I doubt it’s a compatibility issue. Must be something sensor‑related? With the assist turned off everything runs perfectly, so it’s not mechanical.

EDIT: Upon further inspection, the moment the power cuts out seems to sync perfectly with the wheel speed magnet passing the sensor on the chainstay, so I'm about 95% sure a faulty wheel speed sensor is the issue here. I've ordered a replacement part, so I can't confirm it yet, but unless there's a second update to this post, that solved it.

 

It would be useful information to have ahead of the next elections.

I'm also puzzled by how little Yle, at least, has reported on this. It almost feels like deliberate secrecy.

102
submitted 5 days ago* (last edited 5 days ago) by Perspectivist@feddit.uk to c/knives@sopuli.xyz
 

I figured I’d give this chisel knife a try, since it’s not like I use this particular knife for its intended purpose anyway but rather as a general purpose sharpish piece of steel. I’m already carrying a folding knife and a Leatherman, so I don’t need a third knife with a pointy tip.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
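The "statistical pattern machine" idea can be illustrated with a toy bigram model: count which word tends to follow which in some text, then continue a prompt by repeatedly sampling a plausible next word. This is a drastic simplification of a real LLM (no neural network, no tokenizer, a made-up one-line corpus), but it shows the core loop of continuing a prompt with statistically plausible text rather than looking up facts:

```python
import random
from collections import defaultdict

# Hypothetical toy "training data" standing in for a real corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count bigram statistics: which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_prompt(prompt, n_words=5, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed continuation; stop generating
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(continue_prompt("the cat"))
```

Nothing in the model "knows" what a cat is; it only reproduces patterns in its input, which is the same reason an LLM's factual-sounding answers are a byproduct of its training text rather than a database lookup.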

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
