AnAmericanPotato

joined 1 year ago

Yep. On a Blu-ray disc, you have 25–100 GB of space to work with. The Blu-ray standard allows up to 40 Mbps for 1080p video (not counting audio). Way more for 4K.

Netflix recommends a 5 Mbps internet connection for 1080p, and 15 Mbps for 4K. Reportedly they cut their 4K streams down to 8 Mbps last year, though I haven't confirmed it. That's a fraction of what Blu-ray uses for 1080p, never mind 4K.

I have some 4K/UHD Blu-rays, and for comparison they're about 80 Mbps for video.

They use similar codecs, too, so the bitrates are fairly comparable. UHD Blu-rays use H.265, which is still a good video codec. Some streaming sites now use AV1 (at least on supported devices), which is a bit more efficient, but nowhere near enough to close that kind of gap in bitrate.
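For a rough sense of scale, here's a back-of-the-envelope sketch of what those bitrates mean in storage terms. The two-hour runtime is just an assumed example for illustration, not from any spec:

```python
# Rough storage math for a 2-hour movie at various video bitrates.
# Bitrates are in megabits per second (Mbps); sizes come out in gigabytes.

def size_gb(bitrate_mbps: float, hours: float = 2.0) -> float:
    """Approximate stream size in GB at the given video bitrate."""
    seconds = hours * 3600
    bits = bitrate_mbps * 1_000_000 * seconds
    return bits / 8 / 1_000_000_000  # bits -> bytes -> GB

for label, mbps in [("Netflix 1080p", 5), ("Netflix 4K (reported)", 8),
                    ("Blu-ray 1080p max", 40), ("UHD Blu-ray (typical)", 80)]:
    print(f"{label:24s} ~{size_gb(mbps):5.1f} GB")
```

At Netflix's recommended 5 Mbps, a two-hour 1080p movie works out to roughly 4.5 GB; at Blu-ray's 40 Mbps ceiling it's about 36 GB, which is why the discs need that much capacity in the first place.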

[–] AnAmericanPotato@programming.dev 105 points 4 months ago (3 children)

Gemini might be good at something, but I'll never know, because it's bad at everything I've ever used the assistant for. If it's good at anything at all, it's something I don't need or want.

Looking forward to 2027 when Google Gemini is replaced by Google Assistant (not to be confused with today's Google Assistant, totally different product).

[–] AnAmericanPotato@programming.dev 14 points 4 months ago (1 children)

In case anyone is unfamiliar, Aaron Swartz downloaded a bunch of academic journals from JSTOR. This wasn't for training AI, though. Swartz was an advocate for open access to scientific knowledge. Many papers are "open access" and yet are not readily available to the public.

Much of what he downloaded was open-access, and he had legitimate access to the system via his university affiliation. The entire case was a sham. They charged him with wire fraud, unauthorized access to a computer system, breaking and entering, and a host of other trumped-up charges, because he...opened an unlocked closet door and used an ethernet jack from there. The fucking Secret Service was involved.

https://en.wikipedia.org/wiki/Aaron_Swartz#Arrest_and_prosecution

The federal prosecution involved what was characterized by numerous critics (such as former Nixon White House counsel John Dean) as an "overcharging" 13-count indictment and "overzealous", "Nixonian" prosecution for alleged computer crimes, brought by then U.S. Attorney for Massachusetts Carmen Ortiz.

Nothing Swartz did is anywhere close to the abuse by OpenAI, Meta, etc., who openly admit they pirated all their shit.

Again: What is the percent “accurate” of an SEO-infested blog?

I don't think that's a good comparison in context. If Forbes replaced all their bloggers with ChatGPT, that might very well be a net gain. But that's not the use case we're talking about. Nobody goes to Forbes as their first step for information anyway (I mean...I sure hope not...).

The question shouldn’t be “we need this to be 100% accurate and never hallucinate” and instead be “What web pages or resources were used to create this answer” and then doing what we should always be doing: Checking the sources to see if they at least seem trustworthy.

Correct.

If we're talking about an AI search summarizer, then the accuracy lies not in how correct the information is in regard to my query, but in how closely the AI summary matches the cited source material. Kagi does this pretty well. Last I checked, Bing and Google did it very badly. Not sure about Samsung.

On top of that, the UX is critically important. In a traditional search engine, the source comes before the content. I can implicitly ignore any results from Forbes blogs. Even Kagi shunts the sources into footnotes. That's not a great UX because it elevates unvetted information above its source. In this context, I think it's fair to consider the quality of the source material as part of the "accuracy", the same way I would when reading Wikipedia. If Wikipedia replaced their editors with ChatGPT, it would most certainly NOT be a net gain.

[–] AnAmericanPotato@programming.dev 21 points 4 months ago (3 children)

99.999% would be fantastic.

90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

What we have now is like...I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

I haven't used Samsung's stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it's great.

Ideally, I don't ever want to hear an AI's opinion, and I don't ever want information that's baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That's what LLMs are actually good at.

I agree. Of all the UI crimes committed by Microsoft, this one wouldn't crack the top 100. But I sure wouldn't call it great.

I can't remember the last time I used the start menu to put my laptop to sleep. However, Windows Vista was released almost 20 years ago. At that time, most Windows users were not on laptops. Windows laptops were pretty much garbage until the Intel Core series. In my offices, laptops were still the exception until the 2010s.

[–] AnAmericanPotato@programming.dev 32 points 4 months ago (2 children)

Google as an organization is simply dysfunctional. Everything they make is either some cowboy bullshit with no direction, or else it's death by committee à la Microsoft.

Google has always had a problem with incentives internally, where the only way to get promoted or get any recognition was to make something new. So their most talented devs would make some cool new thing, and then it would immediately stagnate and eventually die of neglect as they either got their promotion or moved on to another flashy new thing. If you've ever wondered why Google kills so many products (even well-loved ones), this is why. There's no glory in maintaining someone else's work.

But now I think Google has entered a new phase, and they are simply the new Microsoft -- too successful for their own good, and bloated as a result, with too many levels of management trying to justify their existence. I keep thinking of this article by a Microsoft engineer around the time Vista came out, about how something like 40 people were involved in redesigning the power options in the start menu, how it took over a year, and how it was an absolute shitshow. It's an eye-opening read: https://moishelettvin.blogspot.com/2006/11/windows-shutdown-crapfest.html

This is really cool! I like the idea of pen and paper as a supported UI. I've never found handwriting on a touchscreen to be an effective or enjoyable experience, across the myriad devices I've tried it on (including an iPad with an Apple Pencil). And app-based form entry is often a drag. By the time I've even opened the app and clicked the "new entry" button, I often could've been done already with a simple pen and paper.

[–] AnAmericanPotato@programming.dev 5 points 5 months ago (3 children)

I affect a British accent

Lower-effort life hack: wear a Canadian maple leaf prominently. Put a patch on your bags, get a baseball cap, wear a t-shirt. Project "Canadian" any way you can.

[–] AnAmericanPotato@programming.dev 1 points 5 months ago (1 children)

IPFS content IDs (CID) are a hash of the tree of chunks. Changes to chunk size can also change the hash!

I don't understand why this is a deal-breaker. It seems like you could accomplish what you describe within IPFS simply by committing to a fixed chunk size. That's valid within IPFS, right?

Is it important to use any specific hashing algorithm(s)? If not, then isn't an IPFS CID (with a fixed, predetermined chunk size) a stable hash algorithm in and of itself?
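To make the point concrete, here's a toy sketch of Merkle-style content addressing. This is NOT the real IPFS CID algorithm (real CIDs involve multihash, a DAG structure, and codec prefixes); it only illustrates the behavior under discussion: the root hash changes with the chunk size, but is perfectly stable for any fixed, agreed-upon chunk size:

```python
import hashlib

# Toy Merkle-style root over fixed-size chunks (illustration only,
# not the actual IPFS CID construction).

def merkle_root(data: bytes, chunk_size: int) -> str:
    """Hash each fixed-size chunk, then hash the concatenated chunk hashes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    leaf_hashes = b"".join(hashlib.sha256(c).digest() for c in chunks)
    return hashlib.sha256(leaf_hashes).hexdigest()

data = b"hello ipfs" * 1000
same = merkle_root(data, 256) == merkle_root(data, 256)  # deterministic
diff = merkle_root(data, 256) != merkle_root(data, 512)  # chunking matters
print(same, diff)  # True True
```

In practice, I believe `ipfs add` lets you pin the chunker explicitly (e.g. `--chunker=size-262144`), which is exactly the "commit to a fixed chunk size" idea: with the chunker fixed, the same bytes should always yield the same CID.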

[–] AnAmericanPotato@programming.dev 200 points 5 months ago (29 children)

Disgusting and unsurprising.

Most web admins do not care. I've lost count of how many sites make me jump through CAPTCHAs or outright block me in private browsing or on a VPN. Most of these sites hold no sensitive information, or already know exactly who I am because I'm authenticating with my username and password. It's not something the actual site admins even think about. They click the button, say "it works on my machine!", and will happily blame any user whose client is not dead-center average.

Enter username, but first pass this CAPTCHA.

Enter password, but first pass this second CAPTCHA.

Here's another CAPTCHA because lol why not?

Some sites even have their RSS feed behind Cloudflare. And guess what that means? It means you can't fucking load it in a typical RSS reader. Good job!

The web is broken. JavaScript was a mistake. Return to ~~monke~~ gopher.

Fuck Cloudflare.

[–] AnAmericanPotato@programming.dev 1 points 5 months ago (1 children)

Thank you for the correction.

Sender and recipient can’t be encrypted e2e. How would the server know to whom deliver the email if those are encrypted and not visible to it?

"End-to-end" is a bit of a misnomer here. In the general case, email is not sent with E2EE across different providers, so both Proton and Tuta apply encryption only after receiving it. That means they can see your incoming email (body and all) from external servers; they just don't store it that way. (This is different when mail is sent between two Proton users or two Tuta users.)
