Personally, I find myself renting a GPU and running Goliath 120b. Smaller models could do what I'm doing if I spent more time optimizing my prompts, but every day I'm doing different tasks, and Goliath 120b will just handle whatever I throw at it, no matter how sloppy I am. I've also been playing with LLaVA and Hermes vision models to describe images to me. However, when I really need alt-text for an image I can't see, I still find myself resorting to GPT-4; the open-source options just aren't as accurate or detailed.
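For anyone curious, here's roughly what my image-description workflow looks like; a minimal sketch assuming a LLaVA-style model served behind an OpenAI-compatible endpoint (llama.cpp server, LM Studio, and similar all expose one). The endpoint URL, port, and model name below are placeholders for whatever your own setup uses:

```python
# Sketch: ask a locally hosted LLaVA-style model for an image description
# via an OpenAI-compatible chat completions endpoint. The URL and model
# name are placeholders; adjust them for your own server.
import base64
import requests

def describe_image(path: str,
                   endpoint: str = "http://localhost:8080/v1/chat/completions") -> str:
    # The OpenAI-style vision format takes images as base64 data URLs.
    with open(path, "rb") as f:
        data_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

    resp = requests.post(endpoint, json={
        "model": "llava",  # placeholder; use whatever name your server exposes
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in detail, suitable for alt-text."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(describe_image("photo.jpg"))
```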
Apparently! I don’t hide my data in any way, and constantly get ads in languages I don’t speak. Usually French, but sometimes Hindi or Chinese. And as a blind person myself, I’m not sure that my well-paid, full-time job working on accessibility for large enterprises and big tech is altruism deserving of thanks haha.
I assume it’s because I live in Canada, and big American data just assumes all Canadians speak French. I regularly get French ads on English websites.
I don’t block anything. I work in accessibility, so it’s important to me to know what the experiences are like for my fellow users with disabilities. I also don’t want to recommend sites or apps that are riddled with inaccessible ads; I’d rather not give them traffic at all. But even though I let them track me, I still get ads in a language I don’t speak for cars I can’t drive. What’re they doing with all that data?
Good to know; thanks! I’ll keep an eye on it.
I was having issues with outgoing federation to Mastodon on 0.19.0. I just did the update five minutes ago, so we'll see if that fixes it. If you're seeing this comment I guess it's working at the moment.
A couple reasons, I think:
- AI dubbing: this makes it way easier for YouTube to add secondary dubbed tracks to videos in multiple languages. Given Google's push to add AI into everything, including creating AI-related OKRs, that's probably the primary driver. Multiple audio tracks are just the infrastructure needed to add AI dubbing.
- Audio description: Google is fighting enough antitrust-related legal battles right now. The fact that YouTube doesn't support audio description for those of us who are blind has been an issue for a long time, and now that basically every other video streaming service supports it, I suspect they're starting to feel increased pressure to get on board. Once again, multiple audio tracks are the infrastructure needed to offer audio description.
Surprised nobody has mentioned my two favourites:
- Behind The Bastards: Robert Evans (formerly of Cracked fame) talks about the worst people in history for hours.
- Oh No Ross and Carrie: "When they make the claims we show up so you don't have to." Maybe start with the series on Scientology; it's some of the best work they've done.
Most of the other stuff I listen to is either industry specific or fandom/hobby specific.
I run the RBlind.com Lemmy instance at Accuris Hosting. Decent virtual machines, easy IPv6 support, and everything works fine. Prices are a bit on the high end, but it's worth it to me to use a provider located in my country, where I understand all of the associated laws and can pay in my own currency through my local bank. Also, I'd rather not give money to big tech if I can help it, and support local businesses instead. This isn't sponsored or anything; I'm just a mostly contented customer.
Also, of course, the fact that the control panel is screen-reader accessible is super important to me, though I doubt anyone else cares. Unfortunately, that's not yet the case with most of the larger cloud providers like AWS. And if Accuris does deploy an inaccessible update, the company is small enough that I can send an email and get an answer from a human who has actually read what I wrote, rather than a corporate AI.
It's just as long and incomprehensible as Google's and Microsoft's. So I have no idea.
That's what worries me. When companies get desperate for cash, they tend to do pretty terrible things.
Can Mistral describe images yet? I'm not sure if it's multimodal or not. If it could, that would be a super useful feature for those of us over on rblind.com. And/or is the code available somewhere for us to hook it into something like OpenRouter and spin up a copy?
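For context, this is the kind of integration I have in mind. A rough sketch against OpenRouter's OpenAI-compatible chat endpoint, assuming a multimodal model is actually available there; the model slug below is hypothetical, not a real listing:

```python
# Sketch: what we'd wire up on rblind.com if the model turns out to be
# multimodal. OpenRouter exposes an OpenAI-compatible endpoint, so image
# input would look roughly like this. The model slug is hypothetical.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/some-multimodal-model",  # hypothetical slug
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image for a blind user."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```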