mkhoury

joined 2 years ago
[–] mkhoury@lemmy.ca 12 points 2 years ago

Cory Doctorow writes extensively about how it's Spotify's fault, as an extension of the common exploitation of musicians in the industry, in the excellent book Chokepoint Capitalism. Here's a short summary of the Spotify argument by the author: https://www.youtube.com/watch?v=FZ5z_KKeFqE

[–] mkhoury@lemmy.ca 47 points 2 years ago (16 children)

What Spotify does affects the entire music market. Why should you worry about their income? Because Spotify's strategy makes it harder and harder for musicians to earn enough to keep making music. If you care about having music to listen to, you should care about this. Also, Spotify and music is just one example of the overall exploitation of workers. If you don't stand up for artists when it's their livelihood at stake, why should anyone stand up for your rights when it's your livelihood at stake?

[–] mkhoury@lemmy.ca 1 points 2 years ago

I'm not sure what you mean. GPTs also allow you to add datasets and external APIs, both of which can be used to supply a spiritual POV.

[–] mkhoury@lemmy.ca 2 points 2 years ago* (last edited 2 years ago)

It seems like they have an automated lab that tested 58 of them, and 41 were successfully synthesized, so roughly a 70% success rate.
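As a quick check of that arithmetic (the numbers 41 and 58 come from the comment above):

```python
# Success rate of the automated synthesis run described above.
synthesized = 41
attempted = 58
rate = synthesized / attempted
print(f"{rate:.1%}")  # prints "70.7%"
```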

[–] mkhoury@lemmy.ca 14 points 2 years ago

It does more than that: it magnifies, feeds, and perpetuates them. It's not just simple exposure.

[–] mkhoury@lemmy.ca 9 points 2 years ago (4 children)

What kind of advice were you looking for if not this?

[–] mkhoury@lemmy.ca 10 points 2 years ago (1 children)

I agree that the technologies did pan out, but I don't think it's an ignorant opinion.

I also feel blasé about the new battery articles, because they tend to promise orders-of-magnitude changes rather than incremental change. Batteries did get much better, but it doesn't really feel that way, I suppose. Our experience of battery power hasn't changed much.

It's really hard to get excited about the article or the tech: it takes so long to see its mild effects that there's no real cashing out on the excitement, so it's not very satisfying.

[–] mkhoury@lemmy.ca 6 points 2 years ago (3 children)

Essentially, you don't ask them to use their internal knowledge. In fact, you explicitly ask them not to. The technique is generally referred to as Retrieval Augmented Generation. You take the context/user input, you retrieve relevant information from the net/your DB/vector DB/whatever, and you give it to an LLM along with instructions on how to transform that information (summarize it, answer a question, etc).

So you try as much as you can to "ground" the LLM with knowledge that you trust, and to only use this information to perform the task.

So you get a system that does a really good job of transforming the data you have into the right shape for the task(s) you need to perform, without requiring your LLM to act as a source of information, only as a great data massager.

[–] mkhoury@lemmy.ca 20 points 2 years ago (5 children)

I've been using LLMs pretty extensively in a professional capacity, and with the proper grounding work they become very useful and reliable.

LLMs on their own are not the world-changing tech; LLMs plus grounding (what is now being called a Cognitive Architecture) is the world-changing tech. So while LLMs can be vulnerable to bullshitting, there is a lot of work around them that can qualitatively change their performance.

[–] mkhoury@lemmy.ca 4 points 2 years ago (2 children)

"It has already started to be a problem with the current LLMs that have exhausted most easily reached sources of content on the internet and are now feeding off LLM-generated content, which has resulted in a sharp drop in quality."

Do you have any sources to back that claim? LLMs are rising in quality, not dropping, afaik.
