Akisamb

joined 2 years ago
[–] Akisamb@programming.dev 2 points 1 year ago (1 children)

Does this deductible also have to be paid by the poorest?

[–] Akisamb@programming.dev 3 points 1 year ago

This is not true in France. Politicians whose fraud has been proven are arrested and charged. In France, Sarkozy, Cahuzac, and Fillon were all charged with crimes.

They were a president, a minister, and a presidential candidate respectively. I'd be surprised if it were different in the USA. I see that Trump is also being charged; the system seems to be working.

[–] Akisamb@programming.dev 3 points 1 year ago

Convolutional neural networks and plant-identifying apps came before ChatGPT. Beyond both relying on neural networks, they don't have much in common.

[–] Akisamb@programming.dev 1 points 1 year ago

Don't know why you are downvoted; it's a good question.

As a matter of fact, it almost happened for search engines in France. Newspapers argued that snippets were leading people not to visit their ad-infested sites, thus losing them revenue.

https://techcrunch.com/2020/04/09/frances-competition-watchdog-orders-google-to-pay-for-news-reuse/

[–] Akisamb@programming.dev 4 points 1 year ago

> It does seem odd that scraping activity from just two accounts allegedly managed to cause such an extended server outage. The irony of this situation also hasn’t been lost on online creatives, who have extensively criticized both companies (and generative AI systems in general) for training their models on masses of online data scraped from their works without consent. Stable Diffusion and Midjourney have both been targeted with several copyright lawsuits, with the latter being accused of creating an artist database for training purposes in December.

As far as I know, they do not have copyright over the output of their models. Apart from banning the users, they pretty much have no way to stop this. Even if they had copyright, it's still legally unsettled whether training LLMs constitutes a copyright violation.

In a similar fashion, a lot of the recent chat LLMs have been trained on output from ChatGPT. After all, why pay humans to produce training data when your competitor has already done it for you?

[–] Akisamb@programming.dev 1 points 1 year ago (1 children)

Why would Java have an impact on battery performance? Pretty much all credit cards run Java (Java Card) for their cryptographic algorithms, and they need almost no power to run.

[–] Akisamb@programming.dev -2 points 2 years ago

You can't take one accident and use that to generalize.

You need to take all accidents into account and see how much worse humans are.

https://arstechnica.com/cars/2023/12/human-drivers-crash-a-lot-more-than-waymos-software-data-shows/

Cars are inherently dangerous. A robot car is going to have deaths no matter what. That does not mean robot cars are bad if they lead to a reduction in cars and accidents. Robot taxis, done properly, can complement a public transport system.

[–] Akisamb@programming.dev 14 points 2 years ago

They gave them a birth control shot without properly informing them of what it was. Still scandalous, but not what you are saying.

[–] Akisamb@programming.dev 3 points 2 years ago

These models do not see letters but tokens. For the model, "violet" is probably two symbols, "viol" and "et". Short of memorizing the number of letters in each token, it is impossible for the model to know the number of letters in a word.

This is also why the GPT family is so bad at addition: its tokenizer has single symbols for common numbers like 14. This means that to compute 14 + 1 it cannot reuse the knowledge that 4 + 1 is 5, as it cannot see the link between the token "4" and the token "14". The Llama tokenizer fixes this by splitting numbers into individual digits, and is thus much better at basic arithmetic even with much smaller models.
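
A minimal sketch of this, using the tiktoken library (the exact splits depend on the vocabulary, so treat the outputs as illustrative rather than guaranteed):

```python
# Illustrative: models see token ids, not letters.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era vocabulary

for text in ["violet", "4", "14", "14 + 1"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {pieces}")

# A word like "violet" may split into pieces such as "viol" + "et",
# and "14" is typically a single token distinct from the token "4",
# so digit-level facts like 4 + 1 = 5 don't transfer directly.
```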

[–] Akisamb@programming.dev 2 points 2 years ago

Yes to your question, but that's not what I was saying.

Here is one of the most popular training datasets : https://pile.eleuther.ai/

If you look at the PDF describing the dataset, you'll find that the documents are rather short, with a mean length below 20 KB (20,000 characters) for most of its components.

You are asking a model to retain a memory for the whole duration of a discussion, which can be very long. If I chat for one hour I'll type approximately 8,400 words, or around 42 KB. That is longer than most documents in the training set. If I chat for 20 hours, it'll be longer than almost all the documents in the training set. The model needs to learn how to extract information from a long context, and it can't do that well if the documents it trained on are short.

You are also right that during training the text is cut off. A value I often see is 2k to 8k tokens. This is arbitrary; some models are trained with a cutoff of 200k tokens. You can use models on context lengths longer than what they were trained on (with some caveats), but performance falls off badly.
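
As a back-of-the-envelope check of the numbers above (the typing speed and characters-per-word figures are assumptions, not measurements):

```python
# Rough arithmetic: how hours of chat compare with training cutoffs.
WORDS_PER_MINUTE = 140   # assumed chat typing speed
CHARS_PER_WORD = 5       # rough English average
CHARS_PER_TOKEN = 4      # common rule of thumb

chars_per_hour = WORDS_PER_MINUTE * 60 * CHARS_PER_WORD
print(f"~{chars_per_hour / 1000:.0f} KB of text per hour of chat")  # ~42 KB

tokens_per_hour = chars_per_hour / CHARS_PER_TOKEN  # ~10,500 tokens
for cutoff in (2_000, 8_000, 200_000):
    print(f"{cutoff:>7}-token cutoff ~ {cutoff / tokens_per_hour:4.1f} hours of chat")
```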

[–] Akisamb@programming.dev 2 points 2 years ago (2 children)

There are two issues with large prompts. One is linked to current language-model technology, where computation time and memory usage scale badly with prompt size; a rough sketch of this is below. This is being addressed by projects such as RWKV or Mamba, but these remain unproven at large sizes (more than 100 billion parameters). Somebody will have to spend some millions to train one.
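
To make the scaling concrete (the head count and score precision here are assumed for illustration, and real implementations such as FlashAttention avoid materializing the full matrix):

```python
# Naive attention materializes an n x n score matrix per head,
# so memory grows quadratically with prompt length; recurrent
# designs like RWKV or Mamba instead carry a fixed-size state.
def naive_attention_scores_gib(n_tokens: int, n_heads: int = 32,
                               bytes_per_score: int = 2) -> float:
    return n_heads * n_tokens ** 2 * bytes_per_score / 2 ** 30

for n in (2_000, 8_000, 200_000):
    print(f"{n:>7} tokens -> {naive_attention_scores_gib(n):10.2f} GiB of scores")
```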

The other issue will probably be harder to solve: there is less high-quality long-context training data. Most datasets were created for small-context models.

[–] Akisamb@programming.dev 4 points 2 years ago* (last edited 2 years ago)

> For folks who aren’t sure how to interpret this, what we’re looking at here is early work establishing an upper bound on the complexity of a problem that a model can handle based on its size. Research like this is absolutely essential for determining whether these absurdly large models are actually going to achieve the results people have already ascribed to them on any sort of consistent basis. Previous work on monosemanticity and superposition are relevant here, particularly with regards to unpacking where and when these errors will occur.

I'm not sure this work accomplishes that. Sure, it builds on previous work showing that a transformer can be simulated by a TC^0^ circuit family. However, the limits of this fact are not clear. The paper even admits as much:

> Our result on the limitations of T-LLMs as general learners comes from Proposition 1 and Theorem 2. On the one hand, T-LLMs are within the TC^0^ complexity family; on the other hand, general learners require at least as hard as P/poly-complete. In the field of circuit theory, it is known that TC^0^ is a subset of P/poly and commonly believed that TC^0^ is a strict subset of P/poly, though the strictness is still an open problem to be proved.

I believe this is one of the weakest points of the paper, as it bases all of its reasoning on an unproven conjecture (that TC^0^ is a strict subset of P/poly). And you can implement many things with a TC^0^ circuit: addition, multiplication, basic logic; heck, you can even make transformers.

There is still something that bothers me. Why did it define a general learner as being at least a universal circuit for the set of all circuits of polynomial size? Why this restriction? I tried googling "general learner" and "universal circuit" and only came up with this paper.

While searching, I found that this paper was rejected; you can find the reviews here: https://openreview.net/forum?id=e5lR6tySR7

If you are searching for a paper on the limits of T-LLMs, the paper "What Algorithms Can Transformers Learn? A Study in Length Generalization" may prove more informative (https://arxiv.org/pdf/2310.16028.pdf). It explains why transformers are so bad at addition.

Here is the key part of their abstract:

> Specifically, we leverage RASP (Weiss et al., 2021), a programming language designed for the computational model of a Transformer, and introduce the RASP-Generalization Conjecture: Transformers tend to length generalize on a task if the task can be solved by a short RASP program which works for all input lengths.
