You sure about that?
superfluous
So the core concept is that when you validate some property of the input, you should also transform the input into a new form that represents the new guarantee in the type system.
This is very closely related to the "make invalid states unrepresentable" concept. If we have validated our list to be non-empty, we should return a non-empty list - after all, an empty list is now invalid, and as such the type system should exclude that possibility.
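As a rough sketch in Python (the `NonEmpty` type and `parse_non_empty` helper are made-up names for illustration, not from any particular library):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class NonEmpty(Generic[T]):
    """A list that is non-empty by construction."""
    head: T
    tail: list[T]

def parse_non_empty(items: list[T]) -> NonEmpty[T]:
    """Validate *and* transform: the return type carries the guarantee."""
    if not items:
        raise ValueError("expected a non-empty list")
    return NonEmpty(head=items[0], tail=items[1:])

# Downstream code can demand NonEmpty[T] and never re-check for emptiness:
def first(xs: NonEmpty[T]) -> T:
    return xs.head
```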
Have you looked at Elm?
It's very much not JavaScript, but I think that comes with the territory of wanting something significantly different.
For what it's worth, with wasm you could use any language that compiles to it as a frontend language. Rust has a few frameworks that can compile to standalone wasm web pages.
Like ... Have you ever read a word with w in it?
I kinda know what you are getting at - if you dictate a word by pronouncing each letter separately you need to add stuff to each one to make it stand out - but Jesus Christ, what a question.
Hodoubleu is the doubleueather today? Only a fedoubleu doubleuhite clouds in a clear blue sky.
Thanks for making me laugh!
Edit: in German it is pronounced "we", with the e like in ketchup.
There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?
Correct. The CLIP encoder is trained on images and their corresponding descriptions. Therefore it learns the names for the things in images.
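You can see this directly by scoring captions against an image (a minimal sketch using the Hugging Face transformers CLIP wrapper; the model id and the image file are assumptions):

```python
# Sketch: score candidate captions against an image with a pretrained CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("room.jpg")  # hypothetical input image
texts = ["an empty room", "a room with an elephant in it"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability = CLIP thinks that caption matches the image better.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))
```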
And now it is obvious why this prompt fails: there are no images of empty rooms tagged as "no elephants". This can be fixed by adding a negative prompt, which subtracts the concept of "elephants" from the image in one of the automagical steps.
If you prompt Stable Diffusion for "a room without elephants in it", you'll get elephants. You need to add elephants to the negative prompt to get a room without them. I don't think LLMs have been given the ability to add negative prompts.
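For illustration, a minimal sketch with the Hugging Face diffusers pipeline (the checkpoint id is an assumption; any Stable Diffusion model exposes the same `negative_prompt` parameter):

```python
# Sketch: negative prompting with diffusers; requires a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

# Mentioning elephants in the prompt pushes the image toward them; the
# negative prompt subtracts the concept at each denoising step instead.
image = pipe(
    prompt="a room without elephants in it",
    negative_prompt="elephant, elephants",
).images[0]
image.save("room_without_elephants.png")
```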
No, it does not. At least not in the same way that generative pre-trained transformers do. It is handling natural language though.
The research is all open source if you want details. For Stable Diffusion you'll find plenty of pretty graphs that show how the different parts interact.
No.
The only one I'd trust without having to do more research on their reporting quality is netzpolitik.org. Not sure how much of a newspaper they are though. I'd consider them digital activists - with sound positions based on facts, but activists nonetheless.
The seed monopoly, especially in combination with weedkillers (i.e. one company sells both the herbicide and the seeds that are immune to it), is indeed a pretty good argument against genetic engineering. I'm not sure what the solution to that is. You could of course stipulate that only good™ genetic engineering is allowed, but drawing a sharp line there is impossible.
"AI / LLM only tries to predict the next word or token"
This is not wrong, but also absolutely irrelevant here. You can be against AI, but please make the argument based on facts, not by parroting some distantly related talking points.
Current image generation is powered by diffusion models. Their inner workings are completely different from those of large language models. The part failing here in particular is the text encoder (CLIP). If you learn how it works and think about it, you'll be able to deduce how the image generator is forced to draw this image.
Edit: because it's an obvious limitation, negative prompts have existed pretty much since diffusion models came out.
Englan't
I read that as "toilet paper vs Biden is also political" and did not even consider it weird, because I don't expect respectful political discussions on Lemmy anymore.