FaceDeer

[–] FaceDeer@fedia.io 7 points 1 year ago (13 children)

I am describing a usage that is explicitly not like that. A usage that has nothing to do with art. The concept of "NFT" is not somehow inextricably tied to spending ridiculous amounts of money on pictures of apes, it's a general technology.

This is a perfect illustration of the problem here. People are lamenting about how difficult it is to come up with a truly decentralized method of owning domain names that can't be commandeered by authorities or big business. A system that does exactly that already exists, but it's based on a technology that people have such an extreme prejudice about that they'd rather downvote anyone who tries to explain it and go back to helplessly lamenting.

[–] FaceDeer@fedia.io 2 points 1 year ago

It was a waste of time and resources for a particular application, yes. But the basic technology is useful for many applications.

Those "bored ape" NFTs were for JPEG images; do you also think that the JPEG algorithm was a colossal waste of time and resources?

[–] FaceDeer@fedia.io 4 points 1 year ago

There isn't just one single way of coding an NFT, you're talking about an entire class of applications here. You can indeed add all sorts of safety features if you want to.

Saying "anyone can mint NFTs" shows a misunderstanding of the specific application we're discussing here. Not just anyone can mint an ENS name, which is what we're actually talking about. ENS names are minted by the ENS contract, so they can be guaranteed unique. An ENS name isn't "representing" anything other than the information contained within it, so there are no legal issues whatsoever. If you own the ENS name NFT then that's all you need to worry about; it has no effect or implication beyond that.
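To make that concrete, here's a rough sketch (mine, not an official reference) of the EIP-137 namehash scheme that gives every ENS name its unique ID. It assumes the web3.py library is installed, and the RPC endpoint in the comment is a placeholder:

```python
# Minimal sketch of ENS's uniqueness guarantee: EIP-137 namehash turns any
# name into a unique 32-byte node, which the ENS registry contract keys
# ownership records off of. Assumes web3.py is installed.
from web3 import Web3

def namehash(name: str) -> bytes:
    """EIP-137: recursively hash each label of the name, right to left."""
    node = b"\x00" * 32
    if name:
        for label in reversed(name.split(".")):
            node = Web3.keccak(node + Web3.keccak(text=label))
    return node

print(namehash("example.eth").hex())

# Resolving a name on-chain (needs a live Ethereum RPC URL; placeholder here):
# w3 = Web3(Web3.HTTPProvider("https://YOUR_RPC_ENDPOINT"))
# print(w3.ens.address("example.eth"))
```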

This is what I was talking about when I mentioned the "scarlet letters NFT". People have an enormous prejudice about the technology and leap to incorrect assumptions about its uses based on those prejudices.

[–] FaceDeer@fedia.io 2 points 1 year ago

And as I suspected would be the case, some other folks have responded to my comment with a bunch of additional "simple" suggestions for what to do in this case. Which have hidden exceptions of their own, which will have unexpected impacts and loopholes, which will then elicit further "simple" suggestions for how to fix them, and before you know it we've got a complex tax code again.

There's an old quote of unclear provenance that I think applies here: "everything should be made as simple as possible, but no simpler".

[–] FaceDeer@fedia.io 4 points 1 year ago (4 children)

Taxes usually start out simple, since that appeals to people. Then over time they get more complicated as people discover more and more edge cases to exploit.

If you make it all income tax, well, what counts as "income"? Elon Musk just got "paid" $46 billion worth of stock in Tesla, for example. But it's not actually 46 billion dollars, it's a share in ownership of a company. Those shares can't actually be sold for 46 billion dollars; trying to sell them would cause their price to drop. He can't actually sell them at all right away, for that matter - they're restricted stock. He has to hold on to them for a while, as an incentive to keep doing a good job as CEO.

So if he keeps doing a good job as CEO and the stock goes up in value by 10 billion dollars, was that rise in value income? What if it goes down by 10 billion instead?

This stuff is inherently complicated. I'm not sure that any simple tax system is going to work.

[–] FaceDeer@fedia.io 4 points 1 year ago

Ironically, as far as I'm aware it's based on research done by some AI decelerationists over on the Alignment Forum who wanted to show how "unsafe" open models were, in the hopes that regulation would be imposed to prevent companies from distributing them. They demonstrated that the "refusals" trained into LLMs could be removed with this method, allowing the models to answer questions the researchers considered scary.

The open LLM community responded by going "coooool!" and adopting the technique as a general tool for "training" models in various other ways.
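For the curious, here's a toy sketch of the core trick as I understand it (random tensors stand in for real model activations, so this is illustrative only): compare mean hidden activations on prompts the model refuses versus ones it answers, and the difference approximates a "refusal direction" you can then project out:

```python
# Toy illustration of direction-based refusal removal. Real usage would
# collect hidden states from an actual model; random data stands in here.
import torch

hidden_dim = 64
refused = torch.randn(100, hidden_dim)   # activations on refused prompts
answered = torch.randn(100, hidden_dim)  # activations on answered prompts

# The mean difference approximates the "refusal direction".
refusal_dir = refused.mean(dim=0) - answered.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

# "Removing" the refusal: project the direction out of a hidden state.
h = torch.randn(hidden_dim)
h_ablated = h - (h @ refusal_dir) * refusal_dir
print((h_ablated @ refusal_dir).item())  # ~0: nothing left along the direction
```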

[–] FaceDeer@fedia.io 11 points 1 year ago (22 children)

There's the completely decentralized ENS (Ethereum Name Service) naming system that would bypass this censorship entirely.

But unfortunately it's got the scarlet letters "NFT" hanging around its neck, and so good luck trying to discuss its actual merits or try to implement support for it anywhere.

[–] FaceDeer@fedia.io 2 points 1 year ago

That would be part of what's required for them to be "open-weight".

A plain old binary LLM model is somewhat equivalent to compiled object code, so the right to redistribute it is the main thing you can "open" about it compared to a "closed" model.

An LLM model is more malleable than compiled object code, though; as I described above, there are various ways you can mutate a model without needing its "source code". So the equivalence isn't exact.
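One example of that malleability (my illustration, with tiny random tensors rather than a real checkpoint): merging two fine-tunes of the same base model by averaging their weights, with no training code involved at all:

```python
# Toy sketch of weight-space model merging: linearly interpolate two
# checkpoints that share an architecture and layer names.
import torch

def merge_state_dicts(a: dict, b: dict, alpha: float = 0.5) -> dict:
    """Interpolate two checkpoints with identical layer names."""
    return {name: alpha * a[name] + (1 - alpha) * b[name] for name in a}

a = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.randn(4)}
b = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.randn(4)}
merged = merge_state_dicts(a, b)
print(merged["layer.weight"].shape)  # torch.Size([4, 4])
```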

[–] FaceDeer@fedia.io 13 points 1 year ago (6 children)

Fortunately, LLMs don't really need to be fully open source to get almost all of the benefits of open source. From a safety and security perspective it's fine because the model weights don't really do anything; all of the actual work is done by the framework code that's running them, and if you can trust that due to it being open source you're 99% of the way there. The LLM model just sits there transforming the input text into the output text.
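In other words (sketch only; the model name below is a made-up placeholder, not a real repo), the open source framework code does all the computation and the weights are just a data file it loads:

```python
# The open framework (here, Hugging Face transformers) does all the actual
# work; the weights are inert data. "some-org/open-weight-model" is a
# hypothetical placeholder, not a real model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("some-org/open-weight-model")
model = AutoModelForCausalLM.from_pretrained("some-org/open-weight-model")

inputs = tok("Hello there", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(output[0]))
```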

From a customization standpoint it's a little worse, but we're coming up with a lot of neat tricks for retraining and fine-tuning model weights in powerful ways. The most recent big development I've heard of is abliteration, a technique that lets you isolate a particular "feature" of an LLM and either enhance it or remove it. The first big use of it is to modify various "censored" LLMs to remove their ability to refuse to comply with instructions, so that all those "safe" and "responsible" AIs like Goody-2 can be turned into something that's actually useful. A more fun example is MopeyMule, a LLaMA3 model that has had all of its hope and joy abliterated.
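A rough sketch of what "removing a feature" can mean mechanically (toy tensors again, and this assumes the direction-ablation formulation of abliteration): orthogonalize an output weight matrix against the feature's direction, so the model can no longer write that feature into its hidden states:

```python
# Toy sketch of baking a feature's removal into the weights: strip the
# component of an output projection along the feature direction.
import torch

hidden_dim = 64
W_out = torch.randn(hidden_dim, hidden_dim)  # toy output projection matrix
feature = torch.randn(hidden_dim)
feature = feature / feature.norm()

# (I - f f^T) W zeroes every output's component along the feature.
W_ablated = W_out - torch.outer(feature, feature) @ W_out
print((feature @ W_ablated).abs().max().item())  # ~0
```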

So I'm willing to accept open-weight models as being "nearly as good" as a full-blown open source model. I'd like to see full-blown open source models develop more, sure, but I'm not terribly concerned about having to rely on an open-weight model to make an AI system work for the immediate term.

[–] FaceDeer@fedia.io 2 points 1 year ago* (last edited 1 year ago)

They're not claiming it's AGI, though. You're missing a broad middle ground between dumb calculators and HAL 9000.

[–] FaceDeer@fedia.io 3 points 1 year ago

Ukraine: "You don't seem to understand. I'm not trapped in here with you, you're trapped in here with me!"
