webghost0101

joined 2 years ago
[–] webghost0101@sopuli.xyz 2 points 2 months ago

That's a great take.

Double blind could be: a different team comes in and either does the surgery or fakes it. And this same team also does or fakes the aftercare.

This team is never to communicate with the patient or the normal staff.

[–] webghost0101@sopuli.xyz 0 points 2 months ago* (last edited 2 months ago) (9 children)

Eliza is an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just like I agree LLM models are not. But without a global consensus on what “intelligence” is, we cannot conclude they are not.

LLMs cannot produce opinions because they lack a subjective conscious experience.

However, opinions are very similar to AI hallucinations, where “the entity” confidently makes a claim that is either factually wrong or not verifiable.

What technology do you want me to explain? Machine learning, diffusion models, LLM models, or chatbots that may or may not use all of the above technologies?

I am not sure there is a basic explanation; this is a very complex field of computer science.

If you want, I can dig up research papers that explain some relevant parts of it, that is, if you promise to read them. I am, however, not going to write you a multi-page essay myself.

Common sense (from Latin sensus communis) is "knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument".

If a definition is good enough for Wikipedia, which has thousands of people auditing and checking it and is also the source where people go to find information, it probably counts as common sense.

A bit off topic, but as an autistic person I note you were not capable of perceiving the word “opinion” as similar to “hallucinations in AI”, just like you reject the term AI because you have your own definition of intelligence.

I find I do this myself on occasion too. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even with something less vague than “intelligence”) does not always match how the word is used, and the majority of people are OK with that.

[–] webghost0101@sopuli.xyz 1 points 2 months ago* (last edited 2 months ago) (11 children)

Sorry to say, but you're about as reliable as LLM chatbots when it comes to this.

You are not researching facts, just making things up that sound like they make sense to you.

Wikipedia: “It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.”

When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.

When it adheres to the system prompt by telling a user it can't do something, it's demonstrating the above.
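For illustration, here's a minimal sketch of both behaviours, using the OpenAI Python SDK as a stand-in (any chat API with a system prompt works the same way; the model name and the refusal rule are placeholder assumptions of mine, not something from the article):

```python
# Minimal sketch: in-context retention + system-prompt adherence.
# Model name and refusal rule are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # A system prompt the model is expected to adhere to.
    {"role": "system", "content": "You may not give medical advice; politely refuse."},
    # Information supplied earlier in the conversation...
    {"role": "user", "content": "My cat Miso is 3 years old."},
    {"role": "assistant", "content": "Nice to meet Miso!"},
    # ...that the model retains and applies further down the line.
    {"role": "user", "content": "How old will Miso be in two years?"},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)  # answers "5" using the earlier context
```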

That's just one way humans define intelligence. Not per se the best definition in my opinion, but if we start to hold opinions like they're common sense, then we really are no different from LLMs.

[–] webghost0101@sopuli.xyz 2 points 2 months ago (13 children)

I understand what you're saying. It definitely is the Eliza effect.

But you are taking semantics quite far to state it's not AI because it has no “intelligence”.

I'll have you know that what we define as intelligence is entirely arbitrary, and we actually keep moving the goalposts as to what counts. The invention of the word “AI” happened along the way.

[–] webghost0101@sopuli.xyz 3 points 2 months ago (1 children)

Pretty sure it's in the ToS that it can't be used for therapy.

It used to be even worse. Older versions of ChatGPT would simply refuse to continue the conversation at the mention of suicide.

[–] webghost0101@sopuli.xyz 6 points 2 months ago

It goes further than that. It’s because dogs really fucked up lots of things and he wants out before people catch wind.

[–] webghost0101@sopuli.xyz 46 points 2 months ago (19 children)

So does anyone know a proper alternative that doesn’t scare every non-tech person on my server?

I already lost people when we initially moved from a Facebook group to Discord.

I believe I can self-host a Matrix server, but I have no idea how well it will work for people who currently get notifications from a Discord bot on their phones.
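On the notification side, here's a minimal sketch of a Matrix bot using the matrix-nio library (pip install matrix-nio); the homeserver URL, bot account, and room ID are made-up placeholders, not a tested deployment:

```python
# Minimal sketch of a Matrix notification bot with matrix-nio.
# Homeserver URL, bot account, and room ID are hypothetical placeholders.
import asyncio
from nio import AsyncClient

async def notify(body: str) -> None:
    # Connect to a (self-hosted or public) homeserver as the bot account.
    client = AsyncClient("https://matrix.example.com", "@bot:example.com")
    await client.login("bot-password")  # placeholder credential

    # Post a plain-text message to the announcement room; phone clients
    # like Element can surface this as a push notification.
    await client.room_send(
        room_id="!announcements:example.com",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": body},
    )
    await client.close()

asyncio.run(notify("Game night starts in 15 minutes!"))
```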

[–] webghost0101@sopuli.xyz 18 points 2 months ago

I am not sure this logic holds, because I am not sure anyone has been dedicated to writing these for a while.

I discovered years ago that many papers/magazines reuse the same generic texts for different horoscopes. Sometimes different horoscopes have the same text on the same day.

Also, different magazines will steal them from each other.

People usually only read their own, and the text is generic enough that you're not going to remember seeing it again.

I assume these will all be AI soon. It's the perfect low-stakes job for it.

[–] webghost0101@sopuli.xyz 3 points 2 months ago* (last edited 2 months ago) (1 children)

Isn't a spider usually 2 marks?

[–] webghost0101@sopuli.xyz 24 points 2 months ago (4 children)

That's their point. How do you test the placebo effect for a surgical operation?

[–] webghost0101@sopuli.xyz 29 points 2 months ago* (last edited 2 months ago) (2 children)

The LLM models aren't; they don't really have focus or discriminate.

The AI chatbots that are built using those models absolutely are, and it's no secret.

What confuses me is that the article points to Llama 3, which is a Meta-owned model, but not to a chatbot.

This could be an official Facebook AI (do they have one?), but it could also be: “Bro, I used this self-hosted model to build a therapist, wanna try it for your meth problem?”
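To show how low the bar is, here's a minimal sketch of wrapping a self-hosted Llama 3 into a “chatbot” with nothing but a system prompt, using the ollama Python library; the persona text is a made-up example, not anything from the article:

```python
# Minimal sketch: a system prompt is all that separates the raw model
# from a purpose-built "therapist" chatbot. Persona text is made up.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": "You are a supportive recovery coach."},
        {"role": "user", "content": "I'm trying to quit and it's hard today."},
    ],
)
print(response["message"]["content"])
```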

Heck, I could even see it happening that a dealer pretends to help customers who are trying to kick it.
