this post was submitted on 27 Mar 2026
590 points (97.1% liked)


The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that remain difficult for AIs such as LLMs, "reasoning" models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. It comprises hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
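To make that concrete, here is a toy sketch of the interactive setup. None of this is the real ARC-AGI-3 API; the environment, its hidden rule, and the naive agent are invented purely to illustrate what "no instructions, no stated goals" means in code:

```python
# Toy sketch of an instruction-free interactive benchmark.
# NOTE: this is NOT the real ARC-AGI-3 API; the environment, its
# hidden rule, and the agent are invented for illustration only.
import random

class HiddenRuleEnv:
    """An environment with an unstated goal: reach state 10.

    The agent is never told this; it only observes the current
    state and whether the episode is solved.
    """
    ACTIONS = ["up", "down", "left", "right"]

    def __init__(self):
        self.state = 0

    def step(self, action):
        if action == "right":  # hidden mechanic: only "right" does anything
            self.state += 1
        return self.state, self.state >= 10

def explore(env, max_steps=500):
    """Naive agent: try actions, remember which ones changed the state."""
    useful = {a: 0 for a in env.ACTIONS}
    prev = env.state
    for t in range(max_steps):
        # Mostly repeat whatever has worked so far, sometimes explore.
        if random.random() < 0.7 and max(useful.values()) > 0:
            action = max(useful, key=useful.get)
        else:
            action = random.choice(env.ACTIONS)
        state, solved = env.step(action)
        if state != prev:
            useful[action] += 1
        prev = state
        if solved:
            return t + 1  # steps needed to discover and reach the goal
    return None  # never figured it out

print(explore(HiddenRuleEnv()))
```

The interesting part is everything the agent is not given: no action descriptions, no reward signal beyond "solved," no rules. It has to infer the mechanics from state transitions alone.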

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what's next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard
(Logarithmic cost on the horizontal axis. Note that the vertical scale goes from 0% to 3% in this graph. If human scores were included, they would be at 100%, at the cost of approximately $250.)

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass the minimum “easy for humans” threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.

[–] RustyShackleford@piefed.social 53 points 5 days ago (7 children)

As a psychiatrist, I have a theory about what’s missing in AI. First, it lacks childhood dependency and attachments. Second, it struggles to overcome repeated pain and suffering. Third, it lacks regular eating and restroom breaks. Fourth, it struggles to accept loss in everyday situations. Finally, it lacks the concept of our inevitable death. Without these nagging memories and concepts, machines will simply revert to the simpler things we use them for these days, such as stealing cryptocurrency. After all, we live in a world run by capitalism, so it’s only logical. ¯\_(ツ)_/¯

[–] CosmicTurtle0@lemmy.dbzer0.com 106 points 5 days ago (6 children)

As a technologist, I have to remind everyone that AI is not intelligence. It's a word prediction/statistical machine. It's guessing at a surprisingly good rate what words follow the words before it.

It's math. All the way down.

We as humans have simply taken these words and have said that it is "intelligence".
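To illustrate what "word prediction/statistical machine" means at its simplest, here's a toy bigram model. Real LLMs are transformer networks over subword tokens with billions of parameters, but the miniature captures the core idea of scoring likely continuations:

```python
# Toy bigram model: "guess which word follows the words before it."
# Real LLMs are vastly more sophisticated, but the objective -- score
# likely continuations statistically -- is this idea in miniature.
from collections import Counter, defaultdict

corpus = "it is math all the way down it is turtles all the way down".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, if any."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # -> 'way'
print(predict("way"))  # -> 'down'
```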

[–] unpossum@sh.itjust.works 56 points 5 days ago (3 children)

As another technologist, I have to remind everyone that unless you subscribe to some rather fringe theories, humans are also based on standard physics.

Which is math. All the way down.

[–] HereIAm@lemmy.world 28 points 5 days ago

I agree, the maths argument is not a good one. While a neural network is perhaps closer to what a brain is than just a CPU (or a clock, as it was compared to in the olden days), it would be a very big mistake to equate the two.

[–] silly_goose@lemmy.today 2 points 3 days ago

As a philosopher, I have to remind you that humans invented math and physics to model reality.

Humans are not based on physics or math. That would be like saying the earth is based on a globe.

[–] xep@discuss.online 4 points 5 days ago (3 children)

What maths do our memories follow? What about consciousness?

[–] xploit@lemmy.world 12 points 5 days ago (1 children)

Obligatory xkcd... we're just meatbags somewhere to the left on the Purity scale (https://xkcd.com/435/)

On a more serious note, there's plenty to explore there and there are some potentially interesting links to quantum physics and stuff in our brain, as well as how certain drugs can completely disrupt our consciousness (ever had an operation?) and how it could link up. But there is obviously no definitive answer.

At best consciousness is whatever flavour of philosophical interpretation/explanation you like at any given time.

[–] wonderingwanderer@sopuli.xyz 5 points 5 days ago

Philosopher: looks at the mathematician...

[–] Iconoclast@feddit.uk 7 points 4 days ago (1 children)

Consciousness (the fact of experience) doesn't necessarily need to be linked to intelligence. It might be, but it doesn't have to be. An LLM is almost certainly more intelligent than an insect, but it is most likely like nothing to be an LLM, whereas it probably is like something to be an insect.

[–] partofthevoice@lemmy.zip 3 points 4 days ago (1 children)

Isn’t it kind of eerie that you can only suppose it must be “like something” to be an insect, from the very precise bias of being human? We’re projecting the idea that “it’s like something to be something [as a human]” onto the experience of other things.

How would we describe what it’s like? Would something poetic suffice, such as “it’s like being a leaf in the wind, with a weak preference for where you blow but no memory of where you’ve been”? … but all of that is human concepts, human experience decomposed into a subset of more human experiences (really weird, the recursive nature of experience and concepts).

I think the idea of “what it’s like…” has some interesting flaws when applied to nonhumans. It kind of presupposes that insects are lesser, in a way. As though we can conceptualize what it’s like to be them, merely by understanding a stricter subset of what it’s like to be human.

[–] Iconoclast@feddit.uk 8 points 4 days ago (1 children)

I can only suppose that of other people as well. There's no way to measure consciousness. The only evidence of its existence is the fact that it feels like something to be me from my subjective perspective. Other humans behave the way I do so I assume they're probably having similar experiences but I have no idea what it's like to be a bat for example.

However, answering the question "what it's like to be" is not relevant here. What's relevant is that existence has qualia at all.

[–] partofthevoice@lemmy.zip 2 points 4 days ago (1 children)

However, answering the question "what it's like to be" is not relevant here. What's relevant is that existence has qualia at all.

Does existence “have qualia?” That treats qualia almost like it’s ontological, if I’m interpreting you correctly. Yet, qualia can only exist from the perspective of a being with the capacity to model a (seemingly external) world via said qualia. There is no magic qualia sauce we can embed inside something.

Qualia, I think, is a process of information reduction… but also it’s a flavor of information interrogation. Because, reducing electromagnetic radiation to “visual perception” happens inside light sensors too — albeit without counting as “qualia.”

What would you say counts as “qualia?” Or rather, what are its dependencies?

[–] Iconoclast@feddit.uk 3 points 4 days ago (1 children)

It's the fact of subjective experience - the warmth of a campfire, the bitterness of lemon, the greenness of green. We're essentially talking about consciousness here. The fact that there's something it is like to be.

While nobody knows what consciousness is or how it comes about, what I mean by it is best captured by the philosopher Thomas Nagel in his aforementioned essay "What Is It Like to Be a Bat?"

Nagel argues that consciousness has an essentially subjective character, a what-it-is-like aspect. He states that "an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism."

[–] partofthevoice@lemmy.zip 0 points 4 days ago (1 children)

The premise still strikes me as odd. How can we know it’s like anything to be anything, if we cannot know what it’s like to be anything else? This comes from the premise that, to truly understand anything, you must also understand what it is not.

Is it really fair to presume, from our biased perspective where “likeness” is an abstract quality of “being,” that everything ought to have a manner in which it is like to be?

What about the totality of the universe, including all its embedded agents? What would that be like? Would some ever-so-small portion of that likeness include precisely what it’s like to be me?

Do you think it would be possible to qualitatively describe and differentiate between two distinct phenomenologies, one day? Not just behaviorally, but to actually differentiate between their internal processes — what it’s like to be them?

And what might it be like to be a whirlpool, lightning, or even an entire ecosystem? Would that strictly be as ludicrous as asking “what might it be like to be a rock,” or is there something else to be said given whirlpools, lightning, and ecosystems are more-or-less events rather than objects?

I don’t disagree with the argument you shared… I think there’s an obvious difference between what it’s like to be a bat versus a human, but I also feel like we’re missing something important that clearer terminology could work out.

[–] Iconoclast@feddit.uk 6 points 4 days ago* (last edited 4 days ago) (1 children)

How can we know it’s like anything to be anything

Because it undeniably feels like something to be in this very moment from the perspective of my subjective experience. In fact, I'd even go as far as to claim that it's the only thing in the entire universe that cannot be an illusion. I could be a mind living in a simulated universe on an alien supercomputer, with every person I've ever interacted with just being a convincing AI, or I could be a Boltzmann brain - but what remains true despite all that is that something seems to be happening.

I think the closest we can get to true unconsciousness that you can still come back from is general anesthesia. It's nothing like sleep. It's like that period of time doesn't even exist. It's like the time before you were born.

[–] partofthevoice@lemmy.zip 1 points 4 days ago (1 children)

In fact, I'd even go as far as to claim that it's the only thing in the entire universe that cannot be an illusion.

Descartes would too: I doubt, therefore I think, therefore I am…

What about a third-trimester fetus (3tf)? To me, I think it’s obvious and intuitive that a 3tf has an experience. This is as obvious and intuitive to me as a rock not having an experience. Yet, there’s also something similar about them which isn’t made obvious by those two points; both a rock and a 3tf can (perhaps) be said to be sharing the same kind of experience.

A 3tf would have experience that doesn’t contain meta-cognitive function (e.g., self awareness). That said, the experience of a 3tf can (again, perhaps) be modeled simply as a function like experience=fn(qualia) where qualia=nervous-system-capacity + stimuli. Effectively, it’s the structure of the being (the nervous system) being exposed to the world (stimuli). Rocks can be said to be the same, with a very “poorly functioning” nervous system. You can model a rock’s experience too, given qualia=0 for the rock.
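For what it’s worth, that framing can be restated in a couple of lines of code. Everything here is hypothetical, just your informal model transcribed; note that the rock only comes out with qualia=0 if capacity gates stimuli multiplicatively rather than additively:

```python
# The informal model above transcribed as code -- purely illustrative,
# every name here is hypothetical, not a real theory of consciousness.

def qualia(nervous_system_capacity, stimuli):
    # Multiplicative rather than additive: with zero capacity (a rock),
    # no amount of stimuli registers, so qualia comes out as 0.
    return nervous_system_capacity * stimuli

def experience(q):
    return q  # experience = fn(qualia); no meta-cognition layered on top

print(experience(qualia(nervous_system_capacity=0.3, stimuli=1.0)))  # 3tf: > 0
print(experience(qualia(nervous_system_capacity=0.0, stimuli=1.0)))  # rock: 0.0
```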

From this framing, I think it starts to become more clear that we’re discussing a kind of physical process. Qualia starts to look like a name we’ve given to that particular process, and less like it’s some elusive thing which evades scientific understanding.

I’m partially not convinced that it feels like anything to feel something, though. I mean, I do understand feeling happy, angry, sad, even sublime. But these are categories of feeling that my very own internal processes have conjured up. How can I be sure that “feeling” something isn’t similar to the kind of illusion a heap of cells can evolutionarily succumb to when it begins to regard “itself” as separate from its environment? You wouldn’t use the sense of “self” to justify “I exist as myself, for fact.” So why would our experience of phenomenology be different?

[–] Iconoclast@feddit.uk 5 points 4 days ago* (last edited 4 days ago) (1 children)

I can only assume a fetus likely has some faint level of consciousness while a rock has none, but again, this is all just speculation. I couldn't possibly know for sure. Consciousness and qualia are entirely subjective experiences. There's no evidence of them in the universe outside our own direct experience of it. If I wasn't conscious myself, I wouldn't even have any idea that such a thing exists.

What it is - I have no idea. If I had to bet, I'd say it's an emergent feature of a sufficient level of information processing and therefore a physical process, but that's just my speculation. Nobody actually knows.

Illusions are experiences. That's why I say consciousness cannot be an illusion: the very fact that you're experiencing the illusion proves that the space where that illusion appears exists - and that space is consciousness.

I'm also not talking about any human concepts we layer on top of feelings, nor the thoughts we have about those feelings. I'm only talking about the raw sensation itself that our brain then interprets as hot, wet, green, bitter, and so on. The fact of experience itself. If I were to switch places with a bat it would most likely still feel like something to be that bat but if I were to switch places with a rock it would be equivalent to dying. The lights would go out because there's no consciousness. It's like nothing to be a rock (probably).

The sense of self implies some kind of center of consciousness or thinker of thoughts. I don't buy that. Thoughts just appear - nobody is authoring them. I speak of "me" or "I" as a being in the universe, but that's just because it's the only way I know how to refer to these things. I don't know how accurate my view of the universe really is. Like I said: I could just be a mind living in a simulated universe. I don't think I am, but it would be perfectly compatible with my experience.

[–] partofthevoice@lemmy.zip 2 points 4 days ago* (last edited 4 days ago) (2 children)

Preface: I agree with pretty much all of what you said.

The other day, though, I had washed my hands. I had to be careful because one of my fingers can’t get wet due to an injury. While carefully washing my hand, I noticed that I was “experiencing” wetness all over my hand — to include on portions that were completely dry. I found this rather interesting, that I was experiencing something which I knew to be factually false. I wonder if the difference between processing and experiencing could have something to do with that.

I think a lot about this stuff.

  • conscious beings seem to self-produce composite models of the world, from which the world can be effectively navigated. These models don’t have to be accurate, just useful.
  • conscious beings seem to also model themselves. This is keenly distinct from self-awareness. I’m referring to a model that helps you balance, walk, know when you’re hot or cold, …
  • conscious beings can have “concepts,” which seem to be recursive and generative. You can’t describe a concept without referring to more concepts. There is no “root” concept. Also, for some reason, it’s often easier to understand what a “concept” is by investigating what it is not.
  • conscious beings seem to be able to compartmentalize composite “concepts” into a singular, newly irreducible concept. Like if I conceptualize a combination of “banana,” “bread,” and “pudding,” I might come up with a brand new experience of “banana bread pudding.” That new experience can be referenced in its own right, and it’s not necessarily reducible back to the concepts which birthed it in the first place.
  • conscious beings seem to have a schema for their attention over qualia. They can focus on a limb, a thought, a smell, … even a combination thereof.

I could go on and on. Sometimes I think it’s ridiculous that I can’t so easily find existing material on this stuff.

You seem to be well versed on this topic. Can I ask what your study materials have been?

[–] Iconoclast@feddit.uk 3 points 4 days ago

There's also the concept of consciousness without memory. What's that like? Being able to experience the current moment but having no memory of any past experiences - including your experience one second ago.

Or here's a scary thought: what if general anesthesia doesn't actually switch off consciousness but simply blocks new memories from forming? You could experience the full horror of being awake during surgery but remember none of it. From the perspective of "now," that would be functionally the same as never having experienced it at all.

Then there are those extremely weird recordings from split-brain studies. Back when grand mal seizures were treated by cutting the corpus callosum - the bridge between the two brain hemispheres - to stop the "storm" from spreading. On the surface these patients seemed completely normal after the operation, but some really strange stuff shows up when you start testing them properly.

There's a way you can communicate with each hemisphere independently without the other one knowing. The left hemisphere controls the right side of the body, the right hemisphere the left side. You can flash text on the left side of the visual field (which only the right, non-verbal hemisphere sees), then hand them a pen and let the left hand (controlled by the right hemisphere) answer questions by writing. Turns out that the two halves often don't agree on things. Ask the right hemisphere what it wants to do for a living and you'll get a different answer than what the left hemisphere says out loud. Or you can give the right hemisphere a task - "go get a glass of water" - and when you ask the left hemisphere why it did that, it just makes up an explanation. "I was thirsty," it'll say, even though the researchers know that's not true. It genuinely seems like there are two separate consciousnesses running in the same brain at the same time. The big question is: were they there all along, or does the second one only emerge once the connection between them is cut?

And yeah, this is all stuff I've absorbed from podcasts covering these topics - mostly from Sam Harris. I'm just naturally really curious about the human mind, and I'm pretty experienced with meditation as well, so I probably pay attention to my day-to-day conscious experiences about 1% more than the average person. I'm, however, not in any way an expert on this. It's not even remotely related to what I do for a living.

[–] partofthevoice@lemmy.zip 2 points 4 days ago* (last edited 4 days ago)

The taxonomy/topology of concepts is especially fun to think about. Like experience, because it’s similarly something unique to consciousness: to conceptualize.

[–] stray@pawb.social 2 points 3 days ago (1 children)

We're not actually individuals; we're massive colonies of cells that work in concert. Memories and consciousness are both products of chemical interactions that happen between the cells, and the cells themselves are conglomerates of subatomic particles. Everything about us is determined by particle physics, which can be expressed and predicted mathematically.

[–] xep@discuss.online 0 points 3 days ago (1 children)

The hubris of modern science and medicine is thinking that we know everything about our biology. I contend that we don't. Can you tell me what's in my gut microbiome?

[–] stray@pawb.social 3 points 3 days ago

No one said we know everything about our biology. But we're made up of particles, just like everything else. We don't fully understand those particles either, but it doesn't make them not real or not subject to the rules reality seems to follow.

They actually make little pills you can swallow that take samples at certain locations along your digestive tract, so I suppose I could, given the knowledge and resources. Surgical sampling is also possible.

But I don't see why it matters because all of the bacteria and archaea present in the body are made up of subatomic particles.

[–] Iconoclast@feddit.uk 12 points 4 days ago (2 children)

A few of the countless dictionary definitions of intelligence:

  • The ability to acquire, understand, and use knowledge.
  • The ability to learn or understand or to deal with new or trying situations
  • The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)
  • The act of understanding
  • The ability to learn, understand, and make judgments or have opinions that are based on reason
  • The ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context

There isn't even consensus on what intelligence actually means, yet here you are declaring "AI is not intelligence," whatever that even means.

Artificial Intelligence is a term in computer science that describes a system able to perform tasks that would normally require human intelligence. An Atari chess engine is an intelligent system. It's narrowly intelligent, as opposed to humans, who are generally intelligent, but it's intelligent nevertheless.

[–] partofthevoice@lemmy.zip 1 points 4 days ago

You’re more precisely right, but also the aforementioned person is not wrong. Intelligence is a broad term as we’re discovering. Truth is, we don’t have the language to effectively communicate about AGI in the ways we’d like to. We don’t know if consciousness is a prerequisite to truly generalizable intelligence, we don’t even know what consciousness is, we don’t know what dimensions truly matter here. Is intelligence a dimension of consciousness, meaning you can have some intelligence without being conscious? What’s the limit, why? … We need some discovery around the taxonomy/topology of consciousness.

[–] JcbAzPx@lemmy.world -1 points 3 days ago

I mean, not one of those definitions applies to LLMs.

[–] TherapyGary@lemmy.dbzer0.com 9 points 4 days ago

As a therapist, I can tell you the only thing holding LLMs back from true intelligence is having to pee and poop. Peeing and pooping is the foundation of all higher level operations. I poured water on my PC and the LLM I was running said "I think" right before committing suicide

[–] silverneedle@lemmy.ca 11 points 5 days ago

As someone who knows a thing or two about biology I think LLMs strip away >90% of what makes animals think.

[–] Earthman_Jim@lemmy.zip 2 points 4 days ago

It's something like folks calling a mirror intelligent.

[–] RustyShackleford@piefed.social 1 points 4 days ago (1 children)

I was arguing against it being an intelligence because it lacks the suffering and past experiences that define intelligence, not for it being intelligent. Without pain and suffering, what are we?

[–] SorteKanin@feddit.dk 0 points 4 days ago

I think you're conflating intelligence and consciousness. Pain and suffering requires consciousness but intelligence does not imply pain or suffering or happiness. LLMs are already "intelligent" to a certain degree in some aspects, though not generally intelligent like humans. But there is no reason to believe that you couldn't have a generally intelligent artificial agent that lacks consciousness and thus can feel no pain or suffering.

[–] sp3ctr4l@lemmy.dbzer0.com 12 points 4 days ago* (last edited 4 days ago)

Here is a way of describing what I see as 'the problem':

An LLM cannot forget things in its base training data set.

Its permanent memory... is totally permanent.

And this memory has a bunch of wrong ideas, a bunch of nonsensical associations, a bunch of false facts, a bunch of meaningless gibberish.

It has no way of evaluating its own knowledge set for consistency, coherence, and stability.

It literally cannot learn and grow, because it cannot realize why it made mistakes; it cannot permanently discard or amend concepts that are incoherent, or faulty ways of reasoning about (associating) things.

Seriously, ask an LLM a trick question, then tell it it was wrong, explain the correct answer, then ask it to determine why it was wrong.

Then give it another similar category of trick question, but that is specifically different, repeat.

The closer you try to get it toward reworking a flawed fundamental axiom it holds, the closer it gets to responding in totally paradoxical, illogical gibberish, or getting stuck in some kind of repetitive loop.

... Learning is as much building new ideas and experiences, as it is reevaluating your old ideas and experiences, and discarding concepts that are wrong or insufficient.

Biological brains have neuroplasticity.

So far, silicon ones do not.
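One way to put that point in code (a schematic sketch, not any real library's API; `generate` is a stand-in for a model's forward pass):

```python
# Schematic sketch of the "no permanent amendment" point above.
# Not any real library's API: 'generate' is a stand-in for a model call.

def generate(params, context):
    # Output depends on frozen params plus whatever is in the context.
    return f"answer(weights_v{params['version']}, seen={len(context)} msgs)"

params = {"version": 1}            # fixed when training ended
context = ["trick question"]
print(generate(params, context))

context.append("wrong! here is the correct reasoning ...")
print(generate(params, context))   # the correction lives only in context

assert params == {"version": 1}    # the weights never learned anything
print(generate(params, ["same trick question"]))  # fresh chat: correction gone
```

The correction only ever lands in the context window; start a new conversation and it's as if it never happened.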

[–] msage@programming.dev 15 points 5 days ago (1 children)

Are you anthropomorphizing a word suggester into a being experiencing things?

[–] partofthevoice@lemmy.zip 5 points 4 days ago (1 children)

it lacks childhood dependency and attachments.

Isn’t general intelligence, or more broadly “consciousness,” a prerequisite to that? How would you make an unconscious machine more conscious merely by making mock scenarios that conscious beings necessarily experience?

it struggles to overcome repeated pain and suffering

That’s getting into phenomenology — why is pain an experience of suffering at all? How would you give it pain and suffering without having already made it AGI? We’re still missing the <current-form> -> AGI step.

it lacks regular eating and restroom breaks

The necessity of which is emergent from our culture and biology, as conscious social beings. We’re still missing a vital step.

it struggles to accept loss in everyday situations

What is “loss” and “everyday situations” if not just a way we choose to see the world, again as conscious beings?

it lacks the concept of our inevitable death

How do you give it a “concept” at all?

these nagging memories and concepts

The AI in its current form has the “memory” in some form, but perhaps not the “nagging.” What should do the “nagging” and what should be the target of the “nagging?” How do you conceptually separate the “memory” and the “nagging” from the “being” that you’re trying to create? Is it all part of the same being, or does it initialize the being?

We’re a long way away from AGI, IMO. The exciting thing to me, though, is I don’t think it’s possible to develop AGI without first understanding what makes N(atural)GI. Depending how far away AGI is, we could be on the cusp of some deeply psychologically revealing shit.

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 4 days ago* (last edited 4 days ago)

Completely agree with all of this.

Especially the last part.

We don't even understand our brains, our own minds, we still can't fully agree on what consciousness or sentience... even... are.

We're certainly making progress on those fronts... but we are a very, very far distance from the finish line.

That finish line would be like... we solved Psychology, we solved Neuroscience, we have a Grand Unified Theory of Mind, etc.

[–] MagicShel@lemmy.zip 5 points 5 days ago

The major thing AI lacks is continuous parallel "prompting" through a variety of channels including sensory, biofeedback, and introspection / meta-thought about internal state and thinking.

AI currently transforms a given input into an output. However, it cannot accept new input in the middle of an output. It can't evaluate the quality of its own reasoning except through trial and error.

If you had 1000 AIs operating in tandem and fed a continuous stream of prompts in the form of pictures, text, meta-inspection, and perhaps a simulation of biomechanical feedback with the right configuration, I think it might be possible to create a system that is a hell of an approximation of sentience. But it would be slow and I'm not sure the result would be any better than a human — you'd introduce a lot of friction to the "thought" process. And I have to assume the energy cost would be pretty enormous.
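Roughly, a sketch of that architecture below (every name is invented; think of it as a diagram in code, with the parallel fan-out to many model workers elided):

```python
# Rough sketch of "continuous parallel prompting" -- all names invented;
# a diagram in code, not a real system. A real build would fan the
# channel calls out to many model workers in parallel.

def model(prompt):
    return f"thought about: {prompt}"  # stand-in for an LLM call

def sensory():     return "camera frame, audio chunk"
def biofeedback(): return "simulated heart rate, fatigue level"

def run(steps=3):
    last_thought = "nothing yet"
    for t in range(steps):
        channels = {
            "sensory":     sensory(),
            "biofeedback": biofeedback(),
            # Introspection: the system's own prior output is re-fed
            # as input, approximating meta-thought about internal state.
            "introspect":  f"my last thought was: {last_thought}",
        }
        # Every channel is processed each tick, so new input arrives
        # continuously instead of one prompt -> one output.
        thoughts = {name: model(value) for name, value in channels.items()}
        last_thought = thoughts["introspect"]
        print(t, thoughts)

run()
```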

In the end it would be a cool experiment to be part of, but I doubt that version would be worth the investment.

[–] ExFed@programming.dev 4 points 5 days ago (2 children)

It could also be that it lacks the machinery to feel any emotions at all. You don't (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don't (normally) have to train people to have empathy or compassion.

I argue that our obsession with AI is, itself, a misalignment with our environment; it disproportionately tickles psychological reward centers which evolved under unrecognizably different circumstances.

[–] Havoc8154@mander.xyz 3 points 5 days ago (1 children)

I guess you don't have children.

You absolutely do have to train them to be afraid of bears, heights, and every fucking thing you can imagine. You absolutely do have to teach them empathy and compassion. There may be some nugget of instinct, but without reinforcement it might as well not exist.

[–] ExFed@programming.dev 1 points 5 days ago

Hah, okay, you got me there. From my understanding, though, that's mostly because kids are still figuring out what's "normal", so their fear instinct isn't nearly as strong. I guess I should've stuck to the more instinctive sources of fear...

Regardless, that's not really my point. My point is an LLM doesn't rely on machinery in the same way that a human brain does. That doesn't make AI "worse" or "better" overall, but it does make it an awful replacement for other humans.

[–] 2xsaiko@discuss.tchncs.de -2 points 5 days ago (1 children)

You don't (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don't (normally) have to train people to have empathy or compassion.

So what are you implying about people who don’t experience these?

[–] ExFed@programming.dev 2 points 5 days ago (1 children)

What am I implying? That their machinery is abnormal and they likely need assistance to live normal, healthy lives. That's literally why the fields of psychiatry and psychology exist: healthy people don't need doctors and therapists. Do you disagree?

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 4 days ago (1 children)

Introverts exist, and are... very often fine with solitude, prefer it generally over socializing.

But they are generally fine at participating in society and living normal lives.

Healthy people... do need doctors ... and therapists.

A person can outwardly appear to be healthy... and actually not be.

Preventative medicine, regular checkups, your body changes as you grow, and habits you develop in your youth may need significant reworking.

Therapy can give otherwise healthy people a method of exploring their inner selves more fully or more consistently... they can teach them frameworks for understanding and dealing with other kinds of people, for being better able to deal with kinds of trauma they have not yet experienced.

Also... same with physical health... people with some nascent mental problems or patterns forming... probably won't be obvious to a non-specialist, until it gets more severe.

[–] ExFed@programming.dev 1 points 4 days ago* (last edited 4 days ago) (1 children)

Introverts exist, and are... very often fine with solitude, prefer it generally over socializing.

Definitely! I am one :) but I still desire the presence of friends from time to time (and usually in small groups).

A person can outwardly appear to be healthy... and actually not be.

Yup! There's always a nonzero chance you're not as healthy as you think you are (let's call it the quantum theory of health: everyone is in a superposition of being both healthy and unhealthy at the same time), especially as we change due to age, making us unfamiliar with our own bodies... I'd tell you about my own challenges here, but that'd be TMI.

And, yes, that's why we go to regular checkups with someone who has a better perspective to judge "healthiness" (side note: doctors aren't perfect, so visiting them too frequently can be worse than never at all; there's a "healthy" cadence to checkups).

Therapy can give otherwise healthy people a method of exploring their inner selves more fully or more consistently...

This boils down to the definition of "healthy". It even becomes a philosophical question that's really hard to answer... Is it healthy to live a sedentary lifestyle? Is it healthy to exercise too much? Is it healthy to not know TIPP, in case you (or a loved one) gets a panic attack? Is it healthy to ignore yourself? Ignore others? Is it healthy to mention quantum superposition in a conversation about health? ;)

But, yes, I agree. Life's as messy and diverse and as hard to sum up as everybody who's ever lived, yet we carry on ... I hope that's healthy.

Edit: typo, and missing a hint that I'm making a joke about me over-generalizing physics concepts

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 4 days ago (1 children)

My entire point is that you are just overgeneralizing, in general, and saying rather silly things.

[–] ExFed@programming.dev 1 points 3 days ago

Fair enough; the Internet is a silly place full of distracted, armchair philosophers. However, my entire point was that an LLM doesn't rely on machinery in the same way that a human brain does. That doesn't make AI "worse" or "better" overall, but it does make it an awful replacement for humans.

[–] yyprum@lemmy.dbzer0.com 1 points 4 days ago

As a random internet user, I want to ask: are we sure humans are that intelligent to begin with? All those steps you give are not needed for intelligence.

We keep moving the goal post for what intelligence is, and last I saw we have started to divide intelligence into different categories.

LLMs are just "imitate human responses as closely as possible," for good and for bad. And now we are trying to fix that to be as right as possible, when the flaw is that we humans are so often wrong.