this post was submitted on 08 Jan 2026
76 points (93.2% liked)

TenForward: Where Every Vulcan Knows Your Name

Star Trek: Voyager S2E23 "The Thaw"

top 35 comments
[–] peteypete420@sh.itjust.works 36 points 2 months ago (1 children)
[–] PolarKraken@lemmy.dbzer0.com 9 points 2 months ago* (last edited 2 months ago) (2 children)

Data can be pretty spooky.

And then I saw a weird movie ("Death to Smoochy") from years prior, where his actor played kind of a gross villain, and it was REAL disturbing.

[–] aeronmelon@lemmy.world 9 points 2 months ago (2 children)

Brent Spiner does do a good villain. Watch Out to Sea to see him in a campier antagonistic role.

[–] PolarKraken@lemmy.dbzer0.com 3 points 2 months ago (1 children)

I'll have to give it a shot. Certainly not his fault but I pretty much can only see Data acting strange when he does other roles lol

[–] aeronmelon@lemmy.world 2 points 2 months ago (1 children)

Is it like Lore, or worse?

[–] PolarKraken@lemmy.dbzer0.com 2 points 2 months ago

Definitely weirder. Like this loop every few minutes where my brain forgets and goes "wtf is Data doing..." just subconsciously.

[–] Taleya@aussie.zone 1 points 2 months ago* (last edited 2 months ago)

~ Oye cómo va ~

[–] peteypete420@sh.itjust.works 3 points 2 months ago

Yea, when I searched "stng data upset", I got a lot of great pics. I just wanted mild upset, as I'm not sure if a positronic brain and AI are the same.

[–] sik0fewl@piefed.ca 14 points 2 months ago (1 children)

I appreciate the original memes, but I could really do without the bouncing text 🙂

[–] SatyrSack@quokk.au 4 points 2 months ago (1 children)

Hmm, I have actually had people compliment my bouncing text. There are two different types of bounces here. Which do you dislike?

  1. When each line of text enters the frame, it sort of bounces into place by overshooting the final position, then overcorrecting, until it finally reaches the final position.
  2. As each line of text overshoots its final position, it bumps into the line above it, causing the line above to bounce a little bit.

Or are both unwanted?
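The overshoot-then-settle motion described in #1 is essentially a damped-oscillation ("elastic") ease-out, a standard animation easing curve. A minimal sketch in Python (the function names and constants are illustrative, not taken from any actual GIF-making tool):

```python
import math

def ease_out_elastic(t: float) -> float:
    """Map normalized time t in [0, 1] to a position factor.

    The value overshoots 1.0 (the final position), oscillates,
    and settles back to exactly 1.0 -- the "bounce into place"
    effect described above.
    """
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    # Exponentially decaying sine wave around the target position.
    return 1.0 + 2.0 ** (-10.0 * t) * math.sin((10.0 * t - 0.75) * (2.0 * math.pi / 3.0))

# A line animating from y=200 (off-frame) to its resting y=50
# would be drawn each frame at:
def line_y(t: float, start: float = 200.0, end: float = 50.0) -> float:
    return start + (end - start) * ease_out_elastic(t)
```

Early in the animation the factor exceeds 1.0 (the line shoots past its final position) before the decay term pulls it back; effect #2 in the list would be the same curve applied to the neighboring line with a smaller amplitude.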

[–] sik0fewl@piefed.ca 3 points 2 months ago* (last edited 2 months ago) (1 children)

You don't need to change it on my account! But in my opinion, #2 is the bigger annoyance. #1 might be ok without #2.

[–] SatyrSack@quokk.au 7 points 2 months ago* (last edited 2 months ago) (1 children)

You don't need to change it on my account!

It wouldn't just be your account, seeing that your comment has a lot of upvotes. If enough people definitely dislike it, I'll avoid doing it with future posts. But I don't plan on fixing this one unless people are really that disgusted by it lol

#1 might be ok without #1.

I assume you mean "#1 might be ok without #2". While I pretty much always do both, here are some previous GIFs that I made without doing that for some reason. Better?

[–] sik0fewl@piefed.ca 2 points 2 months ago

I assume you mean "#1 might be ok without #2".

Oops, yes. Fixed!

I’m still not a fan, personally. I think just sliding in or maybe reducing the bounce would be better. That being said, I think it’s much better without all of the text bouncing.

[–] TribblesBestFriend@startrek.website 10 points 2 months ago (2 children)

Seems to me like a program masquerading as intelligence could outsmart Elon Musk

[–] limelight79@lemmy.world 9 points 2 months ago

She did say "actual brain functions"...

[–] aeronmelon@lemmy.world 2 points 2 months ago

An LG Smart Fridge that simply agrees with everything he says would outsmart him.

[–] Steve@communick.news 6 points 2 months ago (2 children)

That's like saying there's no way a machine can replicate hand sewing.

[–] sundray@lemmus.org 2 points 2 months ago (1 children)

You're right, thinking and sewing are exactly as complex.

[–] Steve@communick.news 2 points 2 months ago* (last edited 2 months ago) (1 children)

Complexity isn't relevant to my analogy.
The lessons learned from the failures and eventual success of machine sewing are.

Unless you're being sarcastic.
Sewing really is surprisingly complex.

[–] sundray@lemmus.org 1 points 2 months ago (1 children)

I was being a little sarcastic 😆 . But I admit I don't understand the analogy; what relationship does human thought have to do with human sewing?

[–] Steve@communick.news 3 points 2 months ago* (last edited 2 months ago) (1 children)

Sewing machines don't make stitches the way people do. People tried for decades and failed to build machines that sewed like humans. They work by making their stitches in ways humans never would, or really could. They had to invent a whole new way to get the job done, not remotely the way a person would do it.

AI will very likely be the same. Expecting machine minds to do things the same way a human mind would, to mimic human thought, strikes me as some kind of human centric bias.

[–] sundray@lemmus.org 2 points 2 months ago (1 children)

Ah, in that case we agree! I also believe that if a genuine AI ever comes about it will be quite alien.

[–] Digit@lemmy.wtf 1 points 2 months ago* (last edited 2 months ago) (1 children)

That’s like saying there’s no way a machine can replicate hand sewing.

Gets me thinking there's no way I could do sewing consistently. My ADHD novelty-seeking creative side (overpowering my autism side) would be switching stitching types constantly before I gave up in the tedium of it. Could a machine do that?

[–] Buddahriffic@lemmy.world 1 points 2 months ago

There are sewing machines that offer different stitching modes. In fact, different use cases have different optimal stitches. A decorative stitch can be whatever, and a hem doesn't need to handle the same kind of forces as a join, which itself might require different strengths (like a dress shirt sleeve vs. a jeans pocket).

[–] MadMadBunny@lemmy.ca 2 points 2 months ago

It can’t; it’s too perfect, too neat…

/s

[–] marcos@lemmy.world 2 points 2 months ago (1 children)

1 - Well, it can simulate them, and we will probably never use simulation when we want intelligence. (We will for understanding the brain, though)

2 - It doesn't matter at all, intelligence doesn't need to think like us.

3 - We are nowhere close to any general one, and the more investors bet all their money and markets sell all their hardware to the same few companies that will burn at their local maximum, the further away we will become.

[–] sundray@lemmus.org 2 points 2 months ago (1 children)

2 - It doesn’t matter at all, intelligence doesn’t need to think like us.

Agreed, but look at the history of how humans have thought about the presumed intelligence (or lack of it) in animals; we seem to be bad at recognizing intelligence that doesn't mirror our own.

[–] marcos@lemmy.world 1 points 2 months ago (1 children)

You think we won't be able to use AI because we can't recognize intelligence?

[–] sundray@lemmus.org 2 points 2 months ago (1 children)

Those are two separate questions, I think.

  1. "You think we won’t be able to use AI" -- If there is some day actual artificial intelligence, I have no idea if humans can "use" it.
  2. "we can’t recognize intelligence?" -- I think you can make the case that historically we haven't been great about recognizing non-human intelligence.

What I am saying is that if we ever invent an actual AGI, unless it thinks and, more importantly, speaks in a way we recognize, we won't even realize what we invented.

[–] marcos@lemmy.world 0 points 2 months ago (1 children)

Recognizing the intelligence is something you pushed into the discussion, I just want to know why you think it's important.

[–] sundray@lemmus.org 2 points 2 months ago (1 children)

Hm? I was agreeing with your 2nd point. I was merely adding to that by pointing out that we've only recently begun to recognize non-human intelligence in species like crows (tool use), cetaceans (language), higher primates (tool use, language, and social organization); which leaves me concerned that, if an AI were to "emerge" that was very different than human intelligence, we'd likely fail to notice it, potentially cutting off an otherwise promising development path.

[–] marcos@lemmy.world 1 points 2 months ago

Oh ok, you have a completely new concern.

I don't think we will fail to spot intelligence in AIs, since they have advocates, something that animals never had. But we have a problem in that "intelligence" seems to be a multidimensional continuum, so until we solve lots of different kinds of it, there will exist things that fit some form of it but really don't deserve the unqualified name.

[–] hopesdead@startrek.website 1 points 2 months ago

Yet an artificial lung can let someone breathe air.

[–] Digit@lemmy.wtf 1 points 2 months ago

Because of the microtubules and quantum effects, right?

[–] samus12345@sh.itjust.works 1 points 2 months ago

Strong disagree. There's no reason why sufficiently advanced AI couldn't replace brain function. Note this is ACTUAL AI, not LLMs, which are not intelligence in any way, shape, or form.