this post was submitted on 14 May 2025
190 points (99.0% liked)

Futurology

[–] errer@lemmy.world 11 points 3 months ago (10 children)

Run your LLMs locally if you really want a therapist; you don’t need any of the extra crap these companies offer in their online versions.
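
For the "run it locally" part, a minimal sketch of what that can look like using llama-cpp-python with a quantized GGUF model (the model filename here is just a placeholder; any chat-tuned GGUF works the same way):

```python
# Minimal local chat sketch with llama-cpp-python and a quantized GGUF model.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I've had a rough week. Can we talk it through?"}],
)
print(reply["choices"][0]["message"]["content"])
```

Tools like ollama or LM Studio wrap the same idea with less setup, if you'd rather not manage model files yourself.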

[–] bobotron@lemm.ee 1 points 3 months ago (9 children)

Can I run anything on a 3090, or do I need a beefier GPU?

[–] ShellMonkey@lemmy.socdojo.com 2 points 3 months ago

I think I'm on a 3060 or so and it works decently, depending on the model. I can generally get away with around 13B, or some 20B+ models at Q4 quantization, but they get really slow by that point.

It's a lot of messing around to find something that performs decently while not being so limited that it gets crazy repetitive or starts saying loony things.
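
For anyone wondering why ~13B is about the ceiling on a 12 GB card like the 3060, a rough back-of-envelope sketch (my own illustrative numbers, not from the thread): weight storage is roughly params × bits-per-weight / 8, plus headroom for the KV cache and runtime.

```python
# Back-of-envelope VRAM estimate for quantized models (illustrative numbers).
# Q4 quantization lands around 4.5 bits/weight once scales/metadata are included.
def est_vram_gb(params_billion: float, bits_per_weight: float = 4.5,
                overhead_gb: float = 1.5) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # weight storage
    return weights_gb + overhead_gb                    # + KV cache / runtime headroom

for size in (7, 13, 20, 34):
    print(f"{size:>2}B @ Q4: ~{est_vram_gb(size):.1f} GB")
# 13B lands near 9 GB (fits a 12 GB 3060); 20B+ spills past 12 GB,
# forcing layers onto the CPU, which is why it slows to a crawl.
```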
