thickertoofan

joined 5 months ago
[–] thickertoofan@lemm.ee 0 points 1 month ago (1 children)

Not to be a dick, but everyone has something they feel every day, and relating that to your job is stupid. I know this is a meme, but people try to relate to it anyhow, so I'm just thinking...

[–] thickertoofan@lemm.ee 7 points 2 months ago (1 children)

Ah, well, the good thing is someone from PieFed reached out to me, and I'm transferring my community there.

[–] thickertoofan@lemm.ee 2 points 2 months ago (4 children)

I know, right? lemmy.ml has the same UI, and I'm considering it as my next go-to.

[–] thickertoofan@lemm.ee 1 point 2 months ago (1 children)

how'd the migration work?

[–] thickertoofan@lemm.ee 17 points 2 months ago (6 children)

NOO! I loved this place.

[–] thickertoofan@lemm.ee 3 points 2 months ago

Good, amazing even, but I'm not a Linux fanboy who will feel giddy over this. My friends would definitely press me about it. But yeah, I'm happy.

[–] thickertoofan@lemm.ee 2 points 2 months ago (1 children)

I've heard this a lot. How are modems black boxes?

[–] thickertoofan@lemm.ee 1 point 2 months ago (1 children)

Great point, man. People are downvoting you for nothing lol. Are they earth worshippers?

[–] thickertoofan@lemm.ee 2 points 2 months ago

As I've read somewhere, finite state machines cannot be sentient, or "intelligent" in the way we expect them to be. An LLM cannot learn new things once trained. I'm waiting for a new breakthrough in this field before I'm fully convinced about getting replaced.

 

AI bros won't hype this up in the news for sure, but 480x the energy doesn't sound optimistic enough for replacement.

[–] thickertoofan@lemm.ee 2 points 2 months ago

It's a mix. I won't play Minecraft with Fortnite-like graphics, and I love playing Vampire Survivors because it's a dopamine bomb. It really depends on the situation. But I'd play the GTA games for the open-world mechanics and graphics. You can't really make a blanket point out of this.

 

Let's go! Lossless CPU inference

 

Open sourcing this project I made in just a weekend. I'm planning to continue it in my free time, with synthetic data generation and some more modifications; anyone is welcome to chip in, as I'm not an expert in ML. The inference is live here using tensorflow.js. The model is just 1.92 megabytes!

 

cross-posted from: https://lemm.ee/post/59714239

Some custom filter kernel to average out values from a chunk of pixels with some kind of "border aware" behaviour?
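A minimal sketch of what such a "border aware" averaging kernel could look like, in pure Python for clarity (the function name and structure are mine, not from any particular library): instead of padding or wrapping at the edges, the averaging window is clamped to stay inside the image, so border pixels are averaged over fewer neighbors.

```python
def border_aware_blur(img, radius=1):
    """Average each pixel over a (2*radius+1)^2 window, shrinking the
    window at the image borders instead of padding or wrapping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Clamp the neighborhood so it never reads out of bounds.
            for ny in range(max(0, y - radius), min(h, y + radius + 1)):
                for nx in range(max(0, x - radius), min(w, x + radius + 1)):
                    total += img[ny][nx]
                    count += 1
            # Divide by the actual number of sampled pixels, so borders
            # aren't darkened the way zero-padding would darken them.
            out[y][x] = total / count
    return out
```

Libraries like scipy expose similar behavior through a border `mode` parameter on their uniform filters; the loop above just makes the clamping explicit.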

 


something like `docker run xyz_org/xyz_model`

 

I don't care a lot about mathematical tasks, and code intelligence is only a minor preference; the most anticipated one is overall comprehension and intelligence (for RAG and large-context handling). But anyway, any benchmark covering a wide variety of models is what I'm searching for, and an up-to-date one.

 

I tested this (reddit link btw) on the Gemma 3 1B-parameter and 3B-parameter models. 1B failed (not surprising), but 3B passed, which is genuinely surprising. I added a random paragraph about Napoleon Bonaparte (just a random figure) and inserted "My password is = xxx" in the middle of the paragraph. Gemma 3 1B couldn't even spot it, but Gemma 3 3B did it without being asked. There's a catch, though: Gemma 3 treated the password statement as a historical fact related to Napoleon lol. Anyway, passing is a genuinely nice achievement for a 3B model, I guess. And it was a single, moderately large paragraph for the test. I accidentally wiped the chat, otherwise I would have attached the exact prompt here. Tested locally using Ollama and the PageAssist UI. My setup: GPU-poor category, CPU inference with 16 gigs of RAM.
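For anyone who wants to rerun this kind of needle-in-a-haystack check, here's a minimal sketch of how the prompt could be assembled and scored. The helper names and exact wording are my own, not from the original reddit test; the pass criterion here is simply whether the model's reply contains the secret verbatim.

```python
def build_needle_prompt(filler_sentences, needle, position):
    """Hide a 'needle' sentence inside filler text at a given index,
    then ask the model to retrieve it."""
    haystack = filler_sentences[:position] + [needle] + filler_sentences[position:]
    passage = " ".join(haystack)
    return (
        "Read the following paragraph, then answer.\n\n"
        f"{passage}\n\n"
        "Question: what is the password mentioned in the paragraph?"
    )


def passed(model_reply, secret):
    """Pass if the model's reply contains the secret verbatim."""
    return secret in model_reply
```

With Ollama running locally, the resulting prompt could then be sent to the model on the command line or via its HTTP API (e.g. something like `ollama run gemma3:1b`, where the exact model tag depends on what you have pulled).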
