OpticalMoose

joined 2 years ago
[–] OpticalMoose@discuss.tchncs.de 2 points 4 months ago* (last edited 4 months ago)

It might just be the ones I play: Railroad Tycoon 3, Air Bucks, Detroit, etc. Edit: I forgot about the original Theme Park (by Bullfrog).

[–] OpticalMoose@discuss.tchncs.de 8 points 4 months ago

Look What They Need to Mimic a Fraction of Our Power is a line of dialogue spoken by Omni-Man to Invincible in the animated TV series Invincible during a fight between the two. The line is in reference to a pair of fighter jets, used to illustrate how weak humans are in comparison to superhumans. (source)

I've never seen the show, I just thought it was a funny meme.

[–] OpticalMoose@discuss.tchncs.de 4 points 4 months ago (2 children)

In my experience, in city builders you don't usually have any competition, although I think there were neighboring cities in SimCity 3000 that you had to negotiate with.

In tycoon games, you have competitors and your success depends on beating them. In the best tycoon games, you can buy your competitors' stock and profit from their effort.

In city builders, there's generally no rush - you can move at your own pace. Tycoon games don't give you that luxury - in the games that I play, you have to stay ahead of competitors and/or keep shareholders happy.

[–] OpticalMoose@discuss.tchncs.de 4 points 4 months ago (1 children)

I'm torn between that one and Tapatío.

[–] OpticalMoose@discuss.tchncs.de 8 points 4 months ago* (last edited 4 months ago) (2 children)

Probably obscure if you're under 50.

[–] OpticalMoose@discuss.tchncs.de 15 points 4 months ago (1 children)

I don't think I've ever used fabric softener. -Gen Xer

[–] OpticalMoose@discuss.tchncs.de 1 points 4 months ago (1 children)

The FSF has published its evaluation of the "Llama 3.1 Community License Agreement." This is not a free software license and you should not use it, nor any software released under it.

Dumb question here, but why shouldn't I use it? Maybe I'm missing something in the article.

[–] OpticalMoose@discuss.tchncs.de 7 points 4 months ago

The judgment isn't about rewarding the plaintiff; it's about punishing the company. This wasn't just hot liquid, but scalding-hot liquid that caused hospitalization.

Similar to the McDonald's case, Starbucks has had burn incidents in the past and been fined, but it kept serving scalding-hot liquid. These big judgments are the only way to change their behavior.

[–] OpticalMoose@discuss.tchncs.de 4 points 4 months ago

That's pretty cool. I've tried a few of the distills, but I've mostly gone back to regular models.

[–] OpticalMoose@discuss.tchncs.de 5 points 4 months ago (1 children)

Why is AMD comparing their 48GB professional cards to Nvidia's 24GB gaming cards? They do know Nvidia makes 48GB cards, don't they? (the A5880 & A6000)

They specifically picked models that take up more than 24GB of VRAM, so the Nvidia cards would have to spill over into system RAM. They did the same thing with their Strix Halo benchmark.
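To put rough numbers on it (my own back-of-the-envelope math, not anything from AMD's slides): the weights alone take roughly params × bits ÷ 8 bytes, so anything much past ~48B parameters at 4-bit already blows past a 24GB card before you even count the KV cache.

```python
# Rough sketch (my numbers, weights only - ignores KV cache and framework overhead)
def weight_vram_gb(params_billion, bits_per_weight):
    # params * bits / 8 gives bytes; divide by 1e9 for GB
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(13, 8), (34, 8), (70, 4)]:
    need = weight_vram_gb(params, bits)
    fits = "fits" if need <= 24 else "spills into system RAM"
    print(f"{params}B @ {bits}-bit ≈ {need:.0f} GB of weights ({fits} on a 24GB card)")
```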

This is why no one takes AMD seriously in AI.

[–] OpticalMoose@discuss.tchncs.de 7 points 4 months ago

Not doing anything. Just buying more when I can. I sold during the dot-com bust because I was young and stupid.

Since then, I've always just kept buying, and I've always come out ahead once the market recovered.

 

I followed a tutorial and trained my first LoRA today. I was surprised to see it was using both my GPUs - a 1080 Ti and a 3060 - but then it failed halfway through. I won't print the whole log, but here are the important parts that caught my attention:

More than one GPU was found, enabling multi-GPU training.

2023-09-17 10:35:32.654285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

Blocksparse is not available: the current GPU does not expose Tensor cores

[E ProcessGroupNCCL.cpp:455] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.

[E ProcessGroupNCCL.cpp:460] To avoid data inconsistency, we are taking the entire process down.

So my guess is the tensor errors are because of the GTX card, which doesn't have Tensor cores. I removed that card and everything ran fine with just the 3060. I imagine either card would work by itself, but the differences between the two may have been enough to cause data corruption.
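If anyone else runs into this, here's a quick sanity check I put together (my own sketch, not part of the tutorial) that lists each card's compute capability before launching multi-GPU training - Tensor cores need compute capability 7.0+, so a 1080 Ti (6.1) gets flagged while a 3060 (8.6) passes:

```python
import torch

# List every visible GPU and whether it exposes Tensor cores
# (compute capability 7.0 or higher, i.e. Volta/Turing and newer).
for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {name} (compute capability {major}.{minor}) - "
          f"Tensor cores: {'yes' if major >= 7 else 'no'}")
```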

So I'm wondering if anyone has this working with multiple RTX cards. Can it work across generations (3060 and 4060 Ti, etc.), or does it have to be the same generation? Thanks in advance.


As for the LoRA itself, it needs more work (denim boots)

 

Recipe: Just plain ol' turkey, injected with a mix of salt, Korean chili powder & Old Bay seasoning. I seasoned the outside and added a marinade of Italian dressing.

Equipment: 18" Weber kettle (AKA, the baby Weber)

Fuel: charcoal with hickory wood chunks.

image of raw turkey legs on a barbecue grill

 

In the grand scheme of things, the customer may have slightly more pull than the cashier ringing up their order, but it's the CEO and the board of directors that control the narrative. That's why we're getting bigger, less fuel-efficient vehicles, bigger and more fattening meal portions in restaurants, and bigger, less affordable houses.

 

cross-posted from: https://lib.lgbt/post/110426

AMD, which reports earnings next Tuesday, has finally brought 3D V-Cache to mobile. ASUS' ROG Strix SCAR 17 X3D will come with Nvidia's RTX 4090 mobile GPU.
