JohnBrownsBussy2

joined 2 years ago
[–] JohnBrownsBussy2@hexbear.net 1 points 10 months ago

I've seen numbers of 250,000 - 500,000 as an upper bound.

[–] JohnBrownsBussy2@hexbear.net 64 points 1 year ago (1 children)

If the Democrats believed this, and cared about democracy, then they wouldn't keep a dying man as their nominee, or they would be using his Supreme Court-affirmed absolute immunity to neutralize this existential threat. Let me know if they have a strategy other than gaslighting and blackmailing the people they claim to represent.

The logic there is that Reform is starting to eat away at the Conservatives, so he called the election ASAP to keep his party as the face of the opposition.

[–] JohnBrownsBussy2@hexbear.net 9 points 1 year ago (7 children)

Since console hardware is converging towards PC architecture (as opposed to specialized hardware stratified by make), and since the PC market is expanding (with a massive range of hardware at varying price points and capabilities), building a game against the limits of a specific console makes less and less sense compared to targeting a wide market. If you aren't locked into exclusivity for a single console, then it makes sense to target previous-generation consoles where possible in order to maximize the size of the potential market.

 

The lab is available to researchers 24/7, uses real human neurons and uses the Python programming language, creating a “dream bridge between biology and data scientists,” according to Jordan.

After accessing the provided login/password, researchers gain the ability to remotely send electrical signals to neurons and receive their responses. It is then the responsibility of researchers to devise optimal algorithms for controlling the behavior of the organoids.

Users can mimic memory function by using periodic electrical stimulation to reinforce synapses through repetition, thus making desired pathways stronger.

Researchers do this by training the organoids through a reward system. The organoids are rewarded with dopamine, the neurotransmitter responsible for pleasure (and addiction).

Meanwhile, as “punishment,” the organoids are exposed to chaotic stimuli, such as irregular electrical activity.

A live view of the biochips working in real-time can be found at www.finalspark.com/live.

Still think it's kinda neat, but with clearly disturbing implications.
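The reward/punishment training loop the article describes can be caricatured in a few lines of Python. This is a toy model only: the pathway-strength variable, the update rule, and the rates are all invented for illustration and have nothing to do with FinalSpark's actual interface.

```python
# Toy sketch of the training idea from the article: repeated "reward"
# stimulation strengthens a pathway, while "punishment" weakens it.
# All values and the update rule are invented for illustration.

def stimulate(weight, reward=True, rate=0.1):
    """One stimulation pulse: nudge pathway strength toward 1.0 on reward,
    decay it on punishment."""
    if reward:
        return weight + rate * (1.0 - weight)
    return weight * (1.0 - rate)

w = 0.2                          # initial pathway strength
for _ in range(20):              # periodic reinforcement through repetition
    w = stimulate(w, reward=True)
print(round(w, 3))               # strength approaches 1.0 (here ~0.903)
```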

[–] JohnBrownsBussy2@hexbear.net 10 points 1 year ago* (last edited 1 year ago)

I know a lot of people recommend rules-light games for beginners, but if your group has neither roleplaying experience nor theater experience, then something more structured and board-gamey may actually help. What I find is that players without any experience fall into choice paralysis in rules-light games, and having clearer structures can facilitate learning. It really does depend on what sort of experience your players are interested in. If I had to make a blind recommendation, I think the Free League "Year Zero Engine" games are a good candidate if you've never played a TTRPG before. They have the right balance of rules complexity for new players, good GM support, and high production values. There are plenty of different genres (and degrees of complexity) in the ecosystem.

Some examples that you may want to look at:

  • Mutant: Year Zero (post-apocalyptic adventure)
  • Dragonbane (fantasy adventure) (Technically not YZE, but it has similar levels of complexity)
  • Vaesen (mystery, folklore, horror)
  • ALIEN RPG (sci-fi, horror)
  • Tales from the Loop (coming of age, sci-fi adventure).

Most of these games have a starter kit with one-shot adventures that are meant to introduce players to the system and to roleplaying more generally.

 

Listening now. Will let you know if there are any Matt updates.

[–] JohnBrownsBussy2@hexbear.net 49 points 1 year ago* (last edited 1 year ago) (1 children)

This is actually not true. The use of AI media in the Chinese entertainment industry is just as pervasive as in the US, and probably more so, and Chinese universities and private firms are developing their own AI image/video generators at an equivalent pace to Western firms. For example, you have Chinese-developed SOTA DiT txt2img models like PixArt, Hunyuan and Lumina, and even SOTA video models like Kling. Tencent, Alibaba and ByteDance are putting out various models, optimizations and distillations in this space as well. Even back in April of last year, there were articles indicating a 70% decline in illustration jobs in sectors like video game development.

[–] JohnBrownsBussy2@hexbear.net 27 points 1 year ago* (last edited 1 year ago)

Red Markets is a game about ~~poverty~~ a zombie apocalypse where the US federal government has written off the western states and the people who live there as "The Loss" and attempts to enforce a sense of "normality" on the remainder of the country (although the game takes place in The Loss, and I don't think the eastern US is detailed much).

I use diffusion models a fair bit for VTT assets for TTRPGs. I've used LLMs a little bit for suggesting solutions for coding problems, and I do want to use one to mass produce customized cover letter drafts for my upcoming job hunt.

Neither model class is sufficiently competent for any zero-shot task yet, or at least has too high of a failure rate to run without active supervision.

As for use in a socialist society, even the current version of the technology has some potential for helping with workers' tasks. Obviously, it would need to be rationed per its actual environmental and energy costs, as opposed to the current underwriting by VCs. You'd also want to focus on specialized models for specific tasks, as opposed to less efficient generalized models.
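For the cover-letter use case, the boring half is automatable even before the LLM enters the picture: build one prompt per job posting from a template, then hand each prompt to whatever model you're using. Everything below (the template wording, the job fields, the idea of a send-to-model step) is a hypothetical sketch, not a real API.

```python
from string import Template

# Hypothetical sketch: batch-building cover-letter prompts from job data.
# The template text and job entries are made up for illustration.
PROMPT = Template(
    "Write a one-page cover letter for the $role position at $company. "
    "Emphasize these skills: $skills."
)

jobs = [
    {"role": "Data Analyst", "company": "Acme Corp", "skills": "Python, SQL"},
    {"role": "ML Engineer", "company": "Initech", "skills": "PyTorch, MLOps"},
]

prompts = [PROMPT.substitute(job) for job in jobs]
for p in prompts:
    print(p)  # each prompt would then go to a local or hosted LLM for drafting
```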

[–] JohnBrownsBussy2@hexbear.net 19 points 1 year ago (1 children)

I wouldn't be surprised if they're using a variant of their 2B or 7B Gemma models, as opposed to the monster Gemini.

[–] JohnBrownsBussy2@hexbear.net 25 points 1 year ago* (last edited 1 year ago)

The LLM is just summarizing/paraphrasing the top search results, and from these examples, it doesn't seem to be doing any self-evaluation with the LLM itself. Since this is free and they're pushing it out worldwide, I'm guessing the model they're using is very lightweight, and probably couldn't reliably evaluate results even if they prompted it to.

As for model collapse, I'd caution against buying too much into model collapse theory, since the paper that demonstrated it used a very specific case study (a model purely and repeatedly trained on its own uncurated outputs over multiple model "generations") that doesn't really occur in foundation model training.

I'll also note that "AI" isn't a monolith. Generally, (transformer) models are trained at different scales, with smaller models being less capable but faster and more energy-efficient, while larger flagship models are (at least, marketed as) more capable despite being slow, power- and data-hungry. Almost no models are trained in real time, "online," with direct input from users or the web; rather, they're trained on vast curated "offline" datasets by researchers/engineers. So AI doesn't get information directly from other AIs. Rather, model-trainers use traditional scraping tools or partner APIs to download data, do whatever data curation and filtering they do, and then train the models. Now, the trainers may not be able to filter out AI content, or they may intentionally use AI systems to generate variations on their human-curated data (synthetic data) because they believe it will improve the robustness of the model.

EDIT: Another way that models get dumber is that when companies like OpenAI or Google debut a model, they'll show off the full-scale, instruct-finetuned foundation model. However, since these monsters are incredibly expensive to run, they use the foundation models to train "distilled" models. For example, if you use ChatGPT (at least before GPT-4o), then you're using either GPT-3.5-Turbo (for free users) or GPT-4-Turbo (for premium users). Google has recently debuted its own Gemini Flash, which is the same concept. These distilled models are cheaper and faster, but also less capable (albeit potentially more capable than if you trained a model from scratch at that reduced scale).
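The distillation idea in that edit can be sketched numerically: the student is trained to match the teacher's temperature-softened output distribution rather than hard labels alone. The logits below are made-up numbers, and this shows only the loss term, not a full training loop.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])   # invented logits for illustration
student = np.array([3.5, 1.2, 0.4])
print(distillation_loss(teacher, student))  # shrinks as the student matches
```

Minimizing this term pulls the small model's whole output distribution toward the large model's, which is why a distilled model can outperform one trained from scratch at the same size.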

 

Very funny enshittification coming from OpenAI. In exchange for access to real-time data, it looks like they're planning to bake in partner ads. Wouldn't be surprised if this becomes just as pay-to-play as Google search.

Details from the pitch deck

The Preferred Publisher Program has five primary components, according to the deck.

First, it is available only to “select, high-quality editorial partners,” and its purpose is to help ChatGPT users more easily discover and engage with publishers’ brands and content.

Additionally, members of the program receive priority placement and “richer brand expression” in chat conversations, and their content benefits from more prominent link treatments. Finally, through PPP, OpenAI also offers licensed financial terms to publishers.

The financial incentives participating publishers can expect to receive are grouped into two buckets: guaranteed value and variable value.

Guaranteed value is a licensing payment that compensates the publisher for allowing OpenAI to access its backlog of data, while variable value is contingent on display success, a metric based on the number of users engaging with linked or displayed content.

The resulting financial offer would combine the guaranteed and variable values into one payment, which would be structured on an annual basis.

“The PPP program is more about scraping than training,” said one executive. “OpenAI has presumably already ingested and trained on these publishers’ archival data, but it needs access to contemporary content to answer contemporary queries.”
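The payment structure described in the deck is simple arithmetic: a fixed licensing fee plus a per-engagement component, rolled into one annual figure. All numbers below are invented; the deck discloses no actual rates.

```python
# Illustrative only: combining the "guaranteed value" licensing fee with the
# engagement-based "variable value" into a single annual payment.
# Every figure here is invented; the pitch deck gives no real numbers.

def annual_payment(guaranteed, rate_per_engagement, engagements):
    variable = rate_per_engagement * engagements   # display-success component
    return guaranteed + variable

total = annual_payment(1_000_000, 0.002, 50_000_000)
print(total)  # 1,000,000 guaranteed + 100,000 variable = 1100000.0
```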

 

Obviously done so Elon looks like less of a hypocrite in his quixotic OpenAI lawsuit. However, it is notable as the largest LLM to date with open, commercially licensed weights (314B params). It's way too large for any consumer to actually run, but having direct access to a model this big may benefit researchers looking into safety and bias in AI.
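A quick back-of-envelope calculation shows why 314B parameters is out of consumer reach: at 16-bit precision the weights alone occupy hundreds of gigabytes, before any activations or KV cache.

```python
# Rough memory footprint of a 314B-parameter model's weights at 16-bit precision.
params = 314e9          # 314B parameters
bytes_per_param = 2     # fp16 / bf16
gib = params * bytes_per_param / 1024**3
print(round(gib))       # ~585 GiB for the weights alone
```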

10
WKUK Old Folks Home (www.youtube.com)
submitted 1 year ago* (last edited 1 year ago) by JohnBrownsBussy2@hexbear.net to c/music@hexbear.net
 

RIP Trevor Moore

 

Due to a power issue, it looks like the lander may no longer have sufficient fuel to make a controlled landing on the moon. This was the lander that was set to carry human remains to the moon despite objections from the Navajo Nation. Hopefully, this discourages any future attempts at such a stunt, since instead of resting in a permanent mausoleum, your ashes may be stranded in orbit or scattered amongst the moon dust if the thing crashes.

 

I am still fascinated with The Product, even if I have no intention of purchasing or playing it. Still listening, but some interesting tidbits so far are Brace's reiteration that he isn't (legally) making any money on the game, and that one thing that inflated the cost of the game was the legal fees for having it reviewed to make sure it wouldn't be actionable in court. He also confirmed that courting negative media attention was 100% intentional and part of the game's marketing strategy.

 

Hey folks,

I've put out feelers before, but it's the new year and I wanted to offer to run some tabletop RPGs. Right now, I'm feeling pretty open system-wise (although I'd prefer the lighter-weight side of the hobby) and can help figure out scheduling at some point. I'm also not necessarily looking for a long-term commitment, but thought it might be fun to meet some folks and try out some games together.

78
submitted 2 years ago* (last edited 2 years ago) by JohnBrownsBussy2@hexbear.net to c/chapotraphouse@hexbear.net
 

Looks like they're scraping up some more copies somehow, but the game literally sold out within an hour or so of going live. It must have either been a very small print run, or there are more irony-pilled folks willing to part with ~$80 for what looks like a kinda jank roll-and-move game than I thought.

EDIT: Apparently the additional copies have also sold out.

EDIT2: If you actually want the thing, it looks like they are accepting pre-orders for another printing.

 

Pretty neat/horrifying.

 

Main.

 

Lol.
