EarlTurlet

joined 2 years ago
[–] EarlTurlet@lemmy.zip 4 points 2 months ago (1 children)

Is this the one that can use mouth noises like tongue clicking to perform actions? I saw a video a long time ago of someone writing code via voice software, and it had some kind of neat way of using sounds other than words to control things. I haven't been able to figure out what it was.

[–] EarlTurlet@lemmy.zip 5 points 2 months ago (1 children)

A belt and a doorknob

[–] EarlTurlet@lemmy.zip 4 points 4 months ago

The "Target experience" for me has always been "find product, wait 30 minutes for a cashier". Couldn't pay me to shop there anymore. Such a hostile environment.

[–] EarlTurlet@lemmy.zip 2 points 4 months ago

I usually get bored with games like this, but went in blind since it was on Game Pass and now it's the only game I play. I love just roaming around and doing side quests.

[–] EarlTurlet@lemmy.zip 6 points 1 year ago (1 children)

You may be fewer irritated by this with age

[–] EarlTurlet@lemmy.zip 10 points 1 year ago (8 children)

Misusing words like "setup" vs "set up", or "login" vs "log in". "Anytime" vs "any time" also steams my clams.

[–] EarlTurlet@lemmy.zip 11 points 2 years ago

I use Fossil for all of my personal projects. Having a wiki and bug tracker built-in is really nice, and I like the way repositories sync. It's perfect for small teams that want everything, but don't want to rely on a host like GitHub or set up complicated software themselves.

[–] EarlTurlet@lemmy.zip 104 points 2 years ago (16 children)

I had this set up the day it was available in my area. Never got an alert. I find it difficult to believe I wasn't "exposed" during the pandemic, so I assume this didn't really provide much value.

[–] EarlTurlet@lemmy.zip 9 points 2 years ago (1 children)

Google cases always seem hit-or-miss. I just buy the same Spigen case for every phone. I know I like it.

[–] EarlTurlet@lemmy.zip 12 points 2 years ago

This is a good reason to use Dvorak

[–] EarlTurlet@lemmy.zip 3 points 2 years ago

But I'm bi-testual

 

The body, if you don't want to click through to Twitter:

After 20 incredible years, I have decided to take a step back and work on the next chapter of my career. As I take a moment and think about all we have done together, I want to thank the millions of gamers around the world who have included me as part of their lives. (1/3)

Also, thanks to Xbox team members for trusting me to have a direct dialogue with our customers. The future is bright for Xbox and as a gamer, I am excited to see the evolution.
Thank you and I'll see you online. Larry Hryb (2/3)

P.S. The official Xbox Podcast will be taking a hiatus this Summer and will come back in a new format. (3/3)

 

Poking around ChatGPT's network requests, I've noticed the /backend-api/models response includes information for each model, including its maximum token count.

For me:

  • GPT-3.5: 8191
  • GPT-4: 4095
  • GPT-4 with Code Interpreter: 8192
  • GPT-4 with Plugins: 8192

These limits seem accurate: I've had content that was too long for GPT-4 but was accepted by GPT-4 with Code Interpreter. The output quality feels about the same, too.

Here's the response I get from /backend-api/models, as a Plus subscriber:

{
    "models": [
        {
            "slug": "text-davinci-002-render-sha",
            "max_tokens": 8191,
            "title": "Default (GPT-3.5)",
            "description": "Our fastest model, great for most everyday tasks.",
            "tags": [
                "gpt3.5"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4",
            "max_tokens": 4095,
            "title": "GPT-4",
            "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
            "tags": [
                "gpt4"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4-code-interpreter",
            "max_tokens": 8192,
            "title": "Code Interpreter",
            "description": "An experimental model that can solve tasks by generating Python code and executing it in a Jupyter notebook.\nYou can upload any kind of file, and ask model to analyse it, or produce a new file which you can download.",
            "tags": [
                "gpt4",
                "beta"
            ],
            "capabilities": {},
            "enabled_tools": [
                "tools2"
            ]
        },
        {
            "slug": "gpt-4-plugins",
            "max_tokens": 8192,
            "title": "Plugins",
            "description": "An experimental model that knows when and how to use plugins",
            "tags": [
                "gpt4",
                "beta"
            ],
            "capabilities": {},
            "enabled_tools": [
                "tools3"
            ]
        },
        {
            "slug": "text-davinci-002-render-sha-mobile",
            "max_tokens": 8191,
            "title": "Default (GPT-3.5) (Mobile)",
            "description": "Our fastest model, great for most everyday tasks.",
            "tags": [
                "mobile",
                "gpt3.5"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4-mobile",
            "max_tokens": 4095,
            "title": "GPT-4 (Mobile, V2)",
            "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
            "tags": [
                "gpt4",
                "mobile"
            ],
            "capabilities": {}
        }
    ],
    "categories": [
        {
            "category": "gpt_3.5",
            "human_category_name": "GPT-3.5",
            "subscription_level": "free",
            "default_model": "text-davinci-002-render-sha",
            "code_interpreter_model": "text-davinci-002-render-sha-code-interpreter",
            "plugins_model": "text-davinci-002-render-sha-plugins"
        },
        {
            "category": "gpt_4",
            "human_category_name": "GPT-4",
            "subscription_level": "plus",
            "default_model": "gpt-4",
            "code_interpreter_model": "gpt-4-code-interpreter",
            "plugins_model": "gpt-4-plugins"
        }
    ]
}
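If anyone wants to compare their own numbers, here's a minimal sketch that flattens the response into a slug → max_tokens table. It just parses a trimmed copy of the JSON above; the names SAMPLE and max_tokens_by_slug are mine, not anything from the API. Against a live session you'd feed it the raw /backend-api/models response body instead.

```python
import json

# Trimmed copy of the /backend-api/models response shown above.
SAMPLE = """
{
  "models": [
    {"slug": "text-davinci-002-render-sha", "max_tokens": 8191, "title": "Default (GPT-3.5)"},
    {"slug": "gpt-4", "max_tokens": 4095, "title": "GPT-4"},
    {"slug": "gpt-4-code-interpreter", "max_tokens": 8192, "title": "Code Interpreter"},
    {"slug": "gpt-4-plugins", "max_tokens": 8192, "title": "Plugins"}
  ]
}
"""

def max_tokens_by_slug(payload: str) -> dict:
    """Map each model slug to its advertised max_tokens."""
    data = json.loads(payload)
    return {m["slug"]: m["max_tokens"] for m in data["models"]}

for slug, limit in max_tokens_by_slug(SAMPLE).items():
    print(f"{slug}: {limit}")
```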

Anyone seeing anything different? I haven't really seen this compared anywhere.
