Poking around ChatGPT's network requests, I noticed that the /backend-api/models response includes details about each model, including its maximum token count.
For me:
- GPT-3.5: 8191
- GPT-4: 4095
- GPT-4 with Code Interpreter: 8192
- GPT-4 with Plugins: 8192
These numbers seem accurate: I've had content that was too long for GPT-4 but was accepted by GPT-4 with Code Interpreter. The output quality feels about the same, too.
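If you want to check your own account, here is a rough sketch you can paste into the devtools console on the ChatGPT tab. It assumes the web app's /api/auth/session endpoint still returns an accessToken that works as a bearer token for /backend-api/* (that's just what I see in my network tab, not a documented API):

// Grab the session token the web app uses, then hit the models endpoint with it.
const session = await fetch("/api/auth/session").then(r => r.json());
const models = await fetch("/backend-api/models", {
  headers: { Authorization: `Bearer ${session.accessToken}` },
}).then(r => r.json());
console.log(models);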
Here's the response I get from /backend-api/models, as a Plus subscriber:
{
  "models": [
    {
      "slug": "text-davinci-002-render-sha",
      "max_tokens": 8191,
      "title": "Default (GPT-3.5)",
      "description": "Our fastest model, great for most everyday tasks.",
      "tags": [
        "gpt3.5"
      ],
      "capabilities": {}
    },
    {
      "slug": "gpt-4",
      "max_tokens": 4095,
      "title": "GPT-4",
      "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
      "tags": [
        "gpt4"
      ],
      "capabilities": {}
    },
    {
      "slug": "gpt-4-code-interpreter",
      "max_tokens": 8192,
      "title": "Code Interpreter",
      "description": "An experimental model that can solve tasks by generating Python code and executing it in a Jupyter notebook.\nYou can upload any kind of file, and ask model to analyse it, or produce a new file which you can download.",
      "tags": [
        "gpt4",
        "beta"
      ],
      "capabilities": {},
      "enabled_tools": [
        "tools2"
      ]
    },
    {
      "slug": "gpt-4-plugins",
      "max_tokens": 8192,
      "title": "Plugins",
      "description": "An experimental model that knows when and how to use plugins",
      "tags": [
        "gpt4",
        "beta"
      ],
      "capabilities": {},
      "enabled_tools": [
        "tools3"
      ]
    },
    {
      "slug": "text-davinci-002-render-sha-mobile",
      "max_tokens": 8191,
      "title": "Default (GPT-3.5) (Mobile)",
      "description": "Our fastest model, great for most everyday tasks.",
      "tags": [
        "mobile",
        "gpt3.5"
      ],
      "capabilities": {}
    },
    {
      "slug": "gpt-4-mobile",
      "max_tokens": 4095,
      "title": "GPT-4 (Mobile, V2)",
      "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
      "tags": [
        "gpt4",
        "mobile"
      ],
      "capabilities": {}
    }
  ],
  "categories": [
    {
      "category": "gpt_3.5",
      "human_category_name": "GPT-3.5",
      "subscription_level": "free",
      "default_model": "text-davinci-002-render-sha",
      "code_interpreter_model": "text-davinci-002-render-sha-code-interpreter",
      "plugins_model": "text-davinci-002-render-sha-plugins"
    },
    {
      "category": "gpt_4",
      "human_category_name": "GPT-4",
      "subscription_level": "plus",
      "default_model": "gpt-4",
      "code_interpreter_model": "gpt-4-code-interpreter",
      "plugins_model": "gpt-4-plugins"
    }
  ]
}
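Continuing from the console snippet above (reusing its models variable), this boils the response down to the slug/max_tokens pairs I listed at the top:

// Summarise just the token limits for a quick side-by-side comparison.
const limits = models.models.map(m => ({
  slug: m.slug,
  title: m.title,
  max_tokens: m.max_tokens,
}));
console.table(limits);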
Anyone seeing anything different? I haven't really seen this compared anywhere.