EarlTurlet

joined 1 year ago
[–] [email protected] 6 points 3 months ago (1 children)

You may be fewer irritated by this with age

[–] [email protected] 10 points 3 months ago (8 children)

Misusing words like "setup" vs "set up", or "login" vs "log in". "Anytime" vs "any time" also steams my clams.

[–] [email protected] 11 points 8 months ago

I use Fossil for all of my personal projects. Having a wiki and bug tracker built-in is really nice, and I like the way repositories sync. It's perfect for small teams that want everything, but don't want to rely on a host like GitHub or set up complicated software themselves.

[–] [email protected] 106 points 9 months ago (16 children)

I had this set up the day it was available in my area. Never got an alert. I find it difficult to believe I wasn't "exposed" during the pandemic, so I assume this didn't really provide much value.

[–] [email protected] 10 points 10 months ago (1 children)

Google cases always seem hit-or-miss. I just buy the same Spigen case for every phone. I know I like it.

[–] [email protected] 13 points 10 months ago

This is a good reason to use Dvorak

[–] [email protected] 3 points 11 months ago

But I'm bi-testual

[–] [email protected] 6 points 11 months ago

Looks like he realized he left the oven on

[–] [email protected] 2 points 11 months ago
Post body, if you don't like le' Twitter:

After 20 incredible years, I have decided to take a step back and work on the next chapter of my career. As I take a moment and think about all we have done together, I want to thank the millions of gamers around the world who have included me as part of their lives. (1/3)

Also, thanks to Xbox team members for trusting me to have a direct dialogue with our customers. The future is bright for Xbox and as a gamer, I am excited to see the evolution.
Thank you, and I’ll see you online. Larry Hryb (2/3)

P.S. The official Xbox Podcast will be taking a hiatus this Summer and will come back in a new format. (3/3)

Poking around the network requests for ChatGPT, I've noticed the /backend-api/models response includes information for each model, including the maximum tokens.

For me:

  • GPT-3.5: 8191
  • GPT-4: 4095
  • GPT-4 with Code Interpreter: 8192
  • GPT-4 with Plugins: 8192

These numbers seem accurate: I've had content that was too long for GPT-4 but was accepted by GPT-4 with Code Interpreter. The output quality feels about the same, too.

Here's the response I get from /backend-api/models, as a Plus subscriber:

{
    "models": [
        {
            "slug": "text-davinci-002-render-sha",
            "max_tokens": 8191,
            "title": "Default (GPT-3.5)",
            "description": "Our fastest model, great for most everyday tasks.",
            "tags": [
                "gpt3.5"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4",
            "max_tokens": 4095,
            "title": "GPT-4",
            "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
            "tags": [
                "gpt4"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4-code-interpreter",
            "max_tokens": 8192,
            "title": "Code Interpreter",
            "description": "An experimental model that can solve tasks by generating Python code and executing it in a Jupyter notebook.\nYou can upload any kind of file, and ask model to analyse it, or produce a new file which you can download.",
            "tags": [
                "gpt4",
                "beta"
            ],
            "capabilities": {},
            "enabled_tools": [
                "tools2"
            ]
        },
        {
            "slug": "gpt-4-plugins",
            "max_tokens": 8192,
            "title": "Plugins",
            "description": "An experimental model that knows when and how to use plugins",
            "tags": [
                "gpt4",
                "beta"
            ],
            "capabilities": {},
            "enabled_tools": [
                "tools3"
            ]
        },
        {
            "slug": "text-davinci-002-render-sha-mobile",
            "max_tokens": 8191,
            "title": "Default (GPT-3.5) (Mobile)",
            "description": "Our fastest model, great for most everyday tasks.",
            "tags": [
                "mobile",
                "gpt3.5"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4-mobile",
            "max_tokens": 4095,
            "title": "GPT-4 (Mobile, V2)",
            "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
            "tags": [
                "gpt4",
                "mobile"
            ],
            "capabilities": {}
        }
    ],
    "categories": [
        {
            "category": "gpt_3.5",
            "human_category_name": "GPT-3.5",
            "subscription_level": "free",
            "default_model": "text-davinci-002-render-sha",
            "code_interpreter_model": "text-davinci-002-render-sha-code-interpreter",
            "plugins_model": "text-davinci-002-render-sha-plugins"
        },
        {
            "category": "gpt_4",
            "human_category_name": "GPT-4",
            "subscription_level": "plus",
            "default_model": "gpt-4",
            "code_interpreter_model": "gpt-4-code-interpreter",
            "plugins_model": "gpt-4-plugins"
        }
    ]
}

Anyone seeing anything different? I haven't really seen this compared anywhere.
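If anyone wants to compare, here's a quick sketch of how I'd diff the limits: paste your own `/backend-api/models` JSON (captured from the browser's network tab while logged in) into the string below and print slug → max_tokens. The `RESPONSE` sample here is trimmed to just the fields being compared; `token_limits` is my own helper name, not part of any API.

```python
import json

# Trimmed sample of the /backend-api/models response from the post above.
# Replace this with the JSON you capture from your own network tab.
RESPONSE = """
{
    "models": [
        {"slug": "text-davinci-002-render-sha", "max_tokens": 8191, "title": "Default (GPT-3.5)"},
        {"slug": "gpt-4", "max_tokens": 4095, "title": "GPT-4"},
        {"slug": "gpt-4-code-interpreter", "max_tokens": 8192, "title": "Code Interpreter"},
        {"slug": "gpt-4-plugins", "max_tokens": 8192, "title": "Plugins"}
    ]
}
"""

def token_limits(raw: str) -> dict[str, int]:
    """Map each model slug to its advertised max_tokens."""
    return {m["slug"]: m["max_tokens"] for m in json.loads(raw)["models"]}

# Print the limits, largest first, for easy side-by-side comparison.
for slug, tokens in sorted(token_limits(RESPONSE).items(), key=lambda kv: -kv[1]):
    print(f"{slug}: {tokens}")
```

Running it against two captures and diffing the output should make any per-account differences obvious.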
