Lower GPT-4 cap?

I just got usage capped on GPT-4 after 20 messages – when I clicked “Learn More” on the message, I saw:

Thanks for your interest in GPT-4!

To give every Plus user a chance to try the model, we’re currently dynamically adjusting usage caps for GPT-4 as we learn more about demand and system performance.

We’re also actively exploring ways for ChatGPT Plus subscribers to use GPT-4 in a less constrained manner; this may be in the form of a new subscription level for higher-level GPT-4 usage, or something else.

Please fill out this form if you’d like to stay posted.

Now admittedly I paste massive chunks of code into GPT-4 as part of my daily workflow, and it’s understandable if they want to make the amount of usage users get match the price they’re paying… but I was still a little taken aback by the customer-facing bullshit of that whole “To give every Plus user a chance to try the model” and “as we learn more”. Like bro, if you feel like setting a limit based on use, then just tell me what the limit is and how I can get more if I need it.

Anyone else run into this? Anyone have a good alternative (besides just sending it all to the platform API and paying out the ass)? GPT-4 is actually capable with code in my experience in a way that 3.5 and Copilot are not.

  • viking@infosec.pub · 7 months ago

    With the API I’m paying less than 10% of the subscription fee.

    Just how massive are we talking about?

    • Grimy@lemmy.world · 7 months ago

      Don’t you need a subscription to use the gpt-4 API?

      Last I checked, GPT-4 is $0.02 per 1,000 tokens. Every message in a chat also sends along a summary of the whole convo plus the most recent messages as context. I feel like that busts the 10% pretty quickly if it’s intensive daily use.
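
      To illustrate that point, here’s a rough sketch of why resending the conversation adds up; the $0.02/1K rate and the message sizes are assumptions for the sake of the example, not OpenAI’s actual pricing:

      ```python
      # Toy estimate: if each new request re-sends the conversation so far as
      # context, input tokens (and cost) grow with every turn.
      PRICE_PER_1K = 0.02          # assumed ~$0.02 per 1K tokens, as quoted above
      TOKENS_PER_MESSAGE = 500     # assumed average size of a message or reply

      def chat_cost(turns: int) -> float:
          total, history = 0.0, 0
          for _ in range(turns):
              history += TOKENS_PER_MESSAGE            # your new message joins the history
              total += history / 1000 * PRICE_PER_1K   # whole history is billed as input
              history += TOKENS_PER_MESSAGE            # the model's reply joins it too
          return total

      print(f"20-turn chat: ~${chat_cost(20):.2f}")    # far more than 20 isolated requests
      ```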

      • viking@infosec.pub · 7 months ago

        The tokens don’t have a fixed price; the 2 cents is an average that depends on the complexity. I’m using it moderately almost every day and have rarely paid more than $2/month.

        And no subscription needed, it’s prepaid.

      • mozz@mbin.grits.dev (OP) · 7 months ago

        You need a subscription either way

        GPT-4 costs from $0.01 to $0.12 per 1,000 tokens depending on the model, context size, and whether it’s input or output. But regardless of that, it’s not like a chat-style conversation where you have tons of small messages that each depend on the full 32k (or whatever) of context; each individual message usually has an explicit context for the stuff you want to tell it, and you send no more than 50–100 of them per day to implement your thing, at most. So that’s like 50 cents to a few dollars a day even at an obscene level of usage. It might be more than $20/month in total, but more likely less.

      • habanhero@lemmy.ca · 7 months ago

        You don’t need a subscription. You just buy credits and it’s pay-as-you-go.

        Source: me as that’s how I’m using it. No subscription fee / recurring payment needed.

    • brbposting@sh.itjust.works · 7 months ago

      Do you use DALLE via API?

      TypingMind with a lifetime license works beautifully for running GPT-4 Turbo and Claude 3 Opus simultaneously on the cheap when it comes to text. It can also take image uploads. Generating images would be interesting; I don’t believe it can do that.

    • AwkwardLookMonkeyPuppet@lemmy.world · 7 months ago

      So you just set up your own interface and then make requests there? I did set up a MERN stack app for chatting with it as an experiment, but I never did anything else with it after that.

      • viking@infosec.pub · 7 months ago

        Not even that; I’m using Chatbox, where you can simply add the API key in the settings and be done with it.

        The software integrates a bunch of other AIs, some of them in Chinese, but I’ve removed most of them from the quick-access menu and really only work with GPT.
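
        For anyone who does want to roll their own interface instead, the direct API call is only a few lines. A minimal sketch, assuming the official openai Python package (v1+) and an OPENAI_API_KEY environment variable:

        ```python
        # Minimal pay-as-you-go GPT-4 request via the OpenAI API (no ChatGPT Plus needed).
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        response = client.chat.completions.create(
            model="gpt-4",  # model name assumed here; use whatever your account has access to
            messages=[
                {"role": "system", "content": "You are a helpful coding assistant."},
                {"role": "user", "content": "Explain what this function does:\n\ndef f(x): return x * 2"},
            ],
        )

        print(response.choices[0].message.content)
        ```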

    • mozz@mbin.grits.dev (OP) · 7 months ago

      This morning was 177 kB in and out; call it 2/3 input and 1/3 output, which would mean roughly:

      118k bytes input ≈ 29k tokens = 29 cents
      59k bytes output ≈ 15k tokens = 45 cents

      I think you may be correct in your assessment
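
      For reference, that back-of-the-envelope math as a quick script, assuming ~4 bytes per token and GPT-4 Turbo-style rates of $0.01/1K input and $0.03/1K output (the rates are assumptions based on the figures in this thread):

      ```python
      # Re-running the estimate above: bytes -> approximate tokens -> dollars.
      BYTES_PER_TOKEN = 4                    # rough rule of thumb, not exact
      INPUT_RATE, OUTPUT_RATE = 0.01, 0.03   # assumed $ per 1K tokens

      input_bytes, output_bytes = 118_000, 59_000
      input_tokens = input_bytes / BYTES_PER_TOKEN     # ~29.5k tokens
      output_tokens = output_bytes / BYTES_PER_TOKEN   # ~14.8k tokens

      cost = input_tokens / 1000 * INPUT_RATE + output_tokens / 1000 * OUTPUT_RATE
      print(f"~${cost:.2f} for the morning")           # roughly $0.74, i.e. 29 + 45 cents
      ```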

  • AwkwardLookMonkeyPuppet@lemmy.world · 7 months ago

    So you pay for it and you received that message? I pay for it and paste hundreds of lines of code into it, making request after request for hours at a time and have never received that message. Maybe the system was overloaded? I’ve found that when I try using it through the app it will error out sometimes, and take a long time other times, but I’ve never encountered that on a desktop.

  • DavidGarcia@feddit.nl · 7 months ago

    I unsubscribed because they blasted me with 5 min “are you a robot” tests for every request.

    Now I’m using Perplexity, which sucks too, but at least it’s usable.

    • mozz@mbin.grits.dev (OP) · 7 months ago

      Don’t forget “we have detected suspicious activity coming from your system” and “there was an error constructing a response” (or whatever the wording is), which make up roughly one-third and one-third, respectively, of the responses I get from it, alongside the one-third that are actually successful.