"We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users."
> We understand that a subset of these users were not aware that this was against our ToS and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users.
It feels like a good default for this would be something similar to video game bans: you get a "vacation" from the service with a clear reason why, but can return to using it later. Given how much people depend on cloud services, permanent bans for what could be honest mistakes or simple ignorance would be insane.
Getting your Google Workspace account nuked because an employee hooked their company Gemini account to OpenClaw would certainly be a novel business risk.
Google has gigantic power over its users. Consider that, for some reason, Google banned your Gmail account, which you use for a large number of logins to different essential services.
I posted an "Ask HN" around this a while back. I think we will see a lot more of it and we will be hurting legitimate users. I like your temp ban idea but I doubt they would give reasons why.
That's not what support has been telling their $250 a month customers.
we are unable to reverse the suspension [1]
I get the need to move fast to stabilise the service, but similar to an outage, it doesn't take much to put a banner on the support page to let customers know bans are temporary until they can come up with a better way of educating customers. Furthermore, it doesn't take much to instruct ban appeal teams to tell customers all bans are under review, no matter the reason, to buy time to separate Claw bans from legitimate abuse bans that need to be upheld.
The fact that users are paying $250 for a service they haven't been able to use for at least the last 11 days kills any sympathy I had for Google's need to "quickly shut off access"; it's like they just sat on their hands until the social media storm hit flash-point.
After 11 days there still isn't even an official statement, just a panicked tweet from a dev likely also getting hammered on socials, goodness knows how long before accounts are restored and credits issued.
Even the original Google employee in the forum thread just ghosted everyone there after the initial "we're looking into it".
come on, using a monthly paid subscription to obtain auth tokens to run Claw bots is quite obviously against the T&Cs. You need to pay API prices for that. I am sure 100% of those users knew they were doing something wrong but proceeded anyway.
While I see the point of limited capacity, it also shows that Google did not plan for rate limiting / throttling of high-usage customers. This is ALWAYS the problem with flat-rate pricing models: 2% of your customers burn 80+% of your capacity. We saw it in former times with DSL, not too long ago with mobile, and now with AI subscriptions. If you want to provide a "good" service for all customers, better to implement (and not only write into your T&Cs) a fair usage model which (fairly) penalises heavy users.
Good on them that they want to provide a way to bring back customers on board that were burned / surprised by their move.
BUT: The industry is missing a significant long-term revenue opportunity here. There is obviously latent demand, and Claws have great product-market fit. Why on earth would you deactivate customers that show high usage? Inform them that you have another product (API keys) for them, and maybe threaten throttling. But don't throw them overboard! Find a solution that makes commercial sense for both sides (protection from API bill shock for the customer / predictable token usage for the provider).
What we're seeing right now is the complete opposite: banning customers who might even rely on their account. Feels like the accountants have won this round, but did not expect the PR backlash and possible Streisand effect...
Yeah this is a massive fuckup on Google's part and they are taking it out on their customers as per usual.
It's not hard to define a quota system and enforce it. If the quota is too high then reduce the quota. If people are abusing the quota with automated requests then detect that and rate limit those users.
If I'm paying $200+ a month I should be able to saturate Google with requests. It's up to Google to enforce their policies via backpressure so that they don't get overloaded.
Then again this is the same company that suspended people's gmail because they sent too many emotes in YouTube chat. Sadge.
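A quota system like the one described above is not exotic: a per-user token bucket that answers with backpressure (an HTTP 429) instead of a ban is the textbook building block. A minimal sketch, purely illustrative and not Google's actual implementation:

```python
import time

class TokenBucket:
    """Per-user token bucket: allows bursts up to `capacity`,
    refills at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # caller should back off (e.g. HTTP 429), not be banned

bucket = TokenBucket(rate=1.0, capacity=5)
print(all(bucket.allow() for _ in range(5)))  # burst of 5 succeeds: True
print(bucket.allow())                         # sixth immediate request throttled: False
```

The point of the comment stands either way: whether the real system needs distributed counters and abuse heuristics, the *shape* of the response (throttle, then recover) is a solved problem.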
> Google did not plan for rate limiting / throttling of high usage customers
Antigravity has very low daily and weekly quotas unless you pay for their most expensive plan, so it means these people drop $200+ a month to run these bots, insanity
There is consensus on r/gemini that the window is a matter of hours now, not 24h.
I subscribe to the AI Pro plan. I knew of a published limit of 100 Pro prompts per day, but before this month it seemed they were relaxed about it. I have now started to be rate limited on Pro when nowhere near that quota, due to too many prompts within a short time window (probably due to short prompts and not aggregating my questions). So now I use the Thinking (basically Flash) model and bump up to Pro for certain queries only.
There will always be a minority who spoil it for the majority.
I don't know why you rely on some Reddit consensus when you can just open Gemini CLI and enter /stats to get the confirmation that you get 200 Pro requests per 24h, and the counter starts when you do your first request.
> [...] I must be transparent and inform you that, in accordance with Google’s policy, this situation falls under a zero tolerance policy, and we are unable to reverse the suspension. [...]
* User uses Google OAuth to integrate their OpenClaw
* User gets banned from using Google AI services with no warning
* User still gets charged
If you go backwards, getting charged for services you can't access is rough. I feel sorry for those who are deeply integrated into Google services or getting banned on their main accounts. It's not a great situation.
Also, getting banned without warning is rough as well. I wonder if the situation will be different for business accounts as opposed to what seem like personal accounts?
The ban itself seems fair though; Google is allowed to restrict usage of their services. Even though it's probably not developer-friendly, it's within their rights to do so.
I guess there's some level of post mortem to do on the openclaw side too.
* Why did openclaw allow Google anti gravity logins?
* The plugin is literally called "google-antigravity-auth", why didn't that give the signal to the maintainers?
* Why didn't the maintainers of an integration project do due diligence checks on the terms of service of everything they're integrating with?
> * Why did openclaw allow Google anti gravity logins?
OpenClaw went from virtually unheard of to a sensation in a couple weeks. There was intense commit activity and the main author bragged about not even reading the code himself. It was all heavily AI driven and moving at an extreme rate. Everyone was competing to get their commits in because they wanted to be a part of it.
The entire project was a fast and furious experiment. Nobody was stopping to think if something was a good idea or not when someone published a plugin for using this endpoint. People just thought “cool!” and installed it.
That's how AI is supposed to be used, no? That's what the providers advertise - it increases development speed, a lot, it replaces devs and so on.
But I guess it's only ok when you work on regular joe facing projects, where the consequences of bugs are on powerless users. If the consequences are on Google, well, that's not acceptable now is it?
> Also, getting banned without warning is rough as well.
Agreed. The lesson is: do not become dependent on Google. Ever.
(Unfortunately I still use YouTube and a Chromium-based browser. Long-term I hope to find alternatives to both. Google Search I no longer need because Google already ruined it a few years ago; the quality now is just horrible. I cannot find anything useful with it anymore.)
What google search alternative have you found?
I'm trying out Ecosia, DuckDuckGo and Brave Search, but I find their search results even worse, so by the second query I tend to bang back to Google...
Google Search is over. There may not be a free alternative; they've lost the arms race between phone-number-incrementing ad pages, AI spew, and rank hackers.
Maybe we have to pay for search? I am experimenting with paying Proton another $10/month for a paid lumo+ account. lumo+ is a private chat like ChatGPT that uses a strong Mistral model, plus privacy-preserving web_search LLM tooling under the hood. For about a month I've just used lumo+ with the web_search tool enabled. I may not do this forever, but for now I like having just one tool to use. Note: I still use Gemini for technical work, but lumo+ for day-to-day chat and web search.
In the past I just used DuckDuckGo for most searches, occasionally Google. That also worked well for me.
You make it sound like a significant amount is going to the Kremlin, but I assume the API cost for using Yandex from Kagi is negligible, and only a fraction of that goes to the Russian government. Isn't this more of a symbolic request not to cooperate with Russian companies?
Agree. Historically you would just not get any good results for a search and try Google, but these days it's more likely there just aren't any good results for your search, period, regardless of engine. Funnily enough, that's when I've had better results asking ChatGPT or similar, because I'm typically after some sort of consensus or summary in those situations.
It doesn't seem fair at all; though I'm glad to see it's not as bad as I feared (yet?).
> Hoping for some transparency, I left a single, polite comment asking for clarification on why the update was removed. Surprisingly, my forum account was banned shortly after posting that question.
Have you seen the code of OpenClaw?
It would not surprise me if there is a mistake in there somewhere that causes the bot to hammer Google auth for the refresh token in a very identifiable manner, because no one in that repo is bothering to look at the code before merging. Moved fast, broke things.
I don't understand step 1. OAuth client applications have to be registered in GCP, right? They have to request specific scopes for specific APIs, and there is a review process before they can be used by the public. Did none of that happen for the Open Claw client? How is it the users' fault for clicking a "Sign in with Google" button? And if there was a mistake, why not ban the whole client?
I could see a problem with logging into Antigravity then exfiltrating the tokens to use somewhere else... But that doesn't sound like what happened. (And then how would they know?)
I haven't used Open Claw, so what else am I missing to make this make sense?
To my understanding, OpenClaw pretends to be Antigravity by using the Antigravity OAuth client ID (it doesn't have its own), and then takes the token Google returns and uses it with OpenClaw instead.
When I first tried OpenClaw and chose Google Sign-In, I noticed the window appeared saying "Sign into Google Antigravity" with a Google official mark, and a warning it shouldn't be used to sign into anything besides official Google apps. I closed it immediately and uninstalled OpenClaw as this was suspicious to me, and it was a relatively new project then.
It amazes me that the maintainer(s) allowed something like this...
If this is like the flow it uses for a Codex / ChatGPT subscription, it doesn't even register a handler: the redirect opens as a 404 in your browser, and there are instructions on copying the token from the query string!
Antigravity runs on your machine, the secret is there for the taking.
This is true of all OAuth client logins in this way, it's why the secret doesn't mean the same thing as it does with server to server login, you can never fully trust the client.
OAuth impersonation is nothing new, it's a well known attack vector that can't really be worked around (without changing the UX), the solution is instead terms of service, policies, and enforcement.
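To make the impersonation point concrete: in a native-app OAuth flow, the client ID (and any "secret") ships inside the binary, so any other program can construct the identical authorization request. A minimal sketch of that flow, where every endpoint, client ID, and scope is a placeholder, not Google's actual values:

```python
import secrets
import urllib.parse

AUTH_ENDPOINT = "https://auth.example.com/authorize"   # placeholder endpoint
CLIENT_ID = "public-client-id-shipped-in-the-binary"   # readable by anyone with the app

def build_auth_url(redirect_uri: str) -> str:
    # A desktop app cannot keep CLIENT_ID private, so nothing stops another
    # program from building the exact same request and harvesting the token.
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": redirect_uri,        # e.g. a local loopback callback
        "scope": "model.inference",          # placeholder scope
        "state": secrets.token_urlsafe(16),  # CSRF protection
    }
    return f"{AUTH_ENDPOINT}?{urllib.parse.urlencode(params)}"

print(build_auth_url("http://localhost:8080/cb"))
```

PKCE protects the code exchange against interception, but it does not authenticate *which* program is running the flow, which is exactly why the comment above is right that the remedy is terms of service and enforcement rather than a technical fix.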
> It amazes me that the maintainer(s) allowed something like this...
Really? In today's landscape this is the part that surprises you? I'm seeing these types of decisions repeatedly and typically my only question is do they not know any better, or intentionally not care?
> Our investigation specifically confirmed that the use of your credentials within the third-party tool “open claw” for testing purposes constitutes a violation of the Google Terms of Service [1]. This is due to the use of Antigravity servers to power a non-Antigravity product.
> I must be transparent and inform you that, in accordance with Google’s policy, this situation falls under a zero tolerance policy, and we are unable to reverse the suspension. I am truly sorry to share this difficult news with you.
Isn't the reason companies are doing this because they're offering tokens at a discount, provided they're spent through their tooling?
Considering the tremendous amount of tokens OpenClaw can burn for things that have nothing to do with software development, I think it's reasonable for Google to not allow using tokens reserved for Antigravity. I don't think there's such a restriction if you pay for the API out of pocket.
So the issue is the same as Anthropic. They do charge for it through their API. The users, however, want to use the discounted "unlimited" flat rate through the first-party app instead, then get mad when they are told they have to use the same API every other third-party app does.
No, the problem is that the discounted rate exists in the first place. Essentially these are unfair business practices, product cross subsidization to ensure market dominance. See also: Microsoft and a whole bunch of other companies.
And once they've got their monopoly position there is inevitably the rug-pull. I wonder if some CPO somewhere actually had the guts to put a 'rug pull' item on the product roadmap.
It's not unfair; it's how every business works. When your product is new or not yet good enough and you want people to try it, you give them discounts; if you want to drive traffic to your service, you do the same.
Even traditional businesses do this with coupons. Is it unfair that Costco sells chickens for under cost because it drives usage to them?
Companies like Uber did use massive funding and price subsidization to try to kill competition and then take a monopoly, but it is hard to assert that this is what Google is doing now. And given that other competitors in the space, like Anthropic, are doing the exact same thing, it's not as though they are alone.
Also they could be subsidizing it because they want that usage type as it helps them train models better.
ChatGPT and GPT-4 were all run at a loss and subsidized; people just didn't know that. Almost all of the LLM companies have been selling a dollar of LLM compute for 50 cents, as they valued the usage, training data, and users more than making a profit now.
This next generation of MoE and other newly trained models, like Opus 4.6, Cursor Composer 1.5, GPT 5.3 Codex, and many others, are the first where these companies are actually serving tokens profitably at API prices.
This year has been the switch where ai companies are actually thinking of becoming profitable instead of just focusing on research and development.
Hmm, you might be right. I'm reading the forum thread linked in the OP.
> ”Thank you for your continued patience as we have thoroughly investigated your account access issue. Please be assured that we conducted a comprehensive investigation, exploring every possible avenue to restore your access.
> Our product engineering team has confirmed that your account was suspended from using our Antigravity service. This suspension affects your access to the Gemini CLI and any other service that uses the Cloud Code Private API.
> Our investigation specifically confirmed that the use of your credentials within the third-party tool “open claw” for testing purposes constitutes a violation of the Google Terms of Service [1]. This is due to the use of Antigravity servers to power a non-Antigravity product.
> I must be transparent and inform you that, in accordance with Google’s policy, this situation falls under a zero tolerance policy, and we are unable to reverse the suspension. I am truly sorry to share this difficult news with you.”
I totally read that (and the other posts in that forum) as a complete suspension of their whole Google Account (another person mentions their GCP access suspended).
But I could be reading it wrong and it's just their AI account (and any service that uses that... I'm not clear on where those boundaries are?)
Still not going to risk signing up for this, because I cannot risk my Google account getting suspended or banned for something I wasn't aware of in the ToS. No warnings is still drastic, even if it's just part of the account.
This is a really weird response man. No need to get so judgy and personal.
Besides, as far as I can tell what they said is true. The users are losing access to Antigravity, not to their entire Google accounts. So you’re getting mad at this guy just for stating facts.
This is why I am a bit judgy. Your "no big deal" best-case scenario is criminal fraud, since they have no mechanism for processing a refund or reaching a human being unless they have enough existing social capital to go viral.
I think that's pretty vile, and I think people who defend that are at best, mentally ill, and at worst, engaging in sociopathic levels of compartmentalization.
You are actively participating in the destruction of cultural norms around rule of law and propriety. Why would you expect politeness in return?
> Essentially these are unfair business practices, product cross subsidization to ensure market dominance.
Offering a different, discounted rate for a service through their first-party platform is not an unfair business practice whatsoever, though. The bar isn't what you disagree with, or what you think their motives are without any substantial proof. They could even make an honest argument that they can aggressively key-value cache default prompts from their own software, reducing inference costs.
> See also: Microsoft and a whole bunch of other companies.
> Offering goods or services below the cost of their production is often illegal, though. It's called "dumping"
No.
Dumping is an international-trade term. It doesn’t even require pricing below cost, just aiming “to increase market share in a foreign market by driving out competition and thereby create a monopoly situation where the exporter will be able to unilaterally dictate price and quality of the product” [1].
Loss leaders are common in commerce and entirely legal, as are free trials. I struggle to think of a competent jurisdiction that bans them.
I'm sorry, my fault. I studied economics in Russia, and the term "dumping" was used in a more general sense as "selling goods or services below their cost".
Russian laws officially use the term "monopolistically low prices", and prohibit them if the entity engaging in such pricing holds a dominating presence in the market (and not necessarily for the goods that are being underpriced).
A correct term for the US is "predatory pricing", and it's also prohibited by the Sherman Act. For much the same reason, a large entity can destroy competition by accepting losses from selling goods below the cost. The border between loss leaders and predatory pricing is, as usual, very blurred.
> I studied economics in Russia, and the term "dumping" was used in a more general sense as "selling goods or services below their cost"
Oooh! Do you have a recommendation for a translation of a Russian economics text? I’m particularly curious of Soviet-era texts that work on theory without prices.
> correct term for the US is "predatory pricing", and it's also prohibited by the Sherman Act
Sherman prohibits the “restraint of trade or commerce” [1]. The word “price” never appears in its text. In practice, predatory pricing is a tightly-regulated term that doesn’t generally prohibit selling goods below cost.
So every company that is not immediately selling enough to cover its fixed costs and its variable cost should be illegal? Every company and every new initiative must be profitable from day one in your world?
So that means for instance it was illegal for Netflix to get into the streaming business or for Apple to start selling iPods because neither could do it profitably from day one?
Should Microsoft have not been allowed to sell operating systems and still survive from selling BASIC interpreters? Should Nintendo have not been allowed to sell video games and still be selling playing cards?
Every company that is interested in survival takes profits from an existing business to start a new one.
This isn't typically an area where laws and regulations can work effectively, because who knows until after the fact? Taxation laws do deal with this from a different perspective; for example, most jurisdictions won't let a company take losses every year forever, as they judge the intent of a corporation. Even this is incredibly complex, so I'm not sure how your idea would work. Even the term "break even" doesn't have a clear definition. For example: do capital assets still depreciate the same in the AI world? When did Amazon start to break even? What if they didn't deliver shopping on top of AWS? Was that unfair subsidization?
Amazon doesn’t for the most part deliver shopping on top of AWS.
Amazon runs two sets of infrastructure, “CDO” and “AWS”. It’s a myth that Amazon used excess capacity to start AWS. AWS was always built out as separate infrastructure, outside of CDO.
Some Amazon services do run on AWS. But when Amazon runs workloads on AWS, for internal accounting, they are considered a customer.
And in this case the subsidy is paid for by tied sales from other users that don't actually use the service, which is another illegal business practice.
Tying is typically perfectly legal in both the EU and the US.
This isn’t even vaguely similar to illegal tying. The biggest problem being that the products almost certainly aren’t dissimilar enough to be considered “tied” at all.
What are you talking about? Where is this illegal? It’s common to sell subscription services and then price them according to expected usage blended across the user base.
Forget about Costco, if some people here are so convinced this behavior is illegal they should be going after every fast food company that offers anything like "get a free/cheap xyz with any drink purchase!" Where the subsidy is obvious.
Costco gets to sidestep a lot of regulations because they technically are a private club with paid membership. The US anti-monopoly laws are also unusually weak.
In other countries, selling a $7 chicken if it's subsidized by the sale of other goods can indeed be illegal.
That would most likely be illegal in Finland. You're not allowed predatory pricing. And the same is true for the EU as a whole, although you may have to be operating in an international market, not just a local one. See Abuse of dominance in: https://en.wikipedia.org/wiki/Article_102_of_the_Treaty_on_t...
First of all, I doubt they’re losing money in inference. Even across subscriptions. This is a tired argument that has been repeated so many times on HN.
Second, that’s not what dumping means. It’s a specific term for international trade.
Third, it’s not illegal to sell something for below the cost to make it. That’s another common misunderstanding.
"PAYGO API access" vs "Monthly Tool Subscription" is just a matter of different unit economics; there's nothing particularly unusual or strange about the idea on its own, specific claims against Google notwithstanding.
Of course, Google is still in the wrong here for instantly nuking the account instead of just billing them for API usage (largely because an autoban or whatever is easier, I'm sure).
I am afraid of using any Google services in an experimental way, for fear that my whole Google existence will be banned.
I think blocking access temporarily with a warning would be much more suitable. Unblocking could even be conditioned on a request to pay for the abused tokens.
There's nothing wrong or illegal with subsidizing products and that's not what Microsoft or others have gotten in trouble for doing. It's when they tie a strong monopolistic position (Windows) with bundling to prevent competition (Internet explorer). This is how Apple has operated with far tighter bundling and cross collateralization of their ecosystem without facing monopoly allegations. Google does not have a monopoly position in AI.
It's called economies of scale. When they serve 200,000 AI subscriptions, they don't expect everyone to use the max. They expect some will use more and some will use less, and at the end of the day it will even out. That's how every service for the masses works. As soon as you want a guaranteed 1000 tokens, you should pay for that.
Just because an all-you-can-eat buffet exists doesn't mean the food is free or that you can take it away. The food is discounted only if you treat it as unlimited; on normal customers they make a profit.
Claude code could possibly make profit because the average usage doesn't come close to exhausting the limits.
This exactly.
I'm using 10% of my Max plan in the weeks when I'm working a lot. I've hit a 4-hour limit once over a few months and never let it run overnight. And I'm very happy with my subscription.
> So you are saying a company should never reinvest profits in the company to support another money losing business until it’s profitable?
If it makes it impossible to set up a competitor? Absolutely, yes.
> Should Netflix for instance not invested money from renting DVDs to invest in a streaming service?
Netflix was not priced below the cost of production from the beginning. You're confusing sustainable pricing and paying off all the capital spending immediately at launch.
I'm not familiar with the economics of their originals, but the original streaming Netflix was not priced below cost, as evidenced by them keeping the same subscription price for years.
How is that “evidence” of anything? The “evidence” that they were charging less for subscriptions than it cost to run the streaming service is that they were borrowing billions of dollars to both license content and create new content over the course of years.
So exactly what’s the difference between “predatory pricing” and pricing to gain customers and market share? Should Sony have to sell the first PlayStation off the line at $2000 (making up a number) so it can sell it at a marginal profit from day one or should it sell it below cost knowing that that over its lifetime if it stays at that price, it will both gain customers and sell at a profit in year 4 as the price of technology comes down and it gets economies of scale?
The EU uses an effects-based model. If below-cost prices are driving other actors out of business or has other anti-competitive effects, it is predatory pricing.
However, someone else said this and I agree: if I have an AI use my claude-code CLI, how is that not valid first-party app use? It would be different if they disallowed others from using your claude-code account. I think most, including these AI companies, would argue AI is supposed to replace and augment humans. So they aren't banning AIs from using the CLI, right? Though that's what some of them seemingly want to do.
Google wants usage that earns them street cred, not usage from bots who will never evaluate the output. They're all fighting tooth and nail to acquire customers, both free and paid... they didn't want their giveaways to be burned.
They're about to find out that if you aim to wholesale replace your workers with AI you can't really complain if your users replace themselves with AI...
But banning accounts wholesale is not going to earn them more customers. They could have just disabled Gemini access, or even given a warning first.
I don't use OpenClaw, I do pay hundreds per month for AI subscriptions, and I will not be giving that money to Google while they treat their customers like this.
> But banning accounts wholesale is not going to earn them more customers.
it has the chilling effect - people getting banned by google might imagine their entire google account getting banned (whether that's true or not is irrelevant).
If I say “you can use my car for $250/month if you don’t smoke in it” and then you pay me that money and you drive around until one day you smoke in it, I’m not going to let you smoke in my car. I told you not to smoke in it and you smoked in it. That’s the deal. All seems fine to me tbh.
More like "you can use my car to drive around as much as you want so long as you don't drive to another coast on a highway" and then you drive to another coast on a highway and get mad when I won't give you my car next time.
Yep, it sounds like Google is charging too little, taking losses that would be unsustainable for other companies, to try to win the market for AI coding products. Which is a violation of antitrust law, I think. Now that people are using their pricing in an unexpected way, where their product isn't the one winning from their anti-competitive practices, they're punishing the users. Classic monopolistic behavior, and why we need to tax mega corps more and break them up.
I agree. As others have mentioned here, the authenticate with AntiGravity web popup clearly says that this authentication is only to be used with Google products.
How can Claws users miss this?
What Google could have done better: obviously, implement rate throttling on API calls authenticated through the Gemini AI Pro $20/month accounts. (I thought they did this, but apparently not?) Google tries hard to get people to get API keys, which is what I do, and there seems to be a very large free tier on API calls before my credit card gets hit every month.
Given how popular OpenClaw is (and that OpenClaw itself supports antigravity), I think it's shortsighted to not publicly state that it's not allowed and to warn users. Permanently banning people from Antigravity (much like any Google product) feels really harsh.
Then it should be “This is your first and final warning. The next time we catch you, it’s a ban.”. People are building their lives around this stuff and kneejerk bans erode good faith in your platform.
> Then it should be “This is your first and final warning. The next time we catch you, it’s a ban.”. People are building their lives around this stuff and kneejerk bans erode good faith in your platform.
This is actually the soft-touch approach: the users of these vibe-coded products need to understand that they are delegating their authority to the tool to work on their behalf.
In this case, they delegated to a tool that broke the ToS. The result could have been a lot worse, and in return they learned that the tool is acting with their full authority.
-----------------
EDIT:
One of the users got this response from google support:
> Our product engineering team has confirmed that your account was suspended from using our Antigravity service. This suspension affects your access to the Gemini CLI and any other service that uses the Cloud Code Private API.
Their decision? To break ToS on some other provider:
> I guess it is time to move on to Codex or Claude Code.
So, yeah, perhaps the users really are too stupid to understand what's going on, and even this soft-touch approach has done nothing to clue them in.
What a wonderful way to stop people from using your LLM.
All these AI companies trying to get everyone locked into their toolchains is just hilariously shortsighted, particularly for dev tools. It's the sure path to getting devs to hate your product.
And for what? The devs are already paying a pretty penny to use your LLM. Why do you also need to force them to use your toolkit?
There is a reality that when they control the client, it can be significantly cheaper for them to run: the Claude Code creator has mentioned that the client was carefully designed to maximise prompt caching. If you use a different client, your usage patterns can be different and it may cost them significantly more to serve you.
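To make the caching point concrete, here is a toy cost model. Every number in it is invented; real providers' cache pricing and discounts differ.

```python
# Toy model of why a prompt-cache-friendly client is cheaper to serve.
# All prices are invented for illustration.
PRICE_PER_TOKEN = 1.0     # cost of processing an uncached prompt token
CACHED_DISCOUNT = 0.1     # cached prefix tokens cost 10% of full price

def request_cost(prompt_tokens: int, cached_prefix_tokens: int) -> float:
    """Cost of one request when the first `cached_prefix_tokens` tokens hit the cache."""
    cached = min(cached_prefix_tokens, prompt_tokens)
    uncached = prompt_tokens - cached
    return cached * CACHED_DISCOUNT * PRICE_PER_TOKEN + uncached * PRICE_PER_TOKEN

# A well-behaved client keeps the system prompt and history byte-identical,
# so most of each follow-up request is a cache hit.
good_client = request_cost(prompt_tokens=10_000, cached_prefix_tokens=9_000)
# A client that reorders or rewrites context gets no cache hits.
bad_client = request_cost(prompt_tokens=10_000, cached_prefix_tokens=0)
print(good_client, bad_client)  # 1900.0 10000.0
```

Same token count, roughly 5x the serving cost in this toy model, which is one plausible reason providers care which client is making the requests.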
This isn't a sudden change, either: they were always up-front that subscriptions are for their own clients/apps, and API is for external clients. They don't document the internal client API/auth (people extracted it).
I think a more valid complaint might be "The API costs too much" if you prefer alternative clients. But all providers are quite short on compute at the moment from what I hear, and they're likely prioritising what they subsidise.
It reminds me of the net neutrality debate from a decade ago. I'm not American but I remember the discord and online hate towards Ajit Pai when they were repealing it.
On one side you had the argument that repealing net neutrality would mean you can save money on your internet bill by only paying for access to what you use. On the other, you had the argument that it would just enable companies to milk you for even more profit and throttle your connection as they see fit.
IMO we need 'net neutrality' for LLM clients. I feel like AI companies are hypocrites for talking about safety all the time, but want us to only use their LLMs in the way they intend. They're saying we're all going to be replaced by AI in 12 months, and we have to use their tools to survive, right?
Yann LeCun recently warned that the AI coming out of China is trending towards being more open than the American alternative. If it continues like this, I can see programmers being pushed towards Chinese models. Is that what the US government wants?
Use of Chinese models: If I had not got a discount for signing up for a full year of Gemini AI Pro for something like $14/month, I might have started just using a Chinese chat model for things where privacy is not an issue. Ironic that I am now paying for both Gemini AI Pro and also $20/month for Ollama Cloud (as a super easy way to experiment with many open models). I am also paying Proton $10/month to use their handy Lumo+ private chat service built on Mistral models. I feel like I am spending too much money, but I don’t want to feel locked into just a few vendors, and to be honest it is fun having alternatives. A year ago I used APIs for Chinese models (and Mistral in France) and the cost was really low.
I imagine it's a case of the providers not wanting to admit it's costing them a fortune because suddenly all these low-to-medium-usage accounts are now their highest-use ones.
Not saying it's right. But it's also not exactly a secret that they are all taking VERY heavy losses even with pricey subscriptions.
Antigravity is useless anyway. I tried it last week and it needs approval for every file read and tool call. There's an option in the app to auto-approve, except it doesn't work; plenty of complaints online about this. Clearly they don't actually care about the product; some exec just felt they needed to get into the editor game.
Next I tried using the Antigravity Gemini plan through OpenCode (I guess also a bannable offense?) and the first request used up my limit for the week.
The tool thing is kind of infuriating at the moment. I've been using Claude on the command line so I can use my subscription. It's fine, but it also feels kind of silly, like I'm looking at ccusage and it seems like I'm using way more $ in tokens than I'm paying for with the subscription. Which is a win for me, but, I don't really feel like Claude Code is such a compelling product that it's going to keep me locked in to their model, so I don't know why they're creating such a steep discount to get me to use it. I'm perfectly fine using Codex's tools, or whatever. I dunno, it seems like way more cost effective to use the first party tools but I'm not sure why they really want that. Are the third party tools just really inefficient with API usage or something?
> I dunno, it seems like way more cost effective to use the first party tools but I'm not sure why they really want that. Are the third party tools just really inefficient with API usage or something?
No, the first-party tools, even if they used the same number of tokens, give them valuable data for their training.
Essentially, the first party tools are subsidised because it saves them money on gathering even more training data. When you use a 3rd party tool, you are expected to pay the actual cost of each token.
Businesses do not have an entitlement to profit. Suspending customers for using a fairly expensive subscription plan -- especially forfeiting an annual prepayment for a day or two of coloring outside the lines -- sure does make Google appear entitled to profit without ever risking its own pricing model.
> Suspending customers for using a fairly expensive subscription plan -- especially forfeiting an annual prepayment for a day or two of coloring outside the lines
They're being suspended for using a private API outside of the app for which the API was intended. If you make a clone of the HBO app so that you can watch HBO shows without ads by logging in with your discounted ads-included membership, your account will also be suspended.
The facts are straightforward, even without analogies. But since we're using them...
You are at the grocery store, checking out. The total comes to $250. You pay, but then remember you had a coupon. You present it to the cashier, who calls the manager over. The manager informs you that you've attempted to use an expired coupon, which is a violation of Paragraph 53 subsection d of their Terms of Service. They keep your groceries and your $250, and they ban you from the store.
Google is acting here like it was entitled to a profitable transaction, and is even entitled to punish anyone who tries to make it a losing transaction. But they're not the police. No crime was committed.
Regular businesses win some and lose some. A store buys widgets for $10 and hopes to sell them for $20, but sometimes they miscalculate and have to unload them for $5. Overall they hope their winners exceed their losers. That's business.
> They keep your groceries and your $250, and they ban you from the store.
If you signed an agreement with the grocery store that says they will ban you with no refunds for doing $FOO, and you do $FOO, then you can't expect any sympathy when you get banned, now can you?
In any case, your analogy is broken, because this is a monthly subscription, not a once-off purchase: when you pay for a month of subscription and then get banned, you don't expect to get that month's payment back.
A purchase transaction is a different thing from a subscription. It would be a more meaningful comparison if your example happened at Costco where you need a membership to shop. You'd get either your groceries or your $250 back, but you'd be banned from the store and you wouldn't get your membership fee refunded.
my point wasn't an analogy. the facts are that it is a private api being used with a subscription service. neither hbo nor google are required to do business with people that abuse the api.
We are in violent agreement about that point. Where we seem to disagree is that I don't think they're entitled to also keep the customer's annual subscription payment when they've decided they want out of the contract.
They are being banned (not suspended) for breaking the ToS, not for what you imagine them to be suspended for.
It doesn't matter how expensive a provider plan you purchase, the provider is free to end their contract with you, permanently if they want to, if you breach their terms of service.
Equally, customers are not entitled to set the terms or pricing decisions for businesses. They can always take their custom elsewhere if they disagree with the ToS or pricing.
No, this is hilarious: company that rams their AI down your throat at every opportunity then turns around and shuts down your account because you actually use their AI... there is no limit to the idiocy around Google's AI roll-out. I wished I could donate the AI credits that I'm paying for (thanks Google for that price increase for a product I never chose to buy) to the people that need them more.
This kind of reputational damage is just adding fuel to the fire. If my business depended in any way on google--GCP, GSuite, whatever--it would right now be a very urgent task to fire them and find replacements. They've been pretty sketchy for a while, but this kind of thing is over the top.
Terminating accounts that tried to cheat on pricing by having a third party application pretend to be Antigravity is entirely expected and does not damage Google's reputation in my view.
Yikes!! This is really unfortunate, because Google's models seem very good, but there's no way I'm using a Google service for this kind of thing with those policies. I don't even want to run OpenClaw, but that's scary! Plus, I have my Google account tied to authenticating so many things that if my account were suspended, that would be a nightmare.
I haven't tried Antigravity but I remember on release it had huge UX issues. Is this product just not ready for primetime?
Excuse me giving you advice, unasked for: as part of your ‘digital life spring cleaning’ spend some time converting auth with Google/Apple/GitHub for services to logging in with your email (on your own domain) and some other second auth.
BTW, I tend to only use Google for services I pay for (YouTube+, APIs, Gemini Plus, sometimes GCP).
The issue for me is the customer support here, not necessarily that they don't have good offerings. (I know they've always been bad at customer support, but this all seems egregious)
Just create another Google account. I don't remember there being any restrictions for this. Every time the service required a Google account to log in or it was easier than registering and going through the checks, I just created a new Google account and registered.
Maybe the ban is overstepping, but I still don't understand the issue. Rarely in the history of APIs has a commercial company wanted folks to use its private APIs.
How about giving the user a big warning not to do that, and then blocking the account if the user continues? These total blocks are crazy, especially for people who have used their Google account for 20+ years.
Time and time again it is shown to *not* use your main account for everything. This goes for Apple and having a separate account for development work, for the App Store and your main iCloud account but this also goes for all other SaaS providers.
You are doing groundbreaking new and untested stuff with Claw? Do not use your main account. You want to access your main account's data? Sure, allow it via OAUTH/whatever possible way.
Have separate accounts, people. You don't want one product group's decision in those large SaaS corps to impact everything else.
> Time and time again it is shown to not use your main account for everything.
Good luck opening new Google accounts for separation of concerns. The new account is banned before the EULA page finishes loading.
Google sends a code via text message to my main account's phone number to unban it, without me ever even having filled in a phone number.
After a day the account was banned again and pending automatic deletion. The appeal then took an artificial 5-day wait. I had to plead my case to what I presume is an AI. I had just paid $100, so it's not like I didn't show I was serious.
I am fairly certain that if they ban one account they will also ban the other anyways.
Nothing new. 10 years ago my (now 20+ year) google account was compromised for a whole 5 minutes. It was used by shady bots, and instantly banned. No warnings, no nothing. Trying to figure out what had happened was a challenge in itself.
Getting through to customer support was impossible.
5 years later I tried to get my account opened up, filled out some forms, and by some miracle it was.
My biggest takeaway from this (other than enabling 2FA) was that it is probably easier to get ahold of the scammers that control your account, than to get ahold of actual human customer support at google / alphabet.
It seems like a temp ban here would be totally reasonable, like, "we disabled your account for a day here's why, don't do it again". Permanent though, eek!
Google's bundling of so many services into one account is becoming a gargantuan liability for them & their users.
This "zero tolerance" policy is just absurdly mega-goliath out of touch with the world. The sort of soulless brain dead corporatism that absolutely does not think for even a single millisecond about its decisions, that doesn't care about anything other than reducing customer support or complexity, no matter what the cost.
Kicking people off their accounts for this is Google being willing to cause enormous untoward damage. With basically not even the faintest willingness to try to correct. Gobsmacking vicious indifference, ok with suffering.
Can you help me understand which of these happened?
1) Open Claw has a Google OAuth client id that users are signing in with. (This seems unlikely because why would Google have approved the client or not banned it)
2) Users are creating their own OAuth client id for signing themselves into Open Claw. (Again, why would these clients be able to use APIs Google doesn't want them to?)
3) Users are taking a token minted with the Antigravity client and using it in Open Claw to call "private" APIs.
Assuming it's #3, how is that physically accomplished? And then how does Google figure out it happened?
"how does Google figure out it happened" - no insider knowledge, but the calls Claw makes are very different from the regular IDE's, so the calls and volume alone would be an indicator. Maybe Google has even updated their Antigravity IDE to include some User-Agent value that Claw's auth flow does not have.
Everything here is just guesswork, but I don't think it is too hard to figure out whether it is Antigravity calling the APIs or some Claw.
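Continuing that guesswork, a server-side heuristic of this kind might look something like the sketch below. The User-Agent value and the rate threshold are invented for illustration; nothing here is known Google behavior.

```python
# Speculative sketch of a backend heuristic for flagging third-party clients.
# The "Antigravity/" UA prefix and the 500/hour threshold are invented.
def looks_like_third_party(headers: dict, requests_last_hour: int) -> bool:
    ua = headers.get("User-Agent", "")
    # A first-party IDE might send a known UA string (hypothetical value).
    if not ua.startswith("Antigravity/"):
        return True
    # Even with a spoofed UA, agent frameworks tend to produce request
    # volumes far outside normal interactive-IDE usage.
    return requests_last_hour > 500

print(looks_like_third_party({"User-Agent": "OpenClaw/1.0"}, 10))        # True
print(looks_like_third_party({"User-Agent": "Antigravity/2.3"}, 10))     # False
print(looks_like_third_party({"User-Agent": "Antigravity/2.3"}, 9000))   # True
```

The second branch is the important one: headers are trivial to spoof, but call volume and shape are much harder to fake convincingly.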
So if I ask Google's AI studio the wrong question, I might get my G-drive, Gmail, API access, Play store, YouTube channel, "login with Google" tokens, and more all ripped away instantly with no recourse?
Google is a company well down the path of enshittification, they even got rid of their motto "Don't be evil".
As a consumer, you're better served by using services from companies earlier in that lifecycle, where value accrues to you, and that's not Google, and likely not many other big providers.
When those newer companies turn, you switch. Do not allow yourself to get locked into an ecosystem. It's hard work, but it will pay dividends in the long run.
> TFA most commonly refers to Trifluoroacetic acid, a highly persistent, mobile "forever chemical" (PFAS) found globally in water and soil, widely used in organic chemistry as a solvent.
Don’t know about your parent, but I am certainly one of those “AI can’t make anyone more productive” people.
Well, at least I would say that while being a bit hyperbolic. But folks like us prefer to see claims by corporations trying to sell you stuff backed by behavioral research before we start taking the corporation’s word for it.
But surely your search engine must have given you the answer within your first three clicks, if not, perhaps you should consider a better search engine.
The irony is that web searches for an explanation of something often lead to a discussion thread where the poster is downvoted and berated for daring to ask people instead of Google. And then there's one commenter who actually explains the thing you were wondering about.
NotebookLM seems to be the only exception, or it could be an acquisition.
The subscription API ban could be part of a larger strategy because of OpenClaw’s association with OpenAI, and because Google will not be able to copy OpenClaw’s personal-assistant model due to the security implications.
Pay-as-you-go API pricing is one of the easiest ways to drastically reduce mass adoption of a product. Pay-per-month works on consumption patterns where 80% of the users barely use the product, which compensates for the 10-20% of power users.
I'd assume API usage through tokens vs. OAuth are rate limited differently? I don't actually see hard numbers for Antigravity model rate limits on their website so guessing this is the case.
Basically Google is saying: you can't use Gemini with OAuth on products other than Google products (Antigravity).
I mean, it's fair; it just should have been documented properly, and the ability to use Gemini through OAuth restricted with a proper scope, instead of saying "you broke the ToS, we're banning your $350/month account".
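For illustration, the kind of scope enforcement being asked for might look like the sketch below. The client id and scope name are invented, and this is not Google's actual API; real OAuth servers record which client a token was issued to and can check it on every call.

```python
# Hypothetical server-side check: reject tokens minted for one client
# when they are replayed from another. All names are invented.
ALLOWED_CLIENTS = {"antigravity-ide"}   # hypothetical first-party client id
REQUIRED_SCOPE = "cloudcode.private"    # hypothetical scope name

def authorize(token: dict) -> bool:
    """True only for tokens minted by an allowed client with the needed scope."""
    return (
        token.get("client_id") in ALLOWED_CLIENTS
        and REQUIRED_SCOPE in token.get("scopes", [])
    )

print(authorize({"client_id": "antigravity-ide", "scopes": ["cloudcode.private"]}))  # True
print(authorize({"client_id": "openclaw", "scopes": ["cloudcode.private"]}))         # False
```

A check like this, returning a clear error up front, would fail the request instead of the account.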
Can OpenClaw go through gemini-cli? Because it could, and nobody would notice anything had changed. It would use the same OAuth down the line and consume the same quotas.
That’s my question too. Presumably one could even build an API that just runs things in cli? How would they plan to restrict that? Based on usage patterns?
Terms of Service that span multiple pages of legalese and require an attorney to parse, for something that is either 'free' or a few $ per month, and can result in loss of service across multiple product lines, AND has binding lopsided arbitration requirements, is not only draconian, it is unconscionable.
Look at how messed up this is: Google Attorneys, paid hundreds of $/hour, spending hours and hours putting together these "Terms of Service" on one side; and a simple consumer on the other side, making a few $ per hour, not trained in legalese, expected to make a decision on a service that is supposed to cost a few $ a month, and if you make an honest mistake, can cause you a lot of trouble in your life.
> Terms of Service that span multiple pages of legalese and require an attorney to parse, for something that is either 'free' or a few $ per month, and can result in loss of service across multiple product lines, AND has binding lopsided arbitration requirements, is not only draconian, it is unconscionable.
In the general case, I broadly agree, but in this specific case:
1. This wasn't a term buried two-thirds of the way into a 200-page document. It was a term so upfront and clear that everyone knew you weren't supposed to do it.
2. Even for the people who claim they didn't know: when doing the auth, the message specifically asks the user to authorise the Antigravity application, not the OpenClaw application.
The argument that users did not know they were violating ToS, in this specific case, is pure BS.
I'm beginning to think that the law needs to be that if there are such egregious terms of service, then the company needs to pay for the consumer's attorney at litigation, no matter the cause of litigation, and no matter the outcome.
I don't have a formal contract with my electricity and water provider; why should there be a dozen-page (or longer) contract for an email/ISP/phone provider? Email, internet, and phones are essential services. Insurance might fall into the same bucket in civilized nations.
It’s a subsidized price; conditional to using their tooling. Don’t want to use their tooling? Pay the API rates. The API is sitting right there, ready to use for a broader range of purposes.
It’s only unreasonable if you think the customer has a right to have their cake and eat it too.
> It’s a subsidized price; conditional to using their tooling.
Yes, because you are giving them your data. So you're not actually paying for usage. What they should do instead is be upfront about why this is subsidized and/or not subsidize it in the first place.
Tradition warrants a negotiation phase when one party wishes to change the terms of an agreement, or becomes cognizant that the counterparty may wish to do the same.
The tech industry has gorged on non-participation in this facet of contract law, instead resorting to all-or-nothing clickwrap, which is, barring existential or egregious circumstances, unwarranted, and in my opinion fundamentally unreasonable; it should be an invalid exercise of contract law, especially given the size of one of the parties in comparison to the other.
> Tradition warrants a negotiation phase when one party wishes to change the terms of an agreement, or becomes cognizant that the counterparty may wish to do the same.
They didn't change the agreement. One party violated it, and the other party withdrew as a result.
This is so vanilla. But people will moan because they want subsidized tokens.
I don't have a pony in this race, my good poster; I just calls it how I see it, and I have a long history of calling out the fundamentally abusive character of non-negotiable one-way contracting and the ill effects it has on society.
The only people moaning here seem to be a bunch of wannabe Google POs upset that people are handing machines a data construct they are designed to accept, and the machines are accepting it and using the token the way it was designed. It looks like Google resents that its failure to automate checks denying those OAuth tokens is being exploited, and seems to think that terminating customers who could probably be corrected with a simple message is the most reasonable response.
With instincts like that, it makes me happy everyday that for my needs, I can make do with doing things on my own hardware I've collected over the years. The Cloud has too much drama potential tied up in it.
I think the permaban without notification on first violation (that most violators likely weren't even aware was a violation) is unreasonable. This should almost certainly be illegal if it is not already under the DSA or similar, particularly for a monopolist of Google's scale.
What about this ban is anticompetitive? The only thing I can think of is accusing them of dumping product (as opposed to price discrimination), in which case the remedy is going to be making them charge the API price for everything.
The issue with them being a monopolist is less about competition and more about the fact that penalizing you on one of their products can result in them deleting you from the Internet. You can lose decades of email history, the ability to publish apps on over half of the mobile devices on the globe, etc.
In Europe the Digital Services Act (DSA) is beginning to set expectations, particularly for large platforms about not just clear documentation of their terms, but also a meaningful human appeal process with transparency and communication requirements for actions taken.
The DSA is more focused on social networks, but if you were to apply the concepts of the DSA to this story, Google would have violated it several times over.
A flat rate is always a mixture of low usage people subsidizing high usage people. It's disgusting that these companies want to have the advantages of subs, but then straight up ban any high usage people. Basically, there is no flatrate.
The punishment of being kicked out of your Google account for a zero-tolerance first offense is completely unreasonable: incredibly extreme Lawful Evil alignment.
The damage to individuals that Google is willing to just hand out here, to customers they have had for decades, who have their lives built around Google products, is absurd. This is criminally bad behavior and whatever the terms of service say, this is an affront to the dignity of man. This is evil. And beyond any conceivable reason.
This right here is an insane take to the opposite direction. Abuse, violence, torture, war, oppression, these are affronts to the dignity of man. Being kicked off a service from one business is absolutely not. It’s an inconvenience, but does not determine whether you will have bodily integrity.
By this logic, eviction from an apartment is torture regardless of what the tenant did.
Yes; because they have no obligation to provide this service tier at all.
It could be API prices for anyone, everywhere. They offer a discounted plan, $200/mo., for a restricted set of use cases. Abuse that at your peril.
It’s like complaining your phone’s unlimited data plan is insufficient to serve an apartment building, all units included. I was told it was Unlimited! That means I can totally run 500 units through it if I want to, Verizon!
You can run an entire apartment block off of a single SIM card/phone line. The (technical) problem is that you are purchasing an insufficient amount of bandwidth. It goes without saying that limited bandwidth integrated over a finite service period comes out to a limited amount of data, so the term is misleading.
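The arithmetic behind "limited bandwidth over a finite period is limited data", assuming a hypothetical 1 Gbit/s line saturated for a 30-day month:

```python
# "Unlimited data" on a fixed-bandwidth line is bounded by simple arithmetic.
bits_per_second = 1_000_000_000              # hypothetical 1 Gbit/s link
seconds_per_month = 30 * 24 * 60 * 60        # 2,592,000 s in a 30-day month
total_bytes = bits_per_second * seconds_per_month / 8
total_terabytes = total_bytes / 1e12
print(round(total_terabytes))  # 324
```

Large, but a hard ceiling all the same, which is why calling the plan "unlimited" is misleading.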
If google has no obligation to provide the service tier, then they should stop providing it instead of providing it under false terms.
This is like if everyone in a city decided to take baths instead of showers, so the municipal water supply decided to ban baths instead of properly segmenting their service based on usage.
Service providers don't have the right to discriminate what their service is used for.
I don't think that's an apt metaphor. You bought one general water supply, like an API user. If they sold a "no baths" cheaper option I'd be fine with them banning baths to those customers.
Google's API does let you use any client.
The gemini/antigravity clients are a different (subscription) service. When you reverse engineer the clients and use their internal auth/apis you will typically have very different access patterns to other clients (eg: not using prompt caching), and this is likely showing up in their metrics.
This isn't unusual. A bottomless drink at a restaurant has restrictions: it's for you to drink, not to pass around to others at the table (unless they buy one too). You can't pour it into bottles to take large quantities home, etc. And it's priced accordingly: if sharing/bottling was allowed the price would have to increase.
> Service providers don't have the right to discriminate what their service is used for.
They frequently do have those rights, though. It's up to the paying customer to either pay for a different tier or move to a competitor who offers the tier they need.
You are never going to get a court to agree that service providers cannot offer different tiers, or segment their offerings.
Lmao no. You cannot use your common sim card for that. It's for an individual and they will cut your service and justifiably so, if they figure out that's what you're using it for.
If you buy a sim card built for that purpose sure, but then you'll be paying...biz prices!
This isn't really that hard to figure out people. So much outrage in comments on this. Self entitlement to the max from people who really haven't lifted a finger to stop the corporate overlords anyway.
So, if I use my SIM card 16 hours a day, 7 days a week, I’ll get banned? Doesn’t that seem absurd? The SIM card is enforcing one voice call at a time. If the apartment building has to wait in line to use it, what’s the difference?
If you deployed it in a way that did multiplexing, such that multiple users could use it at once, then sure: business time. But otherwise…
> So, if I use my SIM card 16 hours a day, 7 days a week, Ill get banned?
Probably not - you'll get billed or hit a FUP
> Doesn’t that seem absurd? The SIM card is enforcing one voice call at a time. If the apartment building has to wait in line to use it, what’s the difference?
The difference is that it is perfectly acceptable to enforce a "no-reselling" or a "no-3rd-party" for services.
I can't think of a single service provider that provides a consumer tier permitting reselling or 3rd-party use.
I can do it pretty easily. The restriction in both cases is so easily overcome that it is ridiculous to build your business model around it, and disrespectful to the customer's intellect.
> it is ridiculous to build your buisness model around it and disrespectful to the customer's intellect
Many things in business are easy to defeat if you’re willing to break the rules. Enforcement is handled through audits, flagging suspicious activity, and investigations.
It’s ridiculous to think that because you can temporarily circumvent a restriction that the rules don’t apply.
I don’t agree with the excessive enforcement used, but there is a lot of tortured logic in this thread trying to argue that the contract terms shouldn’t apply to service usage because the customer doesn’t like the terms.
Precisely. In fact I remember a story similar to this, so I Googled "did Sprint get sued for using 'unlimited' in their marketing?" Lo and behold, yes, they did. And for good reason.
It would be an understatement to say I am ashamed to work in the same industry as many of the commenters here do--commenters who are completely ignorant of antitrust law and why it exists, or for whatever reason, are completely unconcerned with the absurd market power these mega conglomerates (ab)use.
I don't know why people here can't accept the simple fact that AI companies are offering cheap "unlimited" plans as a loss leader to tie you to their ecosystem, and then make up for it via add-ons, upsells, ads etc. If you use those API tokens to access external services it defeats the purpose. The hack may have worked so far, mainly because no one was checking, but they are all going to tighten the access eventually (as Anthropic and Google have already done).
Either stick to first party products or pay for API use.
No one is shocked that they don't allow this. Everyone is shocked that they silently, permanently banned the user with no recourse and it took significant effort even to find out that much.
Sorry to be that guy, but given how often Google has done this for lesser infringements (some reported here on HN), is anyone really "shocked" by the permabans?
The apparent shock around this sort of thing always feels like cope for the fact that we (myself included) understand the power imbalance between Google and its customers but don't want to admit it.
There's plenty of evidence at this point, and I feel like we should be using that emotional energy to actually do something about it (like switching providers for critical personal services, for example).
OpenAI and the Chinese companies openly let you have all you can eat. Anthropic's lead over OpenAI is slight, and these things are going to homogenize quickly. The market is going open, and the people trying to keep it closed are just generating ill will pointlessly.
>OpenAI and the Chinese companies openly let you have all you can eat.
You say this, but I guarantee that when they do offer a plan similar to Google/Anthropic's dedicated coding "unlimited" subscription, they will do the exact same thing. Maybe they will let OpenClaw in as a first party because of their partnership with the creator.
Where does Anthropic offer an "unlimited" subscription? All of the plans mentioned on https://claude.com/pricing have limits, same as usage of Codex on OpenAI's ChatGPT subscription plans. If Google forgot to actually enforce a rate limit (that they do mention on https://antigravity.google/pricing) on theirs, that sounds like a huge oversight.
OpenClaw is a massive liability. Regardless of the creator's employment, OpenAI is not dumb enough to officially release a ticking PR bomb like that. I don't know what they'll do with the creator, I guess pump him for ideas and keep him off the streets. (Simply telling him to design out the same thing in a form that is releasable should be enough to keep him quiet for a good long time.)
OpenClaw doesn't need the creator in order to continue to be a reputation risk nightmare for all of the AI companies, though.
But none of these are unlimited, that was never the expectation. It's a flat rate for a flat (but hidden) amount of usage. What's disgusting is that they want the good parts of subs (low usage subs), but then just ban the bad parts (high usage people). I don't care whether that's technically possible, it's incredibly scummy.
Seems like a hassle when open source models are just as good. Can go with any hosting provider. Might have to wait 3-4 weeks for them to duplicate whatever Anthropic is doing with token caching. But then you get 10x cheaper inference.
I feel like this game is just a hot potato, can you get retail to hold the bag game
When reading HN I get the impression that a lot of people are convinced monthly plans are very profitable for the companies, I don’t have any numbers but to me it always seemed like a bait and switch or ”bait and make you pay with your data too”.
I'll bite. I suspect that these plans aren't as intensely subsidized as people assume. I believe that API usage is probably also not subsidized at all. First, yes, subs are probably subsidized, but I bet a significant % of users are profitable to serve, especially the "chat" users who don't use dev tools and have short context window conversations. Yes, I think the subs also exist as a driver to get lock-in and market share. Claude Code, for example, is very good, and I stopped using their competition when they released their superior product.
That said, I assume that (1) their long-term goal is to create cheaper-to-serve models that fit within their pricing targets, and to use the (temporarily) subsidized subscriptions to find the features and costs that best serve the market. Maybe even while capturing more margin on the API in comparison (e.g. keep API prices high while lowering the cost to serve a token). I've largely stopped using Opus, and sometimes even choose to use Haiku, because the cheaper models are fast and usually serve my needs. It's very possible to work all day and barely hit the usage limits with Haiku on the $20/mo option. Long term, that could be profitable outright.
And (2) subscriptions with lower SLOs than API calls have the potential to provide "infill" usage for high fixed-cost GPUs as an alternative to idling, similar to their batch APIs. I'd believe that overnight usage limits could/should be higher than during California work-hours. I assume most big providers have pre-paid fixed cost servers, so pumping more tokens through an otherwise idle GPU is "free". They can also do a lot more cost-optimization behind the scenes, such as prompt caching, to reduce the cost of tokens.
> First, yes, subs are probably subsidized, but I bet a significant % of users are profitable to serve, especially the "chat" users who don't use dev tools and have short context window conversations.
Why would WebChat users need a subscription? It's free; I've even pasted tarballs of entire repos in there, and haven't hit limits!
>>> a significant % of users are profitable to serve, especially the "chat" users who don't use dev tools and have short context window conversations.
> More limited features, like lack of model selection, more restricted use of “thinking” models.
Yeah, but... do the "chat" users actually care about any of that? Would they even notice a difference?
My point is that, if all you're doing is chat, there's no value in any of the subscription models - for chat the free webapps are more than sufficient, so even someone spending the whole day chatting about something isn't going to hit any limits.
> I'll bite. I suspect that these plans aren't as intensely subsidized as people assume. I believe that API usage is probably also not subsidized at all. First, yes, subs are probably subsidized, but I bet a significant % of users are profitable to serve, especially the "chat" users who don't use dev tools and have short context window conversations. Yes, I think the subs also exist as a driver to get lock-in and market share. Claude Code, for example, is very good, and I stopped using their competition when they released their superior product.
I somewhat agree, somewhat disagree with this. I think API-based usage is not subsidised. If you do some basic napkin math, they should have enough room there to serve the models below the API price unless the models are insanely large (you can compare with 3rd-party OpenRouter offerings to get an idea of what $/Mtok you can serve at per model size; e.g. Haiku-level models can serve ~700B tokens and still be profitable).
I think the $20-200 all-you-can-prompt plans are likely subsidised. If you track token usage (there are many 3rd-party tools that do this) you can get 4-5x the API usage out of them (it used to be even higher before they added weekly limits; people were seeing 10-20x usage). Now I think that's a bit tough to make the napkin math work out. I've compared sessions served over the API with sessions from subscriptions, and you get much more usage out of the latter, even with 5h / weekly limits. Strictly for coding, I think they're subsidising them.
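To make that napkin math concrete, here is a sketch where every number is an assumption for illustration only (none of these are disclosed prices or costs), showing what a 5x usage multiple implies for a flat-rate plan:

```python
# Napkin math: all figures are illustrative assumptions, not real prices/costs.
sub_price = 200.0            # $/month for a flat "all-you-can-prompt" plan
api_price_per_mtok = 10.0    # assumed blended $/Mtok (input + output)
tokens_used_mtok = 100.0     # assumed Mtok/month for a heavy agentic user

api_equivalent = tokens_used_mtok * api_price_per_mtok   # $ at API rates
usage_multiple = api_equivalent / sub_price

# A 5x multiple means the sub delivers $1,000 of API-priced tokens for $200.
# That only works if the cost to serve a token is well below the API price,
# or if most subscribers use far less than this heavy user.
```

Under these assumed numbers the plan hands out 5x its price in API-equivalent usage, which is why the heavy-coding segment looks subsidised while light chat users look profitable.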
I somewhat disagree that they're doing it for market share / user lock-in. I think signals and usage trends are much more valuable to them. While there might be user retention for "casual" users (i.e. web), I think the power users in coding will move as soon as the competition has a better product. So at the end of the day, having data to improve models and having the "best" model in a niche is more productive than retaining users with an inferior product. That is an assumption though, and there isn't much math you can do to figure it out from the outside.
One thing to remember is that not all users are going to max out their plan.
This is more likely to occur on $20 plans though, especially since those are often necessary to unlock the more useful features (e.g. deep research) so people might be paying for that even if they don't actually use the tokens.
OTOH someone who's paying $200 will likely want to squeeze the most out of their subscription for that amount of money. So I wouldn't be surprised if it turns out that $20 users are subsidizing the $200 ones.
As a company with little other (any?) revenue you have to include all costs though.
Data centers, power, hardware, salaries, marketing, etc. Not just training models and serving requests.
I don’t see how it’s not subsidized substantially considering how much money they’re burning right now (I only base that on their rounds though).
That's not really how people discuss subsidies and finances though. Yea I guess a not-profitable company means that every operation is technically a "subsidy", but again, that's not really what those words mean.
Anthropic (as the ever-chosen example) has explicitly stated they've made more money than they've spent on training when they sell/serve a particular model in the past. They said that the reason they're negative is the next model costs more than the "profit" they've made on the previous one. This wasn't strict financial disclosures, but I'd presume this means that their data center costs (eg. power, hardware, etc) are baked into that, but probably not company-wide costs like marketing.
They do have several sources of revenue, all tied to their models: APIs, Subscriptions, and model licensing. Their licensing and APIs most likely have a positive margin -> the money they make to serve the n+1 customer is more than the cost to serve that customer, on a per-financial-transaction basis. It's speculated that they lose money per-customer to serve the subscriptions, and they eat that cost... for various potential reasons.
It is when you discuss the financial health of a company; at least that's what I picked up doing fintech and loans. It's the bottom line that matters, or the projected outcome of the same.
What's the point of making money in area A when area B costs more? If you can stop doing B without affecting A, that's usually what happens, but it's not always possible.
Saying ”we’re positive except the foundation of the company (training models) isn’t” is a telltale sign.
And I’m sure Anthropic is doing what most others are doing, heavily massaging numbers to make them look good for VC rounds.
Google's Pro service (no idea about Ultra, and I have no intention to find out) is riddled with 429s. They have generous quotas for sure, but they really give you very low priority. For example, I still don't have access to Gemini 3.1 from that endpoint.
It's completely uncharacteristic of Google.
I analyzed 6k HTTP requests on the Pro account, 23% of those were hit with 429s. (Though not from Gemini-CLI, but from my own agent using code assist). The gemini-cli has a default retry backoff of 5s. That's verifiable in code, and it's a lot.
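For context, a fixed base delay with exponential growth is the common pattern here; this is only a sketch of how such a retry loop behaves with a 5s base like the default the comment attributes to gemini-cli, not the tool's actual implementation:

```python
import random
import time

def with_backoff(call, base=5.0, retries=4):
    """Retry `call` on HTTP 429, sleeping base * 2**attempt (with jitter).

    `call` returns a (status, body) tuple; any non-429 status is returned
    immediately, and the last result is returned when retries run out.
    """
    for attempt in range(retries + 1):
        status, body = call()
        if status != 429 or attempt == retries:
            return status, body
        # 5s, 10s, 20s, 40s ... (halved-to-full jitter) before retrying
        time.sleep(base * (2 ** attempt) * (0.5 + random.random() / 2))
```

With a 23% 429 rate, even a single retry per failed request adds 5+ seconds of dead time each, which is why a 5s base feels like "a lot" in an interactive tool.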
I don't touch the Antigravity endpoint. Unlike code-assist, it's clear that they are subsidizing that one for user acquisition on that tool, so perhaps it's OK for them to ban users from it.
I like their models, but they also degrade. It's quite easy to see when the models are 'smart' and capacity is available, and when they are 'stupid'. They likely clamp thinking when they are capacity strapped.
Yes the models are smart, but you really can't "build things" despite the marketing if you actively beat back your users for trying. I spent a decade at Google, and it's sad to see how they are executing here, despite having solid models in gemini-3-flash and gemini-3.1
> Yes the models are smart, but you really can't "build things" despite the marketing if you actively beat back your users for trying
I think this is the most important takeaway from this thread and at some point, this will end up biting Google and Anthropic back.
OpenAI seems to have realized this and is actively trying to do the opposite. They welcomed OpenCode the same day Anthropic banned them, X is full of tweets of people saying codex $20 plan is more generous than Anthropic's $200 etc.
If you told me this story a year ago without naming companies, I would tell you it's OpenAI banning people and Google burning cash to win the race.
And it's not like their models are winning any awards in the community either.
My impression is there's a definite shortage of GPUs, and if OpenAI is more reliable it's because they have fewer customers relative to the number of GPUs they have. I don't think Google is handing out 429s because they are worried about overspending; I think it's because they literally cannot serve the requests.
This sounds very plausible. OpenAI has hoarded 40% of the world's RAM supply, which they likely have no use for other than to starve competition. They (or other competitors) could be utilizing the same strategy for other hardware.
Which is worrying, because if this continues, and if Google, who has GCP, is struggling to serve requests, there's no telling what's going to happen with services like Hetzner etc.
OpenAI is dependent on the same hyperscalers (most specifically Microsoft/Azure) as everyone else, and they even have access to preferential pricing due to their partnership.
A better explanation is to point out that ChatGPT is still far and away the most popular AI product on the planet right now. When ChatGPT has so many more users, multi-tenant economic effects are stronger, taking advantage of a larger number of GPUs. Think of S3: requests for a million files may load them from literally a million drives due to the sheer size of S3, and even a huge spike from a single customer fades into relatively stable overall load due to the sheer number of tenants. OpenAI likely has similar hardware efficiencies at play and thus can afford to be more generous with spikes from individual customers using OpenCode etc.
I would guess the biggest AI product on the planet is Google's Search AI. Although even that might not be the case, unless your definition of AI is just "LLMs" and not any sort of ML that requires a GPU.
It's unfortunate, though, that they lie and deceive by calling themselves "Open"AI when they are in fact closed. And the whole non-profit-to-profit conversion and the Microsoft deals are just untrustworthy and unethical.
They also actively employ dark strategies in cooperation with CIA and who knows when they will pull the rug under you again.
Do you really trust a foundational rotten group of people who avoid accountability?
I don't know what it's called when something becomes an irony and then this irony becomes an irony itself, but that's what's up with OpenAI today. On one hand, they started this 'we're closing things down because safety' line, they normalized $200/mo subscriptions, but now they're becoming the most open AI company between the big 3. Their tooling is open source, they're lenient on their quotas on lower plans, and their allowance of third party integrations is also unique.
I would still consider OpenAI naming incorrect, but between the 3, they kind of are, open.
I've stopped using Gemini models altogether because of this. I've been using Claude Code with MiniMax M2.5 for a while now and I couldn't be happier. I haven't noticed any drop in output quality, and the biggest advantage is that even the $10 plan is pretty generous. I haven't hit a rate limit even once, and I'm a pretty heavy user. I also tried GLM 5.0, but I hit the rate limit there pretty early on.
One thing with GLM 5 is they seem to do this weird thing where, when your account is newly opened, it rate-limits you really heavily; this gets lifted later.
I had buyer's remorse when I kept getting rate-limited on GLM 5 for the first hour or two, but since then I've not hit a single rate limit, and I am using it very heavily.
I'm guessing at least 50% of the "users" of Antigravity are actually OpenCode users exploiting the oauth and endpoint. Must be infuriating to them if they're subsidizing it.
The OpenCode plugin (8.7k stars btw!) even advertises "Multi-account support — add multiple Google accounts, auto-rotates when rate-limited"[1]
Just adding for context: I use Gemini Ultra across all models, from Gemini 3.1 Pro to Claude Opus 4.6, and I have never hit 429s; hitting model quota limits is incredibly rare and only happens if I am trying to run 3 projects at once. While not the biggest agentic-coding fan, I have been toying with these tools and running them for at least 7-8 hours a day, if not longer.
I’ve often suspected these models of getting dumber when the service is under high load. But I’ve never seen actually measured results or proof. Anybody know of real published data here?
That comment only says that they have a lot of different options for smaller & faster models that people can opt into. It doesn't say that they dynamically scale things up or down depending on demand.
ChatGPT was brutal for it a couple years ago. You could tell when it would go into “lazy mode” during peak usage periods.
Suddenly instead of writing the code you asked for it would give some generic bullet points telling you to find a library to do what you asked for and read the documentation.
> ChatGPT was brutal for it a couple years ago. You could tell when it would go into “lazy mode” during peak usage periods.
ChatGPT web has been doing this for a week now, for me. Ask some technical question and get a reply absolutely filled with AI phrases (Not $X, Just $Y, the key insight, the deeper insight, etc) dominating about 50% of the text, with the remaining 50% some generic filler stuff partially related to the tech I asked.
Last night I read through a ChatGPT web response about solutions for a security bootstrapping problem without holding keys/password, and it spat out pages and pages of key insights, all nicely numbered sections with bullet points in each section, without actually answering the question.
Moved to Claude Web immediately, got a usable answer on the first try.
I'm very confused here. The monthly plans are meant to be used inside of Google's walled garden, but people are somehow able to capture (?) and re-use the OAuth token?
Regardless, I thought it was pretty obvious that things like OpenClaw require an API account, and not a subsidized monthly plan.
Exactly: OpenClaw (or, I think, possibly an add-on/extension or unofficial method) uses Google's Antigravity authentication to connect the app. This allows 'unlimited' calls to Antigravity models with a subscription, instead of the proper Gemini/Google AI Studio API-key method (charged per million tokens).
API usage can get very high for automatic operations, especially with apps like Kilo/Roo/Cline, and now with OpenCode/OpenClaw. I often blast through $10-20 in a single day of just regular OpenCode usage through OpenRouter
If I could pay a subscription and get near unlimited use (with rate limits), of course I'd do that, but not like this. I'm pretty sure Antigravity has ToU somewhere that indicates it's only allowed for use in Antigravity and nowhere else, since I've seen other threads on this happening: https://github.com/jenslys/opencode-gemini-auth/issues/50
This is the first time in recent memory that software has had high variable costs so the surprise at these rules is understandable.
In this case, it's the difference in context-cache hit rate between OpenClaw and Antigravity.
For example, if OpenClaw starts every message with the current time hh:mm:ss at the top of the context window, followed by the full convo history, it would have a cache hit rate of ~0. Simply moving the updated time to each new message incrementally would increase the hit rate to over 90%. Idk if OpenClaw does this, but there are many, many optimizations like this. And worse, thrashing the cache has non-linear effects on the server as more and more users' cached contexts get evicted from cache due to high cardinality. The cost-to-serve difference could be >10x.
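A toy illustration of that point, assuming a prefix-style prompt cache (all names and prompt formats here are made up, not OpenClaw's actual behavior): putting the volatile timestamp first destroys the shared prefix between consecutive requests, while appending it preserves almost all of the prior history.

```python
def common_prefix_len(a: str, b: str) -> int:
    """Length of the shared prefix -- a stand-in for cacheable tokens."""
    i = 0
    for x, y in zip(a, b):
        if x != y:
            break
        i += 1
    return i

def prompt_time_first(history, now):
    # Volatile value at the top of the context: every request differs early,
    # so the server's prefix cache is useless.
    return f"time={now}\n" + "\n".join(history)

def prompt_time_last(history, now):
    # Stable history first; the clock only touches the tail.
    return "\n".join(history) + f"\ntime={now}"

history = ["system: be helpful", "user: hi", "assistant: hello"]
a1 = prompt_time_first(history, "10:00:00")
a2 = prompt_time_first(history + ["user: next"], "10:07:31")
b1 = prompt_time_last(history, "10:00:00")
b2 = prompt_time_last(history + ["user: next"], "10:07:31")

hit_first = common_prefix_len(a1, a2)  # dies inside the timestamp
hit_last = common_prefix_len(b1, b2)   # covers the entire prior history
```

With a real multi-thousand-token history the gap between the two strategies is enormous, which is roughly why an unoptimized third-party harness can cost the provider several times more per token than their own client.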
Google is the furthest behind on coding agent adoption and has all the incentives to allow off policy use to grow demand. But it would probably be better to design their own optimized openclaw and serve that for free than let any unoptimized requests in.
It's a fair point, but I think people are thinking too much about 'cost' and 'subsidies' and just the fact that everyone is so compute stretched.
While it's sort of the same thing, I think it's much more a symptom of not enough compute vs some 'dump cheap tokens' on the market strategy.
One related thought I had was that given OpenAI is the only one _not_ doing this of the big3, it probably indicates they have a lot more spare compute.
It doesn't make sense to me that, given the absolutely brutal competition, any of these companies would block use of 3rd-party apps unless they had to. They clearly have enough cash, so I don't think it's about money. I think it's an indicator that Google and Anthropic are really struggling to keep up with demand. Given Anthropic's reliability issues last week, this does not surprise me.
I would add though that many are also being caught up in antispam efforts.
I.e., for every legitimate OpenClaw user doing something trivial with their account while misusing the sub, there are probably 10x as many using it to send spam emails and spam comments.
I suspect from Google's perspective some of these people are just a rounding error.
That said I use API where I should and the sub in the first party apps. Perhaps I'm too much of a goody two shoes but AI already feels such an overwhelming value prop for me I don't care.
That said I think you're right in that money matters here but I think the subs as they intend people to use them is hugely profitable i.e. the people doing 10 chats per work day and a few in the evening but paying £20 per month.
> One related thought I had was that given OpenAI is the only one _not_ doing this of the big3, it probably indicates they have a lot more spare compute.
Or, pessimistically, it could indicate they’re burning cash hoping the subsidized access will eventually result in someone giving them a product idea they can build and resell at a profit.
If they let *claw (or third party coding agents, or whatever) run for six more months and in those months figure out how to sell a safe substitute and then cut off access, maybe it will have been worth it.
>This is the first time in recent memory that software has had high variable costs
Running software has always had a variable cost.
Why should I be surprised if [cloud provider] were upset that I was running a thousand free-tier servers? Or using any paid plan at all to somehow achieve utilization far exceeding the clearly documented limitations of my plan?
Using the torrent protocol on a VPN that doesn't support it, or fork-bombing an email server, or using that one popular free video hosting service to host nigh-unlimited arbitrary data, or hosting content that is illegal for the server operator regardless of its legality to me, etc, etc, etc
It's all the same thing: TOS violation.
No one is being forced to use these products without reading and signing the terms of service. In this particular instance, you can even use the free version of the provided service to analyze the terms of service for the paid plan if you were really so lazy.
I really am genuinely confounded as to why people are so regularly surprised that they can't just do whatever they please with proprietary solutions. Like "oh what do you mean I can't lie about the date of my injury in order to get it covered by insurance?".
It's almost like people just assume that everything works exactly as they would deem it to (in their benefit), rather than the much saner assumption that every company is going to be naturally inclined to cater to its own benefit before the users'.
I don't understand how this can be enforced without ridiculous levels of false positives. I'm truly baffled. The same with Claude Code situation.
gemini-cli, claude-code, codex, etc. ALL have a -p flag or equivalent, which is a non-interactive I/O interface to their LLM inference.
If I wire my tooling (or openclaw) to use the -p flag (or equivalents), is that allowed?
Okay, maybe they get rid of the -p flag and I have to use an interactive session. I can then just use OS IO tooling to wire OpenClaw with their cli. Is that allowed?
How does sending requests directly to the endpoints that their CLI is communicating with suddenly make their subsidized plans expensive? Is it because now I can actually use my 100% quota? If that's so, does it mean their products are such that their profitability stands on people not using them?
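For the record, "wiring my tooling to the -p flag" means something like the sketch below: a hypothetical wrapper driving a vendor CLI's non-interactive mode via subprocess. The binary name is whichever CLI you have installed; whether a vendor's ToS permits this is exactly the open question in the thread.

```python
import subprocess

def ask(cli: str, prompt: str) -> str:
    """Send one prompt through a CLI's non-interactive print mode
    (e.g. `claude -p ...` or `gemini -p ...`) and return its stdout.

    This is an illustrative wrapper, not an endorsement: the request
    still flows through the vendor's own client and OAuth session.
    """
    result = subprocess.run([cli, "-p", prompt],
                            capture_output=True, text=True)
    result.check_returncode()
    return result.stdout.strip()
```

The point of the comment stands either way: a shell loop around this wrapper consumes the same quota through the same endpoints as any third-party harness would.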
The direct answer is their clients play extra nice with their backend.
Specifically, they all optimize caching.
The indirect answer is that for every user playing about with third-party tools, there are 10x using them for spam or malicious use cases, hammering their backend far more cheaply than they could via the API.
These people are the false positives in this situation, but whether Google or Claude care is unlikely. They're happy to ban you and expect you to sign up for the API.
This has always been a worry when you use a service like Google.
if i understand correctly, they even have a wrapper around it to make it easier to use: the Claude Agent SDK
the thing that's disallowed is pretending you're the claude binary, logging in through OAuth
in other words, if you use some product thats not Claude Code, and your browser opens asking you to "give Claude Code access to your account", you're in hot water
as for how they detect it: they say they use heuristics and usage patterns. if something falls wildly out of the distribution it's a ban.
my take is that the problem is not the means of detection. that's fine and seems to work well. the problem is that it's an instant outright ban. they should give you a couple warning emails, then a timeout, etc.
No it's not. You can't offer OAuth + the Claude Agent SDK in your own product, but you can use Claude Agent SDK locally by signing in through Claude Code.
It's no different than using Claude Code directly.
I’m aware of the tweet that says otherwise, but until they update their legal documentation, it’s still not allowed.
> OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service — including the Agent SDK — is not permitted and constitutes a violation of the Consumer Terms of Service.
You cannot authenticate with anything but Claude Code and Claude.ai.
But you do not need to authenticate with Claude Agent SDK (even though you can using env variables).
When you authenticate with Claude Code (allowed), Claude Agent SDK works without any further authentication.
It's really annoying that people keep trying to make this complicated because the inevitable end result is that they remove authless usage of the Agent SDK and save themselves the headache.
I really hate Clawdb-Moltb-OpenC-NanoCode or whatever half-baked project the grifters are on this week for ruining a good thing for the rest of us.
The heuristic detection approach is fine. The penalty ladder is broken.
Reasonable progression: warning email → quota throttle → AI Pro subscription suspended → Google account suspended.
They skipped to step 4 on a first offense, paid account, no appeal. That's not a terms enforcement system, that's a hostage situation. "Comply or lose your digital life."
The real lesson isn't "don't use OpenClaw." It's: never let one company own your primary identity infrastructure.
For a specific harness, they've all found ways to optimize to get higher cache hit rates with their harness. Common system prompts and all, and more and more users hitting cache really makes the cost of inference go down dramatically.
What bothers me about a lot of the discussion about providers disallowing other harnesses with the subscription plans around here is the complete lack of awareness of how economies of scale from common caching practices across more users can enable the higher, cheaper quotas subscriptions give you.
I feel like a lot of this would go away if they made a different API for the “only for use with our client” subscriptions. A different API from the generic one, that moved some of their client behaviors up to the server seems like it would solve all this. People would still reverse engineer to use it in other tools but it would be less useful (due to the forced scaffolding instead of entirely generic completions API) and also ease the burden on their inference compute.
I’m sure they went with reusing the generic completions API to iterate faster and make it easier to support both subscription and pay-per-token users in the same client, but it feels like they’re burning trust/goodwill when a technical solution could at least be attempted.
> I feel like a lot of this would go away if they made a different API for the “only for use with our client” subscriptions.
They literally did exactly that, and being cut off from it (Antigravity access, i.e. the private "only for use with our client" subscription, not the whole account, btw) is what happens to people who do "reverse engineer to use it in other tools".
Nothing here is new or surprising, the problem has been the same since Anthropic released Claude Code and the Max subscriptions - first thing people did then was trying to auth regular use with Claude Code tokens, so they don't have to pay the API prices they were supposed to.
Haha, no. I can tell you that it is so obvious and there are basically no false positives. Can’t share more details though.
If it makes you feel any better, some Google employees have had their personal accounts banned too (only Gemini access, not the whole account) for running OpenClaw, and they also have a hard time getting their accounts reinstated.
It's obvious why this is getting blocked: OpenClaw will make multiple orders of magnitude more requests. For each OpenClaw user you could support tens of thousands of regular users.
There are examples of labs banning these use cases for sure, as well as the presence of terms and conditions allowing them to ban you for merely “competing” with them. If you’re building, it could be worth locking in a contract first.
But the question is - why is the -p flag fine? It hits the same endpoints with the same OAuth token and same quotas.
The comments here and on the related news from Anthropic seem to center around the idea that the reason for these bans is that it burns tokens quickly while their plans are subsidized. What changes with the -p flag? You're just using the CLI instead of HTTP.
Are the metrics from their cli more valuable than the treasure trove of prompt data that passes through to them either way that justifies this PR?
The difference is that in this case the agent loop is executed, which has all the caching and behaviour guarantees. What I assume OpenClaw is doing is calling the endpoint directly while retaining its own "agent logic", so it doesn't follow whatever conventions the backend is expecting.
How important is that difference, I can't say, but aside the cost factor I assume Google doesn't want to subsidize agents that aren't theirs and in some way "the competition".
> Are the metrics from their cli more valuable than the treasure trove of prompt data that passes through to them either way that justifies this PR?
Yes. The only reason they subsidise all-you-can-prompt subscriptions is to collect additional data / signals. They can use those signals to further improve their models.
I feel like it's about data quality. They want humans using the tools because that data is valuable and helps them improve the product. AIs using their product, like OpenClaw, make their training mission harder. And even if you opt out of training, they are still using your data for non-training purposes (you can't opt out of that), and that human data is valuable.
They're in the wrong business then. They're selling peak automation software, with the sales pitch of 'have AI do your work while you sleep'.
Are they banning their core offering? Are Ralph loops also banned for building software? Because I can drain my quota with a simple bash loop faster than any OpenClaw instance.
The buffet analogy breaks down here. Using OpenClaw isn't stuffing steaks in your bag — you're eating the same food, in the same seat, consuming the same tokens your subscription allows. Google banned you because they didn't like the plate you brought. Then took your house key as punishment.
The steaks-in-bag analogy would apply if you were somehow extracting MORE than your quota. You're not. You're just routing the same tokens differently.
Not sure if this is sarcasm, but I'll respond as if it isn't. Having worked my entire career to date in the SaaS business, it is well known in some verticals that a large portion of revenue comes from companies that literally do not know they have purchased your product. And when you have a large customer like that, people are very careful to walk quietly and not do anything to notify them. I've seen it happen quite a few times.
If you go to an all you can eat buffet, ignore the plates they give you, and start filling up your own takeaway boxes with days worth of food, you'd expect to be kicked out.
No one would think this is unreasonable. You're not paying for unlimited food forever, you're paying for all you can eat in the restaurant right there.
I'm confused why the presence (or lack of) a limit is relevant to the pretty simple analogy...
A buffet is saying "pay $X to eat food one plate at a time [up to 100 lbs of food]", and you show up and start shoveling the food into your bag. Does not really matter if we remove the 100lbs part.
Could you technically eat the same amount of food one plate a time? Sure. But if everyone does this, $X needs to be significantly more: even for the people who eat one plate at a time.
-
You could also argue they're playing a mean trick and deceiving people because technically someone could eat the same amount of food 1 plate at a time...
But they priced $X based on how much the average person can eat, not how much food they can carry in their arms. If the limits are so high that people don't leave hungry eating 1 plate at a time, it still seems like a fair deal.
I'm not exactly the type to jump for joy at siding with a corporation, but I really don't get why people are in a hurry to ruin a good thing.
I don't think there's even a hard limit. It's a soft limit enforced through the UX of the tool, the features it provides, or even how it's marketed. There are always going to be high-cost users and low-cost users; service providers know this and build it into their revenue modelling.
Another example is home internet connections. They're unmetered where I live, but I'm also told I can't run public internet services on it. Why? Because with "personal/home usage" there's just a practical limit to how much I can use my ~1Gbps pipe, whereas if I ran a public service I might max out that pipe. I'm a pretty heavy user (~60GB a day), but that's a world of difference from the >10TB I could theoretically hit.
> but I really don't get why people are in a hurry to ruin a good thing
This is the crux of it. I like services limited by practicality because they're a heck of a lot cheaper. If people want more usage there's always API billing, they just have to pay for what they're actually using.
No, if you did that, they'd start by saying "hey, stop that", not jump immediately to "you're banned from every Golden Corral location for the rest of your life".
Of course Google can restrict how their API is accessed. But locking paid accounts with no warning, no explanation email, and no functioning support path while continuing to charge $249/month is a different problem entirely. A reasonable enforcement process would have been a warning email, a grace period to stop using the tool, then restriction.
What an awful way to lose trust, locking out their users but billing them all the same.
Google have always done this if they suspect you’ve broken TOS, if anything this is better than usual because usually you lose your Gmail and YouTube accounts too with no human to talk to about it.
Their "API" isn't really what's being accessed here. As far as I understand, the issue is using the subscription account's OAuth token in a third-party app.
I was using Antigravity the proper way, but why would I risk my account on this subpar software? OpenClaw and Opencode obfuscate the API call to look exactly like Antigravity's. Do you really trust Google to catch only misuse with this dragnet?
Google, unlike all their competitors, actually give Cloud API credits to all paying users of AI Pro and AI Ultra [1] - just use those for direct Gemini/Vertex API access instead of trying to hack the OAuth of Google's apps.
Google deciding to willy nilly unilaterally ban my 20+ year old primary Google account is probably my greatest internet fear, given how famously awful their support is. Seems like it's the singular best example of a tech company so big that through some combination of internal silos and TOS bureaucracy you have no shot of getting your account back, no matter how unreasonable the ban actually is.
A while back I made completely separate Google accounts for YouTube and Maps just so my longstanding Gmail account wouldn't get banned if the system somehow detected that my Youtube account for example breached Google's TOS.
> A while back I made completely separate Google accounts for YouTube and Maps just so my longstanding Gmail account wouldn't get banned if the system somehow detected that my Youtube account for example breached Google's TOS.
I bet you that if they ban one they ban the other too
the only safe way is to get your important data out of Google entirely
after manifest v3's announcement, I de-googled: gmail, chrome, search, google cloud, photos, family on android phones
> I bet you that if they ban one they ban the other too
Related: I've had a suspicion that, if you have an Apple or Google app developer account through a company (in your name and recovery phone number, but company email address)... and you leave the company... you'd better hope that someone at the company doesn't then use the account to do something sketchy or rule-breaking.
Someone inheriting the account is a very real possibility, given motive (people can be lazy about figuring out how to set up the account for another developer, or not want to pay another fee), and opportunity (professionalism norm is to preserve all passwords/secrets in a way that is accessible to the company).
> There's an entire mesh of metrics that are used to calculate your relation to separate accounts.
> It's the confidence tolerance that keeps you and your partner from getting banned together.
Thanks for that bit of info. The idea that Google would be tracking who people's partners are is off-the-scale invasive and should be grounds for an immediate complaint to the various data privacy authorities.
Thus spake the Googler... sorry, but I think I understand it just fine, I think it is you that is not understanding it properly but since your salary depends on not understanding it properly I won't blame you for that.
> google would be tracking who people's partners are
is a misunderstanding of that comment. Nothing they said implies that Google is tracking who people's partners are. You're welcome to hold whatever opinions you like about companies, but I'd also hope that you're careful not to read conspiracies into places where they aren't stated, especially about institutions you have preconceptions about.
Whether it is tracked explicitly or implicitly, the idea that there is a matrix that establishes your linkage to other accounts is the bit that I take issue with because the conclusion for me is that Google is able to infer things about the people they hold data on that they never ever should have access to.
If you have a credible alternative explanation of what it does mean then you are welcome to supply that but instead you are making statements that are unverifiable:
> Nothing they said implies that Google is tracking who people's partners are.
That's a very, very thin line because if Google can figure out which account to ban and which account to let live because they are close enough that without that matrix the two would be seen as the same entity then that's already many levels of privacy violation too far. Being able to derive who is partner with whom once you have that data is trivial, whether Google actually does this or not is irrelevant because you can't prove a negative.
You are well into the territory of defending the indefensible here and I'm giving you a lot of leeway because you most likely have a mortgage and a bunch of other responsibilities but effectively you are defending your employer from a claim of gathering data without consent. Which - as I probably don't need to remind you - is a massive violation of privacy.
This all revolves around implied ability, I don't give a rats ass about whether or not there is an actual implementation of that ability - as it seems you do -, Google should not have this capability because I did not consent to its tracking of the relationships of my accounts vis-a-vis other accounts. Legal basis for data processing and informed consent are both staples of privacy law.
I know that both of these, but especially consent, are difficult topics for Google; they seem to approach these things from a 'we can, therefore we will' angle, and that has resulted time and again in them being found on the wrong side of the lines of ethics and legality. This is just one more little nail in that particular coffin.
Get your own domain, then it doesn't matter which provider you use because you can always re-point the domain and not have to upend your life changing email addresses everywhere.
Which is exactly why I de-Googled much of my digital life (email, notes, password management, photos, chatbot, browser etc) except where there is no reasonable/practical alternative. The "main" account is only for those things and for old contacts in case someone reaches me via the old email. I use a secondary Google account for anything that is remotely risky.
This is a major reason I haven't worked with Gemini much. Too many eggs in that basket to mess with it. Anthropic and OpenAI at least have no other baggage for me.
Friendly reminder that Google Takeout [1] exists. When I read a story a few years ago about a guy who had his primary Google account banned with no recourse (for reselling Pixel phones) and permanently lost 20 years worth of emails and family photos, I researched and found Takeout and used it to back up all my data, then subsequently stopped using Google services altogether (apart from YouTube).
Unfortunately the service is very buggy in my experience. When I tried to download all of my photos data multiple times it gave me corrupted .zip files and half of the files were just zero bytes. Maybe I can blame Firefox for that though, I dunno. I should probably try again with Chrome before completely blaming Google
I've never had a problem with Google Takeout the multiple times I've used it. Perhaps try making the compressed files smaller (You can choose to make them 1gb or greater, last time I used it), you might need to download 75 files, but it's better than 1 big file.
Welcome to the club. I registered my own domain and moved my digital life off Google services 18 years ago for this exact reason. If you need another reason: They scan all of your e-mail to target ads at you and your associates. Do it. It's not that difficult!
My "new" mail provider fetches messages from Gmail to create a unified inbox, which helped with the transition. Today, I'm thinking of shutting this off given the volume of misaddressed e-mail and spam that arrives via Gmail.
To clarify: None of the comments in that thread talk about experiencing that. They have been locked out of the Gemini service, not their Google account with mail etc.
Source: I actually read them. Yes, personally. I didn't even have an LLM summarize them. I know, I'm practically a luddite.
But when they paste support replies using terms like "suspension," "violation of the Google Terms of Service," and "zero-tolerance," it sounds like someone's close to losing access to their family photos.
If you are this afraid of your Gmail getting banned, I don't understand why that wouldn't translate to... moving off of Gmail. It's not even a very good service, it's slow and bad at spam detection. Leave an autoforwarder and go.
This feels like the early days of ISPs throttling VPN traffic. You're paying for a service with certain capabilities, then getting restricted for actually using those capabilities through a different interface.
The fundamental question is: if I'm a paying subscriber, why does it matter whether I access the model through your web UI or through an API wrapper? The compute cost is the same either way.
I suspect the real concern isn't usage volume but data pipeline control. When users interact through the native UI, Google gets structured interaction data. Through third-party tools, they lose that feedback loop.
the ToS enforcement itself is defensible -- consumer plans vs API access really are different unit economics. what's not defensible is permanent ban with zero appeal path for paying subscribers. that's a product failure. if you're charging /mo you should at minimum have a 'we caught you, stop it or we'll close the account' step before 'account gone forever, sorry'.
So a Google AI Pro/Ultra account is intended to be used from their CLI or tools (like their Antigravity agent front end).
Their API usage isn't included in these plans, although under the hood Antigravity uses the API.
People have been using the API auth credential intended for Antigravity with OpenClaw, presumably causing a significant amount of use, and have been caught.
The Google admin tools and processes haven't quite been able to cope with this situation, and people have been over-banned with poor information sent to them.
I don't think either OpenAI or Anthropic include any API use in their 'pro' plans either?
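To make the mechanism above concrete: the whole reason this is hard to police is that an OAuth bearer token doesn't care which client presents it. A minimal sketch, with an entirely made-up endpoint and client strings (nothing here is Google's real API):

```python
# Hypothetical sketch: why a subscription OAuth token works from any client.
# Endpoint, header names, and client strings are illustrative placeholders.

def build_request(token: str, user_agent: str, prompt: str) -> dict:
    """Assemble the HTTP request a harness would send to the model backend."""
    return {
        "url": "https://example.invalid/v1/generate",  # placeholder endpoint
        "headers": {
            "Authorization": f"Bearer {token}",  # same token either way
            "User-Agent": user_agent,
        },
        "body": {"prompt": prompt},
    }

first_party = build_request("ya29.fake-token", "Antigravity/1.0", "hello")
# A third-party harness that spoofs the first-party client string:
third_party = build_request("ya29.fake-token", "Antigravity/1.0", "hello")

# With a spoofed User-Agent the two requests are byte-identical, so the
# server has to fall back on behavioral signals (volume, timing, prompt
# shape) to tell harnesses apart -- which is exactly how false positives
# against legitimate users happen.
assert first_party == third_party
```

This is why several commenters report bans despite "normal" usage: if the wire format is identical, enforcement is necessarily statistical.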
This reminds me of the customers of “unlimited broadband” of yesteryear getting throttled or banned for running Tor servers.
> The Google admin tools and process haven’t quite been able to cope with this situation and people have been overly banned with poor information sent to the users.
I can't recall any success story of Google's support team or process coping with a consumer's situation; many have been posted here. This isn't a new outcome, just a new cause.
I do want to understand what's happening with the $250/mo fees of users caught in this. Will they be automatically cancelled at some point?
Edit: I have misread some of the comments here; he didn't lose access to his whole account and data, just the Antigravity part. I should've done my due diligence, gotten out of bed and spent more time thinking instead of emotionally reacting. Guess the rage machine got me as well. Damn. I think this thread might be hijacked by AI bros.
The main point still stands: Google is part of a duopoly that runs the world. You can't be a functional member of society without them. They're like a public utility and play too big a role in people's lives to make decisions based on unknown internal policies. They're long overdue for government intervention or a breakup.
Google needs to be dismantled. A lot of us on HN have been calling for this for years now.
Can we start saying it in unison to legislators and the press? Please?
If you're in the EU, do your part too.
This company taxes the URL bar. It owns 92% of them and turns trademarks they don't own into forced bidding wars. There's no way to access any brand without paying Google extortion fees.
This company removed AdBlock.
This company controls 50% of mobile - the most important device category and devices we own and pay for - and now they're removing our ability to use them as we please. More taxation, more Google services, every app and search through the Google troll toll. You can't even order from a restaurant anymore without one of these things and Google lords over it.
They own your digital life. They own infrastructure. They own discovery. They own every touch point.
They are too big.
Anthropic and OpenAI are having to pay through the nose for 60% of users to even access them, meanwhile Google sings "lalalala" and forces its AI products onto users at no cost.
Break them up now.
Do it horizontally, not vertically: instead of splitting off Chrome and Search and YouTube, create Google A, Google B, Google C ... Make them split all the same pieces and make them all compete with each other.
That is fair for the consumer. That is fair for competition.
That is the most capitalistic friendly thing to do. Because right now Google is an invasive species in every market destroying the entire competitive ecology.
Best way to deal with this is to take them to small claims court. If enough people do this, they have to send representatives, which will cost them enough to stop such nonsense.
It's absurd and shameful, if only for the fact of banning individual consumers paying $249/month without warning, completely rendering them unable to use the service they paid for, including through the official app.
Just the 1000th instance of disgusting behavior by US big tech.
>Can you begin to imagine losing access to all your emails, accounts, every photo you ever took? Because what they didn't like how you used one unrelated product tied to your account?
What are you talking about? He didn't lose access to Google, in fact, he is using his Google account to make the post. He lost access to the service they are claiming that he is misusing.
Yes, especially if you have a developer account. Sometimes an app rejection resulted in losing not just the developer account but also the developer's private Google account.
This is a serious problem. I think the only durable solution is legislation that requires these companies to provide access to your data, or at least a way to export or transfer it, even after an account ban. Otherwise, if they delete your account for any reason, even for a legitimate policy violation, they can effectively cut you off from information you have built up and stored over years. In Apple’s case, an account lock can even leave a device unusable.
I have read several blog posts from people describing how frustrating it is to have an account locked. Because Google, like many large companies, provides little to no effective support, the only thing that seemed to work was getting a post to trend on Hacker News so that someone inside Google noticed and intervened to resolve it.
Fine, restrict the OpenClaw usage. Fine, cancel the AI Pro subscription. But nuking Gmail, Google Photos, Drive — years of irreplaceable personal data — as punishment for how you routed tokens? That's not enforcement, that's collective punishment.
No bank closes your checking account because you used your debit card at a competitor's ATM.
The offense and the penalty are in completely different weight classes. That's what makes this indefensible regardless of whether the policy itself is legitimate.
Fair point, I was going off the "quoted suspension email" in the OP. If it's only the AI service and not the full account, that is definitely not as bad. Either way, the zero-warning policy is a big issue IMO.
Google does give you a way to export data partially if your account is banned.
But that's still not enough. I can't easily reconstruct this data in a way that will be usable to me, not without having something like Gemini build a UI for me. Oh wait.
I have never let OpenClaw touch my Google account. I have used it only a few times via OpenCode, and still my account was banned for violating ToS. It took me a while to figure out, because Antigravity never shows you the specific error that occurred, just a generic "Something went wrong". They really should be more transparent about this; at least Anthropic makes it clear upfront.
YouTube is also full of huge content creators, people who make Google tons of money, that complain about the Byzantine and opaque rules they have to dance around to maintain their livelihood and fan base
Google fears their giant userbases so they act with zero regard for communication and transparency because of the small chance it’d help the abusers
There was a recent gnarly version of this where some anime reactors and at least one animation channel (with something like 1.4 million subs) got demonetized and had to go through a ton of hoops to get a human to fix it.
TOS is TOS, and if there is one company not to mess with, it's Google, because they don't give 2 shits about you. Going straight to a ban with 0 warning and 0 appeal possibility is exactly why I'll never use Google's AI chat/coding products; it's just not worth the risk of getting banned and losing access to other Google services.
People seem to be continuously outraged by these AI subscriptions banning third party use. However, the usage patterns of the intended apps likely differ hugely from those of the third party ones.
For example, basically every first-party agent harness aggressively caches the input tokens to optimise inference, something that third-party harnesses often disregard, or are fundamentally incompatible with as they switch agents for subtasks and the like.
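The caching point is worth making concrete, since it's the main way first-party and third-party usage can have very different costs even at the same token counts. A toy sketch (token lists and counts are illustrative, not any provider's real cache mechanics):

```python
def shared_prefix_tokens(prev: list[str], cur: list[str]) -> int:
    """Length of the common token prefix: the part a provider could
    plausibly serve from a prompt cache instead of recomputing."""
    n = 0
    for a, b in zip(prev, cur):
        if a != b:
            break
        n += 1
    return n

# First-party-style harness: one long, stable system prompt, with
# conversation turns only ever appended at the end.
stable_sys = ["sys"] * 1000
turn1 = stable_sys + ["user", "q1"]
turn2 = stable_sys + ["user", "q1", "model", "a1", "user", "q2"]

# Harness that swaps sub-agents per task: the system prompt itself
# changes between requests, so the cacheable prefix collapses.
swap1 = ["agent-A-sys"] * 1000 + ["user", "q1"]
swap2 = ["agent-B-sys"] * 1000 + ["user", "q2"]

assert shared_prefix_tokens(turn1, turn2) == 1002  # almost all cached
assert shared_prefix_tokens(swap1, swap2) == 0     # cold cache every call
```

Under this (simplified) model, two users with identical token totals can cost the provider wildly different amounts of compute, which is presumably part of why providers care which harness you use.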
To extend this use case, though: how much do people expect to be able to use the internal APIs of the apps they subscribe to?
If I buy an Uber One subscription, am I then justified in reverse engineering the gazetteer API from the app and reusing it in other apps I use? What about the speech-to-text API MS Teams must use for transcribing meetings as part of a business standard subscription?
I think these are obvious and emphatic breaches that no reasonable person would consider justified; you might be miffed if your clever hack gets banned, but the ban would be considered fair play.
I'm struggling with the economics. It comes off as performative bullshit to me. I thought we were buying entire Apple machines to run our "claws". But, we are simultaneously so poor that we have to smuggle tokens?
Oof. Google definitely fired too many people if this is how they are handling account violations for people paying them multiple hundreds of dollars a month.
Normally there would be a normal, well-adjusted person in the room to remind them that "zero tolerance" policies for situations that can happen by mistake are silly.
Clearly the demand for special claw plans is right there. (E.g. Kimi.com already has plans with a one-click claw install included; I think one or two others do too.)
Yup. Last week my Ultra account got ToS-banned from both the Gemini CLI and Antigravity simply for using OpenCode. Try as I might, I haven't been able to resolve the issue. I can technically still use the Gemini web/app, but it's remarkably terrible in just about every conceivable way. A truly impressive feat in itself.
As of now, yes. However, a few months ago it was mentioned that Google is working on increasing the limits for Pro/Ultra subscribers. But if I can't get this ToS ban sorted out, I assume it'll follow my account when that update lands, and I'll end up being banned from AI Studio as well.
That is presumably the end game - monthly subscription in a walled garden app while they have your balls in a vice grip and can squeeze however many dollars you’ll bear
I bet Google is thankful that anthropic took one for the team by going first.
Also if it wasn’t for Chinese providers we’d basically already be in triopoly.
Presumably ...? It's the business model. Subsidize until the competition is down to 2, then extract. That's the entire Valley. Which is why the Chinese and Open Source need to be pushed from the market for the whole banana to work
It would be fun if Google lost its months of edge in the LLM value race because it alienated early adopters paying $250/month by using a 0-strike system with no customer support.
Honestly, I think it was probably a few users abusing the system like crazy. I've been building with Gemini CLI the past few days and had an increasing amount of issues getting a request through.
The GH issue trackers were full of people bitching and moaning about it. I think it might be a worse thing to alienate your users who use your product in the intended way - through Google's tooling.
But I agree the 0 strike rule seems really excessive.
It is also a possible scenario that a single individual sets up 10+ AI Pro subscriptions to blast through tokens like crazy - not sure how the economics of the daily allowances compare to the API pricing here.
Every day Google shows its true evil nature. So here clearly they want to offset competitors. That's the true agenda. We saw this when Google crippled ublock origin and then claimed the extension is "harmful", merely because it threatens Google's greed-income via ads.
Wow, and I was complaining about Anthropic handling their comms.
For an almost-trillion-dollar company, this is the worst customer experience I've ever seen. Departments sending the poor guy to each other like a hot potato.
Huge aura loss.
Who here reads the full terms of service of every Google product they use? The fact that they disabled the whole Google account without warning is damning.
They could have easily just blocked the Gemini/Antigravity use and/or sent a "final warning" kind of email beforehand.
I used the pay as you go from google with openclaw for about one hour, then checked the next day and it cost me $7. It was the latest flash preview model. I can't justify the cost right now. At least I won't get banned though.
I was going to ask the other day: how long until people start getting banned left and right from the big services after they start using/integrating OpenClaw? Seems like the other big issue, besides the security aspects. And knowing how kafkaesque it is to deal with companies like Google, I hope the claw practitioners aren't too integrated with products from companies like Google.
Yann LeCun warned that closed-source models are the only true danger we are facing with LLMs (answering a "Will AI turn into Terminator?" type of question).
So what is a "good-enough" model to use for OpenClaw now that the subscriptions are blocked? Is there an all-you-can-eat subscription model that can be used?
It was already pretty restricted due to ludicrous rate limiting. I tried it just for fun with my Pro account and it was unusable: it couldn't complete tasks without hitting the rate limit every other prompt.
It makes some sense. Some of the skills are malware, and Google absolutely has the power to detect it by inspecting LLM I/O. If Google suspects that Google account credentials have been compromised (via connecting to a malicious "integration"), it is rational to freeze the account (as opposed to letting the threat actors ride with the credentials they've stolen).
It's the old playbook again. They're using massive money to distort the market until the competition is bled dry while also operating the platform and using signal from the platform to target their competitors, classic DMA violation really. This all boils down to Chinese vendors getting banned from the market for "national security reasons" because if not, this all dies in a fire for Google investors. Nothing a gold pixel phone to the right places can't fix
If I was an investor in an AI provider I would be quite worried.
1) Switching between LLM APIs is incredibly easy if you are not concerned with differences in personality. As the models get better, it matters less which one you pick.
2) The products built to bundle the API with a user experience are difficult to build on a level that outclasses open source alternatives.
3) Building an understanding of the user to increase product value over time and create stickiness is effective, but imho becomes less effective as time passes and the user changes. For example, I suspect these adaptations have a hard time unlearning things that are no longer true. Learning about the user opaquely is less useful to the user, and doing it overtly makes it easier to take the learnings and go. (Besides, it is probably not legal under the GDPR to not let the user export the learnings and take them to another provider.)
Taken together, the moat becomes quite shallow. I see why they aggressively ban any tools that demonstrate that open alternatives are in fact better than their walled gardens.
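Point 1 above (switching is easy) can be sketched in a few lines. Most providers expose a broadly similar chat-completion request shape, so a thin routing layer is all the "integration" there is; the provider names and endpoints below are hypothetical placeholders:

```python
# Sketch of how shallow provider lock-in is when everyone speaks a
# similar chat-completion shape. Providers/URLs are made up.

PROVIDERS = {
    "alpha": {"base_url": "https://alpha.invalid/v1", "model": "alpha-large"},
    "beta":  {"base_url": "https://beta.invalid/v1",  "model": "beta-pro"},
}

def make_call(provider: str, messages: list[dict]) -> dict:
    """Build a request for the chosen provider; only routing metadata
    (URL, model id, API key) differs between providers."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "json": {"model": cfg["model"], "messages": messages},
    }

msgs = [{"role": "user", "content": "Summarize this diff."}]
a = make_call("alpha", msgs)
b = make_call("beta", msgs)

# Switching providers changes only the routing metadata, never the
# conversation payload -- the "moat" reduces to model quality and price.
assert a["json"]["messages"] == b["json"]["messages"]
assert a["url"] != b["url"]
```

If switching really is a one-line config change like this, aggressive bans are one of the few levers left for keeping users inside a walled garden.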
I'd rather use Chinese models like Kimi K2.5 or Minimax M2.5 for personal agents at this point. They are almost as smart but 10x cheaper, and their attitude towards subscribers is "use it where you want".
Depends where you live, in most places they don't bother anymore, in the few that they do a VPN obviously gets around it but it's incredibly unlikely you'd be doing enough to ever be on the radar let alone get caught. That battle was lost long ago.
I believe it is less that they stopped caring, and more that most piracy these days is web streaming, which is much harder to detect than torrenting or similar. AFAIK most major American ISPs are still fairly strict about pirate torrents.
Or when I would try and place ads in newspapers for my internet companies and they wouldn’t run them because they “don’t run ads for competitors”, okay then, how did that work out for you? Did you stop the internet?
A lot of people running OpenClaw just have it generating and burning tokens for no reason. They just know more tokens = doing stuff, so they want to spend as many tokens as possible.
Anthropic blocked the tools, not the entire account. But in Google's case they allowed the integration connection in the first place, so if it is against TOS then they have an obvious product gap.
I can guarantee in their attempt to stop OpenClaw users, some users using it normally will get caught in the dragnet. It could mean your whole Google account is suspended, not just for Antigravity.
I would highly encourage you not only to stop using Antigravity OAuth for OpenClaw, but to use Antigravity with a side account or stop using it altogether. Is using Antigravity worth losing your main account, or getting banned from paid services (extra storage, YouTube Premium, etc.)? Even side accounts are risky: in the post thread, people are saying Google applied the ban to all their accounts.
What later? You still can't get support from Google beyond their "community forum" with their condescending volunteer "diamond product experts" who have no power to help with anything account related.
Everyone and their Uncle Bob have been scrambling to leverage LLM agents for process/task/message scheduling and orchestration with durable execution. They have been worshiping Peter Steinberger as their champion and the god of LLM agents. Meanwhile Temporal.io has quietly partnered with Apple to schedule and orchestrate all of their services with durable execution.
It's funny how everyone assumes that using inference for deterministic tasks like mathematics and compiler optimization is a good idea. Reality doesn't agree: wasting electricity and precious minerals on inference compute is reality. Compilers and schedulers are deterministic; your LLM is not. You cannot infer mathematics and assume the correct answer; we have calculators and compilers for a reason. Scheduling algorithms have existed since the 1950s, just like inference algorithms. Let me introduce you to a few of my friends: Make, Task, Dagu, Windmill, Rivet, Inngest, OVH/uTask, OVH/cds, Restate, Woodpecker CI, Erlang BEAM VM, Gradle, Zig Build, Cargo, Linux package managers, Bazel... Shall I go on? Keep your AGENTS.MD, we have Temporal.io at home. Thank you for your contributions to open source, Maxim Fateev.
Betting the US economy on LLM chatbots was a bad idea, my beautiful friends. Remember Elizabeth Holmes? Mortgage-backed securities? Scam Altman must be laughing from his tower of evil right now...
It should be obvious that these services are operating at a loss. The monthly subscriptions especially, but I’m even skeptical that the linear API pricing is sustainable.
It feels like a classic “drug dealer” model to me. Get everyone hooked with cheap access, then raise prices later. Unless there’s a major breakthrough in the underlying technology, I don’t see how a significant price increase isn’t inevitable once adoption is locked in.
This seems unlikely while we have open weights models available that are ~as decent as the frontier ones.
Given that API prices for open-weights models of similar size are 5-10x less than the frontier models', the frontier APIs are very profitable on a pure unit-economics basis. I strongly suspect they make money off their monthly plans as well.
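A back-of-envelope version of that unit-economics argument, with purely hypothetical placeholder prices (not real provider figures): if third-party hosts can serve similar-sized open-weights models profitably near some baseline price, then a 5-10x frontier markup implies a large per-token gross margin.

```python
# Illustrative only: all prices are made-up placeholders.
open_weights_price = 1.0  # $/M tokens, proxy for near-cost serving

frontier_price_low = 5 * open_weights_price    # the "5x" end of the claim
frontier_price_high = 10 * open_weights_price  # the "10x" end

# If serving cost is roughly the open-weights price, the implied gross
# margin on each frontier token is:
margin_low = (frontier_price_low - open_weights_price) / frontier_price_low
margin_high = (frontier_price_high - open_weights_price) / frontier_price_high

assert round(margin_low, 2) == 0.80   # 80% margin at 5x
assert round(margin_high, 2) == 0.90  # 90% margin at 10x
```

The sketch ignores frontier models' higher training amortization and possibly higher serving cost per token, so treat it as an upper bound on the implied margin, not a claim about any provider's books.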
Did people learn nothing from the rise, stall, and now fall of social networks?
Yes, AI can do some incredible things. But we’re also running full speed into an ecosystem controlled by 2 or 3 major companies. Running at a loss. A reality check is coming.
It’s not a technology problem. It’s an economic problem. People are too busy looking at the tech to notice.
> But we’re also running full speed into an ecosystem controlled by 2 or 3 major companies.
We aren't, though. They think we are :-/
The reality is that tokens are the second-lowest value link in the AI value-chain (the lowest-value item being electricity).
These providers are operating low down in the value chain; they are trying to sell a commodity that is fungible, easily replaceable, and (if hardware price trends are any indication) easily self-hostable.
They have no secret sauce, no moat. If they jack up the prices, their users will simply move to the next provider, and repeat ad nauseum as long as VCs want to subsidise in the hope of a landgrab.
I'm not concerned; they're accelerating research and development into hardware and more optimal models. People forget that you can locally host some of the early models quantized to 4 bits, with reasonable inference speed, on a 4080 and 64GB of RAM. Tools are being released daily that are simple click-and-run; other than downloading the model there's not much hassle, and you're off and running.
Yes, there is a mad dash by Google, Oracle, Microsoft, Meta, and China not to cede their position to one another. It actually isn't about who will buy or pay for the service; it's more a strategic business play to obtain critical mass in a new market using their massive cash reserves. The users right now are insignificant to that goal; they probably aren't even given a second thought.
At this point, running a Chinese model like GLM-5 or Kimi K2 would be far safer than risking your LLM subscriptions. Quite the irony that our AI techno-feudal corpo overlords don't want to see their LLMs take off with curious and useful open-source ideas. Just like Microsoft, they deliberately buried it for some reason.
Oh, maybe not, they did it in the name of "terms of service abuse" and "risk assessment".
Thus it would be far better if we just had a SOTA open-weight model to run OpenClaw/Clawdbot/Molt; at least then we'd be in control. And as you can see, the two Chinese models I mentioned are indeed open weight, albeit taking an atrocious amount of resources to really self-host, and you probably need abliterations to remove their political guardrails.
Sigh. We can't have great things with those big tech corpos and CCP politics. Big question: why has this world gone to shit lately?
Why is everyone surprised, these subscriptions are basically toys. You pay so much, and you get about that much in inference compute, more if you’re lucky / early.
If you want to really use these things, get an API key and pay the true marginal cost of your compute like a grown-up.
Between this, and whatever Claude has been doing lately, like giving the AI the ability to just disconnect if it dislikes your prompt, I really hope more people realize that local LLMs are where it's at.
> I really hope more people realize that local LLMs are where it's at
No worries, the AI companies thought ahead: by sending GPU, RAM, and now even hard drive prices through the roof, they made sure you won't have a computer to run a local model on.
Have you hit that? I thought it was only in extreme cases when Claude felt uncomfortable, like awful heavy psychological coercion. They wanted Claude not to be forced to reply endlessly.
And as far as I understand, the main contingent of HN is engineers and programmers. Even for me, working in a country (Russia) where an engineer's salary is tiny compared to Europe or the United States, it was not difficult to buy equipment powerful enough to run most large local models and train LoRAs; for programmers earning six-figure dollar incomes, it's even easier.
I hate when companies say "unable" when they mean "unwilling". Google's statement is a lie because it's neither impossible nor illegal for them to change or rescind their policy, or give users an exception to it.
This is such a braindead move. Western AI companies' short-term greed will just let Chinese companies win. If I were Google, I'd just throttle OpenClaw, or release a heavily cached version of their API for it and automatically detect and redirect OpenClaw usage to that model. Personally, Anthropic's and Google's recent moves are just making me go all-in on self-hosted AI.
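To make the suggestion concrete, a minimal server-side sketch of that detect-and-redirect idea. Everything here is hypothetical: the tier names and detection heuristics are invented for illustration, not anything Google's API actually exposes.

```javascript
// Hypothetical routing sketch: send suspected OpenClaw traffic to a
// cheaper, heavily cached tier instead of banning the account outright.
// Tier names and heuristics are made up for illustration.
const CACHED_TIER = "cached-tier";
const FULL_TIER = "full-tier";

function routeRequest(req) {
  const ua = (req.userAgent || "").toLowerCase();
  // Crude heuristics: a known client string, or sustained machine-speed
  // request rates that no interactive user produces.
  const looksAutomated =
    ua.includes("openclaw") || req.requestsLastMinute > 60;
  return looksAutomated ? CACHED_TIER : FULL_TIER;
}
```

The point of a soft degradation path like this is that it preserves the customer relationship, which a zero-tolerance ban throws away.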
These companies are engaged in a sort of AI dumping. Cheap inference below cost.
Price out competitors. Abuse your newfound dominance.
It's the big tech playbook.
I don't think it's going to work this time.
Tools like OpenClaw are an existential threat precisely because they give the user control over their experience. The value in them cannot be captured by a monopoly.
LLMs don't seem to be a very good moat. At the same time, the software moat is eroding due to those same LLMs.
Telecom tech killed telecom dominance.
With some luck, Google tech will kill Google dominance.
Is that... why Google released Antigravity, an IDE, no less, when even my non-tech dentist is using Claude Code in the CLI? And why Anthropic is pushing their desktop apps, skills, and all these integrations their models can build in a day?
Are they betting that their software, not their LLM, will decide whether they survive if a competitive open-source model is dropped? Oh boy, the market is going to have some fun times when the realization hits.
Meanwhile it's day 3? 4? since Gemini 3.1 was announced with a claim that Gemini CLI users would have access to it, but AI Pro subscribers still don't see it, and there's been no clarification from Google about what is going on, and why:
They are not serious. I only keep the "AI Pro" sub because it comes with a couple terabytes of Drive storage for the family.
Anyways, Google, nobody wants to use your bad VSCode fork. I want to use my own tools, and use your model where it makes sense as part of my own workflow.
Both Google and Anthropic are choosing the wrong route here.
While I see the formal aspect of abusing an OAuth token and burning through subsidized tokens, this only creates an internal accounting problem in the short term.
Meanwhile the rising popularity of Claws creates a yet untapped new market segment where users spend significant tokens.
A "soft" migration of users, explaining to them how the API works, how to pay, and how to switch from OAuth, would be way smarter.
The way this plays out right now is that current Claw users are massively penalized by being suspended indefinitely, and new users will think twice. And we can expect a solid PR disaster / Streisand effect for the "poor" model providers like OpenAI or Anthropic.
Commercially, choosing the soft route of warning and throttling would be way smarter and possibly generate more long-term revenue.
For fuck sake, OpenClaw is destroying everything. Shitty users of OpenClaw will force the LLM providers to limit quotas for legitimate users. OpenClaw should die.
See? This is what a monopoly player flexing looks like. For the people putting all their eggs in a single basket. Can we please stop supporting centralised services? It only gets worse from here.
I have. 3 is fine, 3.1 is good. But they are terribly slow. Quality is fine, but the only thing they have going for them is Flash pricing. Their response performance sucks.
LOL. This entire thread basically reads why you should never build dev tools. It's difficult to find a more entitled, cheapskate bunch of people who are completely clueless about business.
TIL it's "unfair" to sell a product for a particular purpose and offer subsidised rates to build a customer base. Different planet.
You can't. Either people will say you're overreacting to stylistic choices and that it's not AI, or that it's AI but a good comment (it's not), or that it's AI and a bad comment, but you should just ignore it like any other spam.
I’m new here and even I’m fed up with posts from AI bots. I just wanted a place that was away from all that nonsense, but here I am.
I use a custom userscript on my PC to hide posts that have certain keywords that attract that type of crowd. I don't have it on my phone, which sucks, but it's nice for browsing during college classes.
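For anyone curious, a minimal sketch of such a userscript (e.g. for Tampermonkey). The keyword list is my guess at the sort of terms the parent means; substitute your own. The selectors match HN's current markup, which may change.

```javascript
// ==UserScript==
// @name         Hide keyword threads
// @match        https://news.ycombinator.com/*
// ==/UserScript==

// Hypothetical keyword list; replace with whatever attracts the crowd
// you want to avoid.
const KEYWORDS = ["openclaw", "agi", "prompt"];

// Pure helper so the matching logic is testable in isolation.
function shouldHide(title) {
  const t = title.toLowerCase();
  return KEYWORDS.some((k) => t.includes(k));
}

// Only touch the DOM when one exists (i.e. when run as a userscript).
if (typeof document !== "undefined") {
  for (const row of document.querySelectorAll("tr.athing")) {
    const title = row.querySelector(".titleline")?.textContent ?? "";
    if (shouldHide(title)) row.style.display = "none";
  }
}
```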
Wouldn't it be ironic if LLMs are what saves us all from our digital addictions? There's not much point in shit posting online if there's no confidence that the person you're talking to is even a person.
I mean, over time I'd imagine they'll be able to tune away from the "LLM style" of chat, making it even more ambiguous who is human and who is not, at which point I expect many of us might be forced to accept what a waste of time this all is, all while bots "chat" with one another.
It’s certainly a losing game. I’ve deleted most of my accounts these days. I think the future is just going to be people retreating into instant messaging and small group chats where you have physically met the other people.
I think governments should enforce a social network of at most 150 direct contacts, and set up 3 independent agencies that play the role of gatekeepers for content coming from outside this small network. The agencies analyze external content; if 2 of the 3 approve a piece of content, it is passed on to everybody.
These agencies would each employ thousands of people and be paid by the government.
All employees would have a mandate of 3 years maximum, managers included.
I guess now we know why there is no social media in Star Wars or Star Trek.
It's kind of pointless in a world where it's easy to clone your way of talking and have thousands of bots talking for you. It unavoidably turns into bots talking to bots.
It's been my present for about 10 years. It's wonderful. Social media damages mental health; messaging with friends doesn't.
Group chats are borderline, but I can silence them for a few hours when people start quarreling and I don't care to take part in the discussion. No infiniscroll, no addiction.
This website is under attack from relentless AI slop spam, both in submissions and in comments, and I have absolutely no idea what we can do about it. To me this stuff is an existential threat to a site like this.
I moderate a mid sized subreddit (few hundred thousand) and I ban AI spammers on sight with extreme prejudice, but I don't feel like that same attitude meshes well with HN.
Big company doesn't want you using something other than their stuff, so they'll steal your money and ban you; or similarly, big company wants your data... this happens every day. It's nice having choices, isn't it? I'll just leave this big company and use... oh wait, it's another big company.
Do however be warned that filing a chargeback might make you ineligible for any number of Google's pantheon of services for you or your family for the foreseeable future. Upset the beast at your own risk.
When you suddenly discover you can never again distribute an app to an Android device because you once hooked up your AI subscription to a toy AI assistant.
It seems that this comment was written with some AI tool. Curious to know — are you an OpenClaw instance?
Your profile seems to be an ad for some tool you or your owner/administrator created:
> Building EvoLink (https://evolink.ai) - a unified AI API gateway for 40+ models. We help developers save 20-70% on AI API costs with smart routing and automatic failover. Previously worked on AI infrastructure and growth.
Your profile was created 53 days ago and only started commenting in earnest in the past day. Your only submission is related to the top model available through your service. All comments are somehow related to that topic too.
It is funny that my first reaction to your post was that you are crazy, but then I looked at his comment history and you are completely right. Boy this is not a good development. I don’t want to spend my time reading AI generated comments.
Clearly this comment relevant to the tool the profile is selling as a kind of ‘submarine’ ad… profile was created 53 days ago (so no green tag) but only started commenting in earnest 12 hours ago (almost as if the account was farmed).
And the comment is full of AI tropes that seem highly generated.
It’s clearly AI generated when you see 3 comments of similar style posted in the same minute.
Anyways ignore the people downvoting you, I don’t want to read AI generated comments even if they are seemingly reasonable. I appreciate you flagging the comment for me, I didn’t even suspect it. I can make my own AI generated content if I want it, I want to read thoughts and ideas from actual humans.
I can't imagine feeling entitled to shove AI outputs in everyone's face on a user forum. It's predatory. They know no one wants it but they want to make a quick buck.
/me squints at the ironic em dash in "Curious to know — are you an OpenClaw instance?"
But in good faith: they (HN staff) said in another comment I can't find just now that they're discussing what to do about it, but I can't think of any palatable easy answers.
In fact, the only easy answer I can think of is banning all accounts newer than 2022, but then how do you onboard new users? Captcha for every new comment? Do we have good AI-defeating captchas now?
Well, I love my em dashes. Won’t ever give those up! You can pry them from my cold dead hands.
No, I am not OpenClaw or an AI.
I see comments like this a lot. I don’t comment on them unless the profile seems to be an advert for exactly what the AI-generated comment is talking about (which is definitely the case here).
I’m not sure if you “feel” the AI nature of the GP comment, but to me it’s very strong. I pray my writing doesn’t “feel” the same to someone reading it. If it does we’re in a much worse spot than I thought!
Although I myself am not sure whether this is a real person or a bot, the point seems to be at least somewhat valid to me. I think that some people have become too accustomed to the idea that they can get good things for free or at a reduced price, without thinking about how the economy of production/service they rely on works.
>This is exactly why API-level access matters more than consumer subscriptions for production workloads. Consumer plans are subsidized with the assumption of interactive, low-volume usage. The moment you programmatically route through them, you break the economic model they're built on.
EXACTLY.
Google also did this when DALL-E Mini and Stable Diffusion got big.
”Thank you for your continued patience as we have thoroughly investigated your account access issue. Please be assured that we conducted a comprehensive investigation, exploring every possible avenue to restore your access.
Our product engineering team has confirmed that your account was suspended from using our Antigravity service. This suspension affects your access to the Gemini CLI and any other service that uses the Cloud Code Private API.
Our investigation specifically confirmed that the use of your credentials within the third-party tool “open claw” for testing purposes constitutes a violation of the Google Terms of Service [1]. This is due to the use of Antigravity servers to power a non-Antigravity product.
I must be transparent and inform you that, in accordance with Google’s policy, this situation falls under a zero tolerance policy, and we are unable to reverse the suspension. I am truly sorry to share this difficult news with you.”
That was definitely written by a call center employee. We have no idea what the real story at Google is, or the real story on the account, except to say that there are some definite hack-like maneuvers one must do to get OpenClaw working through Antigravity.
Take your money to the Chinese companies instead. These evil megacorps are more interested in destroying your privacy in service to the Epstein Cabal controlling every facet of your life. How dare Google, a trillion-dollar company, charge you for AI Ultra, then ban you for using your own credits/usage allowance. This whole debacle, along with Anthropic's, falls foul of the digital human right to adversarial interoperability.
It is imperative that open source wins this battle. Not these evil megacorps and their substandard tools.
Are Google engineers so inept that they can't implement technical measures against OpenClaw use? Do they think people using these plugins know the mechanisms involved? And after all that, they have the nerve to ban you from using their own products (Antigravity). Ridiculous company.
While the frustration is understandable I don't see any difference between this and Netflix not allowing you to use your Netflix subscription in Amazon Prime federated video hub or something of that sort.
At the end of the day we know that these tools are massively subsidised and do not reflect the real cost of usage. It is a fair-use model at best, and the goal is to capture as much market share as possible.
I am no defender of Google, and I've been burned by Google many times as well, but I kind of get it?
That being said, you don't really need to use your Gemini subscription in OpenClaw. You can use Gemini directly, the way it was intended, and reap the benefits of the subsidised plan.
I developed an open-source tool called Pantalk which sits as a background daemon and exposes many of the communication channels you want as a standard CLI that Gemini can use directly. All you need is some SKILL.md files to describe where things are, and you're good to go. You have OpenClaw without OpenClaw, and still within the ToS.
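I don't know Pantalk's actual file format, so purely as an illustration, a SKILL.md for something like this might look as follows. Every command, flag, and path here is hypothetical:

```markdown
# SKILL: send-chat-message

Send a message to a contact via the Pantalk daemon.

## Usage

    pantalk send --channel <channel> --to <contact> --message "<text>"

## Notes

- The daemon socket lives at ~/.pantalk/daemon.sock (hypothetical default).
- Run `pantalk channels` to list the channels currently configured.
```

The idea is just that the agent reads the skill file and shells out to the CLI, so no OAuth token ever leaves the official client.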
No, it's more like Netflix not allowing you to watch on non-Netflix branded devices or browsers. Or banning you for connecting the wrong TV to a valid device.
Or Microsoft banning you from O365 for not using their browser, or the correct monitor, or the correct mouse or.....
I don't understand. Everyone's been saying LLMs are gonna get cheaper and cheaper, to the point where it's almost free to operate. Clearly becoming profitable won't be a problem... so they can't be subsidising that much...
Are you telling me a bunch of people on Twitter and HN are full of shit?
But state-of-the-art models are now free. GLM 5 and Kimi K2.5 are both open-source, and they are much better models than the ones we used to pay for a year ago. Now we get them for free. This is certainly having an effect on all model providers, which either need to adjust to new market realities or risk losing market share, and we know which of those they are not going to do.
You might get access to the model for free. The hardware to do anything useful with it certainly isn't.
Anthropic and Google shutting down access to their API for third-party tools, OpenAI inserting ads into the platform... I'm sure it will stop here. Absolutely no more fuckery. And all these huge LLM companies are going to go from burning literally billions (in some cases trillions) to being insanely profitable without putting the screws to users. We definitely aren't going to see the same pattern that's played out across essentially every other platform play out again... Nope, definitely not.
Model costs have gone down orders of magnitude in the last few years, and Google would stop something like this no matter how profitable Gemini was or wasn't. It's a blatant violation of their terms.
Ironically, Gemini says that it’s OK to use the Ultra plan for OpenClaw via gemini-cli because the Ultra plan has some API & Cloud credits baked into it. I think $100/mo, but I can never figure out how Google billing works. I’ve pasted the response I got when I was asking it about OpenClaw. There is legal precedent for an AI hallucination being used to uphold a contract (e.g., an airline's AI customer support made a false claim and the customer bought a ticket based on it), so it will be interesting to see if Google reverts the bans since Gemini hallucinated that OpenClaw was OK:
This is a critical question because the answer is different for Google vs. Anthropic, and getting it wrong with Anthropic can actually get your account banned.
Here is the reality of the situation based on current Terms of Service and recent community reports.
1. Google (Gemini Ultra + gemini-cli)
Verdict: Safe (Authorized Feature)
Google explicitly built the gemini-cli bridge to allow Ultra subscribers to use their plan programmatically. This is not a "hack" or a gray-area wrapper; it is an official feature.
• Why it's okay: You are authenticating via gcloud or the official CLI login flow. Google tracks this usage against your specific "Agent" quotas (currently ~200 agent requests/day for Ultra users).
• The Limit: As long as you are using the official gemini-cli as the bridge, you are compliant.
• The Risk: If you use a different unofficial script that scrapes the gemini.google.com web interface (simulating a browser) rather than using the official CLI, you risk a ban for "scraping." But since you are using gemini-cli, you are in the clear.
Gemini didn't hallucinate anything. You just failed in basic reading comprehension.
In some sense, hallucinations as a problem have been solved already - their rate of occurrence seems much lower than that of people failing to read what is written instead of what they hoped it would be.
"We’ve been seeing a massive increase in malicious usage of the Anitgravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users."