$1k per day, 50 work weeks, 5 days a week → $250k a year. That is, to be worth it, the AI should work as well as an engineer who costs a company $250k. Between taxes, social security, and the cost of office space, that engineer would be paid, say, $170-180k a year, like an average-level senior software engineer in the US.
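The napkin math above can be sketched in a few lines; all figures are the thread's assumptions, not market data:

```python
# Napkin math: $1k/day in tokens over a working year.
TOKEN_SPEND_PER_DAY = 1_000   # dollars per human engineer per day
DAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 50           # ~52 calendar weeks minus holidays

annual_token_spend = TOKEN_SPEND_PER_DAY * DAYS_PER_WEEK * WEEKS_PER_YEAR
print(annual_token_spend)  # 250000: the break-even fully-loaded engineer cost
```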
This is not an outrageous amount of money, if the productivity is there. More likely the AI would work like two $90k junior engineers, but without a need to pay for a vacation, office space, social security, etc. If the productivity ends up higher than this, it's pure profit; I suppose this is their bet.
The human engineer would be like a tech lead guiding a team of juniors, only designing plans and checking results above the level of the code proper, except in exceptional cases, like when a human engineer would look at the assembly code a compiler has produced.
This does sound exaggeratedly optimistic now, but does not sound crazy.
It’s a $90k engineer that sometimes acts like a vandal, who never has thoughts like “this seems to be a bad way to go. Let me ask the boss” or “you know, I was thinking. Shouldn’t we try to extract this code into a reusable component?” The worst developers I’ve worked with have better instincts for what’s valuable. I wish it would stop with “the simplest way to resolve this is X little shortcut” -> boom.
It basically stumbles around generating tokens within the bounds (usually) of your prompt, and rarely stops to think. Goal is token generation, baby. Not careful evaluation. I have to keep forcing it to stop creating magic inline strings and rather use constants or config, even though those instructions are all over my Claude.md and I’m using the top model. It loves to take shortcuts that save GPU but cost me time and money to wrestle back to rational. “These issues weren’t created by me in this chat right now so I’ll ignore them and ship it.” No, fix all the bugs. That’s the job.
Still, I love it. I can hand code the bits I want to, let it fly with the bits I don’t. I can try something new in a separate CLI tab while others are spinning. Cost to experiment drops massively.
Claude Code has those "thoughts" you say it never has. In plan mode, it isn't uncommon that it'll ask you: do you want to do this the quick and simple way, or would you prefer to "extract this code into a reusable component"? It also will back out and say "Actually, this is getting messy, 'boss', what do you think?"
I could just be lucky that I work in a field with a thorough specification and numerous reference implementations.
I agree that Claude does this stuff. I also think the Chinese menus of options it provides are weak in their imagination, which means that for thoroughly specified problem spaces with reference implementations you're in good shape, but if you want to come up with a novel system, experience is required, otherwise you will end up in design hell. I think the danger is in juniors thinking the Chinese menu of options provided are "good" options in the first place. Simply because they are coherent does not mean they are good, and the combinations of "a little of this, a little of that" game of tradeoffs during design is lost.
I recently asked Claude to make some kind of simple data structure and it responded with something like "You already have an abstraction very similar to this in SourceCodeAbc.cpp line 123. It would be trivial to refactor this class to be more generic. Should I?" I was pretty blown away. It was like a first glimpse of an LLM play-acting as someone more senior and thoughtful than the usual "cocaine-fueled intern."
Yeah this is just trading largely known & controllable labour management risks for some fun new unknown software ones.
You can negotiate with your human engineers over comp; you may not be able to negotiate with as much power against Anthropic etc. (or stop them if they start to change their services for the worse).
If this is successful, a supply shock will kick in (because of energy/GPU constraints) and we could easily see a 2-4x price increase, maybe more if the market will accept it. That's before taking into account current VC subsidies.
That is not a lot of competition, though. And you need to assume that, like in other industries, mergers and acquisitions will happen over time, which will put you in an increasingly worse position.
Google, OpenAI, Anthropic, Meta, Amazon, Alibaba (Qwen), Nvidia, Mistral, xAI - and likely more of the Chinese labs but I don't know much about their size.
I guess what I was getting at is who owns the compute that runs those models. Mistral, for example, lists Microsoft and Google as subprocessors (1). Anthropic is (was?) running on GCP and AWS.
So, we have multiple providers, but for how long? They're all competing for the same hardware and the same energy, and it will naturally converge into an oligopoly. So, if competition doesn't set the floor, what does?
Local models? If you're not running the best model as fast as you can, then you'll be outpaced by someone that does.
If there are low switching costs, and if there are multiple highly capable models, and if the hardware is openly purchasable (all of these are true), then the price will converge to a reasonable cash flow return on GPUs deployed net of operating expenses of running these data centers.
If they start showing much higher returns on assets, then one of the many infra providers just builds a data center, fills it with GPUs, and rents it out at 5% lower price. This is the market mechanism.
Looking at who owns the compute is barking up the wrong tree, because it has little moat. Maybe GPU manufacturers would be a better place to look, but then the argument is that you're beholden to NVIDIA's pricing to the hyperscalers. There's some truth to that, but you already see that market position eroding because of TPUs and belatedly AMD. All of these giant companies are looking to degrade Jensen's moat, and they're starting to succeed.
Is the argument here that somehow all the hyperscalers are going to merge to one and there will be only one supplier of compute? How do you defend the idea that nobody else could get compute?
The starting point was that competition would prevent AI providers from doubling the price of tokens, because there's lots of models running on lots of providers.
This is in the context of the article, that paints a world where it would be unreasonable not to spend $250k per head per year in tokens.
My argument is that the current situation is temporary, and _if_ LLMs provide that much value, then the market will consolidate into a handful of providers that'll be mostly free to dictate their prices.
> If they start showing much higher returns on assets, then one of the many infra providers just builds a data center, fills it with GPUs, and rents it out at 5% lower price. This is the market mechanism.
Except when the GPUs, memory, and power are in short supply. When demand is higher than supply, prices go up, and whoever has the deeper pockets, usually the bigger and more established party, wins.
A tri-opoly can still provide competitive pressure. The Chinese models aren’t terrible either. Kimi K2.5 is pretty capable, although noticeably behind Claude Opus. But its existence still helps. The existence of a better product doesn’t require you to purchase it at any price.
because in all of this change we can’t be bothered to imagine a world where people have money without jobs? Do you think billionaires are just going to want to stop making more money?
The best bull case for us reaching luxury gay space communism is that people not working and having near infinite capital to buy whatever they want to enjoy is the only way the billionaires get to see their pot growing forever.
>because in all of this change we can’t be bothered to imagine a world where people have money without jobs?
We can imagine it all we want, and a free pony too. What we'll get is most of humanity not needed and living on the edges of society, plus some 10-20 percent still "useful".
>The best bull case for us reaching luxury gay space communism is that people not working and having near infinite capital to buy whatever they want to enjoy is the only way the billionaires get to see their pot growing forever.
Billionaires are about power. The money was just a means for that, if they can get it in another way, they will use that. People "not working and having near infinite capital to buy whatever they want to enjoy" is the last thing they'll want.
I mean it's kind of hard to say because almost all software I use is free, a lot of it is FOSS. The software I bought outright in the last couple of years was well priced because of competition (ex: Affinity Designer 2 for $63 - the new version is free although I stick with v2).
Maybe not worth using, then. If your product costs 5x and delivers 0.2x of a competing product in the adjacent product class (traditional server/VPS), why use it?
What I'm understanding is that you have no idea what high availability means; you only know that you need it and the cloud has it. Great marketing by the cloud.
All the big clouds are still in market share acquisition mode. Give it about 5 more years, when they're all in market consolidation and extraction mode.
>> $170-180k a year, like an average-level senior software engineer in the US.
I hear things like this all the time, but outside of a few major centers it's just not the norm. And no companies are spending anything like $1k / month on remote work environments.
I recognize that not everyone makes big tech money, but that's somewhere between entry and mid level anywhere that can conceivably be called big tech.
You might want to review the commenting guidelines, notably the first few.
Like you mention, big tech gravitates to a handful of tech hubs across the US, which drives up salaries for every company in the area. Which is more data suggesting something is wrong with BLS's numbers.
My expectation (based on anecdotal/personal data - if you have better data I'd love to see it) is that the median developer in a tech hub makes more than an entry level big tech kid. So unless there's either an error, omission, or unexpected inclusion in the BLS data, the data implies that nearly all of big tech, plus ~50% of developers in tech hubs, accounts for about 10% of the workforce.
That doesn't make sense. What does seem plausible is that this data doesn't account for bonuses, options, RSUs, and the like, which would put big tech entry level jobs right around the median for developers. I'm not certain if that's the case, but it at least passes the sniff test.
Salary data like this has to be interpreted by someone knowledgeable. Very often it becomes skewed or invalid in various ways. Source: close relative is a compensation analyst.
I think that is easy to understand for a lot of people, but I will spell it out.
This looks like AI-company marketing, something along the lines of "1+1" or "buy 3 for 2" deals.
Money you don't spend on tokens is the only money saved, period.
With employees you have to pay them anyway; you can't just say "these requirements make no sense, park it for two days until I get them right".
You would have to be damn sure that you are doing the right thing to burn $1k a day on tokens.
With humans I can see many reasons why you would pay anyway, and it is on you to provide sensible requirements to be built and to make use of employees' time.
OK, but who is saying that to the llm? Another llm?
We got feedback in this thread from someone who supposedly knows Rust about common anti-patterns, and someone from the company came back with "yeah, that's a problem, we'll have agents fix it."[0]
Agents are obviously still too stupid to have the metacognition needed to decide when to refactor, even at $1,000 per day per person. So we still need the butts in seats. So we're back at the idea of centaurs. Then you have to make the case that paying an AI more than a programmer is worth it.[1]
[0] which has been my exact experience with multi-agent code bases I've burned money on.
[1] which in my experience isn't when you know how to edit text and send API requests from your text editor.
That nobody wants to actually do it is already a problem, but the basically true thing is that somebody has to pay those $90k junior engineers for a couple of years to turn them into senior engineers.
There seem to be plenty of people willing to pay the AI to do that junior-engineer-level work, so wouldn't it make sense to defect and just wait until it has gained enough experience to do the senior engineer work?
Assume current prices are heavily subsidised (VC money) and there is a supply shock (because we don't have enough GPUs/energy). If that leads to double the price, that means $500k/year, and if we see a 4x price increase, that's $1M/year.
Suddenly, it starts to look precarious. That would be my concern anyway.
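The price-shock scenarios above are straightforward multiples of the $250k/year baseline; the 2-4x multipliers are the commenter's guesses about supply constraints, not a forecast:

```python
# Hypothetical price-shock scenarios on the $250k/year token baseline.
BASELINE_ANNUAL_SPEND = 250_000  # $1k/day * 5 days * 50 weeks

# 2x and 4x are the assumed post-subsidy multipliers from the comment above.
scenarios = {m: BASELINE_ANNUAL_SPEND * m for m in (2, 4)}
print(scenarios)  # {2: 500000, 4: 1000000}
```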
I took it as a napkin rounding of 365/7, because that's the floor you pay an employee regardless of vacation time (in places like my country you'd add an extra month plus the prorated amount based on how many vacation days the employee has). So it's not that people work 50 weeks per year; it's just a reasonable approximation of what it costs the hiring company.
This is a simplification to make the calculation more straightforward. But a typical US workplace honors about 11 to 13 federal holidays. I assume that an AI does not need a vacation, but can't work 2 days straight autonomously when its human handlers are enjoying a weekend.
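The 50-week figure roughly checks out against the holiday count mentioned above (using the lower bound of 11 federal holidays):

```python
# Sanity check of the 50-week napkin figure.
calendar_weeks = 365 / 7        # ~52.1 weeks in a year
holiday_weeks = 11 / 5          # 11 federal holidays ~= 2.2 five-day work weeks
effective_weeks = calendar_weeks - holiday_weeks
print(round(effective_weeks, 1))  # ~49.9, i.e. roughly the 50 weeks used above
```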
There are no human handlers. From the opening paragraph (emphasis mine):
> We built a Software Factory: non-interactive development where specs + scenarios drive agents that write code, run harnesses, and converge without human review.
[Edit]
I don't know why I'm being downvoted for quoting the linked article. I didn't say it was a good idea.
"If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement" - how exactly is that a weaker statement?
My read of it was "by today", aka cumulative. But you're right that it can also be read as "just today". The latter is an absurdly strong statement, I agree.
I would love to see setups where $1000/day is productive right now.
I am one of the most pro vibe-coding^H^H^H^H engineering people I know, and I am like "one Claude Code Max ($200/mo) and one Codex ($200/mo) will keep you super stressed out keeping them busy" (at least before the new generation of models, I would hit limits on one but never both; my human inefficiency in tech-leading these AIs was the limit).
Only for a few days, but going from $200-400/mo to $1,000/day productively seems like a huge stretch.
Also, the token burn should be compared to single-tasking: when agent swarms move faster, I need to come back to each task sooner, which slows down the multi-tasking that allowed me to use a full 20x Max subscription. So the overall usage, once that is taken into account, is smaller.