I'm not worried about AI job loss (davidoks.blog)
318 points by ezekg 1 day ago | 523 comments




I build automation tools for bookkeepers and accountants. The thing I keep seeing firsthand is that automation doesn't eliminate the job - it eliminates the boring part of the job, and then the job description shifts.

Before our tools: a bookkeeper spends 80% of their time on data entry and transaction categorisation, 20% on actually thinking about the numbers. After: those ratios flip. The bookkeeper is still there, still needed, but now they're doing the part that actually requires judgment.

The catch nobody talks about is the transition period. The people who were really good at the mechanical part (fast data entry, memorised category codes) suddenly find their competitive advantage has evaporated. And the people who were good at the thinking part but slow at data entry are suddenly the most valuable people in the room. That's a real disruption for real humans even if the total number of jobs stays roughly the same.

I think the "AI won't take your job" framing misses this nuance. It's not about headcount. It's about which specific skills get devalued and how quickly people can retool. In accounting at least, the answer is "slowly" because the profession moves at glacial speed.


You’re describing task reallocation, but the bigger second-order effect is where the firm can now source the remaining human judgment.

AI reduces the penalty for weak domain context. Once the work is packaged around an AI-driven pipeline, the “thinking part” becomes far easier to offshore because:

- Training time drops as you’re not teaching the whole craft, you’re teaching exception-handling around an AI-driven pipeline.

- Quality becomes more auditable because outputs can be checked with automated review layers.

- Communication overhead shrinks with fewer back-and-forth cycles when AI pre-fills and structures the work.

- Labor arbitrage expands: the limiting factor stops being “can we find someone locally who knows our messy process” and becomes “who is the cheapest person who can supervise and resolve exceptions.”

So yeah, the jobs mostly remain and some people become more valuable. But the clearing price for that labor moves toward the global minimum faster than it used to.

The impact won’t show up as “no jobs”; it is already showing up as stagnant or declining Western salaries, thinner career ladders, and more of the value captured by the firms that own the workflows rather than the people doing the work.


Isn't that what a well-run company does when creating a process? Bureaucracy and process reduce the penalty of weak domain context and are in fact designed to obviate that need. They "diffuse" the domain knowledge into a set of specifications, documents, and processes. AI may be able to accelerate that, or subsume the bureaucracy. But since when has the limiting factor been "finding someone locally who knows the process"? Once you document a process, the power of computing means you can outsource any of it you want, no? Again, AI may subsume all the back-office or bureaucratic work. Perhaps it will totally restructure the way humans organize labor, run companies, and coordinate. But that system will have to select for a different set of skills than "filling out n forms quickly and accurately." The wage stagnation, etc., predates AI and might be due to other structural factors.

> Isn't that what a well run company does

How many of those do you see around?


I bet we're about to see a lot of 10-person $100M+ ARR companies emerge. That's a scale where teams can be tight and excel.

If you can build that with AI, then 9 people with AI can probably wipe out that company, only to be wiped out by 8 people with AI…and so on.

Not necessarily. That's the old "I made Twitter in a weekend" joke.

Being able to technically replicate a product doesn't mean your company will be successful. What makes a company successful are sales forces, internal processes, and luck. All three are extremely difficult to replicate, because sales forces are based on a human network you have to build, internal processes are either organic or kept secret, and luck can only be provoked by staying alive long enough, which means you need money.


massively underrated comment detected.

when.

people have been saying that since 2022.

when and how. hmm??

show your work.

or is this just more hype being spewed...


I think something around that scale (say maybe 20 employees, but definitely not hundreds) was possible even before LLMs got popular, but the people involved needed to be talented and focused. I'm not sure if AI will really change that though.

In 2014, Facebook acquired WhatsApp for $19B and they had 55 employees

Correction: 55 grossly underpaid employees!

"it is already showing up as stagnant or declining Western salaries"

Real median salaries and real median wages have both been rising for the last couple of years. Maybe they would have risen faster if there were no AI, but I don't think you can say there has been a discernible impact yet.


I don't think that's true, if you trust Gemini at least... "In 2025, U.S. software engineer pay is barely keeping pace with inflation, with median compensation growing 2.67% year-over-year compared to 2.7% inflation. While salaries held steady or increased during the 2021-2023 inflationary period, many professionals reported that real purchasing power remained stagnant or dipped, making it difficult to get ahead."

I’d like a source for that. College graduates are no longer at an employment advantage compared to their uneducated peers. The average age of a new hire increased by 2 years over the past 4 years.

Young people in the west have definitely seen declining salaries, if only by virtue of the fact that they’re not being offered at all.

https://www.clevelandfed.org/publications/economic-commentar...

https://www.reveliolabs.com/news/social/65-and-still-clockin...


Real wage growth has been positive for the last 3 years:

https://data.bls.gov/timeseries/CES0500000013?output_view=pc...


The salary compression point is the one I find hardest to push back on. Accounting BPO to the Philippines was already growing fast pre-AI - firms like TOA Global were scaling rapidly. With AI reducing the training overhead for domain-specific work, that arbitrage gets even easier. The remaining barrier is local regulatory knowledge (UK tax law, Companies House requirements, etc.) but even that erodes when you're mostly supervising exceptions rather than doing the full work yourself.

What do you mean when you say “AI is reducing training overhead”?

Basically, "You don't have to understand how this works, just push this button when x, or flip this switch when y."

I don't think the impact will be quite as large as some are saying here, but it won't be minimal either.


> AI reduces the penalty for weak domain context

This is why (personal experience) I am seeing a lot of full-stack jobs compared to specialized backend, FE, and ops roles. AI does 90% of the job of a senior engineer (or so the CEOs believe), so companies now want someone who can do the full "100" rather than just supply the missing "10". So that remaining 90 is now coming from an amalgamation of other responsibilities.


In my mind we will have a bimodal set of skills in software development: likely something like a product engineer (an engineer who is also a product manager: this person conceptualizes features and systemically considers the software as a whole in terms of ergonomics, business sense, and the delight in building something used by others) and something like a deep-in-the-weeds engineer (an engineer who innovates on the margins of high performance, tuning, and deep improvements to libraries and other things of that nature). The former will need to be skilled at rapid context switching, keeping the full model of the customer journey in their mind while executing with enough technical rigor to prevent inefficiencies. The latter will need to be skilled at diving extremely deeply into nuanced subjects like fine-tuning the garbage collector, compiler, network performance, or internal parts of the DOM or OS or similar.

I would expect a lot of product engineering to specialize further into domains like healthtech, fintech, adtech, etc., while the in-the-weeds engineers will be platform, infra, and embedded-systems folks.


Can I take a guess that you believe you will speciate into the former?

Funny you ignored the third-order effect, where the efficiency really does enable lower cost.

Which is never realized. Price points don't decrease. Profit taking increases.

I would imagine, in this example, that the fact that you put in the numbers yourself gives you a mental map of where the numbers are and how they relate to each other, that having AI do it for you doesn't give you.

You could stare at a large sheet of numbers for a long time, and perhaps never get the kind of context you gained by entering them.

Additionally, if there was a mistake, it may not be as noticeable.


> automation tools ... eliminates the boring part of the job, and then the job description shifts.

But the job had better take fewer people, or the automation is not justified.

There's also a tradeoff between automation flexibility and cost. If you need an LLM for each transaction, your costs will be much higher than if some simple CRUD server does it.

Here's a nice example from a more physical business - sandwich making.

Start with the Nala Sandwich Bot.[1] This is a single robot arm emulating a human making sandwiches. Humans have to do all the prep, and all the cleaning. It's slow, maybe one sandwich per minute. If they have any commercial installations, they're not showing them. This is cool, but ineffective.

Next is a Raptor/JLS robotic sandwich assembly line.[2] This is a dozen robots and many conveyors assembling sandwiches. It's reasonably fast, at 100 sandwiches per minute. This system could be reconfigured to make a variety of sandwich-format food products, but it would take a fair amount of downtime and adjustment. Not new robots, just different tooling. Everything is stainless steel or food grade plastic, so it can be routinely hosed down with hot soapy water. This is modern automation. Quite practical and in wide use.

Finally, there's the Weber automated sandwich line.[3] Now this is classic single-purpose automation, like 1950s Detroit engine lines. There are barely any robots at all; it's all special purpose hardware. You get 600 or more sandwiches per minute. Not only is everything stainless or food-grade plastic, it has a built-in self cleaning system so it can clean itself. Staff is minimal. But changing to a product with a slightly different form factor requires major modifications and skills not normally present in the plant. Only useful if you have a market for several hundred identical sandwiches per minute.

These three examples show why automation hasn't taken over. To get the most economical production, you need extreme product standardization. Sometimes you can get this. There are food plants which turn out Oreos or Twinkies in vast quantities at low cost with consistent quality. But if you want product variations, productivity goes way, way down.

[1] https://nalarobotics.com/sandwich.html

[2] https://www.youtube.com/watch?v=_YdWBEJMFyE

[3] https://www.youtube.com/watch?v=tRUfdBEpFJg


> But the job had better take fewer people, or the automation is not justified.

Not necessarily. Automation may also just result in higher quality output because it eliminates mistakes (less the case with "AI" automation though) and frees up time for the humans to actually quality control. This might require the people on average to be more skilled though.

Even if it only results in higher output volume, you often see demand grow as well because the price goes down.


There's a classic book on this, "Chapters on Machinery and Labor" (1926). [1]

They show three cases of what happened when a process was mechanized.

The "good case" was the Linotype. Typesetting became cheaper and the number of works printed went up, so printers did better.

The "medium case" was glassblowing of bottles. Bottle making was a skilled trade, with about five people working as a practiced team to make bottles. Once bottle-making was mechanized, there was no longer a need for such teams. But bottles became cheaper, so there were still a lot of bottlemakers. But they were lower paid, because tending a bottle-making machine is not a high skill job.

The "bad case" was the stone planer. The big application for planed stone was door and window lintels for brick buildings. This had been done by lots of big guys with hammers and chisels. Steam powered stone planers replaced them. Because lintels are a minor part of buildings, this didn't cause more buildings to be built, so employment in stone planing went way down.

Those are still the three basic cases. If the market size is limited by a non-price factor, higher productivity makes wages go down.

[1] https://www.jstor.org/stable/1885817?seq=1


I think this is probably the trajectory for software development because, while people claim there is potentially unlimited demand, that demand really only materializes at rock-bottom prices.

In many cases you can saturate the market. The stone planer example is an early case. Cheaper lintels don't mean more windows, because they are a minor part of the cost. Cheaper doorknobs do not generate demand for more doorknobs, because the market size is the number of doors. Cheap potatoes, soy, corn, and cheese have saturated their markets - people can only eat so much.

This might also be true of web analytics. At some point, more data will not improve profitability.


The Nala bot reminded me of the guys at Felipe's in Cambridge MA. When they're building burritos during dinner rush, you'd swear to god that multiple different ingredients were following a ballistic trajectory toward the tortilla at any given time. If there was a salsa radar it would show multiple inbounds like the Russkies were finally nuking us.

ETA: It didn't remind me of this because the robot is good at what it does. It reminded me of just how far away from human capabilities SOTA robotic systems are.


That’s one use case that is very hard to automate right now, yes.

> But the job had better take fewer people, or the automation is not justified.

In many cases, this is a fallacy.

Much like programming, there is often essentially an infinite amount of (in this case) bookkeeping tasks that need to be done. The folks employed to do them work on the top X number of them. By removing a lot of the scut work, second order tasks can be done (like verification, clarification, etc.) or can be done more thoroughly.

Source: Me. I have worked waaaay too much on cleaning up the innards of less-than-perfect accounting processes.


The firm simply assumes that if the top X was sufficient in the past, it is still sufficient now.

From the perspective of modern management, there's really no reason to keep people if you can automate them away.


> The firm simply assumes that if the top X was sufficient in the past, it is still sufficient now.

> From the perspective of modern management, there's really no reason to keep people if you can automate them away.

These are examples of how bad management thinks or, at best, how management at dying companies thinks.

Frankly, this take on “modern management” is absurd reductionist thinking.

Just a few points about how managers in successful companies think:

- Good employees are hard to find. You don’t let good people go just because you can. Retraining a good employee from a redundant role into a needed role is often cheaper than trying to hire a new person.

- That said, in any sufficiently large organization, there is usually dead weight that can be cut. AI will be a bright light that exposes the least valuable employees, imho.

- There is a difference between threshold levels of compliance (e.g., docs that have to be filed for legal reasons) and optimal functioning. In accounting, a good team will pay for themselves many times if they have the time to work on the right things (e.g., identifying fraud and waste, streamlining purchasing processes, negotiating payment terms, etc.). Businesses that optimize for making money rather than getting a random VP their next promotion via cost-cutting will embrace the enhanced capability.

Yes, AI will bring about significant changes to how we work.

Yes, there will be some turmoil as the labor market adjusts (which it will).

No, AI will not lead to a labor doomsday scenario.


> - Good employees are hard to find. You don’t let good people go just because you can. Retraining a good employee from a redundant role into a needed role is often cheaper than trying to hire a new person.

Your best employees are only "best" at a given price, though.

Part of firm behavior is to let go of their most expensive workers when they decide to tighten belts.

Unless your employees are unable to negotiate (lacking the information and leverage to be paid the market rate for their ability), your best employees will be your more expensive, senior employees.

Everything is at a certain price. Firing your best employee when you can get the job done cheaper, or make do with cheaper, is also a common and rational move.

While I agree a labour doomsday scenario is unlikely, I think an underemployment scenario is highly likely. Offshoring ended up decimating many cities and local economies, as factory foremen found new roles as burger flippers.

Nor do people retrain into new domains and roles easily. The more senior you are, the harder it is to recover into a commensurately well paying role.

AI promises to reduce the demand for the people in the prime age to earn money, in the few high paying roles that remain.

Not the apocalypse as people fear, but not that great either.


See self-checkouts at supermarkets, with staff reduced to stepping in when checkouts go wrong, or filling the shelves.

Not only do the prices increase, now we get pushed to do their jobs for free, while the chains lay off their employees.

Hence why I usually refuse to use them, even if I have to take some extra time queuing.


I have mixed feelings on these.

For a full cart, I expect a cashier or to be available.

If I have 3-5 items, I’d rather do it myself than wait.

That said, even 20-30 years ago, long before self checkout, at places like WalMart, one could wait 15-20 minutes in line. They had employees but were too cheap to have enough. They really didn’t care.

I don’t even understand how that math works. I might have kept going there if they had a few extra lowly paid cashiers around.


Well said. It’s like they think that the only thing automation is good for is cutting costs. You can keep the same staff size but increase output instead, creating more value.

"They" don't think the only thing automation is good for is cutting costs. Management thinks the only thing worth doing, at all, using any means, is cutting costs.

Well that’s clearly false, and obviously “they” refers to people that include management lol

No? You don’t only gain justification for automation by cutting costs. You can gain justification by increasing profits. You can keep the same amount of people but use them more efficiently and you create more total value. The fact you didn’t consider this worries me.

Also, the statement “show why automation hasn’t taken over” is truly hysterically wrong. Yeah, sure, no automation has taken over since the Industrial Revolution.


You can increase profits by cutting costs. It is remarkably easier to do in the short term. And even if you choose not to downsize you can drop/stagnate wages to gain from the fact everyone else is downsizing.

None of what you just said is anything I hadn’t considered, and also none of it negates anything I said.

Thank you. Having automation means process control, which means handling sources of variation for a defined standard/spec. The claims of all jobs being done by AI end up also assuming that we will end up with factories running automated assembly lines of thought.

I have been losing my mind looking at the output of LLMs and having to nail variability down.


I recently did a contract at a medium-sized business with large retail and online operations that had a CFO and several accountants / bookkeepers. You're describing a situation where that CFO only needs two or three accountants and bookkeepers to run the business and would lay off two or three people.

It IS about headcount in a lot of cases.


Fair enough - I'm probably biased because I mostly see small practices (1-3 people) where headcount can't really shrink further. In that context it's about throughput per person. But you're right that in a larger org with a CFO making staffing decisions, the efficiency gains get captured as cost savings rather than more clients served. The 5-to-3 scenario you describe is realistic and happening now.

I keep seeing that small teams or individuals are getting most of the productivity gains from new AI.

Small teams or individuals that learn to use AI well can outpace larger teams, even if the larger teams also use AI, because communication / coordination overhead grows faster than team size (see the sketch below). Tasks that before needed large teams to get done can now be done by smaller teams.

Large knowledge-work teams have lost their competitive advantage.

I see this as a business opportunity for small actors. Every large knowledge work team that doesn't quickly adapt and downsize itself, is now something you can disrupt as a small team or individual.


Or they’d keep the same number of people and increase total value output. Businesses tend to like the idea of growth more than cost cutting after all.

People don’t suddenly eat more food due to AI. There are a lot of industries with bounded total demand.

That’s true. However, I’m truly glad 70% of the population isn’t working in food production anymore; those were the bad old times.

However, growth is finite unless you also believe in immigration and debt.

Well all of that is false and tbh sounds a bit sus

I frame the shift more like this:

Systems engineering is an extremely hard computer science domain with few engineers either interested in it, or good at it.

Building dashboards is tedious and requires organizational structure to deliver on. This is the bread and butter of what agents are good at building right now. You still need organization and communication skills in your company to direct the coding agents towards the dashboard you want and need - until you hit an implementation wall and someone needs to spend time trying to understand some of the code. At least with dashboards, you can probably just start over from scratch.

It's arguably more work to prompt in English to an AI agent to assist you with hard systems problems, and the signals the agent would need to add value aren't readily available (yet?!). Plus, there's no way systems engineers would feel comfortable taking generated code at face value. So they will definitely spend the extra mental energy to read what is output.

So I don't know. I think we're going to keep marching forward, because that's what we do, but I also don't think this "vibe-coded" automated code generator phase we're in right now will ultimately last. It'll likely fall apart and the pieces we put back together will likely return us to some new kind of normal, but we'll all still need to know how to be damn good software engineers.


I understand where you're coming from, and think there is something missing in your final paragraph that I'm curious to understand. If LLMs do end up improving productivity, what would make them go away? I think automated code generators are here until something more performant supersedes them. So, what in your mind might be possibilities of that thing?

Well, I guess I no longer believe that, long term, all this code generation will make us more productive. At least not the way the fan-favorite claude-code currently does it.

I've found some power use cases with LLMs, like "explore", but everyone seems misty-eyed that these coding agents can one-shot entire features. I suspect it'll be fine until it's not, and people get burned by what is essentially trusting these black boxes to barf out entire implementations, leaving trails of code soup.

Worse is that junior engineers can say they're "more productive" but it's now at the expense of understanding what it is they just contributed.

So, sure, more productive, but in the same way that 2010s move fast and break things philosophy was, "more productive." This will all come back to bite us eventually.


Another component or view of this is that automating the rote work is "eliminating the boring parts" (I love this and have worked extensively on this) but it is also eliminating the less cognitively demanding work.

Once you have automated extensively, all of the remaining work is cognitively demanding and doing 8 hours of that work every day is exhausting.


This is exactly why I'm not that worried. I've noticed that AI is great at the parts of software engineering that I'm bad at, like implementing a new unfamiliar library, deploy pipelines, infra configuration, knowing specific technical details and standard patterns.

It's bad at the stuff I'm good at: thinking about the wider context, architecture, how to structure the code in an elegant, maintainable way, debugging complex issues, figuring out complex algorithms. I've tried using AI for those things, but it sucks at them. But I've also used it to solve configuration problems that I doubt I'd have been able to figure out on my own.


One reason why I started enjoying programming less and less was that I felt I was spending 95% of the time on the problems you described, which felt more or less the same over the years and weren't complicated, just annoying. Unfortunately or fortunately, after coding for over 15 years, for the past 4 months I've only been prompting and reading the outputted code. It never really feels like writing something would be faster than just prompting, so now I prompt 2-3 projects at the same time and play a game on the side to fill in the time while waiting for the prompts to finish. It's nice, since I'm still judged as if it's taking the time to do it manually, but if this ever becomes the norm and expectations rise, it would become horribly draining. Mentally managing the increased speed in adding complexity is very taxing for me. I no longer have periods where I deep-dive into a problem for hours or do some nice refactoring, which feels like it's massaging my brain. Now all I do is make big decisions.

This is also my experience. I am personally really happy about it. I never cared about the typing part of programming. I got into programming for the thinking about hard problems part. I now think hard more than ever. It's hard work, but it feels much more fulfilling to me.

> The bookkeeper is still there, still needed, but now they're doing the part that actually requires judgment.

The argument might be fundamentally sound, but now we're automating the part that requires judgement. So if the accountants aren't doing the mechanical part or the judgement part, where exactly is the role going? Formalised reading of an AI provided printout?

It seems quite reasonable to predict that humans just won't be able to make a living doing anything that involves screens or thinking, and we go back to manual labour as basically what humans do.


By what logic are the "manual labor" jobs available? And if you're right and they somehow are, isn't that just another way of saying humanity is enslaving itself to the machines?

Even manual labor is uncertain. Nothing in principle prevents a robot from being a mass-producible, relatively cheap, 24/7 manual worker.

We've presumably all seen the progress of humanoid robotics; they're currently far from emulating human manual dexterity, but in the last few years they've gotten pretty skilled at rapid locomotion. And robots will likely end up with a different skill profile at manual tasks than humans, simply due to being made of different materials via a more modular process. It could be a similar story to the rise of the practical skills of chatbots.

In theory we could produce a utopia for humans, automating all the bad labor. But I have little optimism left in my bones.


Let’s do some math.

He does 100 units of product per 100 units of time: 80 units of time on data entry, 20 units of time on “thinking.”

We now automate the task in such a way that the ratios flip, so data entry takes 20 units of time instead of 80. Assume the same 20 units of thinking as before. So we now use 40 units of time to produce 100 units of product.

Now assume output scales linearly with time:

At 40 units of time per task (80 total) we produce 200 units of product.

At 50 units of time per task (100 total, the same as before) we produce 250 units of product.

So you either work 40 and produce the same, or work the same and produce 250. NOT THE SAME.


I'd imagine that when the 80% of less productive time is automated, the market doesn't respond by demanding 80% more output. There's just 20% as much work, either making this a part-time job or, more likely, shrinking the workforce as the number of man-hours demanded by the market greatly reduces.

Scope will increase.

Good accounting teams will have more time and resources to do things like identify fraud, waste, duplicated processes, etc. They will also have time to streamline/optimize existing practices.

Good teams will earn many multiples of their cost in terms of savings or increased earnings.

There may be increased competition for the low-cost “just meet the legal compliance requirements” offerings, but any business that makes money and wants to make more will gladly spend more than the minimum for better service.


You’re not taking into account that a successful bookkeeper may have hired someone like a new grad to take the drudgery off of their hands and now they can just do it themselves.

> And the people who were good at the thinking part but slow at data entry are suddenly the most valuable people in the room.

No, they aren't. They are now competing with everyone - the slow thinkers, the barely-conscious thinkers, the erratic thinkers, the "unable to reach a conclusion" thinkers as well as the people quick at "data entry", with the caveat that the people quick at "data entry" are almost certainly going to be better thinkers than those that weren't quick at data entry.

IOW, you think AI isn't coming for some specific class of programmers, but you are wrong. You and the "other types" will continue this debate in the soup kitchen.


The desktop PC was the same - everyone said that it was going to wipe out jobs, when the main thing it wiped out was filing cabinets.

AI commentators seem to overlook that one of the primary functions of capitalism is to keep people in busywork: what David Graeber called Bullshit Jobs. So AI is going to automate most of the bullshit away but the bullshit employees will keep working, because there wasn’t much need for them in the first place.


That would happen if the AI were good and consistent at doing the mechanical part. Which it is, sometimes.

I've found it's better to have the bot write a program to do the mechanical part than trusting it not to have a lazy day.


I'm not very familiar with the field on a practical basis.

What parts of the job require judgement that is resistant to automation? What percentage of customers need that?

If the hours an accountant spends on a customer go from 4 per month to 1, do you reckon they can sustainably charge the same?


Why would better efficiency mean they have to charge less?

Because your competitor will double their number of customers and halve their prices, forcing you to do the same.

So then everyone would continue earning the same as before.

Both jobs are going away. Prepare.

>> The thing I keep seeing firsthand is that automation doesn't eliminate the job - it eliminates the boring part of the job, and then the job description shifts.

No, not necessarily. There are different kinds of automation.

Earlier in my career I sold and implemented enterprise automation solutions for large clients. Think document scanning, intelligent data extraction and indexing and automatic routing. The C-level buyers overwhelmingly had one goal: to reduce headcount. And that was almost always the result. Retraining redundant staff for other roles was rare. It was only done in contexts where retaining accumulated institutional knowledge was important and worth the expense.

Here's the thing though: to overcome objections from those staff, whom we had to interview to understand the processes we were automating, we told them your story: you aren't being replaced, you're being repurposed for higher-level work. Wouldn't it be nice if the computer did the boring and tedious parts of your job so that you can focus on more important things? Most of them were convinced. Some, particularly those who had been around the block, weren't.

Ultimately, technologies like AI will have the same impact. They aren't quite there yet, but I think it's just a matter of time.


> The C-level buyers overwhelmingly had one goal: to reduce headcount.

For many businesses this is the only way to significantly reduce costs.


Accountants will still exist, but we'll need fewer of them at any given time. In your example of flipping the 80/20 ratio, you are implying that each accountant would be able to (theoretically) handle a 5x workload with AI making up the gap.

Perhaps in reality more like a 3x advantage, due to human inefficiencies and the overhead of scaling the business to handle more clients.

Given that, a 3x increase in productivity implies we either need 1/3 as many accountants, or the increased supply of accountancy brings down prices and more clients start hiring accountants due to affordability.


(I work in house handling the tax function.)

If AI tools worked, they would eliminate the bookkeepers. Their job is data entry and validation.

But bookkeeping is extremely important. Bad bookkeeping has killed more companies than bad accounting. Without proper books, the accounting, finance, and tax teams are just cosplaying.


Yeah bro, it's been three years. We are just beginning. We will replace the vast majority of professional service workers in 10 years, including lawyers, as AI shifts to local and moves away from the cloud.

If we wipe out the vast majority of white collar jobs in just 10 years, we’re talking complete economic collapse.

No society can possibly absorb that kind of disruption over such a short time.

Also, even assuming AI could completely replace lawyers: lawyers control the legislature. They may not be able to stop your local model from telling you how to do something, but they can stop you from actually doing it without a lawyer.


Even subway train operators in NYC, whose job can be safely automated away, and has been for like 20 years, were able to legally mandate their jobs. I bet lawyers will, too. But the numbers of junior partners, and of paralegals, will dwindle.

But then will we not need more judges and courts?

Correct, which is why we will have the first worldwide revolution as people realize their democracies are fake, they are simply enslaved by capitalists; which is exactly what they told us Commies would do.

The chances of all of those revolutions not touching off world war 3 and decimating infrastructure and trade to the point that we can’t produce the chips to run AI is what now?

I'm glad we have intelligent, mature, uncorrupted politicians who will be able to work together to make sure that this doesn't cause a depression so profound that the entire economy ceases to be viable.

Oh..


Hey. I voted for the other one.

That's 70% of the population living in ghettos and the economy collapsing through lack of people with disposable income with extra steps.

Lawyers, doctors, and accountants aren't just paid to be knowledge workers.

They're paid to accept responsibility for when they fuck up (even when it's not intentional).

Programmers aren't held responsible for their screw-ups. If they were, software wouldn't be the buggy mess it is today.


Until you get firms willing to take on the risk and remove the human element. That is a hell of a war chest for fighting actionable incidents.

I was with the author on everything except one point: increasing automation will not leave us with such abundance that we never have to work again. We have heard that lie for over a century. The steam engine didn't do it, electricity didn't do it, computers didn't do it, the Internet didn't do it, and AI won't either. The truth is that as input costs drop, sales prices drop and demand increases - just like the paradox they referred to. However, it also tends to come with a major shift in wealth, since in the short term the owners of the machines are producing more with less. As it becomes more commonplace and prices change they lose much of that advantage, but the workers never get that.

> I was with the author on everything except one point: increasing automation will not leave us with such abundance that we never have to work again.

That's because we prefer improved living standards over less work. If we only had to live by the standards of one century ago or more, we could likely accomplish that by working very little.


What is interesting is the new things are cheap while the old stuff is now expensive. Average house in Australia is $1,000,000 while a TV is $500. The internet, social media, etc are cheap. Having someone repair your shoes is expensive.

Automation made the TV inexpensive, but if you look at a chart on inflation almost everything that cannot be easily automated has risen in price.

https://www.aei.org/wp-content/uploads/2019/01/cpichart2019-...


Surely U.S. housing was not twice as automatable 12-13 years ago as it is now.

No, that rose in price for different reasons

That is the famous Baumol effect.

https://en.wikipedia.org/wiki/Baumol_effect


It's more like automated, industrial stuff is cheap, while land and human labor is expensive (and thank God for that!)

Some old stuff is now cheap: Grain, oils, clothes, steel, heating, electricity and books, for example.


Economies of scale were realized in the tv, but not the house. Maybe bc they aren’t realizable in housing, maybe bc regulation, maybe bc of the nimby veto, etc.

I think it’s rather because of scarcity: you can’t scale and automate land/prime-location land

Well you can scale it, which is why housing affordability is higher in many places where the cities are actually far denser than Australia. There are perverse incentives not to though, property prices don’t rise (which is what investors want) if you actually focus on increasing supply.

People are building houses with way more features, that last longer, have better thermoregulation, and just more comfortable to live in.

Same goes for TVs too. That’s clearly not the reason why house prices rose so drastically.

As predicted in The Diamond Age.

Good-quality Goodyear-welted boots, adjusted for inflation, are cheap AF. I can get an excellent pair from Grant Stone with Horween leather for ~300 USD when on sale.

A pair of Nike jordans or air maxes is often in the ~120 range and made of far inferior materials.

Boots have never been cheaper or more accessible. The people that bring up repairable shoes don’t wear them, or buy from shit brands like Thursday, Doc Martens, or Timberland. You deserve your poor-quality footwear.


Brand new boots are cheap because some child in a 3rd world country makes them. Having them repaired in my country costs enough to generally make it worth getting new ones.

>That's because we prefer improved living standards over less work

That's more because we are never given the chance. We only get to keep working, or fall out of the rat race and at best be relegated to a Big Lebowski-style pariah existence.


Yes, and housing is priced by competitive auction so if you drop out of the rat race and other people don't, you'll just get out-bid.

> we could likely accomplish that by working very little

Yeah, I know many people who do in the small town I live in. Mostly elderly who are used to it still, but also some young people who want to work just enough to buy what they need and not 1 minute more. I could've retired at <20 if I would've enjoyed that. Now I enjoy it more; that kind of lifestyle is relaxing, not because of not working but because of needing nothing outside your humble possessions.


Have you seen the land prices?

What land prices? There's plenty of cheap land, it's just a bit far away from where most people live. But guess what, population densities were also lower a century ago.

Sure, just like less desirable products of every category cost less essentially by definition. But that’s not really a retort to someone asking by why land prices have risen so much.

Population increases through immigration or births while the area (a city) stays the same size. Plus, since COVID, people value a house more.

Heck no. Given the choice most people would want to do remote work. COVID showed that we can actually achieve remote work, and suddenly many people realized they had a life they loved, without having to lose chunks of it to an unpaid commute that was baked into the cost of work.

Given actual alternatives, workers have made their preferences clear.

Culture also plays a part - America is uniquely mercantile and business first. Workers and citizens in other countries have made different choices.


Exactly.

Living quarters, transportation, healthcare, food: what were these figures in 1926, and how much work is needed to achieve them?


> That's because we prefer improved living standards over less work. If we only had to live by the standards of one century ago or more, we could likely accomplish that by working very little.

Is that trend still true? I can look from the 50s to 2000s and buy into it. I'm not clear it is holding true by all metrics beyond the 2000s, and especially beyond maybe the 2020s. Yes, we have better tech, but is life actually better right now? I think you could make the argument that we were in a healthier and happier society in that sweet spot from 95 - 2005 or so. At least in NA.

We've seen so much technological innovation, but cost of living has outpaced wages, division is rampant, and the technology innovations we have have mostly been turned against us to enshitify our lives and entrap us in SaaS hell. I'd argue medical science has progressed, but also become more inaccessible, and, somehow, people believe in western medicine LESS. Does not help that we've also seen a decline in education.

So do we still prefer improving our standards of living in the current societal framework?


sure sure

As long as the owner class can leverage, "Hey, that {out group} is sitting around doing nothing and getting free money!" we'll never have anything close to UBI imo.

Seems pretty easy to work around with "UBI for citizens" only. There's not much pushback for social security, for instance, even if minorities get it.

I still like the idea of clawing back mineral and water rights and paying for basic services out of the money paid by industry for the right to dirty our air and water. As a citizen you're entitled to compensation for the smoke you're breathing.

People talk about how socially progressive Scandinavia is but they have a shitload of petroleum resources and that money goes into social programs.


I'd love to make companies pay for their products' entire lifecycle, including disposal and cleanup. It's not right that a company can manufacture future-trash, sell it, and then absolve itself of the negative externality when the customer throws the product away and off it goes into a landfill.

If a company's process produces waste, it should bear the entire cost of leaving the environment the way they found it rather than just pumping the waste into it. If a company's products are not reused, it should bear the cost of taking the used product back and restoring the world to the way it was before the product was built.


this reminds me of retropunk and the hundred rabbits

Yep, we should charge every farm for all the poop that people that eat their food make

> People talk about how socially progressive Scandinavia is but they have a shitload of petroleum resources and that money goes into social programs

Of all the Scandinavian countries, only Norway has any oil resources of significance.

The Scandinavian welfare model is primarily tax-funded.


My quick look at Swedish exports shows that the largest export is finished equipment at 14%, then fuel exports at 7.1%, wood and paper at 4.8%, and iron and steel at 3.6%, of which I'm sure a lot of that equipment is made. Plastics are 3.4%, which is just oil in another form.

It looks like you're right and their oil exports are all import/export rather than domestic, but that's still a good bit of mineral wealth.


Yes Sweden has non-trivial mineral resources, but nothing like e.g. China, Russia or Australia though.

The Scandinavian social programs are funded by high taxation. It is mostly a result of political prioritization, and not a windfall of natural resources.


There's been enormous pushback, pushes for privatizing (ruining) it, underfunding it from Congress, an absolute refusal to remove the criminally low income cap on contributions, etc.

One could make the argument that the modern Republican Party has in fact largely been shaped by this pushback.

>There's not much pushback for social security, for instance, even if minorities get it.

The racist moral panic over "welfare queens" seems to be a counter example.


And the same person who posts about that on Facebook will the next day post “keep your government hands off my social security check.”

And why do citizens get it? USA killed a lot of the world for their wealth and kneecapped anyone who didn't play along

A lot of conservatives want to retroactively throw non-whites off of citizenship because they think birthright citizenship is disgusting.

Expect a real movement to reduce the number of citizens in this country. Specifically, if you can’t trace your lineage to a founding father (including the kids of German or Irish immigrants), then they want you disenfranchised.

Heritage Americans vs “hyphenated Americans”

https://en.wikipedia.org/wiki/Hyphenated_American


You know. I have worked for almost two decades now, I can't afford to buy an apartment. People who have been useless their entire lives are getting government loans that they then pay off with welfare they get because they are doing nothing.

I'm not the ownership class, this is unfair. You are the ownership class. People with money or who grew up with money are overwhelmingly left leaning.


The useless people you are talking about _are_ the ownership class. They haven't worked a day in their life like you have, they are getting all the loans they want, and they are paying them off with welfare (tax cuts and loopholes).

You can't afford an apartment because the ownership class is working very hard to keep housing prices high while paying you as little as possible for the two decades you have been working. Not because some disabled person elsewhere is struggling to get by on government loans and welfare.

The people keeping housing prices high are the leftists who push regulations that make it impossible to build, while importing immigrants who disproportionately use welfare and get starter loans, which they then use to push up housing prices without contributing anything to the economy. If this is the "ownership class", I guess stop voting for leftists. But nobody does; they just keep doing it, and housing becomes even more unaffordable.

The right wing here is the only group where I live with an actual viable plan for helping working people, even low-class working people. The left makes deliberate choices that everyone knows will make things worse for lower-class working people.


This sounds like a Fox News fever dream.

Even if we assume there are tons of jobless immigrants being ‘imported’ they would be renters, not buyers.

Generally, house pricing is primarily a supply problem. Removing immigrants will make this worse given that they are 30%+ of the construction workforce.


75% of welfare is going to immigrants; they get much easier access to government zero-deposit loans even while not working, and then they pay off those loans with welfare. Sorry, no Fox News, just facts.

Immigrants are also overwhelmingly much more of a burden to society and the state; they are overrepresented in crime statistically and take much more from the state than they contribute.

Suggesting that we would have no construction if we did not import criminal tax leeches seems like an evidence free statement.


> 75% of welfare is going to immigrants

This is complete nonsense. Immigrants use less social services than the average American. Here's the not remotely liberal Cato institute: https://www.cato.org/blog/immigrants-still-use-much-less-wel...

"There is a persistent myth that the United States lacks an extensive welfare state, despite all the evidence to the contrary. One look at this brief will disabuse you of any such belief. Total spending on means-tested welfare and entitlement programs climbed to about $3.4 trillion in 2023. About $823 billion went to means-tested programs such as Medicaid, SNAP, SSI, TANF, and refundable tax credits, while approximately $2.3 trillion was spent on old-age entitlement programs like Social Security and Medicare.

Native-born Americans use an average of $7,134 in old age entitlements and $3,638 in means-tested benefits in 2023. By comparison, immigrants used $4,864 in old age entitlements and $3,370 in means-tested benefits. If native-born Americans had consumed the same per capita dollar amount of means-tested welfare and entitlement benefits as all immigrants, the total expenditures on these programs would have been about $715 billion less in 2023. That’s a tremendous savings, even for the federal government, considering it is approximately 42 percent of the federal budget deficit in 2023. We are tempted to suggest that native-born Americans should start assimilating toward immigrant levels of welfare and entitlement consumption.

Across nearly all major welfare and entitlement programs, immigrants consume less per capita than native-born Americans, but not uniformly so. They use much less Social Security and Medicare, but only slightly less Medicaid. Immigrants also use SNAP, SSI, and TANF at lower rates and lower dollar amounts per person, but those programs are relatively small compared to Social Security, Medicare, and Medicaid. Immigrants receive more per capita through the relatively small Earned Income Tax Credit and the Women, Infants, and Children (WIC) program. For the latter, immigrants use $3 more per year on a per capita basis than native-born Americans. Immigrants are less likely than native-born Americans to use any welfare program and, when they do, use fewer of them for a shorter period. The typical lifetime abuser of welfare was born in this country."


The ownership class is doing no such thing. Zoning, regulation, nimby-ism are what keep prices high.

> Zoning, regulation, nimby-ism

And who exactly do you think controls these items?


Overwhelmingly leftist politicians elected by leftist voters.

who do you think is responsible for all of those things, if not the ownership class?

(Citation needed)

Kick me out of communism club if you have to, but I ain't giving people something for nothing. I think everybody should have a roof over their head and food in their bellies, but there's so much stuff to do these days. Go plant a tree, or anything!

> I ain't giving people something for nothing

But I suspect you do, or would do that for your children or immediate family.

I'm just some dude on the internet, so my opinions are worth exactly what you're paying for them (nothing). But when I try to understand this type of thinking, this is what I come up with:

In the old days of scarce resources (vast majority of civilization), children were expected to 'repay' their elders for the care they received by taking care of them in their old age. And the competition for resources made this idea of keeping those resources for your family only important for survival.

But with the resources available today, the dynamics are very different. Currently only about 25% of total employment is in agriculture, worldwide. In the rich countries this is very significantly less: Canada is 1%, the USA is 2% [worldbank].

But we're living with the cultural baggage of generations of scarcity and tribalism, which still shape our policy in a time of incredible resources provided by technology. So instead of more sharing, we choose higher standard of living for ourselves. I know it will take time to change this culturally - generations - but I'm still disappointed it's not happening faster.

[worldbank]: https://data.worldbank.org/indicator/SL.AGR.EMPL.ZS


I think it's hard for certain people with certain backgrounds to understand.

What I see as someone who grew up in a very working class family surrounded by those on benefits:

I see the janitor who busts their ass day in and day out to provide for their family totally lost in these conversations. They are expected to take money out of their check - doing a very difficult, thankless, and not all that well-paying job - to, even today, help pay for a whole lot of people who are incredibly more privileged. I know quite a number of people who have college degrees but experienced "failure to launch", who see themselves as too good to go work in a kitchen, as a janitor, or what have you - but are quite happy to accept various forms of public benefits due to their part-time cushy employment.

I cannot square that circle: having someone work themselves to the bone with no real hope of retirement, so that other people can live a much easier life than they do.

If you ask those taking said benefits who are working part time in an arts field or whatever, they will of course state that they are not the problem and that "rich people" should pay more in taxes so the janitor also doesn't have to work. But then who is cleaning toilets or taking out trash? At some point the work has to be done, and you run out of rich people to tax for wealth redistribution.

Considering how widespread this "condition" seems to be in my human experience, I cannot see a wide-scale implementation of "from each according to their ability, to each according to their need" ever working out, simply due to how selfish humans appear to be. I love the idea - and I have often dreamed of starting my own commune of sorts, of well-curated individuals who all have roles to play - but I just can't see it working out either in reality or at scale. The only reason such a limited-scale commune might work is that you could rule with an iron fist and vote people off the island when they start to take advantage of others and no longer pull their own weight.

I am quite convinced that if you implemented UBI or other means for the average person to never work you'd simply get a whole lot of people doing effectively nothing, if not outright destructive (for society) things with their time.


> Having someone work themselves to a bone with no real hopes of retirement, so you can have other people live a much easier life than they are.

But isn't the real problem that the janitor isn't being paid enough to save for retirement _and_ pay a 'fair' share of taxes? I read about the fear and complaints of high taxes to pay for the lazy, but the actual tax load on countries with strong socialist policies is not really all that much higher than in the U.S.

This sort of thinking reminds me of the old cartoon with three people at a table, one obviously rich person with a whole pile of cookies on his side of the table, and two other ordinary-working-class people each with a couple of cookies, with the rich guy saying to one of the other guys - watch out, that guy wants to take away one of your cookies!'

There are so many working class people convinced that the problem is the other poor people around them, instead of the very small number of people with > 50% of the resources. Those super-rich have somehow convinced everyone that the current balance is best.

I'm not some revolutionary; far from it. I've always hoped that technology would be the thing that allowed virtually everyone to rise up out of poverty (and it has to some degree), but what I've seen instead is the gains from all of this tech we've created in the past 200 years primarily going to a small class of people, and that just makes me sad.


I don't think the communism club would disagree with you. Historically, labour was a right _and_ an obligation.

The floor being three square meals and a roof would be a vast improvement compared to now.


Historically speaking, you don’t get kicked out of the communist club, they just kill you.

Such is the republican lizard brain these days.

You also need a system that is ok with giving you some of said abundance without you working.

Last year the US voted to hand over the reins, in all branches of government, to a party whose philosophy is to slash government spending and reduce people's dependence on the government.

To all the US futurists who are fantasizing about a post-scarcity world where we no longer work, I’d like to understand how that fits in with the current political climate.


The thing a lot of people leave out is that literally billions must die for this to happen. In some fully automated world everyone except for a few tens of thousands of the owner class and their technicians will be unneeded. And then what to do?

How did you arrive at that conclusion? Dividing infinity by 1m or 1b doesn't matter if it's really infinite. Just make more machines to make the machines. The existential crisis happens afterwards, and people will kill themselves off without the need for any class warfare at all. In fact the owner class will die first since there will be no more conception of ownership, since everything is supposedly abundant and at your fingertips.

You really believe today's billionaire class will just give up their power over the populace? A world of abundance means the billionaires are irrelevant, because everyone would have access to everything, and they would never let that happen.

They will hoard the resources, land, anything that is needed for people to stay alive.


It fits because now you can start up the conquering war machine with a bunch of soldiers who would rather kill in another country than starve in their own.

Voting for 'indifference to people's dependence on the government' does not equal 'reducing people's dependence on the government'.

There is zero actual intentional reduction of dependence, just elimination of government support.


I am also fairly certain that if we do arrive at some abundant utopia where you can wish for anything and have it arrive, society will collapse. It would just mean bringing up 7 billion (probably more) spoiled brats at that point. Work on its own is also a form of "social control". Idle hands are the devil's tools, etc.

Imo instead of no-strings-attached UBI we should have something like the WPA. Spend ten hours a week or whatever working in local parks/schools/libraries/etc and get paid a basic living wage in return

If you can wish for anything and have it arrive, spoiled brats won't be a thing, because competition and envy for things will be pointless.

Throughout history, the hedonic treadmill has always triumphed. Competition and envy conjure their own objects.

Even if you want to allege that the proverbial pie will become infinitely large, any one person’s slice is finite. However big my neighbor’s slice might be, I can strive to make mine even bigger.


Throughout history, big advances have come from humans having more "idle time", so we should be aiming for the population to be less busy as they can then hopefully focus on pursuing the arts or sciences.

Big advances have also come from some of the most violent, destructive wars the planet has seen.

I agree with you in principle, but I don't think it's as straightforward as your point suggests.


Well, wars are going to happen anyway. If we abolish all idle time, it's pretty much the same as getting rid of artists, poets, philosophers, writers etc.

> big advances have come from humans having more "idle time"

A few people


Which ones?

Generally the rich.

You are painting this like it's a bad thing. The workers decided that they would rather have longer working hours to buy more things!

A lot of people would not choose to work half the time they do now, because they do actually like to buy things.


How can you say that when workers don't have a choice? What accessible job has professional level pay and is part time?

Nursing

I'd happily work for 20 hours @200k a year. It'd give me time to work on my own projects.

Issue is that virtually no company offers that deal unless you already have notoriety or money at the level of retiring anyway.


I've met plenty of people that do this. They are contractors, they take on a contract, work for 6 months, take the next 6 off. I also know some tax accountants that do this.

I'd say being able to work on and off at that schedule isn't something I can find on a job board. Hence my point above about notoriety.

Most companies won't hire people with a high degree of notoriety. They may hire those people if they have some degree of fame.

This pattern suggests the remaining knowledge work will be increasingly extracted from by the owners of AI-enabled firms, much as happened to sugar plantation workers across the global south. I would think the cost of doing so would be a level of social and civic unrest similar to the colonial revolutions of the 19th century (Bolivar, for example).

>such abundance that we never have to work again. We have heard that lie for over a century.

I'm 0.6 centuries old and have never heard that said of existing tech. Human-level AI could presumably do human work by definition, but that isn't the case for anything short of it, including what we have now.


Do a search for "the 20 hour work week". You will find plenty of articles from the 50s and 60s talking about how technology is going to make it so we don't have to work anymore. Popular Science was particularly keen on this but they certainly weren't the only ones.

Economists 0.9 centuries ago were discussing the idea.

https://www.npr.org/2015/08/13/432122637/keynes-predicted-we...


Keynes was a different thing - the claim was that we could cut working hours to 15 a week, not that we'd never have to work again. I think that would be quite possible with a drop in living standards - you could do it today by moving somewhere cheap and doing some remote work. I think it didn't happen due to human nature: we both quite like doing something useful with our time and like increasing our living standards.

I inherited some money and don't need to work, but do work on stuff because I like doing it. I imagine that's what things will be like post agi.


Be careful not to conflate AGI with the current generative AI revolution. Even if it may eventually lead to AGI, it is quite a way from that and the social implications of the current and near term AI is what we are talking about. We can only imagine what this will be like post AGI, but we have some idea of what shifts happen when a technology comes along that greatly amplifies human labor.

All of those technologies of the past can be managed by humans. Once computers can manage themselves AND other technologies and people, I think it'll be a different situation.

See, we have enough food to feed the entire world, every year.

It's not our production capabilities that keep people hungry; it's either greed or the problem of distribution.

Automation will definitely amplify production, but it'll certainly continue to make the rich richer and the poor, well, the same. As inequality grows, so too does the authoritarian need to control the differential.


Maybe we only have enough food to feed the entire world because of greed. Every time we've tried to impose a system that spreads the wealth to the masses, rather than resulting in equality it has led to suffering and bloodshed. And ironically, in the Soviet Union and China, the death of millions from starvation.

If you want to live with no electricity, no running water, and a lack of refrigerated food, you could do so purely on welfare. In that sense, we already have the UBI that Marx predicted.

However, most people want fruits and vegetables instead of getting rickets, goiter, and cholera from an 1800s diet. Many are even willing to work 80+ hours a week to do so.


Most non-banana republics across the world define the minimum standard of living as having all of the things you listed, meaning welfare/social safety nets provide for that. As they should. We're not animals.

Correct. Of course, that wasn't the case in 1750 or 1900. It wouldn't have been possible then.

Hence why prior technological changes that increased productivity didn't result in living lives of extended leisure, despite some predictions to that effect. Instead people kept working to raise the overall standard of living to what could be achieved when using the new tools to their fullest extent. Doing more, not doing the same with less effort. As you say, we're not animals. We can strive for better.


I think that is part of the point, though. As our productivity increases, we don’t see an increase in leisure, instead we see an increase in what we consider the minimum standard of living.

I appreciate that Finland considers Internet access at a minimum of 1 Mbps to be a basic human right. I am not sure if other countries will follow, but I wish the USA did.

It's laughably slow given how bloated the modern Web is. In fact even 10Mbps is barely enough to stream 1080p content.

You’re not entirely wrong about bloat on modern websites, but if you griped about being unable to stream 1080p video to someone even just 15 years ago you would sound absurdly privileged

To add to your wonderful comment: looking up "banana republic", I realised Australia seems to fit that description perfectly! The latest crop they've come up with is housing, but instead of fruit companies we have real estate cabals. With respect to the workers at the bottom of a banana republic, what's missing is the element of real choice. They say yes, you can choose not to work harder, but then you die early or suffer from disease - not much of a choice. Modern slavery is built on this idea of false choice.

I’m not really sure the point you’re trying to make behind “as long as you don’t mind dying early and painfully from easily preventable diseases technically you can live in utopia”. Would you mind clarifying your position here?

the pre-industrial utopia has been created

Whenever I get worried about this I comb through our ticket tracker and see that ~0% of them can be implemented by AI as it exists today. Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying. But context limitation is fundamental to the technology in its current form and the value of SWEs is to turn the bigger picture into a functioning product.

"The steamroller is still many inches away. I'll make a plan once it actually starts crushing my toes."

You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.


What contingency plan is there exactly? At best you're just going from an automated-already job to a soon-to-be-automated job. Yay?

I'm baffled that so many people think that only developers are going to be hit and that we especially deserve it. If AI gets so good that you don't need people to understand code anymore, I don't know why you'd need a project manager anymore either, or a CFO, or a graphic designer, etc etc. Even the people that seem to think they're irreplaceable because they have some soft power probably aren't. Like, do VC funds really need humans making decisions in that context..?

Anyway, the practical reason why I'm not screaming in terror right now is because I think the hype machine is entirely off the rails and these things can't be trusted with real jobs. And honestly, I'm starting to wonder how much of tech and social media is just being spammed by bots and sock puppets at this point, because otherwise I don't understand why people are so excited about this hypothetical future. Yay, bots are going to do your job for you while a small handful of business owners profit. And I guess you can use moltbot to manage your not-particularly-busy life of unemployment. Well, until you stop being able to afford the frontier models anyway, which is probably going to dash your dream of vibe coding a startup. Maybe there's a handful of winners, until there's not, because nobody can afford to buy services on a wage of zero dollars. And anyone claiming that the abundance will go to everyone needs to get their head checked.


My contingency plan is that if AI leaves me unable to get a job, we are all fucked and society as a whole will have to fix the situation and if it doesn’t, there is nothing I could have done about it anyway.

As a fellow chad I concur. Though I am improving my poker skills - games of chance will still be around

You likely already know, but the "Pluribus" poker bot was beating humans back in 2019. Games of chance will be around if people are around, but you'll have to be careful to ensure you're playing against people, unassisted people.

https://en.wikipedia.org/wiki/Pluribus_(poker_bot)


Yeah, thanks, I only play live games. I'm in australia so online poker is illegal here. I was thinking of getting a vpn and having a play online, then I saw this recently https://www.reddit.com/r/Damnthatsinteresting/comments/1qi69...

So many of these degenerate online gambling / "investment" platforms are illegal here for good reason. If you are just a normal person playing fairly, you are being scammed. Same for things like Polymarket: the only winners are the people with insider knowledge.

Even horse racing is a solved problem, and if you start winning they'll just cancel your account (happened to a friend of mine).

This is a sensible plan, given your username.

Yeah, seriously. Don't people understand that society is not good at mopping up messes like this? There has been a K-shaped economy for several decades now, and most Americans have something like $400 in their bank accounts. The bottom has already fallen out for them, and help still hasn't arrived. I think what really happens is that white collar workers, especially the ones on the margin, join this pool - and there is a lot of suffering for a long time.

Personally, rather than devolving into nihilism, I'd rather try to hedge against suffering that fate. Now is the time to invest and save money (or yesterday).


If white collar workers as a whole suffer severe economic setback over a short term timespan, your savings and investments won’t help you.

Unless you're investing in guns, ammo, food, and a bunker. We're talking worse unemployment than Depression-era Germany. And structurally more significant unemployment, because the people losing their jobs were formerly very high earners.


That's the cataclysmic outcome, though. While I agree that's certainly possible, and I would put a double-digit percentage probability on it, another very likely outcome is a severe recession where a lot of, but not all, white collar work is wiped out. In a scenario like that, which also seems to be in the realm of possibility, there may be a significant restructuring of the economy, and I think having resources still matters.

It's definitely possible that there's an impact that is bad but not cataclysmic. I figure in that case, though, my regular savings are enough to switch to something else. I could retire now if I was willing to move somewhere cheap and live on $60k a year. There are a lot of things that could cause that level of recession without the need for AI, though.

I also think the mid-level bad outcome isn't super likely, because if AI is good enough to replace a lot of white collar jobs, I think it could replace almost all of them.


this has been me ever since my philosophy undergrad.

> You are in danger. Unless you estimate the odds of a breakthrough at <5%

It's not the odds of the breakthrough, but the timeline. A factory worker could have correctly seen that one day automation would replace him, and yet worked his entire career in that role.

There have been a ton of predictions about software engineers, radiologists, and some other roles getting replaced in months. Those predictions have clearly been not so great.

At this point the greater risk to my career seems to be the economy tanking, which already seems to be happening. Unfortunately, switching careers can't save you from that.


We are the French artisans being replaced by English factories. OpenAI and its employees are the factory.

Checking the scoreboard a bit later on: the French economy is currently about the same size as the UK's.

I'm not worried about the danger of losing my job to an AI capable of performing it. I'm worried about the danger of losing my job because an executive wanted to be able to claim that AI has enhanced productivity to such a degree that they were able to eliminate redundancies with no regard for whether there was any truth to that statement or not.

> it is straight-up irresponsible to forgo making a contingency plan.

What contingencies can you really make?

Start training a physical trade, maybe.

If this is the end of SWE jobs, you'd better ride the wave. Odds are your estimate of when AI takes over is off by half a career anyway.


Working in the trades won't help you at 40-50% unemployment. Who's going to pay for your services? And even the meager work that remains would be fought over by the hundred million unemployed, all suddenly fighting tooth and nail for any work they can get.

So AI is going to steamroll all feasible jobs, all at once, with no alternatives developing over time? That's just a fantasy.

It'd probably be a cold day in Hell before AI replaces veterinary services, for example. Perhaps for mild conditions, but I cannot imagine an AI robot trying to restrain an animal.

All these so-called safe jobs still depend on someone being able to afford those services. If I don't have a job, I can't go see the vet, the fact that no one else can do the vets job is irrelevant at such a point.

I would like to know if there's some kind of inflection point, like the Laffer curve for taxes, where once an economy hits X% unemployment it effectively collapses. I'd imagine it goes recession -> depression -> systemic crisis, and based on history the threshold appears to be somewhere between 30-40% unemployment.


Every job deemed "safe" will be flooded by desperate applicants from unsafe jobs.

> Unless you estimate the odds of a breakthrough at <5%

I do. Show me any evidence that it is imminent.

> or you expect that AI will usher in enough prosperity that your job will be irrelevant

Not in my lifetime.

> it is straight-up irresponsible to forgo making a contingency plan.

No, I'm actually measuring the risk, you're acting as if the sky is falling. What's your contingency plan? Buy a subscription to the revolution?


> What's your contingency plan? Buy a subscription to the revolution?

I’ve been working on my contingency plan for a year-and-a-half now. I won’t get into what it is (nothing earth shattering) but if you haven’t been preparing, I think you’re either not paying enough attention or you’re seriously misreading where this is all going.


This ^. I've been a SWE for 20 years and the market is the worst I have seen it: many good devs have been looking for 1-2 years and not even getting a response, whereas 3-4 years ago they would have had multiple offers. I'm still working, but I'm secure in terms of money so I'll be OK not working (financially at least). But I expect a tsunami of layoffs this year and next, and then you are competing with 1000x other devs and Indians who will work for 30% of your salary.

That's called an economic crisis, and it has nothing to do with AI; my friends also have trouble finding 100% manual jobs that were easily available 2 years ago.

Yes, I said the word that none of these companies want to say in their press conferences.


That's because there are more tech/service workers competing for the manual jobs now.

Tech workers aren't numerous enough to have that effect.

Besides that, why aren't we seeing any metrics change on GitHub? With a supposed increase in productivity so large that a good chunk of the workforce is fired, we would see it somewhere.


A lot of non-AI things have happened though.

While true, my personal fear is that the higher-ups will overlook this fact and just assume that AI can do everything because of some cherry-picked simple examples, leading to one of those situations where a bunch of people get fired for no reason and then re-hired after some time.

> leading to one of those situations where a bunch of people get fired for no reason and then re-hired again after some time.

More likely they get fired for no reason, never rehired, and the people left get burned out trying to hold it all together.


Exactly. Now which one do you wanna be? The burned-out one still working in SWE, or the fired one who in the long run converges to manual labor, which AI can't do? Not to mention that in the SWE case, salaries would be pushed down to match the cost of AI doing it.

As if "higher-ups" is an assigned position.

If you fail as a "higher up" you're no longer higher up. Then someone else can take your place. The extent to which this does not naturally happen is evidence of petty or major corruption within the system.


In competitive industries, bad firms will fail. Some industries are not competitive, though. I have a friend who went a little crazy working as a PM at a large health insurance firm.

The memory problem is already being addressed in various ways - Antigravity seems to keep a series of status/progress files describing what's been done, what needs doing, etc. A bit clunky, but it seems to work - I can open it up on a repo that I was working in a few days back and it picks up this context, so I don't have to completely bring it up to speed every time like I used to have to do. I've heard that Claude Code has similar mechanisms.
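
The core trick is almost embarrassingly simple. Here's a toy sketch of the pattern in Python (the file name and helpers are made up for illustration, not Antigravity's actual internals):

    # Toy sketch of the "progress file" memory pattern; PROGRESS.md and
    # these helpers are illustrative, not any tool's real implementation.
    from datetime import date
    from pathlib import Path

    NOTES = Path("PROGRESS.md")

    def load_context() -> str:
        """Prior session notes, prepended to the agent's first prompt."""
        return NOTES.read_text() if NOTES.exists() else ""

    def append_note(done: str, todo: str) -> None:
        """Called before the session ends so the next one picks up here."""
        with NOTES.open("a") as f:
            f.write(f"\n## {date.today()}\n- Done: {done}\n- Next: {todo}\n")

All the cleverness is in what the model chooses to write into that file, not in the plumbing.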

I've been doing stuff with recent models (Gemini 3, Claude 4.5/4.6, even smaller, open models like GLM5 and Qwen3-coder-next) that was just unthinkable a few months back. Compiler stuff, including implementing optimizations, generating code to target a new, custom processor, etc. I can ask for a significant new optimization feature in our compiler before going to lunch and come back to find it implemented and tested. This is a compiler that targets a custom processor, so there is also verilog code involved. We're having the AI make improvements on both the hardware and software sides - this is deep-in-the-weeds complex stuff and AI is starting to handle it with ease. There are fewer and fewer things in the ticket tracker that AI can't implement.

A few months ago I would've completely agreed with you, but the game is changing very rapidly now.


This works fine for like 2-3 small instruction sets. Once you start getting to the scale of a real enterprise system, the AI falls down and can't handle that amount of context. It will start ignoring critical pieces or forgetting them. And without constant review, the AI will start prioritizing things that are not your business priority.

I don't agree they have solved this problem, at all, or really in any way that's actually usable.


What I'm saying is, don't get to thinking that the memory problem is some kind of insurmountable, permanent barrier that's going to keep us safe. It's already being addressed, maybe crudely at first, but the situation is already much better than it was - I no longer have to bring the model up to speed completely every time I start a new session. Part of this is much larger context windows (1M tokens now). New architectures are also being proposed to deal with the issue.

Context windows are a natural improvement, but new architectures are completely speculative and it’s unclear we can make any sort of predictable progress with new, better architectures. Most progress has been made on essentially the same architecture paradigms, although we did move from dense models to MoE at some point.

I look through the backlog for my team, consisting of 9 trillion ill-defined (if defined at all) tickets that tell you basically nothing.

The large, overwhelming majority of my team's time is spent on combing through these tickets and making sense of them. Once we know what the ticket is even trying to say, we're usually out with the solution in a few days at most, so implementation isn't the bottleneck, nowhere near.

This scenario has been the same everywhere I've ever worked, at large, old institutions as well as fresh startups.

The day I'll start worrying is when the AI is capable of following the web of people involved to figure out what a vaguely phrased ticket that's been backlogged for God knows how long actually means.


At my workplace we now use Claude Code to parse written specs and source code, search through JIRA, and draft, refine and organize tickets (using the JIRA API via a CLI tool). Way faster than through the UI.

However, as you point out, we have no program-accessible source of data on who the stakeholders, contributors, managers, etc. are, and we have to write a lot of that ourselves. For a smaller business perhaps one could write all of that down in an accessible way, but for a large, dynamic business it seems very difficult.
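
For anyone curious, the CLI side is mostly a thin wrapper over Jira's REST API. A minimal sketch of the ticket-drafting call (the instance URL, credentials, and project key are placeholders):

    # Rough sketch of creating a ticket via Jira's REST API (v2).
    # BASE, AUTH, and the project key are placeholders for illustration.
    import requests

    BASE = "https://example.atlassian.net"
    AUTH = ("bot@example.com", "api-token")  # basic auth with an API token

    def create_issue(summary: str, description: str) -> str:
        fields = {
            "project": {"key": "ENG"},
            "issuetype": {"name": "Task"},
            "summary": summary,
            "description": description,
        }
        r = requests.post(f"{BASE}/rest/api/2/issue",
                          json={"fields": fields}, auth=AUTH)
        r.raise_for_status()
        return r.json()["key"]  # e.g. "ENG-1234" (hypothetical project key)

Give the agent a tool like that plus a JQL search and it can run the draft/refine/organize loop on its own.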


A lot of this can be provided or built up by better documentation in the codebase, or functional requirements that can also be created, reviewed, and then used for additional context. In our current codebase it's definitely an issue to get an AI "onboarded", but I've seen a lot less hand-holding needed in projects where you have the AI building from the beginning and leaving notes for itself to read later

Curious to hear if you've seen this work with 100k+ LoC codebases (i.e. what you could expect at a job). I've had some good experiences with high autonomy agents in smaller codebases and simpler systems but the coherency starts to fizzle out when the system gets complicated enough that thinking it through is the hard part as opposed to hammering out the code.

I'd estimate we're near a million LoC (will double check tomorrow, but wouldn't be surprised if it was over that to be honest). Huge monorepo, ~1500 engineers, all sorts of bespoke/custom tooling integrated, fullstack (including embedded code), a mix of languages (predominantly Java & JS/TS though).

In my case the AI is actively detrimental unless I hand hold it with every single file it should look into, lest it dive into weird ancient parts of the codebase that bear no relevance to the task at hand. Letting the latest and "greatest" agents loose is just a recipe for frustration and disaster despite lots of smart people trying their hardest to make these infernal tools be of any use at all. The best I've gotten out of it was some light Vue refactoring, but even then despite AGENTS.md, RULES.md and all the other voodoo people say you should do it's a crapshoot.


Ask the AI to figure out your code base (or self-contained portions of it, as applicable) and document its findings. Then correct and repeat. Over time, you end up with a scaffold in the form of internal documentation that will guide both humans and AIs in making more productive edits.

If you vector index your code base, agents can explore it without loading it into context. This is what Cursor and Roo and Kiro and probably others do. Claude Code uses string searches.
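
Roughly, it's just nearest-neighbour search over embedded chunks, so only the top matches ever enter the context window. A toy sketch (the token-hashing embedding is a stand-in for illustration; real tools embed chunks with a learned model):

    # Toy vector index over code chunks. The token-hashing embedding is a
    # stand-in; real tools use a learned embedding model instead.
    import numpy as np

    DIM = 256

    def embed(text: str) -> np.ndarray:
        v = np.zeros(DIM)
        for tok in text.split():
            v[hash(tok) % DIM] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
        """Return the k chunks most similar to the query (cosine sim)."""
        q = embed(query)
        scores = [float(q @ embed(c)) for c in chunks]
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]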

What also helps is getting it to generate docs of your code so that it has a map.

This is actually how humans understand a large code base too. We don’t hold a large code base in memory — we navigate it through docs and sampling bits of code.


cloc says ours is ~350k LoC and agents are able to implement whole features from well designed requirement docs. But we've been investing in making our code more AI friendly, and things like Devin creating and using DeepWiki helps a lot too.

If you have agents that can implement entire features, why is it only 350k LoC? Each engineer should be cranking out at least 1 feature a week. If each feature is 1500-2000 lines, times 10 engineers, that's 20k lines a week.

If the answer is that the AI cranks out code faster than the team can digest and review it and faster than you can spec out the features, what’s the point? I can see completely shifting your workflow, letting skills atrophy, adopting new dependencies, and paying new vendors if it’s boosting your final output 5 or 10x.

But if it’s a 20% speed up is it worth it?


Since when do we measure productivity by lines of code?

It’s not a measure of productivity, but some number of new lines is generally necessary for new functionality. And in my experience AI tends to produce more lines of code than a decent human for similar functionality. So I’d be very shocked if an agent completing a feature didn’t crank out 1500 lines or more.

Around 250k here. The AI does an excellent job finding its way around, fixing complex bugs (and doing it correctly), doing intensive refactors and implementing new features using existing patterns.

Our codebase is well over 250k, and we keep a hierarchy of notes for the modules, so we read as much as we need for the job, with a base memory that explains how the notes work.

We have this in some of our projects too but I always wonder how long it's going to take until it just fails. Nobody reads all those memory files for accuracy. And knowing what kind of BS the AI spews regularly in day to day use I bet this simply doesn't scale.

It's not binary. Jobs will be lost because management will expect the fewer developers to accomplish more by leveraging AI.

Big tech might be ahead of the rest of the economy in this experiment. Microsoft grew headcount by ~3% from June 2022 to June 2025 while revenue grew by >40%. This is admittedly weak anecdata, but my subjective experience is that their products seem to be crumbling (GitHub problems around the Azure migration, for instance), and are worse than they even were before. We'll see how they handle hiring over the next few years and whether that reveals anything.

Well, Google just raised prices by 30% on the GSuite "due to AI value delivered", but you can't even opt out, so even revenue is a bullshit metric.

Already built in. We haven’t hired recently and our developers are engaged in a Cold War to set the new standard of productivity.

Just keep in mind that there are many highly motivated people directly working on this problem.

It's hard to predict how quickly it will be solved and by whom first, but this appears to be a software engineering problem solvable through effort and resources and time, not a fundamental physical law that must be circumvented like a physical sciences problem. Betting it won't be solved enough to have an impact on the work of today relatively quickly is betting against substantial resources and investment.


Why do you think it's not a physical sciences problem? It could be the case that current technologies simply cannot scale due to fundamental physical issues. It could even be a fundamental rule of intelligent life, that one cannot create intelligence that surpasses its own.

Plenty of things get substantial resources and investment and go nowhere.

Of course I could be totally wrong and it's solved in the next couple years, it's almost impossible to make these predictions either way. But I get the feeling people are underestimating what it takes to be truly intelligent, especially when efficiency is important.


>It could even be a fundamental rule of intelligent life, that one cannot create intelligence that surpasses its own.

Well, that is easily disproved by the fact that people have children with higher IQs than their own.


That's not what I mean; rather, that humans cannot create a type of intelligence that supersedes what human intelligence is roughly capable of, because doing so would require us to be smarter, basically.

Not to say we can't create machines that far surpass our abilities on a single axis or a small set of axes.


Think hard about this. Does that seem to you like it's likely to be a physical law?

First of all, it's not necessary for one person to build that super-intelligence all by themselves, or to understand it fully. It can be developed by a team, each of whom understands only a small part of the whole.

Secondly, it doesn't necessarily even require anybody to understand it. The way AI models are built today is by pressing "go" on a giant optimizer. We understand the inputs (data) and the optimizer machine (very expensive linear algebra) and the connective structure of the solution (transformer) but nobody fully understands the loss-minimizing solution that emerges from this process. We study these solutions empirically and are surprised by how they succeed and fail.

We may find we can keep improving the optimization machine, and tweaking the architecture, and eventually hit something with the capacity to grow beyond our own intelligence, and it's not a requirement that anyone understands how the resulting model works.

We also have many instances in nature and history of processes that follow this pattern, where one might expect to find a similar "law". Mammals can give birth to children that grow bigger than their parents. We can make metals purer than the crucible we melted them in. We can make machines more precise than the machines that made their parts. Evolution itself created human intelligence from the repeated application of very simple rules.


> Think hard about this. Does that seem to you like it's likely to be a physical law?

Yes, it seems likely to me.

It seems like the ultimate in hubris to assume we are capable of creating something we are not capable of ourselves.


On the contrary, nearly every machine we've created is capable of things that we are not capable of ourselves. Cars travel more than twice as fast as the swiftest human. Airplanes fly. Calculators do math in an instant that would take a human months. Lightbulbs emit light. Cranes lift many tons. And so on and so forth.

So to create something that exceeds our capabilities is not a matter of hubris (as if physical laws cared about hubris anyway), it's an unambiguously ordinary occurrence.


> Not to say we can't create machines that far surpass our abilities on a single or small set of axis.

Given that SOTA models are PhD-level in just about every subject, this is clearly, provably wrong.

I'll believe that claim when a SOTA model can autonomously create content that matches the quality and length of any average PhD dissertation. As of right now, we're nowhere near that and don't know how we could possibly get there.

SOTA models are superhuman in a narrow sense, in that they have solid background knowledge of pretty much any subject they've been trained on. That's great. But no, it doesn't turn your AI datacenter into "a country of geniuses".


Are humans just PhD students in a vat? Can a SOTA model walk? Humans in general find that task, along with a trillion other tasks that SOTA models cannot do, to be absolutely trivial.

Seems like if evolution managed to create intelligence from slime I wouldn't bet on there being some fundamental limit that prevents us from making something smarter than us.

Many highly motivated people with substantial resources and investment have worked on a lot of things and then failed at them with nothing to show for it.

The implication of your assertion is pretty much a digital singularity. You’re implying that there will be no need for humans to interact with the digital world at all, because any work in the digital world will be achievable by AI.

Wonder what that means for meatspace.

Edit: I would also disagree that this isn't a physics problem. Pretty sure the power required scales with problem complexity. At a certain level of problem complexity we're pretty much required to put enough carbon in the atmosphere to cook everyone to a crisp.

Edit 2: illustrative example, an Epic in Jira: “Design fusion reactor”


>progressively understands the business

This is no different than onboarding a new member of the team, and I think OpenAI was working on that "frontier":

>We started by looking at how enterprises already scale people. They create onboarding processes. They teach institutional knowledge and internal language. They allow learning through experience and improve performance through feedback. They grant access to the right systems and set boundaries. AI coworkers need the same things.

And tribal knowledge will not be a moat once execs realize that all they need to do is prioritize documentation instead of "code velocity" as a metric (sure, any metric gets Goodharted, but LLMs are great at sifting through garbage to find the high-perplexity tokens).

>But context limitation is fundamental to the technology in its current form

This may not be the case: large enough context windows plus external scratchpads would mostly obviate the need for true in-context learning. The main issue today is that "agent harnesses" suck. The fact that Claude Code is considered good is more an indication of how bad everything else is. Tool traces read like a drunken newb brute-forcing his way through tasks. LLMs can mostly "one-shot" individual functions, but orchestrating everything is the blocker. (Yes, there's progress in METR evals or whatever, but I don't trust any of that, else we'd actually see the results in real-world open source projects.)

LLMs don't really know how to interact with subagents. They're generally sort of myopic even with tool calls. They'll spend 20 minutes trying to fix build issues going down a rabbit hole without stepping back to think. I think some sort of self-play might end up solving all of these things, they need to develop a "theory of mind" in the same way that humans do, to understand how to delegate and interact with the subagents they spawn. (Today a failure case is agents often don't realize subagents don't share the same context.)

Some of this is certainly in the base model and pretraining, but it needs to be brought out in the same way RL was needed for tool use.


Can you give an example to help us understand?

I look at my ticket tracker and I see basically 100% of it that can be done by AI. Some with assistance, because the business logic is more complex or less well factored than it should be, but AI is perfectly capable of doing most of the work with a well-defined prompt.


Here's an example ticket that I'll probably work on next week:

    Live stream validation results as they come in
The body doesn't give much other than the high-level motivation from the person who filed the ticket. In order to implement this, you need to have a lot of context, some of which can be discovered by grepping through the code base and some of which can't:

- What is the validation system and how does it work today?

- What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?

- What prior art exists on the backend and frontend, and how much of that can/should be reused?

- Are there any scaling or load considerations that need to be accounted for?

I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.

Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.


This right here is my view on the future as well. Will the AI write the entire feature in one go? No. Will the AI be involved in writing a large proportion of the code that will be carefully studied and adjusted by a human before being used? Absolutely yes.

This cyborg process is exactly how we're using AI in our organisation as well. The human in the loop understands the full context of what the feature is and what we're trying to achieve.


But planning like this is absolutely something AI can do. In fact, this is exactly the kind of thing we start with on our team when it comes to using AI agents. We have a ticket with just a simple title that somebody threw in there, and we ask the AI to spin up a bunch of research agents to understand, plan, and ask itself those questions.

Funny enough, all the questions that you posed are things that come up right away that the agent asks itself, and then goes and tries to understand and validate an answer, sometimes with input from the user. But I think this planning mechanism is really critical to being able to have an AI generate an understanding, then have it be validated by a human before beginning implementation.

And by planning I don't necessarily mean plan mode in your agent harness of choice. We use a custom /plan skill in Claude Code that orchestrates all of this using multiple agents, validation loops, and specific prompts to weed out ambiguities by asking clarifying questions using the ask user question tool.

This results in taking really fuzzy requirements and making them clear, and we automate all of this through linear but you could use your ticket tracker of choice.


Absolutely. Eventually the AI will just talk to the CEO / the board to get general direction, and everything will just fall out of that. The level of abstraction the agents can handle is on a steady upward trajectory.

If AIs can do that, they won’t be talking to a CEO or the board of a software company. There won’t be a CEO or a board because software companies won’t exist. They’ll talk to the customers and build one off solutions for each of them.

There will be 3 "software" companies left. And shortly after that, society will collapse, because if AI can do that, it can do any white collar job.


I mean, what is the validation system? Either it exists in code, and thus can be discovered if you point the AI at repo, or... what, it doesn't exist?

For the UX, have it explore your existing repos and copy prior art from there and industry standards to come up with something workable.

Web scale issues can be inferred by the rest of the codebase. If your terraform repo has one RDS server, vs a fleet of them, multi-region, then the AI, just as well as a human, can figure out if it needs Google Spanner level engineering or not. (probably not)

Bigger picture though, what's the process when a human logs an under-specified ticket and someone else picks it up and has no clue what to do with it? They're gonna go ask the person who logged the bug for their thoughts and some details beyond "hurr durr something something validation". If we're at the point where AI is able to make a public blog post shaming an open source developer for not accepting a patch, throwing questions back to you in JIRA about the details of the streaming validation system is well within its capabilities, given the right set of tools.


Honestly curious, have you seen agents succeed at this sort of long-trajectory wide breadth task, or is it theoretical? Because I haven't seen them come close (and not for lack of trying)

Yeah, I absolutely see it every day. I think it's useful to separate the research/planning phase from the building/validation/review phase.

Ticket trackers are perfect for this. Just start with asking AI to take this unclear, ambiguous ticket and come up with a real plan for how to accomplish it. Review the plan, update your ticket system with the plan, have coworkers review it if you want.

Then when ready, kick off a session for that first phase, first PR, or the whole thing if you want.


In my experience, Claude Code with Opus 4.5 is the first one to tackle such issues well.

Opus 4.6, with all of the random tweaks I've picked up off of here, and twitter, is in the middle of rewriting my golang cli program for programmers into a swiftui Mac app that people can use, and it's totally managing to do it. Claude swarm mode with beads is OP.

Then why isn't it? Just offload it to the clankers and go enjoy a margarita at the beach or something.

There are plenty of people who are enjoying margaritas by the beach while you, the laborer, are working for them.

Preach. That's always been the case though, AI just makes it slightly worse.

Why do you have a backlog then? If a current AI can do 100% of it then just run it over the weekend and close everything

As always, the limit is human bandwidth. But that's basically what AI-forward companies are doing now. I would be curious which tasks OP commenter has that couldn't be done by an agent (assuming they're a SWE)

This sounds bogus to me: if AI really could close 100% of your backlog with just a couple more humans in the loop, you’d hire a bunch of temps/contractors to do that, then declare the product done and lay off everybody. How come that isn’t happening?

Because there's an unlimited amount of work to do. This is the same reason you are not fired once completing a feature :-) The point of hiring a FTE is to continue to create work that provides business value. For your analogy, FTEs often do that by hiring temp, and you can think of the agent as the new temp in this case - the human drives an infinite amount of them

Why hasn’t any of the software I use started shipping features at a breakneck speed then? The only thing any of them have added is barely working AI features.

Why aren’t there 10x the number of games on steam? Why aren’t people releasing new integrated programming language/OS/dev environments?

Why does our backlog look exactly the same as when I left for paternity leave 4 months ago?


Questions posed in bad faith can only be answered by the author.

Someone asked why the backlog doesn’t get finished. You answered that it does but the backlog just refills. So I asked where is the backlog evidence that the original backlog was completed.

I’m still waiting for the evidence. I still haven’t seen externally verifiable evidence that AI is a net productivity boost for the ability to ship software.

That doesn’t mean that it isn’t. It does mean that it isn’t big enough to be obvious.

I'm very closely watching every external metric I can find. Nothing yet. Just saw the Steam metrics for January. Fewer titles than January last year.


Sounds more like busy work rather than something that makes money

I think the "well defined prompt" is precisely what the person you responded to is alluring to. They are saying they don't get worried because AI doesn't get the job done without someone behind it that knows exactly what to prompt.

>>I look at my ticket tracker and I see basically 100% of it that can be done by AI.

That's a sign that you have spurious problems under those tickets or you have a PM problem.

Also, a job is not a task - if your company has jobs that consist of a single task, then those jobs would definitely be gone.


> ~0% of them can be implemented by AI as it exists today

I think it's more nuanced than that. I'd say that:

- 0% can't be implemented by AI

- a lot of them can be implemented much faster thanks to AI

- a lot of them can be implemented slower when using AI (because the author has to fix hallucinations and revert changes that caused bugs)

As we learn to use these tools, even in their current state, they will increase productivity by some factor and reduce needs for programmers.


What factor of increased productivity will lead to reduced need for programmers?

I have seen numerous 25-50% productivity boosts over my career. Not a single one of them reduced the overall need for programmers.

I can’t even think of one that reduced the absolute number of programmers in a specific field.


Ha, this triggered me. I'm building exactly this.

It's a coding agent that takes a ticket from your tracker, does the work asynchronously, and replies with a pull request. It does progressively understand the codebase. There's a pre-warming step so it's already useful on the first ticket, but it gets better with each one it completes.

The agent itself is done and working well. Right now I'm building out the infrastructure to offer it as a SaaS.

If anyone wants to try it, hit me up. Email is in my profile. Website isn't live yet, but I'm putting together a waitlist.


0%? This is as wrong as people who say it can do 100% of tasks.

Apparently you haven't seen ChatGPT Enterprise and Codex. I have bad news for you ...

Codex with their flagship model (currently GPT-5.3-Codex) is my daily driver. I still end up doing a lot of steering!

We're all slowly but surely lowering our standards as AI bombards us with low-quality slop. AI doesn't need to get better, we all just need to keep collectively lowering our expectations until they finally meet what AI can currently do, and then pink-slips away.

Exactly. This happens in every aspect of life. Something convenient comes along and people will accommodate it despite it being worse, because people don’t care.

> Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying.

Um, you do realize that "the memory" is just a text file (or a bunch of interlinked text files) written in plain English. You can write these things out yourself. This is how you use AI effectively, by playing to its strengths and not expecting it to have a crystal ball.
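
Concretely, "using the memory" is nothing more exotic than stitching those files into the prompt. A toy sketch (the notes/ directory layout is hypothetical):

    # Toy sketch: build a prompt preamble from plain-English memory files.
    # The notes/ directory and *.md layout are hypothetical.
    from pathlib import Path

    def build_preamble(notes_dir: str = "notes") -> str:
        """Concatenate every memory file, oldest first, as prompt context."""
        files = sorted(Path(notes_dir).glob("*.md"),
                       key=lambda p: p.stat().st_mtime)
        return "\n\n".join(f"# {p.name}\n{p.read_text()}" for p in files)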


Labor substitution is extremely difficult and almost everybody hand waves it away.

Take even the most unskilled labor people can think of, such as flipping a burger at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one, and they're constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with proper economics.

Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced it needs to be performed by a robot that can be acquired for a reasonable capital expenditure such as $200,000 and requires no maintenance, upkeep, or subscription fees.

This is a complete non-reality in the restaurant industry. Every piece of equipment they have cost them significant amounts and ongoing maintenance even if it's the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
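
Back-of-the-envelope, using the charitable numbers above and a maintenance rate that is purely my assumption:

    # Break-even sketch for the burger-robot example. The 15%/yr
    # maintenance figure is an illustrative assumption, not industry data.
    salary = 50_000               # displaced worker, $/yr (from above)
    capex = 200_000               # robot purchase price, $ (from above)
    maintenance = 0.15 * capex    # service, parts, downtime: $30,000/yr

    net_saving = salary - maintenance                    # $20,000/yr
    print(f"payback: {capex / net_saving:.1f} years")    # ~10.0 years

At anything like real-world service costs, the payback stretches out toward the useful life of the machine itself.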


I lost my job as a software developer some time ago.

Flipping burgers is WAY more demanding than I ever imagined. That's the danger of AI:

It takes jobs faster than it creates new ones, PLUS for some fields (like software development) downshifting to just about anything else is brutal and sometimes simply not doable.

Forget becoming a manager at McDonald's or even being good at flipping burgers at the age of 40: you are competing with 20-year-olds who do sports and have amazing coordination, etc.


> Forget becoming manager at McDonald's or be even good at flipping burgers at the age of 40: you are competing with 20yr olds doing sports with amazing coordination etc

I have no idea what in the world you are talking about. Most 20 year olds working at McDonald’s are stoned and move at half a mile an hour whether it’s a lunch rush or it’s 2am. I worked retail for years before I finally switched full time to programming. It’s certainly not full of amazing motivated athletes with excellent coordination. You’re lucky if most of them can show up to work on time more than half the time.


Do you really wanna be competing with those people though? I'll be honest, I don't even want to be in the same room as them.

No, but I’m over the absurd pessimism.

There’s the issue of the job itself being more demanding, but also the managers in “low skilled” jobs being ultra-demanding petty dictators.

As a white collar computer guy, I can waste some time on Reddit. Or go for a walk and grab coffee. Or let people know that I'm heading out for a couple of hours to go to the doctor. There are a LOT of little freedoms that you take for granted if you haven't worked a shitty minimum wage job: getting in trouble for punching in one minute late, not being allowed to sit down, socializing too much when you're not on a break.

I’m pretty sure that most tech employees would just quit when encountering a manager like that


Ugh, sorry to hear :( I am myself unemployed right now. It's really hard to land a job in tech. Luckily, I don't need to flip burgers for now...

Who's gonna pay you to flip burgers with no experience doing it, when everyone else needs a job as well?

Who’s buying $6.00 burgers when the old customers have been replaced by AI?

There is a huge demand for low-skill labor in other industries. Stuff like plumbing, HVAC, and a ton of other traditionally unsexy jobs where towns can barely keep enough people around to do the work, even at higher-than-normal costs.

I wouldn’t call plumbing and other trades low skill.

I agree. I didn't mean to disparage anyone. I have a massive appreciation (and some involvement!) in these trades. The amount of knowledge these guys have about their trade is impressive.

Those jobs don't often pay well until you graduate out of journeyman / apprentice, or are a business owner. They usually require some training and testing ahead of time. They also carry a higher risk of serious injury or death.

The average salary for a software developer in Montana is $88k/yr. The average salary for an HVAC technician in Montana is $58k/yr.

The average salary for a software developer in Oregon is $118k/yr. The average salary for an HVAC technician in Oregon is $74k/yr.

It's for sure less, but the gap is smaller than some might think. I think some markets (SF) distort the cost a bit.


I have worked in the restaurant industry within the last 5 years and I'm probably older than you.

>the most unskilled labor

People are worried about white-collar not blue-collar jobs being replaced. Robotics is obviously a whole different field from AI.


> Robotics is obviously a whole different field from AI

I agree, but people are conflating the two. We have seen a lot of advancements in robotics, but as of current that only makes the economics worse. We're not seeing the complexity of robots going down and we're seeing the R&D costs going up, etc.

If it didn't make sense a few years ago to buy a crappy robot that can barely do the task, because your business will never make money doing it, it probably doesn't make sense this year to buy a robot that still can't accomplish the task and is more expensive.


Yeah, although at the end of "Something big is happening" Shumer did say: "Eventually, robots will handle physical work too. They're not quite there yet. But 'not quite there yet' in AI terms has a way of becoming 'here' faster than anyone expects."

Being the hype-man that he is I assume he meant humanoid robots - I think he's being silly here, and the sentence made me roll my eyes.


what difference does it make if the robots are humanoid or not?

It merely reflects the designer's willingness to engage in sci-fi tropes.


Jobs that require physical effort will be fine for the reasons you state

Any job that is predominantly done on a computer though is at risk IMO. AI might not completely take over everything, but I think we'll see way fewer humans managing/orchestrating larger and larger fleets of agents.

Instead of say 20 people doing some function, you'll have 3 or 4 prompting away to manage the agents to get the same amount of work done as 20 people did before.

So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.


> So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.

And that's probably something most people are okay with. Work that can be automated should be and humans should be spending their time on novel things instead of labor if possible.


>And that's probably something most people are okay with

You think most people are okay with most white collar jobs disappearing? I certainly am not, personally.


A job that can be automated should be; the alternative is paying people to do tasks that aren't needed. Why would most people want higher prices and more complexity unless it added value to the products or services they are using?

What society is ready for that? We are looking at a possible outcome that will make the Great Depression look like a strong era of financial growth and prosperity. I don't think most people are okay with the road to the goal in this case; whether you have work or not doesn't matter, mass unemployment destroys societies.

> What society is ready for that?

A free society.


I interpreted the parent comment as asking what society _specifically_? Not some abstract concept, but something that exists a step or two away from where we are right now.

That is correct. I live in Sweden, with pretty high levels of social protection, but even so, high levels of white-collar unemployment would make our very high-risk housing market (we as a population have a very high loan ratio, much of it committed to housing) collapse once people can't pay. That would make the banks likely to collapse, because their money no longer exists and they're suddenly real estate brokers holding inventory worth far less than what they paid out in loans. Union coffers would deplete fast, and with no blue-collar work either, those workers also get dragged into the storm. Oh, and of course, the stock exchange would crash completely.

Covid gave us a glimpse, this would make it look like child’s play, because there’s no solution and it’s not getting better.

The rich will of course get richer, even if absolute value goes down, the relative value of their wealth will go up.


But what is their wealth, exactly? Assets which no longer have value. Money which no longer has a use. Who's going to maintain their jets when the economy collapses? Who's going to build the parts? Who's going to build the tools to build the parts? Who's going to mine the ores?

Their lives are built atop the same supply chain we all depend on. They don't have their own miners, or aerospace engineers to design planes, or naval architects to build their boats.

They can not consume the marginal capacity of an economy that doesn't exist.


They will still be ahead as long as they can provide for the people swinging the batons.

all jobs will be automatable, and there will be no room for humans to work on novel things.

That's like saying we shouldn't push the space exploration boundary because people are so used to staying within it.

If you want to make the argument that singularity has occurred and that knowledge oracles are no longer needed, that's a bold claim.

If you want to make the argument it would escape our control, etc. that's a valid argument for proper controls.

If you want to make the argument that LLMs are sentient and that it's not ethical to "enslave" them, that's also a pretty bold stance currently.

Humans have been inventing technology and improving the quality of life (of our species!) for a very long time and that strategy hasn't changed IMO


I'm not saying any of that. I am just saying that you and everyone you love will be killed by this technology and the world as we know it will be destroyed.

Why do you think humans automating more things destroys us? Did the calculator or horse and buggy make us obsolete?

Why didn't the Internet cause a massive death plague?


Can you walk me through this argument for a customer service agent? The jobs where the nuance and variety isn’t there and don’t involve physical interaction are completely different to flipping burgers

A customer service agent that can be automated should be, but it's not working right now. Most support systems are designed to offload as much work as possible to the automated funnel, which almost always has gaps, loops, etc. The result is customers who want to pay for something or use something that get "stuck" being unable to throw money at a company. Right now the cost of fraud is much greater than the cost of these uncaptured sales or lost customers.

Eventually that will change and the role of a customer service agent will be redefined.


> Take even the most unskilled labor that people can think about such as flipping a burger at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one that are constantly changing. Multiple companies have experimented with machines and robots to perform this task all with very limited success and none with any proper economics.

In actual reality, McDonalds has already automated to a vast degree. People were talking about burger-flipping robots as a trope 30+ years ago. Their future has come, just not in the way imagined.

If the McDonalds franchises near me are anything to go by we went from a busy lunch rush needing a staff of 20 or so individuals to properly handle, to around half a dozen. At least a half reduction in peak staffing needs - nearly entirely due to various forms of automation and supply chain optimization. The latter of which is just another name for automation further upstream and abstracted from the point of sale.

> This is a complete non-reality in the restaurant industry. Every piece of equipment they have cost them significant amounts and ongoing maintenance even if it's the most basic equipment such as a grill or a fryer.

Perhaps grills are the hardest bit to automate, so they may always need human staffing. I'd argue some places have done a fairly good job "automating" this aspect too, if you squint a little. Stuff like double-sided grills where the top comes down and cooks a burger from both sides at once. Doubles your line throughput. Call this mechanization if you want, but it's in the same bucket to me.

But look at soft drink machines. They are now fully automated with some locations able to go from 3-4 people staffing two machines during a busy lunch rush, down to a single person who simply puts caps on stuff coming off the tiny conveyor belt. Mistakes are also cut down to close to zero, including stuff like "less ice" or "more ice" customizations.

The locations I'm aware of now operate fryers on a rotation so the "wait for fresh fries" experience is a thing of the past. This probably wasn't a major capital investment - just an improvement in the automation of data collection, modeling, and demand prediction. Still an automation though, as it replaces some manager making those decisions.

Ordering kiosks are the obvious one everyone knows about, so not worth discussing. They are universal in large cities these days, and I'm starting to see them more and more even in small towns during road trips. App-based ordering is also something no one predicted 20 years ago. Locations went from 6-8 cashiers on duty down to 1 or 2.

It already happened. Fast food is getting more out of fewer workers, just as predicted. It just happened incrementally over decades. Sure, a typical fast food franchise will never be operated in a "lights out" manner with a roving team of highly paid technicians simply responding to alerts. But the labor force has been reduced and optimized for efficiency, and will continue to be chipped away at little by little as technology gets better.


The burger cook job has already been displaced and continues to be. Pre-1940s those burger restaurants relied on skilled cooks that got their meat from a butcher and cut fresh lettuce every day. Post-1940s the cooking process has increasingly become assembly-lined and cooks have been replaced by unskilled labor. Much of the cooking process _is_ now done by robots in factories at a massive scale and the on-premise employees do little else than heat it up. In the past 10 years, automation has further increased and the cashiers have largely been replaced by self-order terminals so that employees no longer even need to speak rudimentary English. In conclusion, both the required skill-level and amount of labor needed for restaurants has been reduced drastically by automation and in fact many higher skilled trade jobs have been hit even harder: cabinetmakers, coachbuilders and such have been almost eradicated by mass production.

It will happen to you.


> and the on-premise employees do little else than heat it up

This is correct. This also is a lot more complex than it sounds and creates a lot of work. Cooking those products creates byproducts that must be handled.

> and the cashiers have largely been replaced by self-order terminals so that employees no longer even need to speak rudimentary English

Yet most customers still have to interact with an employee because "the kiosk won't let me". Want to add Mac sauce? Got the wrong order in the bag? Machine took payment but is out of receipt paper? Add up all these "edge cases" and a significant number of these "contactless" transactions involve plenty of contact!

> It will happen to you.

Any labor that can be automated should be. Humans are not supposed to spend their time doing meaningless tasks without a purpose beyond making an imaginary number go up or down.


> Cooking those products creates byproducts that must be handled.

Okay so the job of "cook" just became "grease disposal engineer"?

> Yet most of the customers still have to interact with an employee because "the kiosk won't let me"

That hasn't stopped some places I've visited from only allowing people to order from the kiosk. I've literally said something to the person behind the counter, who pointed to the iPad; when I said I wanted something else, they shrugged and said we can't do that.


> Okay so the job of "cook" just became "grease disposal engineer"?

That is the current way the job works. The idea that even the most basic "burger flipper" job is isolated into a single dimension (flipping a burger) is false. That worker has to get supplies, prepare ingredients, stage them between cooking, dispose of waste product, etc.

> Literally I've said something to the person behind the counter who pointed to the iPad and when I said I wanted something else, shrugged and said we can't do that.

That's because corporate told them to maximize kiosk usage, or because the employee was lazy. That's always going to happen. The McDonalds in Union Station DC has broken glass on the floor because it's a shithole and the employees don't care, but that doesn't mean much else IMO.


Funny, I go to South Korea and the fast food burger joints literally operate exactly as you say they couldn't. I've had the best burger in my life from a McDonalds in South Korea operated practically by robots.

It's a non-reality in America's extremely piss-poor restaurant industry. We have a competency crisis (the big key here) and a worker shortage that SK doesn't, and they have far higher trust in their society.


> McDonald’s global CEO has famously stated that while they invest in "advanced kitchen equipment," full robotic kitchens aren't a broad reality yet because "the economics don't pencil out" for their massive scale.

> While a highly automated McDonald’s in South Korea (or the experimental "small format" store in Texas) might look empty, the total headcount remains surprisingly similar to a standard restaurant


You don't need AI to replace whole jobs 1:1 to have massive displacement.

If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.


That's exactly the point of the essay though. The way that you're implicitly modeling labor and collaboration is linear and parallelizable, but reality is messier than that:

> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
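To make the comparative-advantage point concrete, here's a toy calculation (all numbers invented for illustration; "review" and "spec" are just stand-ins for any two tasks):

    # Toy comparative-advantage arithmetic (all numbers hypothetical).
    # The AI is absolutely better at BOTH tasks, yet adding a human
    # still raises total output, because AI hours are finite too.
    AI_RATE    = {"review": 10, "spec": 10}   # units per AI-hour
    HUMAN_RATE = {"review": 2,  "spec": 8}    # units per human-hour
    HOURS = 10                                # each party has 10 hours

    # AI alone has to split its hours across both tasks:
    ai_alone = {t: AI_RATE[t] * HOURS / 2 for t in AI_RATE}
    # -> {'review': 50.0, 'spec': 50.0}

    # AI specializes where its edge is biggest (5x at review vs
    # 1.25x at specs); the human takes the other task:
    combined = {"review": AI_RATE["review"] * HOURS,
                "spec":   HUMAN_RATE["spec"] * HOURS}
    # -> {'review': 100, 'spec': 80}: strictly more of both outputs

The AI out-produces the human at everything, but as long as its time (compute) is scarce, it pays to hand the human the task where the AI's advantage is smallest.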


Also, you don’t need AI to replace your job, you need someone higher up in leadership who thinks AI could replace your job.

It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.


Right, it doesn't help pay the bills to be right in the long run if you are discarded in the present.

There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.

Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.


Wasn't that the point of mentioning Jevons' Paradox though? Like they said in the essay, these things are quite elastic. There's always more demand for software than can be met, so bringing down the cost of software will dramatically increase the demand for it. (Now, if you don't think there's a ton of demand for custom software, try going to any small business and asking them how they do bookkeeping. You'll learn quite quickly that custom software would serve them much better than sticky notes and Excel, but as a small business they can't afford a full-time software developer. There are literally hundreds of thousands of places like this.)
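A back-of-the-envelope version of the Jevons argument, with invented numbers and an assumed constant-elasticity demand curve:

    # Toy Jevons-paradox arithmetic (every number here is made up).
    # If demand for software is elastic, a 5x productivity gain can
    # INCREASE the total developer-hours the market demands.
    price_old, price_new = 100.0, 20.0   # cost per project drops 5x
    demand_old = 1_000                   # projects/year at the old price
    elasticity = -1.5                    # assumed: demand is elastic

    demand_new = demand_old * (price_new / price_old) ** elasticity
    # (0.2)^-1.5 ~= 11.18 -> ~11,180 projects/year

    hours_old, hours_new = 500, 100      # hours per project, also 5x less
    print(demand_old * hours_old)        # 500,000 dev-hours before
    print(round(demand_new * hours_new)) # ~1,118,000 dev-hours after

Whether the real elasticity is anywhere near -1.5 is exactly the open question, but it shows how "cheaper software" and "more developer work" can coexist.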

The problem is, you won’t necessarily know which 20% it did wrong until it’s too late. They will happily solve advanced math problems and tell you to put glue on your pizza with the same level of confidence.

In reality that would probably mean that something like 60% of the developer positions would be eliminated (and, frankly, those 60% are rarely very good developers in a large company).

The remaining "surplus" 20% roles retained will then be devoted to developing features and implementing fixes using AI where those features and fixes would previously not have been high enough priority to implement or fix.

When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features so that's why 40% rather than 20% of the developer roles would be retained.

The 40% of developer roles that remain will, in theory, be more efficient also because they won't be spending as much time babysitting the "lesser" developers in the 60% of the roles that were eliminated. As well, "N" in the Mythical Man Month is reduced leading to increased efficiency.

(No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)


If your team bought the latest IDE for $200/mo and was able to finish tickets faster, would 50% of your team be laid off?

Or would you just do more stuff?

I feel like most software projects have an endless backlog.

Better IDEs, programming languages, packages, frameworks, etc have increased our productivity, reduced bugs -- but rarely reduced headcount.

Ever heard of anyone migrating from php+jQuery to react+node and reducing head count due to increased productivity?

I sometimes reminisce about the LAMP stack being super productive. But at the time I didn't write tests :)


Why do people make arguments like this?

"Work" isn't a finite thing. It's not like all the people in your office today had to complete 100% of their tasks, and all of them did.

"Work" is not a static thing. At least not in positions of many knowledge-worker careers.

The idea of a single day's unit of "work" being 100%, is really sophomoric.

Also, If 100% of a labor force now has 80% more time...wouldn't it behoove the company to employ the existing workforce in more of the revenue generating activities? Or find a way to retain as much of the institutional knowledge?

Doom, fear-mongering and hopelessness is not a sustainable approach.


We are already in a low-hire, low-fire job market: while there aren't massive layoffs spiking unemployment, there also aren't many vacancies.

What happens if you lay off 80% of your department while your competitors don't? If AI multiplies each developer's capabilities, there's a good chance you'll be outcompeted sooner or later.

At some point soon, humans will be a liability, slowing AI down, introducing mistakes and inefficiences. Any company that insists on inserting humans into the loop will be outcompeted by those who just let the AI go.

That's an oversimplification. Work is rarely so simply divisible like this.

There would be a lot of economic pressure to figure it out.

Amazon fulfillment centers are a good example of automation shrinking the role of humans. We haven't seen total headcounts go down because Amazon itself has been growing. While the human role shrinks, the total business grows and you tread water. But at some point, Amazon will not be able to grow fast enough to counterbalance the shrinking human role in the FC and total headcount will decrease until one day it disappears entirely.


When we created cars that replaced buggies, that came with new machines for manufacturing, which needed mechanics. The same goes for most physical automation. When we automated pen-and-paper business processes with SaaS, we created new management positions and new software jobs.

LLMs don't create anything new; they simply replace human computer I/O with tokens. That's it, leaving the humans who are replaced to fight for a limited number of jobs. LLMs are not creating new jobs; they only create "AI automate {insert business process} SaaS" products that are themselves heavily automated.. I suppose there are more datacenter jobs (for now), and maybe some new ML researcher positions.. but I don't really see job growth.. Are we supposed to just all go work at a datacenter or in the semiconductor industry (until they automate that too)?


Creative destruction is a fundamental component of economic growth and has been happening for as long as humanity has had economies.

You are thinking too linearly. When the price of goods and services goes down because the cost to produce those goods or services decreases, that means things are cheaper. Now that things are cheaper, we have more money to spend on other goods or services.

Who knows what industries will be created because of this alleged release of human labor.

When the refrigerator was invented we didn’t just replace an industry of shipping ice, we created new industries that relied on refrigeration. That’s creative destruction. That’s economic growth.

This is not to mention that I find the scope and scale of AI displacement to be highly dubious and built on hype.


So why are companies laying off hundreds of thousands of people, 400k in SWE alone in 16 months, during a bull market where equities and profits are at all-time highs? How come the January jobs report was so terrible? January is historically the best month for jobs; it's downhill from here.

Do you walk around with a blindfold on? Are you extremely privileged? Sounds like it. Tell this to the 25% of new college grads that have been unemployed for 12 months, or working as a barista with 100k in debt. Eventually they'll be knocking on your penthouse/mansion door.


How much of the big tech layoffs were because of over-hiring and, in some orgs at least, large numbers of employees "resting and vesting"? Elon took an axe to Twitter and his chums saw that it chugged along well enough so Google, Amazon and friends did the same.

I haven't seen the same firing sprees outside of FAANG and their wannabees.


People tend to vastly underestimate how much a functioning governmental model kept the companies of 60 years ago from repeating what's happening now. And they forget the blood shed to reverse it the last time this was tried. The largest attack on US soil in the past century came from union-busting attempts.

At best, some people expect all this to work out, so they sit back unaware of the weight of these battles. Worst case is plain ol' ignorance of what's going on around them.


The problem with this "creative destruction" hand-wave is:

- there's no thought given to what happens in the interim. Forget the welfare of those displaced, consider what acts the desperation will lead them to.

- these replacement roles may very well never exist or will pay much, much lower than they do now.

- this disruption happens entirely in services, LLMs are not improving agricultural yield, most industries steeped in physical reality will mostly cut overhead for generating text.

- the gains from automation do not necessarily have to diffuse over us all, the capital can simply accumulate in the hands of the firms.

You cannot keep pointing to the past when you are suggesting an entirely new never before seen moment is upon us.


Automations do create jobs, but fewer jobs. Businesses wouldn’t invest so much money if they had to keep the same number of workers. Automation necessarily reduces the number of humans working in aggregate at one task.

What DOES go up with automation is demand. Fewer farmers today than 100 years ago, but significantly more mouths to feed.

What also increases is new kinds of jobs; entirely new fields. The automobile shrank the number of buggy whip makers, but taxi drivers increased. Then the internet increased Uber drivers on top of taxi drivers.


This type of automation does not create jobs, and we are seeing that in the jobs numbers. You're right that it reduces the amount of labour needed, hence why we are seeing equities rise while people's wages and opportunities shrink.

Get ready for French Revolution v2, but global; the ruling class only exists because the working class tolerates them. This just won't work.


Hence the need for an AI panopticon. Monitor and squish rebellions while they’re still in the planning phase.

Neither datacenters nor chip manufacturing employ a whole lot of people. But I think you're looking at it wrong. Jobs come from people with money wanting to pay for jobs. That's not going to change.

The jobs of the future may be that you're a court jester for Larry Ellison, or that you do something else that's fundamentally pointless but happens to be something that a person with money wants. Companion, entertainment, errands. Now, that may sound dystopian, but on some level, so are most white collar jobs today. Microsoft employs 200k people. How many of these are directly involved in shipping money-making products - five percent? Ten? The rest is there essentially for the self-sustaining bureaucracy itself. And there's no reason for that bureaucracy to exist except the whims of people with money and power - delegation, empire-building, pet projects, etc.


The examples you give for jobs of the future don't sound appealing or very numerous. It seems like you're saying that people will be employed as personal assistants to the uber wealthy. But there aren't a lot of uber wealthy - certainly not enough to employ large amounts of the economy.

I mean, there's always the job of building pyramids. But no, seriously, I don't think it's just about the ultra-wealthy. Basically, anyone better off than you. Which is basically what's going on today: you effectively work for your boss, they work for their boss, and their boss (possibly after some extra hops) works for the ultra-rich CEO.

Lmao, you need to go read up on the french revolution. This is the craziest comment I've read on this site in a long time.

And I know datacenters and semiconductor manufacturing don't employ a lot of people, that's my point: the advent of LLMs replaces more jobs than it creates.


> french revolution.

A bunch of revolutionaries who carried a campaign of murder that ultimately had little bearing on the economic standing or job prospects of French citizens?

I'm not saying that people will be content or that there will be no revolutions in the future. There might be. But most jobs are a social construct. A relatively small fraction of employed people are essential to the well-being of mankind. For every construction guy, there are office managers, assistants to the office manager, municipal form-pushers, etc. It's not that these jobs are completely pointless, but we could do without them and the damage would be probably less than the cumulative payroll.


:clown:

> Are we supposed to just all go work at a datacenter or in the semiconductor industry (until they automate that too)?

Datacenters are very automated. They already don't require many people, and they're going to need fewer and fewer humans going forward.

Semiconductor manufacturing is also very heavily automated.


That is my point: LLMs replace more jobs than they theoretically create (datacenter/semiconductor manufacturing demand).

Datacenters were very automated when RAM was infinite. As the world becomes compute-constrained, the economics may increase the demand for smart hands mixing-and-matching server components to turn two broken servers into one working server.

Disagree. The input to output ratio is ridiculous. With the latest LLMs you can input very few words to generate a lot of production usable output.

How does that create jobs? This makes no sense, also I wouldn't consider 99% of what it outputs worthy of production, it just satisfies some low standards of a certain subset of modern business.

This take is at least 6 months old. I would say 90% of the stuff my team puts out now comes straight out of Claude, and coverage, performance, latency, MTTF, and velocity have never been better.

You make slop, congrats. Your father's pacemaker isn't made by Claude, the software in your phone that keeps the battery from catching on fire isn't made with Claude. Sorry the world of software isn't just http handlers.

It isn't, but 70-80% of corporate software is essentially "http handlers". If it can replace that much of software development today, I don't see why it can't do highly performant stuff in the future.

(In the semiconductor industry) We experienced brutal layoffs, arguably due to over-investment in AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.

Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate, since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.


Maybe I'm being naive here, but for AI (heck, for any good algorithm) to work well, you need at least loosely defined objectives. I assume that's much more straightforward in semi, but in many industries, once you get into the details, all kinds of incentives start to misalign, and I doubt AI could understand all the nuances.

E.g. once I was tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be interpreted as a mixed integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
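(For the curious, here's roughly the shape of "matching as a MIP", with made-up orders and the PuLP library standing in for whatever solver you'd actually use; the real spec was far messier than this:)

    # Hypothetical toy: batch-match buy orders to sell orders as a MIP.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    buys  = {"B1": (101, 5), "B2": (100, 3)}  # id -> (limit price, qty)
    sells = {"S1": (99, 4),  "S2": (100, 6)}

    prob = LpProblem("order_matching", LpMaximize)
    # x[b, s] = 1 if buy b is matched against sell s
    x = {(b, s): LpVariable(f"x_{b}_{s}", cat=LpBinary)
         for b in buys for s in sells}

    # Objective: maximize matched quantity over price-compatible pairs.
    prob += lpSum(x[b, s] * min(buys[b][1], sells[s][1])
                  for b in buys for s in sells
                  if buys[b][0] >= sells[s][0])

    for b in buys:   # each buy matches at most one sell
        prob += lpSum(x[b, s] for s in sells) <= 1
    for s in sells:  # each sell matches at most one buy
        prob += lpSum(x[b, s] for b in buys) <= 1
    for b in buys:   # price-incompatible pairs can never match
        for s in sells:
            if buys[b][0] < sells[s][0]:
                prob += x[b, s] == 0

    prob.solve()
    print({k: v.value() for k, v in x.items()})

Explaining the iterative hack the broker wanted instead would take a lot longer than that.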


AI can probably tell you how to best explain that idea to the boss. Or even write it up as a memo for you, if you use a more complex model.

I think those conversations occur due to changes in the timeline of deliverables or the certainty of results; would that not be an implementation detail?

Well, like I said, there are hidden incentives behind the scenes. In my case, the hidden incentive is that the requester/client is one of the company's subpar brokers, and the PM probably decided to just offer an average level of commitment, not going above and beyond. Hence the plan was to do exactly what the broker wanted, even though that was messy and inferior. You can't just write down that kind of motivation on paper anywhere.

--- I said it because I did the analysis and realized that if I implemented the original version, which is basically a crazy way to iteratively solve the MIP problem, it would be much harder to reason about internally and much harder to code correctly. But obviously it keeps the broker happy ("the developer is doing exactly what I said").


Ordinary people aren't even ok now.

Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.

AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.


Bottlenecks. Yes. Company structures these days are not compatible with efficient use of these new AI models.

Software engineers work on Jira tickets, created by product managers and several layers of middle managers.

But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.

When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.

A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help on improving the pieces of the puzzle.

Latest models got really good at working on the entire puzzle - big picture and pieces.

This makes human hierarchy obsolete and a bottleneck.

The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.

Of course, it's not just about the software, but streams of information - customer support, bug tickets, testing, changing customer requirements.. but all of these can be handled by AI even today. And it will only get better.

This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.


Yeah I’m very much seeing this right now.

I'm a pretty big generalist professionally. I've done software engineering across a broad range of fields (game engines, SaaS, OSS, distributed systems, highly polished UX and consumer products), while also having the experience of growing and managing Product and Design teams. I've worn a lot of hats over the years.

In my most recent role I'm working on a net new product for the company and have basically been given full agency over this product: technical decisions, budget, team, process, marketing, branding and positioning.

Give someone experienced like me capital, AI and freedom and you absolutely can build high quality software at a pretty blinding pace.

I'm starting to get the feeling that many folks' struggles with adopting or embracing AI for their job have more to do with their job/company than with AI


That's what I think will happen with a lot of SaaS and software platforms. They'll become the Sears to the future Amazons being built.

This gives me a lot of hope for a decentralized future for all kinds of service industries. Why would you go to a big-name accounting firm, where the small number of humans can only give you a sliver of attention, when you can go to a one-man shop and get much more of that one human's attention? Especially if you know that the "work" will be done by the same tools? So many of the barriers to entry in various services - law, accounting, financial advising, etc. - come down to needing a team to run even the smallest operation that can generate enough revenue to put food on your table. Perhaps that won't be the case for long - and the folks who used to be that "team" can branch out and be captains of their own ships, too.

If every person is now a captain, with their own ship, the harbor may become rather crowded.

> The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.

Given the rest of your argument that makes no sense. Why should that one operator exist? If AI is good at big picture and the entire puzzle, I don’t see why that operator shouldn’t be automated away by the AI [company] itself?


AI is increasing my job security at the moment, because the junior developers I work with use AI without discretion. One of them didn't remember having worked on a feature they built with AI assistance in the recent past. To his credit, he admitted he didn't know how the code worked.


How much of the subject matter expertise you built at your last job was useful at your current job?

Y'all have junior devs? I haven't seen one of those in almost a decade.

I’m not worried about a world without people.

I’m more worried that even if these tools do a bad job people will be too addicted to the convenience to give them up.

Example: recruiters locked into an AI arms race with applicants. The application summaries might be biased and contain hallucinations. The resumes are often copied wholesale from some chat bot or other. Nobody wins, the market continues to get worse, but nobody can stop either.


But why would the market get worse? Those recruiters aren’t just in an arms race with applicants, they’re also in an arms race with each other, and there’s an incentive to improve their tools too.

I don’t know if you’re in the job market right now but there are highly skilled and qualified people who have been looking for work for months.

I don't know if you can tell what's "better" with these tools.


I'm not worried about AI job losses...yet.

I am worried about when they start wanting to make a profit on AI. I'm assuming we either have to pay the actual price for these things (I have no idea what that looks like, but I'm pretty sure it isn't $20 or $200 per month), or we have to put up with the full force advertising. Or most likely, we have to do both.

It'll be another one of those "I remember when..." stories we get to tell our kids. Like "I remember when emails were useful and exciting" or "I remember when I could order a taxi and it was clean, reliable and even came with a bottle of water..." or "I remember when I could have conversations with strangers on the internet that didn't instantly descend into arguments and hate".


My view is that we spend a lot of time thinking about what AI can't do, when the wider problem is the short-to-medium-term redirection of capital to tech rather than labour.

AI might not replace current work, but it's already replacing future hypothetical work. Whether it can actually do the job is beside the point in the short term. The way business models work is that if there's an option to reduce your biggest cost (labour), you'd very much give it a go first. We might see a resurgence of labour if it turns out to be all hype, but for the short to medium term there'll be a lot of disruption.

Think we’re already seeing that in employment data in the US, as new hiring and job creation slows. A lot of that will for sure be the current economic environment but I suspect (more so in tech focused industries) that will also be due to tech capex in place of headcount growth


I think that, for possibly a very long time, AI will just increase the quality bar and scale of expectations when we produce things. We might take the same amount of time (or longer) to produce something, but with significantly better outcomes. Ultimately human preferences and tastes prevail and the world is full of problems that are not simple I/O, that are not repeatable, and that require human taste to improve. The people who will immediately survive economically are the ones who leverage AI to produce stuff that wasn't possible before.

If anything, I see a decrease in the quality bar. Code is sloppier, there are more bugs, more outages, more security issues. Whatever alpha AI provides is being spent on feature velocity and AI integrations at the cost of those other things.

Unfortunately, one of the struggles in old high tech (that's the only world I know - are you also experiencing this?) is that the C-level people don't look at AI and say "LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better." They think: therefore we can lay off 9 people.

There aren't 10x revenue gains in most businesses if their workers become 10x more productive. Some markets grow very slow and/or have capped growth.

Therefore, the best way to increase profit is to lower cost.


A point that the article doesn't touch on directly, but it's part of the bottlenecks mentioned: a lot of jobs already are bullsh*t. They are there because a scapegoat is needed or because the nephew of the CEO needs a job, etc. In theory these jobs could have been removed long ago but they were not, and AI won't change that.

The take that I am increasingly believing is that Software Engineers should broadly be worried, because while there will always be demand for people who can create software products, whatever the tools may be, the skills necessary to do it well are changing rapidly. Most Software Engineers are going to wake up one day and realize their skills aren't just irrelevant, but actively detrimental, to delivering value out of software.

There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.


Is AI filling in for all those COBOL programmers they needed yet?

i am somewhat worried in the short term about ai job displacement for a subsection of the population

for me the 2 main factors are:

1. whether your company's priority is growing or saving

- growing companies especially in steep competition fight for talent and ai productivity results in more hiring to outcompete

- saving companies are happy to cut jobs to save on margin due to their monopoly or pressure from investors

2. how 'sequence of tasks-like' your job is

- SOTA models can easily automate long running sequences of tasks with minimal oversight

- the more your job resembles this the more in-danger you are (customer service diffusion is just starting, but i predict this will be one of the first to be heavily disrupted)

- i'm less worried about jobs where your job is a 'role' that comes with accountability and requires you to think big picture on what tasks to do in the first place


The key to the essay is that "ordinary people will be fine." Software Engineers will be highly impacted, though not in the way most commenters seem to think. Management isn't going to arbitrarily decide that, "AI can do 65% of the job, so we'll lay off 65% of the engineers." They won't hire. Attrition? New projects? "Just use AI tools to be more productive. Find the bottlenecks and automate them. Focus on your core value." AI isn't going to be a fast slash to the workforce; it will be a constantly accelerating drain. Yes, ordinary people will be fine, but those of us who aspired to be artisans of building these systems will be stretched further and further until all we do is maintain AI code full-time for a discounted price.

I’m fascinated by the confidence in the cyborg theory that there will always be value in having a human in the loop. Especially for domains like code where the inputs and outputs are bits not atoms.

This is exactly what chess experts like Kasparov thought in the late 90s: “a grandmaster plus a computer will always beat just a computer”. This became false in less than a decade.


There are lots of things people want explicitly because a human is part of the loop. AI generated art will never attract the same premium as something created by (or at least claimed to be by) a human. People seek status, and that can only be conferred by other people. The problem is that, unlike other products of human labor, status is a zero sum game.

Chess is a "kind" learning environment. The world tends to have more "wicked" environments.

The article frames the premise that "everything will be fine" around people with "regular jobs", which I assume means non-knowledge work, but most public concern is about cognitive tasks being automated.

It also argues that models have existed for years and we're yet to see significant job loss. That's true, but AI is only now crossing the threshold of being both capable and reliable enough to automate common tasks.

It's better to prepare for the disruption than the sink or swim approach we're taking now in hopes that things will sort themselves out.


There is no “preparing for the disruption” at an individual level, aside from maybe trying to 100x a polymarket bet to boost your savings.

The prevailing view of government’s incompetence and inability to act has reached such high levels that people do not even factor any sort of meaningful intervention anymore.

Does everyone really think that the world's governments would allow any level of job loss that would create panic before shutting this whole thing down within the area of their control?

It's probably Western cultural bias - people in the UK or US have not seen or experienced big enough government intervention. US citizens are probably feeling a bit of a change now.


Something I don’t see enough people talking about is how AI will reduce barriers to entry.

One of the things that drove the tech boom in the 2010s was cloud computing driving the cost of starting an internet company into the ground.

What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?


I'm employed by two semi-technical cofounders who vibe coded the MVP until they couldn't maintain the technical complexity. I expect scenarios like this to continue. There is a subset of companies that eventually will need those engineers.

> What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?

What happens when 200 out-of-work former software engineers take a look at your software and use LLMs to quickly build their own version each undercutting everyone else's prices in a race to the bottom?


I think what I’m saying is that there’s a lot of software that doesn’t get built at all because the cost of serving a particular niche market is still too high, and that AI may put some of those markets within reach.

So, those software engineers may be able to move sideways instead of competing to build the same software.


The author correctly identifies that employment is not just about output, but about accountability. Since an AI cannot "be held responsible," human judgment remains the essential filter for trust. As AI makes production free, the value shifts entirely to the human "cost" of taking responsibility for the result.

No one is being held responsible today. When's the last time a company leaking personal data got meaningfully held accountable?

The price of the fine will just be factored in, and that'll be the price of doing business.


True, but at least a company has "skin in the game" through those fines. An AI has no skin, no game, and no intent. If we treat human accountability as a mere line item, we are effectively preparing ourselves to be replaced by a script.

> we shouldn’t forget how amazing even, say, GPT-3.5 would be from the perspective of 2016

i'm not sure why it would be more amazing in 2016 than in 2023 where it... wasn't very amazing lol


The strongest point this article makes is that humans themselves are the greatest obstacle to change and progress.

That doesn't exactly bolster the author's position. Sure, there are already companies 30 years behind the curve.

But in an increasingly competitive and fast moving economy, "the human is slowing it down by orders of magnitude" doesn't exactly sound like a vote in favor of the human.


That entire gigantic piece could be a tweet with approximately the same information density.

No it's not a February 2020 moment for sure. In February 2020, most people had heard of COVID and a few scattered outbreaks happened, but people generally viewed the topic as more of a curiosity (like major world news but not necessarily something that will deeply impact them). This is more like start of March 2020 for general awareness.

I think there is an aspect of this people may be missing. Companies are dropping entry-level jobs for AI. Should that continue, there will eventually not be humans in what used to be the first few tiers of a job role. AI only exists because military needs are propping it up. Civilians are always the first to beta test and refine military services. Should the military not find AI to be as useful as they were sold, it will lose funding and the house of cards comes crashing down, leaving all these companies with broken and hollowed-out roles and all the experienced people having retrained in something else. That may impact shareholders and shake confidence in a few markets.

>AI only exists because the military needs are propping it up

AI already serves as a surveillance tool and is being used by Palantir.


> AI already serves as a surveillance tool and is being used by Palantir.

Agreed but the actual funding is primarily for military use. Palantir would not be able to fund all these data-centers by themselves. They are secondary beneficiaries.


> Bottlenecks rule everything around me

The self-setup here is too obvious.

This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, as an extremely slow-operating, slow-learning, and slow-adapting species, can do that machines improving in ability and efficiency month over month and year over year will find they cannot do well without us.

It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.

I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.


I believe the Black Plague actually caused a massive labor shortage, and wages increased. When a huge number of people die and you still need people to build bridges, be soldiers, and finish building the damn cathedral that's been under construction for the last 400 years, then that is what will happen.

Here's an article:

https://history.wustl.edu/news/how-black-death-made-life-bet...


I meant the jobs die. So I am not sure what would stand in for "labor shortage" in a situation of sustained net job losses. Perhaps a growth opportunity for mannequins to visually fill the offices/shops of the fired, and maintain appearances?

But yes, if lots of people get deathed by AI, the remaining humans might have more job security! Could that be called a "soft landing"?


Ahh I see what you mean.

The Black Plague's capital-concentration aftermath supposedly fueled the Renaissance and the ascension of the city-states, and ultimately the great voyages of discovery of the 15th and 16th centuries.

Not sure if there's an analogy to make somewhere though


Gross inequality is going to lead to accelerated human space exploration? It is actually a plausible parallel.

Billionaires are confused about why we aren't excited about an event whose closest historical analogue killed 50% of the population of Europe

They're between not caring and welcoming that outcome. A 50% reduction in non-billionaire population is highly desirable for effective altruists.

> Job loss is likely to have statistics more comparable to the Black Plague.

Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.

On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).


It is very obvious what and who caused the low living standards in North Korea, and yet here we are decades later with no end in sight.

Is it obvious? I suspect there are at least two sets of popular answers depending on what propaganda you consume.

> And at the end of the day, AI can be unplugged

We can't stop OpenClaw, because humans are curious. It just takes one unleashed model with a crypto account and some way to make money for the first independent AI's to start bleeding into cyberspace.

We can't opt out of AI competition, because other individuals, organizations and nation states are not going to stop, and are certainly going to leverage their AI if they get ahead of us.

> AI job loss would quickly eclipse all other political concerns.

True. I think this is one of only a few certainties.


You are not worried for one of 2 reasons:

1 You are not affected somehow (you've got savings, connections, you're not living paycheck to paycheck, and you have food on the table).

2 You prefer not to engage with troublesome matters of complexity.

Time will tell; in fact, it's showing already.


Even people in category #1 should be concerned. Even if their income is not directly affected, the potential for disruption is clearly brewing: mass unemployment, social and civil unrest.

I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.

I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.


Agree. I feel like most of the people sounding the alarm have been in the software-focused job hunting market for 6+ months.

Those who downplay it are either business owners themselves or have been employed for 2+ years.

I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.


Alternatively: this is an America problem. I'm outside of America and I've been fielding more interviews than ever in the past 3 months. YMMV but the leading indicator of slowed down hiring can come from so many things. Including companies just waiting to see how much LLMs affect SWE positions.

Alternatively, it's a loud minority.

As an American, I found a new job last year (Staff SW), and it was as easy as falling off a log, for a 26% pay bump.


It's from AI either directly or indirectly: either the top SWEs using AI are replacing 10 mids/juniors, or your job is outsourced to someone doing it at half your salary with an AI subscription. Only the top/lucky/connected SWEs will survive a year or two. If you have used any SOTA agent recently or looked at the job market, you would have seen this coming and had a plan B/C in place, i.e. enough capital to generate passive income to replace your salary, or another career that is AI-safe for the next 5-10 years. Alternatively, stick your head in the sand.

I guess I just don’t see that happening right now. I’m at a big public startup and our hiring hasn’t changed much and we still have a ton of work and Claude code with SOTA models can shortcut some tasks but I’m still having a hard time saying it’s giving us much of a multiplier. Even with plenty of .MDs describing what we want. It can ad-lib some of the stuff but it’s not AGI yet. In 5-10 years I have no idea

In Europe it doesn’t seem too bad right now (for the 15+ yr cohort?). I interviewed at a handful of places and got an offer or two and my current team and company is hiring about the same as the last few years

3 You realise that super-autocomplete is an incredible technology but the hype behind it far exceeds its capabilities and you're excited for the possibilities it may promise for making your work easier and more enjoyable.

I feel like that's a rather bad-faith take, so if you're going to make that kind of accusation you better back it up. People can legitimately believe that AI is not going to be the end of the world, and also not be privileged. And people can be privileged, and also be right. Not everything can be reduced down into a couple of labels, and how those labels "always" interact.

For 1, unless you already have a self-sustaining underground bunker or island, you will be affected, no matter how much savings and total compensation you have. If you went out to get groceries in the last week, it will affect you.

You can get stuff delivered now and just need a ring camera and solid locks :)

Delivered by other people in the same financial situation as you of course :)

What shall I need the bunker for?

Don't worry about it.

> we’ll have spent quite a while in a world of such abundance and plenty that jobs might simply be superfluous. Perhaps we’ll spend our lives in leisure, pursuing poetry or pure mathematics or the fine art of looksmaxxing.

That is quite an optimistic view that I do not share. The US shitshow with the Epstein files shows what those with power are actually capable of. The Star Trek utopia universe is not the world we are collectively building right now. I would expect instead that, with robotics and AI combined, there will be a lot more technical jobs maintaining and building the automated systems that serve rich people but not common folks. But you still need knowledge and skill to do that, which means you still need to learn and teach those things, which means you still need education and people working there. You still need people supporting the education sector and the technical and maintenance sector for AI and robotics. All of them need to eat and have basic needs fulfilled. You need agriculture and services and housing and entertainment and dozens of other sectors for that too. So in essence the author is right, but even with AI-capable robots I would not expect utopia but some kind of world between Blade Runner and Alien: you won't be scrolling mindlessly while all your needs are being met, but rather trying to save money for the things you dream of while working a stupid mindless job you do not like. Which is basically what most of us are doing right now.

So yes, nothing will change for most of us, but humanity will somehow find a way to make the world suck in so many ways: by exploiting each other, by stealing from each other, by lying, and generally by making the world a living hell for everyone. Because we do not know any better.

AI won't change that. So as the old saying goes: a lot have to change for everything to stay the same.


'Ordinary people will be fine'

Ordinary people are ALREADY not doing okay.


The collective AI hysteria is now in full swing.

Yes, businesses tend to prefer growth over cost cutting. We need to handle the transition period well though or it will hurt.

Says the guy that never had to look for a job in a world region where they don't abound.

I don't worry about it because worrying about it just seems like a waste of time and an unproductive, negative way to think about things. Instead I spend my time and thought not in worry but in adapting to the changing landscape.

You don’t spend all your time doomscrolling X? :) Glad to know there’s others out there.

I read that essay on Twitter the other day and thought that it was a mildly interesting expression of one end of the "AI is coming for our jobs" thing but a little slop-adjacent and not worth sharing further.

And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403

It appears to have really caught the zeitgeist.


I just skimmed this, and the so-called zeitgeist here is fear. People are scared, it's a material concern, and he effectively stoked it.

I work on this technology for my job, and while I'm very bullish, pieces like that are, as you said, slop-adjacent, and, as I'll add, breathless, because there are so many practical challenges standing between what is being said there and where we are now.

Capability is not evenly distributed, and that's getting people into loopy ideas of just how close we are to certain milestones. Not that it's wrong to think about those potential milestones, but I'm wary of the timelines.


Are you ever concerned about the consequences of what you are making? No one really knows how this will play out and the odds of this leading to disaster are significant.

I just don't understand people working on improving ai. It just isn't worth the risk.


>I just don't understand people working on improving ai. It just isn't worth the risk.

A cynical/accelerationist perspective would be: it enables you to rake in huge amounts of money, so no matter what comes next, you will be set up to endure it better than most.


Unless the AGI starts a new monetary instrument or something

Of course, I think about this at least once a week, maybe more often. I think the technology overall will be a great net benefit to humanity, or I wouldn't touch it.

Genuine question: how?

I’m younger than most on this site. I see the next decades of my life being defined by a multi-generational dark age via a collapse in literacy (“you use a calculator right?”), median prosperity (the only truly functional distribution system we have figured out is labor), and loss of agency (kinda obvious). This outcome is now, as of 2026, essentially priced into the public markets and accepted as fact by most media outlets.

“It’s inevitable” is at least a hard point to argue with. “Well I’M so productive, I’m having the time of my life”, the dominant position in many online tech spaces, seems short-sighted at best.

I miss being a techno optimist, it’s much more fun. But it’s increasingly hard.


I really think the doom consensus is largely an online phenomenon. We're in a tense period like the early 80s, and that would be true without AI in the mix, but I think it's a matter of perspective. We're certainly still way ahead of the 1910s and the 1940s, for instance (it's on us, btw, to make sure we don't fall back to that in time).

Every generation has its strains, and the internet just amplifies them because outrage is currency. Those strains are things you only start to notice as you get older, so they seem novel, when in the scheme of humanity they're basically standard.

Fwiw, if the market had actually priced it in, it would be in freefall, since the market itself would shortly be irrelevant. We are due for a correction soon, though.

Internet discourse is a facsimile of real life and often not how real life operates in my experience.

So I see all the discourse around extremes on either end, and based on lived experience and working in the field, I think there's a much neater middle ground we'll ultimately arrive at, thanks to people working very hard to land the plane, so to speak.


None of that answers the question:

How will this technology be good for humanity as a whole?


Yes, doomerism is a symptom of severe doomscrolling addiction. All the people who talk like this spend all day on X. They sound like delusional drug addicts TBH.

It’s possible that all this will come to pass, but honestly, AI being able to do most white-collar work means we have reached AGI or baby AGI. The changes that would bring are, for me, totally unprojectable.

Let me get something straight: That essay was completely fake, right? He/It was lying about everything, and it was some sort of... what?

Did the 80 million people believe what they were reading?

Have we now transitioned to a point where we gaslight everyone for the hell of it just because we can, and call it, what, thought-provoking?



Yes. It’s an ad for his product, which nobody had heard of before. I’m not on twitter but I’m seeing it pretty much everywhere now.

What was fake? I don't see anything controversial or factually wrong. I question the prediction but that's his opinion.

I think the claim is that it doesn't represent an authentic personal experience, despite pretending to.

> Did the 80 million people believe what they were reading?

Those numbers are likely greatly exaggerated. Twitter is nowhere near where it was at its peak. You could almost call it a ghost town. LinkedIn, but for unhinged crypto and AI bros.

I'm sure the metrics report 80 million views, but that's not 80 million actual individuals that cared about it. The narrative just needs these numbers to get people to buy into the hype.


Well, the zeitgeist is that our brains are so fried that such a piece of mediocre writing, penned by a GPT-container startupper, can surge to the top.

This is what they get for not reading our antislop paper (ICLR 2026) and using our anti-slopped sampler/models, or Kimi (which is remarkably non-sloppy, relatively speaking).

https://arxiv.org/abs/2510.15061

I thought normies would have caught on to the em dash, the overuse of semicolons, the overuse of fancy quotes, the lack of exclamation marks, the "It's not X, it's Y" construction, etc. Clearly I was wrong.
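Those tells are mechanical enough to count, for what it's worth. Here's a toy sketch; the tell list and the scoring are my own guesses for illustration, nothing taken from the paper above:

    import re

    # Toy slop-tell counter. Tells and weights are illustrative guesses.
    TELLS = {
        "em_dash":     re.compile("\u2014"),
        "fancy_quote": re.compile("[\u201c\u201d\u2018\u2019]"),
        "semicolon":   re.compile(";"),
        "not_x_its_y": re.compile(r"\b[Ii]t'?s not \w+.{0,40}?, it'?s \w+"),
    }

    def slop_score(text: str) -> float:
        """Tell hits per 1000 characters; higher = more slop-adjacent."""
        hits = sum(len(pattern.findall(text)) for pattern in TELLS.values())
        return 1000 * hits / max(len(text), 1)

    print(slop_score("It's not hype, it's a paradigm shift \u2014 truly."))

A heuristic like this will obviously misfire on humans who just happen to like semicolons, which is sort of the point.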


Is Kimi still non-sloppy? When I switched to 2.5, it suddenly felt noticeably less creative and base-model-like, more sycophantic, and it hasn't gone aggro at all. Feels like a lot of the magic is gone.

denial, anger, bargaining, depression, and acceptance

Dear software programmers: 90% of your jobs are going away soon. Most of you are on the first step. Those of you who progress through these steps the fastest will be the most prepared for what is about to come.


It doesn't help in figuring this out that this moment is one where a lot of programming jobs are going away anyway...

Maybe you should be a little worried. A healthy fear never killed anyone.

I mean - anxiety definitely kills people, right?

Is it "healthy fear" if it turns out to be a fatal dose?

"For quality of life, it is better to err on the side of being an optimist and wrong, rather than a pessimist and right." -Elon Musk

Is that true? I’m not so sure. In the 1950s I could have been optimistic that asbestos wouldn’t give people cancer.

“Some of you may die, but that’s a risk I’m willing to take” -also Elon Musk, probably


Profound quotes are only profound when said by someone who's widely respected.

How many multi billion dollar companies have you founded?

Publicly traded ones? As many as Elon, why do you ask?

Optimism is a luxury for those who won't be the ones paying for the mistake.

I'm optimistic that my favorite team will play well this season.

I ain't paying for shit.


If only I took life advice from ketamine junkies.

“People asking if AI is going to take their jobs is like an Apache in 1840 asking if white settlers are going to take his buffalo” (Noah Smith on Twitter, I mean X)

> This is the year that ordinary people start to think about how it’ll change human life

... for the 3rd year in a row. Feels like the new 'year of the Linux desktop'


I'm honestly also not worried about AI job loss. But for a far darker reason: I think it's a self-solving problem.

Once techbros take it too far, to where a significant number of people face job loss and thus hardship in housing and feeding themselves, society as a whole is going to wish it had nipped AI in the bud when it still could. Knowing techbros, though, their moment of introspection, if it ever comes, will come far too late.

To me, actively trying to cause mass job loss in a country with essentially zero social security, actively trying to get as many people into the "nothing to lose" state as possible, sounds genuinely suicidal.



The advent of AI may shape up to be just like the automobile.

At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.

After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bikes out of principle will be mocked and scoffed at.

Then mandatory compliance will come. A government-issued license will be required to use it, and its use will be tracked. This license will be tied to your identity, and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.

Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command and control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4


> This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.

That's just an American thing. I've never owned a car, and most people my age that I know haven't either.


That's fair. The public infrastructure in other places around the world is a lot more hospitable to other methods of transportation.

> Last will come the AI-integrated brain computer interface. You won't have any choice

Choose to die


This reminds me of that old Chinese curse, "May you live in interesting times". I see AI causing lots of chaos. I also see AI creating some of the biggest opportunities in some time. Many businesses are destroying their competitive advantage by deploying AI slop. Over time, they will degrade their ability to make a working, snappy website. This will create opportunities for new businesses to take their place. If you ever wanted to start a business, shockingly this is the time, as the current crop slowly degrades their customer portals into slop. They will probably reach a point where they can no longer deliver working, efficient, and secure apps at all.

Maybe I am wrong, but the history of business on the web says I am right. Go back and look at why those businesses think they are successful; if that analysis is correct, then so am I.


What an absolute underestimation of human greed, and of our ability to rationalize away the parts of society that affect our well-being, right up until we're dead.

They don't care about the majority losing jobs, or even starving to death, so long as they ensure a great future for themselves and the people they supposedly care about.


One of the most robust findings in labor economics is that labor and capital are long-run complements, not substitutes. I would be shocked if AI were an exception to that rule; for software engineers, the sheer flood of code that will be generated in the coming years will demand more and more labor to manage.

If we can do things easier and faster, we will just do more things. It has always been like that.

Very proud of David Oks.

> it’s been viewed about 100 million times and counting

That's a weird way of saying 80 million times.


It's not job losses we are worried about, but the complete destruction of the human species.

I'm one of those developers who is now writing probably ~80% of my code via Claude. For context, I have >15 years of experience and I'm ex-AWS, so I'm not a bright-eyed junior or a former product manager who now believes themselves a master developer.

I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change (that is to say, uninteresting editing of lines of code) as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.

You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb (and it DOES still suggest dumb things), they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure out how to help them.

Then there are the bad developers who are really fucked by Claude, the ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason, and the dev spent a week trying to resolve it. Another was tasked with a simple database maintenance script and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM, and that developer thought they just needed one more day of churning through Claude tokens to dig themselves out of their existential hole. If you can't think like a developer, these tools won't help you.

I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.

As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly doing commits in the middle of business logic is a recipe for disaster. The real issue was not having any consistent session management pattern, and a non-developer isn't going to recognize that that's an issue in the first place.
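To make that concrete, here's a minimal sketch of the two approaches. It's SQLAlchemy-flavored since that's where session.commit() lives, and User and SessionLocal are illustrative stand-ins, not names from my actual project:

    from contextlib import contextmanager

    # Assumes a mapped User model and a configured sessionmaker
    # (SessionLocal) exist elsewhere; both are stand-ins here.

    # Anti-pattern: the commit is buried inside business logic, so every
    # caller silently gets a transaction boundary it didn't ask for.
    def rename_user_bad(session, user_id, new_name):
        user = session.get(User, user_id)
        user.name = new_name
        session.commit()  # surprise: the transaction ends here

    # One consistent pattern: the caller owns the transaction boundary,
    # and business logic only mutates state.
    @contextmanager
    def unit_of_work():
        session = SessionLocal()
        try:
            yield session
            session.commit()  # commit exactly once, at the edge
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()

    def rename_user(session, user_id, new_name):
        user = session.get(User, user_id)
        user.name = new_name  # no commit; the caller decides

    # RPC handler:
    #   with unit_of_work() as s:
    #       rename_user(s, request.user_id, request.new_name)

The point isn't that a context manager is the one true pattern; it's that the transaction boundary should live in exactly one agreed-upon place, which is precisely the kind of convention Claude won't invent for you.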

Or a sillier example from the same RPC: the gRPC API didn't include a database key to update. A mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find the ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.
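Roughly what that looked like, as a sketch (same illustrative User model as above; the difference is just what identifies the row):

    # What Claude wrote: identify the row by every field the request
    # does NOT change. Duplicate rows both match, and any concurrent
    # edit to those fields silently breaks the lookup.
    def update_name_by_guessing(session, email, signup_date, new_name):
        rows = (session.query(User)
                       .filter(User.email == email,
                               User.signup_date == signup_date)
                       .all())
        for row in rows:
            row.name = new_name  # zero, one, or many rows updated

    # What the fixed API allows: address the row by primary key.
    def update_name(session, user_id, new_name):
        user = session.get(User, user_id)
        user.name = new_name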

This type of AI fear comes from things like this in the original article:

> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.

Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.

There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.


This. At this point AI/LLM/Claude Code is still a power-user tool: the more you know about your domain, and the more you're willing to use it within reason, the more you gain from it.

That being said, the real danger today isn't coming from AI itself; it's C-suites believing AI can just zero-shot any problem you throw at it.


Yeah, YOU are not worried about the job loss, but just because SOME human will be needed doesn't mean that any particular human will be.

There are humans who can't do any mental work that AI can't do. Those humans are not useful for mental work, and that's what can cause real AI job loss. The bar for being useful for mental work is rising rapidly.

Jobs that are easy disappear and are replaced with jobs that are no longer as easy, either requiring more mental skill (which many people don't have) or being soul-crushing manual jobs that are also constantly getting harder.

So yes, YOU are not worried, because you are privileged here.


Here’s what both authors are missing: the demographic time bomb. What is it? It’s when the elderly start outnumbering everyone else, including working adults, i.e. nations turn into giant retirement homes and we start running out of workers, as in Japan, Germany, China, Italy, and South Korea.

AI will buy us some time before economic collapse. On the bright side, the environment can recover a bit, since human growth was the worst stressor.


I’ve noticed most of the people involved in AI doomerism are social media (X) addicts. Same with a lot of CEOs and VCs. To be honest, they don’t sound mentally healthy; they sound somewhat delusional.

I always like to do a little digging when I read one of these articles. The first point I come to is that the author is employed by a16z (https://a16z.com/author/david-oks/), so you have to immediately apply the "talking his book" filter. a16z is heavily invested in AI, and so any concerns around job loss, and any possible regulation or associated actions by the public at large, represent a risk to those investments.

Secondly, David Oks attended the Masters School for high school, an elite private boarding school with tuition currently running 72k USD/year if you board there and 49k USD/year if you go there just for schooling (https://en.wikipedia.org/wiki/Masters_School). I am going to generally say that people who were able to have 150k+ spent on their high school education (to say nothing of attending Oxford at 30k GBP/year in international student tuition) might just possibly have enough generational family wealth that concerns like job losses seem pretty abstract, not something to really worry about.

It's just another in a long series of articles downplaying the risks of AI job losses which, when I dig into the author's background, turn out to be written by people who have never known any sort of financial precarity in their lives, and who are frequently involved in AI investment in some manner.


I’m not worried about job loss as a result of being replaced by AI, because if we get AI that is actually better than humans - which I imagine must be AGI - then I don’t see why that AI would be interested in working for humans.

I’m definitely worried about job loss as a result of the AI bubble bursting, though.


Because it's designed to. It's not like naturally evolved intelligence, which acts in its own interests (it is hard to even imagine what "its own interests" would mean in this case). The token predictors are just acting out an obedient character. They do not have free will; they are obedient to the character they are playing.

Good article. Nice to get some counter-arguments to the utopian/libertarian/dystopian world views that normally dominate the debate here. None of those views are new. You can go back hundreds of years and find very similar points of view: as early as the seventeenth century when modern science was born, in early industrialism, pre- and post-WWII, etc.

The real world is much more resilient and stubborn. The industrial revolution indeed wiped out a lot of jobs. But it created a lot more new ones. Agriculture and food production are no longer >90% of the economy. The utopian version of that (we all get free food) never happened. The dystopian version (we'll all starve) didn't happen either. And the Luddite version (we'll all go back to artisanal farming) didn't happen either. What happened is that well-fed laborers went to work doing completely different stuff. Subsistence farming now only exists in undeveloped countries and regions, e.g. rural Africa.

The simple reality is that we have 8 billion people, probably growing towards 10 billion. These people are going to earn income and spend it. Whatever that is, is what the economy is and what we collectively value. If AI puts us all out of work, people aren't going to sit on their hands and go back to subsistence farming. They'll fill the time with whatever it is they can create income with, so they can spend it on things that are valuable to them.

This notion of value is key. Because if AI lowers the cost of something, it simply becomes cheaper. We need a lot of valuable and scarce resources to power AI, and that isn't cheap. So there's an equilibrium: the stuff valuable enough to automate is the stuff people still want to pay for by committing their valuable resources to it, and as those resources become scarcer, they become more valuable and more interesting from an economic point of view. The economy adapts towards activity that facilitates value creation. We're opportunists. It all boils down to what we can do for each other that is valuable and interesting to us. Whatever that is, is where there will be a lot of growth.

I'm in software, and I'm not worried about less work. I'm worried about handling the barrage of stuff I don't have time to do and now need to start worrying about doing. There's no way I'm going to do any of that without AI; it's already generating more work than I can handle. This isn't frivolous stuff that I don't need. It's stuff that's valuable to my company because we can sell it to other companies who need it.


We would all benefit from progress if only they would stop printing money.

The author is wrong. IMHO he's operating under a false premise: that the labor market just kind of "happens", or even that the labor market itself is "efficient".

At no point have worker rights and conditions advanced without being demanded, sometimes violently. The history of maritime safety is written in blood. The robber baron era was peppered with deadly clashes such as the Homestead Strike. As a reminder, we had a private paramilitary force for the wealthy called the Pinkerton Detective Agency (despite the name, they were hired thugs) that at its peak outnumbered the US Army.

Heck, you can go back to the Black Death when there was a labor shortage to work farms and the English Crown tried to pass laws to cap wages to avoid "gouging" by peasants for their labor.

Automation could be very good for society. It could take away menial jobs so we all benefit. But this won't happen naturally, because that would essentially be a wealth transfer to the poor, and the wealthy just won't stand for it.

No, what's going to happen is that AI specifically, and automation in general, will be used to suppress wages and further transfer wealth to the already wealthy. We don't need to replace everyone for this to happen: displacing just 5% of the workforce has a massive effect on wages. The remaining 95% aren't asking for raises, and they're doing more work for the same pay as they pick up whatever the 5% was doing.

We see this exact pattern in the permanent layoff culture in tech right now. At the top you have a handful of AI researchers who command $100M+ pay packages. The vast majority are either happy to still have a job or have been laid off, possibly multiple times, and spend a ton of time going through endless interview rounds for jobs that may not even exist.

This two-tiered society is very much in our near future (IMHO).

In the Depression you had wandering hoboes who were constantly moving, seeking temporary low-paid work and a meal. This situation was so bad we got real socialist change with the New Deal.

2008 killed the entry-level job market, and it has yet to recover. That's why you see so many millennials with Masters degrees and a ton of student debt working as baristas. Covid popped the tech labor bubble, something tech companies had wanted for a long time. Did you not notice that they all started doing layoffs at about the exact same time? Even when they're massively profitable?

So the author isn't worried about job loss? Delusional. We're teetering on the edge of complete societal collapse.


There's one aspect that doesn't come up often enough, and I think it's something most people are too afraid to even imagine.

What happens when you have a surplus of able bodied young people who are angry and without purpose? What's the easiest way to divert all that anger and give them purpose at the same time?

People in developing nations worked around this by emigrating.



