
The golden age for me is any period where you have the fully documented systems.

Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.

And software that’s open and can be modified.

Instead what we have is:

- AI models which are little black boxes, beyond our ability to fully reason about.

- perpetual subscription services for the same software we used to “own”.

- hardware that is completely undocumented to all but a small few who are granted an NDA before hand

- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”

- and distributed systems becoming centralised around GitHub, CloudFlare, AWS, and so on and so forth.

The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack to allow us to use natural language as pseudocode. And that is a very special breakthrough, but it’s not enough by itself to overlook all the other problems with modern computing.





My take on the difference between now and then is “effort”. All those things mentioned above are now effortless, but the door to “effort” remains open as it always has been. Take the first point for example. Those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself. We used to purchase software or write it ourselves before it became effortless to get it for free in exchange for ads, and then a subscription once we grew tired of ads or fell for a bait and switch. You can also argue that it has never been easier to write your own software than it is today.
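To make that concrete, here is a toy sketch of my own (not from the linked course): a character-level bigram “language model” in plain Python, with a made-up corpus. LLMs are, very roughly, an enormously scaled-up, learned version of this sampling loop.

    import random
    from collections import defaultdict

    # Count character bigrams in a tiny corpus; next-character probabilities
    # are just normalised bigram frequencies. The corpus is made up.
    corpus = "the cat sat on the mat and the cat ate the rat"

    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def sample_next(ch):
        nxt = counts[ch]
        total = sum(nxt.values())
        r = random.uniform(0, total)
        for b, c in nxt.items():
            r -= c
            if r <= 0:
                return b
        return " "

    ch, out = "t", ["t"]
    for _ in range(40):
        ch = sample_next(ch)
        out.append(ch)
    print("".join(out))  # gibberish, but generated by sampling from learned statistics

Spend those 40 hours replacing the bigram counts with a trained network and you have walked most of the way from “black box” to “thing I can reason about”.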

Hostile operating systems. Take the effort to switch to Linux.

Undocumented hardware: well, there is far more open source hardware out there today, and back in the day it was fun to reverse engineer hardware. Now we just expect it to be open because we can’t be bothered to put in the effort anymore.

Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.


>Those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself.

That's like saying you can understand humans by watching some physics or biology videos.


No it’s not

Nobody has built a human so we don’t know how they work

We know exactly how LLM technology works


We know _how_ it works but even Anthropic routinely does research on its own models and gets surprised

> We were often surprised by what we saw in the model

https://www.anthropic.com/research/tracing-thoughts-language...


Which is…true of all technologies since forever

Except it's not. Traditional algorithms are well understood because they're deterministic formulas. We know what the output is if we know the input. The surprises that happen with traditional algorithms are when they're applied in non-traditional scenarios as an experiment.

Whereas with LLMs, we get surprised even when using them in an expected way. This is why so much research happens investigating how these models work even after they've been released to the public. And it's also why prompt engineering can feel like black magic.
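A toy sketch of the contrast (the names and numbers are made up): a deterministic function returns the same output for the same input on every run, while temperature-based sampling, simulated here with a hand-rolled softmax sampler, does not.

    import hashlib
    import math
    import random

    # Deterministic: same input, same output, every run, on every machine.
    def checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    assert checksum(b"hello") == checksum(b"hello")

    # Toy stand-in for LLM decoding: sample a "next token" from a softmax
    # over scores. Same input, possibly different output each call.
    def sample_token(scores, temperature=1.0):
        weights = {t: math.exp(s / temperature) for t, s in scores.items()}
        r = random.uniform(0, sum(weights.values()))
        for token, w in weights.items():
            r -= w
            if r <= 0:
                return token
        return token

    scores = {"yes": 2.0, "no": 1.5, "maybe": 1.4}
    print([sample_token(scores) for _ in range(5)])

And even with greedy decoding, the mapping from prompt to output is opaque in a way that sha256 isn’t: we can’t state in advance what the model will say, only run it and find out.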


I don’t know what to tell you other than to say that the concept of determinism in engineering is extremely new

Everything you just said holds equally true for chemical engineering and biomedical engineering, so you need to get some experience


That’s some epic goal post shifting going on there!!

We’re talking about software algorithms. Chemical and biomedical engineering are entirely different fields. As are psychology, gardening, and morris dancing


We know why they work, but not how. SotA models are an empirical goldmine, we are learning a lot about how information and intelligence organize themselves under various constraints. This is why there are new papers published every single day which further explore the capabilities and inner-workings of these models.

You can look at the weights and traces all you like with telemetry and tracing

If you don’t own the model then you have a problem that has nothing to do with technology


I’ve worked in the AI space and I understand how LLMs work in principle. But we don’t know the magic contained within a model after it’s been trained. We understand how to design a model, and how models work at a theoretical level. But we cannot know how well it will perform at inference until we test it. So much of AI research is just trial and error with different dials repeatedly tweaked until we get something desirable. So no, we don’t understand these models in the same way we might understand how a hashing algorithm works. Or a compression routine. Or an encryption cipher. Or any other hand-programmed algorithm.
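To give a flavour of that trial and error, here is a sketch of random search over training “dials”; train_and_evaluate is a hypothetical placeholder for a real (and expensive) training run.

    import random

    # Hypothetical stand-in for an expensive training run. In reality this is
    # hours or days of GPU time, and the resulting score cannot be predicted
    # in advance, only measured.
    def train_and_evaluate(learning_rate, layers, dropout):
        return random.random()  # placeholder validation score

    best_score, best_config = -1.0, None
    for trial in range(20):
        config = {
            "learning_rate": 10 ** random.uniform(-5, -2),
            "layers": random.choice([4, 8, 12, 24]),
            "dropout": random.uniform(0.0, 0.3),
        }
        score = train_and_evaluate(**config)
        if score > best_score:
            best_score, best_config = score, config

    print(best_config, best_score)  # we learn *that* this config won, not *why*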

I also run Linux. But that doesn’t change how the two major platforms behave and that, as software developers, we have to support those platforms.

Open source hardware is great but it’s not in the same league of price and performance as proprietary hardware.

Agentic AI doesn’t make me feel hopeless either. I’m just describing what I’d personally define as a “golden age of computing”.


but isn't this like a lot of other CS-related "gradient descent"?

when someone invents a new scheduling algorithm or a new concurrent data structure, it's usually based on hunches and empirical results (benchmarks) too. nobody sits down and mathematically proves their new linux scheduler is optimal before shipping it. they test it against representative workloads and see if there is uplift.

we understand transformer architectures at the same theoretical level we understand most complex systems. we know the principles, we have solid intuitions about why certain things work, but the emergent behavior of any sufficiently complex system isn't fully predictable from first principles.

that's true of operating systems, distributed databases, and most software above a certain complexity threshold.


No. Algorithm analysis is much more sophisticated and well defined than that. Most algorithms are deterministic, and it is relatively straightforward to identify their complexity, O(). Even for nondeterministic algorithms we can evaluate asymptotic performance under different categories of input. We know a lot about how an algorithm will perform under a wide variety of input distributions regardless of determinism. In the case of schedulers, and other critical concurrency algorithms, performance is well known before release. There is a whole subfield of computer science dedicated to it. You don't have to "prove optimality" to know a lot about how an algorithm will perform. What's missing in neural networks is the why and how of any input propagating through the network during inference. It is a black box as far as understandability goes. Under a great deal of study, but still very poorly understood.
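For example, counting comparisons is enough to see the predicted O(n) versus O(log n) behaviour directly, no production workload required, which is exactly the kind of a-priori knowledge we don't have for a trained network:

    def linear_search(xs, target):
        comparisons = 0
        for x in xs:
            comparisons += 1
            if x == target:
                break
        return comparisons

    def binary_search(xs, target):
        comparisons, lo, hi = 0, 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if xs[mid] == target:
                break
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return comparisons

    for n in (1_000, 10_000, 100_000):
        xs = list(range(n))
        # Worst case: the target is the last element.
        print(n, linear_search(xs, n - 1), binary_search(xs, n - 1))
    # Comparison counts grow linearly and logarithmically, exactly as predicted.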

Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.

If GenAI could only write documentation it would still be a game changer.
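For the curious, the workflow is roughly this; call_llm is a hypothetical placeholder for whichever API or local model you use, and the prompt wording is made up:

    from pathlib import Path

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder: swap in your provider's client or a local model.
        raise NotImplementedError

    def document_folder(folder: str) -> str:
        sources = []
        for path in sorted(Path(folder).rglob("*.py")):
            sources.append(f"### {path}\n{path.read_text(errors='ignore')}")
        prompt = (
            "Analyze the following source files and write a README-style document "
            "describing what the project does, its main modules, and how they fit "
            "together.\n\n" + "\n\n".join(sources)
        )
        return call_llm(prompt)

    # Usage (hypothetical): print(document_folder("./src"))

Large repos need chunking to fit a context window, but the shape of the task is just this.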


But it writes mostly useless documentation, which takes time to read and decipher.

And worse, if you are using it for public documentation, sometimes it hallucinates endpoints (I don't want to say too much here, but it happened recently to a fairly widely used B2B SaaS).


Loop it. Use another agent (one from a different company helps) to review the code and documentation and call out any inconsistencies.

I run a bunch of jobs weekly to review docs for inconsistencies and write a plan to fix them. It still needs humans in the loop if the agents don’t converge after a few turns, but it’s largely automatic (I babysat it for a few months, validating each change).
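Roughly the loop in question, with hypothetical writer/reviewer functions standing in for the two agents (the names and return shapes are assumptions, not any real API):

    def writer_agent(code: str, feedback: str) -> str:
        """Hypothetical: model A writes or revises docs for the code, given reviewer feedback."""
        raise NotImplementedError

    def reviewer_agent(code: str, docs: str) -> list:
        """Hypothetical: model B (different vendor) lists inconsistencies between code and docs."""
        raise NotImplementedError

    def document_with_review(code: str, max_turns: int = 4) -> str:
        docs, feedback = "", ""
        for _ in range(max_turns):
            docs = writer_agent(code, feedback)
            issues = reviewer_agent(code, docs)
            if not issues:  # the agents converged
                return docs
            feedback = "\n".join(issues)
        # No convergence: escalate to a human, as described above.
        raise RuntimeError("Docs did not converge; human review needed:\n" + feedback)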


That might work for hallucinations, but it doesn't work for useless verbosity. And the main issue is that LLMs don't always distinguish useless verbosity from necessary detail, so even when I ask one to cut the verbosity, it removes everything save a few useful comments/docstrings, but some of the comments it removed were ones I deemed useful. In the end I have to do the work of cutting the verbosity manually anyway.

The problem with looping is that any hallucination or incorrect assumption in an early loop becomes an amplifying garbage-in-garbage-out problem.

To translate your answer:

- “You’re not spending enough money”

- “You’re not micromanaging enough”

Seriously?


It can generate useful documentation or useless documentation. It doesn't take very long to instruct the LLM to generate the documentation, and then check if it matches your understanding of the project later. Most real documentation is about as wrong as LLM-generated documentation anyway. Documenting code is a language-to-language translation task, that LLMs are designed for.

The problem with documentation that I described wasn’t about the effort of writing it. It was that modern chipsets are trade secrets.

When you bought a computer in the 80s, you’d get a technical manual about the internal workings of the hardware. In some cases even going as far as detailing what the registers did on their graphics chipset or CPU.

GenAI wouldn’t help here for modern hardware because GenAI doesn’t have access to those specifications. And if it did, then it would already be documented, so we wouldn’t need GenAI to write it ;)


Have you tried reading the documentation it generates?

> The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.

I agree, that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source codes (which can help if some parts of the documentation are unclear, but also can be used to make your own modifications if needed).) (In some cases, we have some of this already, but not entirely. Not all hardware and software has the problems you list, although it is too common now. Making a better computer will not prevent such problematic things on other computers, and not entirely preventing such problems on the new computer design either, but it would help a bit, especially if it is actually designed good rather than badly.)


Actually this makes me think of an interesting point. We DO have too many layers of software, and rebuilding is always so cost prohibitive.

Maybe an interesting route is using LLMs to flatten/simplify, so we can dig out from some of the complexity.


I’ve heard this argument made before and it’s the only side of AI software development that excites me.

Using AI to write yet another run-of-the-mill web service in the same bloated frameworks and programming languages designed for the lowest common denominator of developers really doesn’t feel like it’s taking advantage of the leap in capabilities that AI brings.

But using AI to write native applications in low-level languages, built for performance and memory efficiency, does at least feel like we are getting some actual quality-of-life savings in exchange for all the fossil fuels burnt crunching LLM tokens.


> perpetual subscription services for the same software we used to “own”.

In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?

If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.


People have been building alternatives to MS Office, Adobe Creative Suite, and so on and so forth for literally decades and yet they’re still the de facto standard.

Turns out it’s a lot harder to disrupt than it sounds.


It's really hard. But not impossible. Figma managed to. What's different this time around is AI assisted programming means that people can go in and fix bugs, and the interchange becomes the important part.

Figma is another subscription-only service with no native applications.

The closest thing we get to “disruption” these days is web services with companion Electron apps, which basically just serve the same content as the website while duplicating the memory overhead of running a fresh browser instance.


Figma didn't disrupt anything except Adobe; it's the same shitty business model and the same shitty corporate overlords.

Local models exist and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state of the art mainframe at home in that era. Nor do people expect to have industrial scale stuff at home in other engineering domains.
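As a sketch of how low the software barrier is (the hardware is the hard part), this is roughly what local inference looks like with the llama-cpp-python bindings; the model path and parameters are placeholders, and the exact API should be checked against the library's docs:

    from llama_cpp import Llama  # assumes llama-cpp-python is installed

    # Placeholder path: any local GGUF model will do.
    llm = Llama(model_path="./models/some-model.gguf", n_ctx=2048)

    out = llm("Explain what a transformer block does, briefly.", max_tokens=128)
    print(out["choices"][0]["text"])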


