Hacker News

Reading some documentation to figure out a format is something you do once and takes you a few minutes.

Are you a developer? Then this is probably something you do a couple of times a day. Prompting your way to the correct version will take longer and will leave you with much less understanding of the system you just implemented, so once it fails you won't know how to fix it.




I love that the posture is "I have a problem I need you to fix," haha.

I don't need you to fix my problems. I'm reporting that the LLM-based solution beats the dogshit out of the old "become a journeyman on one of 11 billion bullshit formats or processes" practice.


I'm not trying to help you, I'm just wondering how the LLM actually helps you.

You don't need to become a journeyman at understanding a format, you just need to see a schema, or find an open source utility. I just can't comprehend the actual helplessness that a developer would have to experience in order to have to ask an LLM to do something like this.

If I were that daunted by parsing a standardized file format for a workflow, I would have to be experiencing a major burnout. How could I ever assume I could do any actual technical work if I'm overwhelmed by a parsing problem that has out-of-the-box solutions available?


I’ll give you a real concrete example. I had to build an app on the Mac, which needed to be signed. I did not want to learn Apple signing procedures in order to do this. It turns out I did not have to, because I got the robot to learn it. So then I was able to finish doing what it was I intended to do without having to spend an afternoon or a day misunderstanding the Apple signing procedures.

Could I have learned these and become a more virtuous person by knowing Apple’s signing rules? Maybe. What’s much more likely is that I might’ve just stopped doing this rather than deal with that particular difficulty. Instead, I was able to work on other problems that arose in the building of this application.

What I am suggesting to you is that I don’t have to fucking feel bad for being daunted anymore. And neither does anyone else. Folks that want to do that on their own time are free to, but I’m never going back.

There’s a lot of projects, for a lot of people, where this is gonna start to be the operative situation. Folks who might have gotten stuck on an early stumbling block are now just moving ahead and learning about different and frankly more interesting problems to solve. I’m still beating my head on things, but they are not “did I get this format just right?” things.

This shift is analogous to how we took having to do computer arithmetic out of the hands of programmers in the 80s. There used to be a substantial part of programming that was just computer arithmetic. Now, almost nobody does that. Nobody in this thread could build a full adder if their life depended on it, or produce an accurate sin function. It used to be that that would’ve stopped you cold in trying to answer an engineering problem on a computer. Now it doesn’t. We do not run around telling people that they’re not engineers or that they’re not learning because we have made this affordance.


A full adder is literally one of the easier theoretical computer science concepts, and a sine approximation is a simple Maclaurin series. And yes, if you can't do a simple series expansion, you are not an engineer. You may be a developer, but not an engineer.

These are both first or second year bachelors topics. Just because you're unable to work through simple math problems doesn't mean any semi-competent computer professional would be.
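For what it's worth, the bit-level logic really does fit in a few lines. A sketch in Python (function names are mine, and this is illustrative, not production code):

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum is a XOR b XOR cin; carry is set
    whenever at least two of the three inputs are set."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x, y, width=8):
    """Ripple-carry adder chaining full adders bit by bit.
    Wraps modulo 2**width, like fixed-width hardware."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

# e.g. ripple_add(200, 100) wraps to 44 at 8 bits, just as an
# 8-bit register would overflow
```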


Was it a good thing that anyone writing software which included those things had to work out not only how they behave on a blackboard but how they behave on the real machine in question? And on the next machine over?

Do you yearn to return to that world? I suspect most people don't. It's not just knowing your own machine, but any machine the code could run on. It's also not just reaching for some 2nd year bachelor topics when the matter at hand is much more complicated. Where does your sine approximation fail? How do you know? Can you prove that? Does the compiler or the hardware decide to do things behind your back which vitiate any of those claims?

Knowing the answer to all of that every time you need a sine is not something 99.99% of engineers need to worry about. IT USED TO BE. But now it's not. No one is going back to that.


I don't know what world you live in, but I still definitely need to know the approximation error of the methods I use.

sin(x) has one of the simplest Maclaurin series:

sin(x) = x - x^3/3! + x^5/5! - x^7/7! ...

For any partial sum of that series, the error is always strictly less than the absolute value of the next term (the standard alternating-series bound, once the terms are decreasing in magnitude). The fact that this was your example of a "difficult" engineering problem is uh, embarrassing.
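Concretely, a partial-sum sketch in Python, checking the truncation error against that alternating-series bound (using `math.sin` as the reference):

```python
import math

def sin_maclaurin(x, terms=6):
    """Partial sum of sin's Maclaurin series. Also returns the magnitude
    of the next (unused) term, which bounds the truncation error once
    the terms are decreasing in magnitude."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total, abs(term)

approx, bound = sin_maclaurin(1.0)
assert abs(approx - math.sin(1.0)) < bound  # truncation error under the bound
```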

For good measure, I would of course fuzz any component involving numerical methods to ensure it stays within bounds. _As any competent engineer would_.

And I absolutely work things out on pen and paper or a white board before implementing them. How else would I verify designs? I'm sure you're aware that fixing bugs is cheapest in the design phase.

Are you living in an alternate reality where software quality does not matter? I'm still living in the world where engineers need to know what the fuck they're doing.


On whose arithmetic?

You’re just showing me the blackboard approximation. How about just on x86? What are the bounds and how do you know?


Oh, IEEE 754 double precision floating point accuracy? Rule of thumb is 15-17 significant digits. You will probably get issues related to catastrophic cancellation around x=0. As I said earlier, the easiest solution is just to measure in this case. You don't really need to fuzz a sine approximation; you can scan over one period and compare against exactly calculated tables. I would probably add a cutoff around zero and move to a linear model if there are cancellation issues.
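That scan is only a few lines. A sketch in Python, using `math.sin` as the reference table and a degree-9 Maclaurin partial sum as the approximation under test (names are illustrative):

```python
import math

def sin9(x):
    """Degree-9 Maclaurin partial sum for sin."""
    return x - x**3/6 + x**5/120 - x**7/5040 + x**9/362880

def max_error_over_period(approx_fn, samples=10_000):
    """Scan a grid over [0, 2*pi) and report the worst
    absolute error against math.sin."""
    worst = 0.0
    for i in range(samples):
        x = 2 * math.pi * i / samples
        worst = max(worst, abs(approx_fn(x) - math.sin(x)))
    return worst

# the raw series blows up near 2*pi; range reduction to [-pi, pi]
# rescues it without changing the polynomial
def sin9_reduced(x):
    return sin9(math.remainder(x, 2 * math.pi))
```

The scan shows exactly where the approximation fails: the raw degree-9 sum is off by more than 10 near 2π, while the range-reduced version stays within about 0.01 over the whole period.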

And if the measurement shows the approximation has too much floating point error, you can always move to Kahan sums or quad precision. This comes up fairly often.
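A Kahan (compensated) sum is short enough to sketch too. The classic demo is one large value followed by many addends individually too small to register against it:

```python
def kahan_sum(values):
    """Compensated summation: carry along the low-order bits
    that each addition would otherwise round away."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # apply the correction first
        t = total + y        # big + small: low bits of y can be lost here
        c = (t - total) - y  # algebraically zero; recovers what was lost
        total = t
    return total

vals = [1.0] + [1e-16] * 1_000_000

naive = 0.0
for v in vals:
    naive += v  # each 1e-16 vanishes against the running total of 1.0

# naive stays exactly 1.0; kahan_sum(vals) recovers ~1.0000000001
```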

If I really had to _prove_ formally an exact error bound, that would take me some time. This is not something you would be likely to have to do unless you're building software for airplanes, or some other safety critical domain. And an LLM would absolutely not be helpful in that case. You would use formal verification methods.


"Oh, IEEE 754 double precision floating point accuracy?"

Ok, so we do agree! You DON'T want to go back to a system where everyone had to do their own arithmetic just to make a program! That's fabulous. I'm glad that we're in agreement.

Isn't it SO MUCH NICER to just have the vagaries of one arithmetic we've already agreed upon to deal with, instead of needing to become an expert in numerical analysis just to get along with things?


Ok. Based on your answer, you don't understand very much about computers. Maybe it makes sense that you're leaning on LLMs this early in your career. But it will bite you eventually.

Every x86 computer uses IEEE 754 floats; that's what you, the programmer, need to be able to reason about.

You still need to understand floating point errors and catastrophic cancellation. And simple techniques to deal with that, like summing from small to big, or using Kahan sums, or limiting the range where your approximation is used. You can use a library for some of these, but then you need to know what the library is doing, and how to access these functions.
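Catastrophic cancellation in one concrete line: for tiny x, `1 - cos(x)` subtracts two nearly equal doubles and loses every significant digit, while the algebraically identical form `2*sin(x/2)**2` keeps them:

```python
import math

x = 1e-8

# cos(1e-8) rounds to exactly 1.0 in double precision,
# so the subtraction returns 0.0 -- every digit cancelled away
naive = 1.0 - math.cos(x)

# same quantity rewritten without the subtraction; correct to
# full precision (true value is x**2/2, about 5e-17)
stable = 2.0 * math.sin(x / 2.0) ** 2
```

The naive form has 100% relative error here, which is exactly the sort of failure that measuring against a reference exposes.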

But the problem seems to be that you have a skill issue, and the LLM will only make your skill issues worse. Stop leaning on it or you'll never be able to stand on your own.


I said this situation is reminiscent of how we took computer arithmetic out of the hands of programmers in the 80s and you gave me a big lecture about how easy it was to make your own sine function which concluded in you explaining that every computer (mostly) uses IEEE floats.

No shit.

What do you think we did in the 1980s to take computer arithmetic away from working programmers? We standardized computer arithmetic so instead of needing a numerical analyst on hand you just need to read that Goldberg article you’ll run off to Google now.

You live in the land of milk and honey and you dare lecture someone about effort. You have absolutely no clue what world we left behind, but you’re happy to talk about who is and isn’t learning.


Standardization is a good thing. I never said it wasn't. You're just arguing with a strawman. Your two last posts aren't even related to the discussion at hand.

Here is what I said:

“This shift is analogous to how we took having to do computer arithmetic out of the hands of programmers in the 80s. There used to be a substantial part of programming that was just computer arithmetic. Now, almost nobody does that. Nobody in this thread could build a full adder if their life depended on it or produce an accurate sin function.”

It is truly not my fault that you proceeded to lecture me for multiple posts just to reach the conclusion that I SET OUT FOR YOU: standardization of computer arithmetic is good and makes it so that someone doing math on a computer doesn’t need to become an expert on how the computer does math.

As I said when you first insinuated yourself: I don’t need your help to be an engineer or a developer, thank you. You persisted anyway and embarrassed yourself.


Lol, you still don't get it.

Standardization means you only need to become an expert in the standard. You still need to know the standard.

And to your point in the quoted part: I absolutely could, as could any of the people who I studied with (in this century).

When you add abstraction layers you do still need to understand how the underlying layers work in order to manage the upper layers.

Look, I accept that I've posted more than I should about this. But it's only because you keep saying "nuh-uh". And when you start arguing in bad faith about what I've said, that should be called out.

Saying you disagree is fine, but becoming so flustered you respond dishonestly is not.


I have been saying that the shift with LLMs is similar to the 1980s, when we standardized computer arithmetic.

Prior to standardization, you had to become an expert on how the computer did arithmetic in order to do anything that required arithmetic. This did not mean simply knowing an approximation for a function which you could program in a language; that is not enough, and as you point out, that is 200-level stuff. If you wanted it to actually work on an actual machine, you would need to understand how the machine itself was actually going to undertake those operations. You had to have a numerical analyst around, or at least someone who had taken a couple of those courses.

Today you can tell me how simple it is to write a sine function, because when I press you for details, you can say things like: well, it’ll just need to be to the standard, or I’ll use a library.

In the 1970s that was not the case. Nothing about computer arithmetic was simple or unified; it demanded an inordinate amount of attention paid to something that was not the object of interest. Lots of organizations that needed to get things done on computers had to hire and train people to be experts in the arithmetic in a way that we do not have to anymore. Most people programming do not have to think about computer arithmetic in any significant fashion. If you compare this to the 1940s or the 1950s or the 1960s or the 1970s, the picture is very different. If you became a programmer in the 1960s, about half of what you were learning was how to make the machine do arithmetic. Need to do a square root? Well, you’d better write that function from scratch. Does it also need to be performant? Well, then you’re in trouble.

The amount of intellectual effort devoted to training programmers of all stripes in computer arithmetic is much less than it was 50 years ago. The fact that it is possible at all for you to boast that you could write that sine approximation, and know its bounds and trust them, is due to the standardization effort.

I am saying, and I have been saying that we are entering into a similar era, where there are whole categories of concerns, which are local to the machine that most users are not going to have to deal with. Some of these things will have been very central to some people’s identities, like being able to brag about sine approximations. Training is going to change; capabilities are going to change; what it means to be an engineer is going to change.

I’m having fun with the change, personally.



