Hacker News

This distinction is crucial and often overlooked. Social networks were built around bidirectional relationships — you follow someone because you know them or share mutual interests. The feed was a byproduct of your social graph.

Attention media inverts this completely. The algorithm decides what you see based on engagement metrics, not relationships. Your 'feed' is curated by what keeps you scrolling, not by who you chose to connect with.

The practical consequence: on social networks, you could curate your experience by curating your connections. On attention media, curation is impossible because the algorithm optimizes for a metric you don't control. The only winning move is to limit time spent.


prompt caching - a big part of the reason they can economically offer Claude Code plans. One of the ant team explains it here:

https://x.com/trq212/status/2024574133011673516


So, if I use my SIM card 16 hours a day, 7 days a week, I'll get banned? Doesn’t that seem absurd? The SIM card is enforcing one voice call at a time. If the apartment building has to wait in line to use it, what’s the difference?

If you deployed it in a way that did multiplexing such that multiple users could use it at once, then sure — business time. But otherwise…


The e-paper approach is brilliant for a family dashboard. No backlight means it blends into the room like a picture frame rather than screaming for attention like a tablet.

I wonder if you considered MQTT or a similar lightweight protocol for updates instead of polling. For something that only needs to refresh a few times per day (weather, calendar, chores), you could probably run the whole thing on a tiny solar panel with a battery, making it truly zero-maintenance.
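To make the solar claim concrete, here is a rough power-budget sketch. Every figure in it (refresh power, deep-sleep draw, panel wattage, sun-hours, charge efficiency) is an assumed placeholder for illustration, not a measurement of any real display or panel:

```python
# Back-of-the-envelope power budget for a battery + solar e-paper dashboard.
# All numbers below are illustrative assumptions, not datasheet values.

def daily_energy_mwh(refreshes_per_day, refresh_mw, refresh_s, sleep_mw):
    """Energy per day in mWh: active refresh bursts plus deep-sleep baseline."""
    active_h = refreshes_per_day * refresh_s / 3600
    sleep_h = 24 - active_h
    return refreshes_per_day * refresh_mw * (refresh_s / 3600) + sleep_mw * sleep_h

# Assumed: ~500 mW during a 10 s e-ink refresh (MCU + display),
# ~0.05 mW in deep sleep, 6 refreshes/day.
budget = daily_energy_mwh(6, 500, 10, 0.05)

# Assumed: a tiny 0.5 W panel, ~3 equivalent sun-hours/day, 70% charge efficiency.
harvest = 500 * 3 * 0.7  # mWh/day

print(f"daily use ~{budget:.1f} mWh, harvest ~{harvest:.0f} mWh")
```

Under these made-up numbers the panel harvests far more than the dashboard uses, which is why "a few refreshes a day" is the regime where solar plus a small battery plausibly works; a backlit tablet drawing watts continuously would blow this budget by orders of magnitude.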

The $3,000 price tag for the 13.3" Visionect display is the elephant in the room though. Would love to see this concept with a cluster of cheaper e-ink panels.


> One related thought I had was that given OpenAI is the only one _not_ doing this of the big3, it probably indicates they have a lot more spare compute.

Or, pessimistically, it could indicate they’re burning cash hoping the subsidized access will eventually result in someone giving them a product idea they can build and resell at a profit.

If they let *claw (or third party coding agents, or whatever) run for six more months and in those months figure out how to sell a safe substitute and then cut off access, maybe it will have been worth it.


Generally speaking, there's prompt caching that can be enabled in the API with things like this: https://platform.claude.com/docs/en/build-with-claude/prompt...

For a specific harness, they've all found ways to optimize for higher cache hit rates: common system prompts and the like. And as more and more users hit the cache, the cost of inference goes down dramatically.
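As a back-of-the-envelope illustration of why cache hit rate matters so much: the base price and the write/read multipliers below are assumed placeholders, not any provider's published rates, but the shape of the math holds regardless:

```python
# Rough cost model for prompt caching. All rates are illustrative assumptions:
# cached reads taken as 10% of the base input rate, cache writes as 125%.

BASE_IN = 3.00 / 1_000_000          # assumed $ per input token
CACHE_WRITE = 1.25 * BASE_IN        # assumed premium to populate the cache
CACHE_READ = 0.10 * BASE_IN         # assumed discount for cache hits

def request_cost(prompt_tokens, cached_fraction, first_request=False):
    """Cost of one request given how much of the prompt hits the cache."""
    cached = int(prompt_tokens * cached_fraction)
    fresh = prompt_tokens - cached
    if first_request:
        # the first request pays the write premium on the shared prefix
        return cached * CACHE_WRITE + fresh * BASE_IN
    return cached * CACHE_READ + fresh * BASE_IN

# A 50k-token harness prompt, 90% of it a shared, cacheable prefix:
cold = request_cost(50_000, 0.9, first_request=True)
warm = request_cost(50_000, 0.9)
print(f"cold ${cold:.4f}, warm ${warm:.4f}")
```

The one-time write premium is amortized across every subsequent request that shares the prefix, which is exactly the economy of scale: the more users running the same harness with the same system prompt, the cheaper each marginal request becomes.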

What bothers me about much of the discussion around here about providers disallowing other harnesses on subscription plans is the complete lack of awareness that economies of scale from common caching practices across more users are what enable the higher, cheaper quotas subscriptions give you.


If you're not in the US, you probably don't understand how our system of federalism works. We have 50 different states, some of which are basically run by the Christian equivalent of the Taliban or the Shiite mullahs of Iran. These state governments often come up with goofy, performative laws such as age verification that are normally set aside by higher courts as First Amendment violations.

I say "normally" because the same religious factions are rapidly expanding their dominance over those very courts. Absolutely no historical freedoms can be taken for granted in the US right now. Nevertheless, the fact is, there is no national Internet censorship regime including age verification. No such laws are currently under consideration at the national level.

(Yes, you can be prosecuted for downloading or distributing child pornography, but that is not an Internet-specific issue, and there is no other country I'm aware of where such laws are not also on the books.)


But when they paste support replies using terms like "suspension," "violation of the Google Terms of Service," and "zero-tolerance," it sounds like someone's close to losing access to their family photos.

If that's the case, I want to see studies on how blue light filtering glasses fare versus just regular-ass sunglasses (of equivalent tint). Is it the reduction of blue light that helps people, or is it just the reduction of light, full stop?

Just the latter on its own is quite thoroughly proven to help people sleep, and helps with migraines, and so on.

Or what about UV light? I've never seen anyone say that an anti-UV coating/lens material in glasses helps with sleep (nor that it doesn't). But UV is still high frequency light that enters our eyes, even higher frequency than blue light, and there's at least some research to suggest it might affect human circadian rhythms despite its invisibility to us. But I've never seen anyone suggest that wearing your regular glasses (because most regular glasses these days are UV-blocking) before bed, or before a nap during the day (or before trying to sleep through the day, for a night shift worker), could help someone get to sleep.

I'd imagine it could also depend on exactly how much blue light is filtered; it's not like all blue light filtering glasses are tinted to the same degree. Glasses that all but make the world look like a reptile house might do a lot more than glasses that have only the faintest of orange tints. It's not like lenses either block blue light fully or don't block it at all; there's a spectrum here. How much blue light is supposed to get blocked for it to matter?

Were the studies that showed blue light blocking glasses to improve sleep done with lenses so tinted they were essentially sunglasses, or did they use nearly clear/only slightly tinted lenses akin to the blue light blocking lenses that are marketed as alternatives to regular clear lenses?


Google has always done no-warning bans.

YouTube is also full of huge content creators, people who make Google tons of money, who complain about the Byzantine and opaque rules they have to dance around to maintain their livelihoods and fan bases.

Google fears its giant user base, so it acts with zero regard for communication and transparency because of the small chance that either would help abusers.


I know that’s how it works and I also know it’s not a zero sum game. That’s why every law or policy gets time for comments and debate and sometimes policy gets revised. It’s how governance works.

But if you feel you have the perfect solutions, then by all means get yourself on the ballot so we can finally see the light.


This is exactly why API-level access matters more than consumer subscriptions for production workloads. Consumer plans are subsidized with the assumption of interactive, low-volume usage. The moment you programmatically route through them, you break the economic model they're built on.

The real issue is the lack of transparency. If Google's ToS says 'no programmatic access via third-party tools,' state it clearly and enforce it with warnings first. An instant ban with no recourse is hostile to paying customers who may genuinely not know where the line is.

For anyone building production systems, the lesson is clear: use the actual API tiers, budget for it, and treat consumer subscriptions as evaluation tools only.


> your home IP is now tied to whatever the agent does. If an agent scrapes a site too aggressively, your ISP may notice. Route agent traffic through a VPN for anything beyond light browsing.

What is the "light browsing" you mentioned? 10 sites? 100? At what point does it become too much to count as light browsing?


Did people learn nothing from the rise, stall, and now fall of social networks?

Yes, AI can do some incredible things. But we’re also running full speed into an ecosystem controlled by 2 or 3 major companies. Running at a loss. A reality check is coming.

It’s not a technology problem. It’s an economic problem. People are too busy looking at the tech to notice.


The irony is that web searches for an explanation of something often lead to a discussion thread where the poster is downvoted and berated for daring to ask people instead of Google. And then there's one commenter who actually explains the thing you were wondering about.

Similar could be said for US government contractors. Not as high as 80-100%, but 50-70% is common.

Sadly we can't all be Bellard

They were broken up into two groups: the first asked to perform some ritual vs. the control.

The control group was actually placed in what my high school described as detention: sit still and relax for 30 minutes.

Did they measure how much rituals chill you out or how much stewing in your own juices for 30 mins makes you uncomfortable?


I just really liked that question and response.

I find another factor that is not always discussed is comfort and how pleasant it is to use a layout. I know Dvorak is not much faster in the end but it is such a joy to use in comparison to Qwerty. I do wonder if it would be fun or just nice to use this hex-grid layout on a phone.

The irony is that if we had been writing literate programs instead of "normal" programs, from 1984 to 2026, then LLMs may actually have been much better at programming in 2026, than they turned out to be. Literate programs entwine the program code with prose-explanations of that code, while also cross-referencing all dependent code of each chunk. In some sense they make fancy IDEs and editors and LSPs unnecessary, because it is all there in the PDF. They also separate the code from the presentation of the code, meaning that you don't really have to worry about the small layout-details of your code. They even have aspects of version control (Knuth advocates keeping old code inside the literate program, and explaining why you thought it would work and why it does not, and what you replaced it with).

LLMs do not bring us closer to literate programming any more than version-control-systems or IDEs or code-comments do. All of these support-technologies exist because the software industry simply couldn't be disciplined enough to learn how to program in the literate style. And it is hard to want to follow this discipline when 95% of the code that you write, is going to be thrown away, or is otherwise built on a shaky foundation.

Another "problem" with literate programming is that it does not scale by number of contributors. It really is designed for a lone programmer who is setting out to solve an interesting yet difficult problem, and who then needs to explain that solution to colleagues, instead of trying to sell it in the marketplace.

And even if literate programming _did_ scale by number of contributors, very few contributors are good at both programming _and_ writing (even the plain academic writing of computer scientists). In fact Bentley told Knuth (in the 80s) that, "2% of people are good at programming, and 2% of people are good at writing -- literate programming requires a person to be good at both" (so only about 0.04% of the adult population would be capable of doing it).

By the way, Knuth said in a book (Coders at Work, I believe): "If I can program it, then I can understand it." The literate paradigm is about understanding. If you do not program it, and if _you_ do not explain the _choices_ that _you_ made during the programming, then you are not understanding it -- you are just making a computer do _something_, that may or may not be the thing that you want (which is fine, most people use computers in this way: but that makes you a user and not a programmer). When LLMs write large amounts of code for you, you are not programming. And when LLMs explain code for you, you are not programming. You are struggling to not drown in a constantly churning code-base that is being modified a dozen times per day by a bunch of people, some of whom you do not know, many of whom are checked out and are trying to get through their day, and all of whom know that it does not matter because they will hop jobs in one or two or three years, and all their bad decisions become someone else's problem.

Just because LLMs can translate one string of tokens into a different string of tokens while you are programming does not make them "literate". When I read a Knuthian literate program, I see not a description of what the code does, but a description of what it is supposed to do (and why that is interesting), and how a person reasoned his/her way to a solution, blind alleys and all. The writer of the literate program anticipates the next question before I even have it, anticipates what might be confusing, and phrases it in a few ways.

As the creator of the Axiom math software said: the goal of Literate Programming is to be able to hire an engineer, give him a 500-page book that contains the entire literate program, send him on a 2-week vacation to Hawaii, and have him come back with the whole program in his head. If anything, LLMs are making this _less_ of a possibility.

In an industry dominated by deadline-obsessed pseudo-programmers creating for a demo-obsessed audience of pseudo-customers, we cannot possibly create software in a high-quality literate style (no, not even with LLMs, even if they got 10x better _and_ 10x cheaper).

Lamport (of Paxos, Byzantine Generals, the Bakery Algorithm) made LaTeX and TLA+ with the intent that they be used together, in the same way that CWEB literate programs are. All of these tools (CWEB, TeX, LaTeX, TLA+) are meant to encourage clear and precise thinking at the level of _code_ and the level of _intent_. This is what makes literate programs (and TLA+ specs) conceptually crisp and easily communicable. Just look at the TLA+ spec for OpenRTOS. Their real-time OS is a fraction of the size it would have been if they had implemented it in the industry-standard way, and it has the nice property of being correct.

Literate Programming, by design, is for creating something that _lasts_, and that has value when executed on the machine and in the mind. LLMs (which are being slowly co-opted by the Agile consulting crowd), are (currently) for the exact opposite: they are for creating something that is going to be worthless after the demo.


Show and tell daily helpful rituals? Here's mine:

I shower before bed and put out my clothes for the next day. When I wake up, I roll out of bed into a predetermined outfit and don't have to waste precious pre-coffee clock cycles picking one out.


Could a clever idiot understand such books? If so, I might be willing to check them out. Thank you for the recommendations either way.

Easy now, I wasn't saying they did make that claim. I simply offered a comparatively low-cost alternative to the very expensive display, for those for whom it would otherwise be cost-prohibitive.

Best Buy sells 24" touchscreen displays for $339 right now. So you can spend $3000 on a display that sips current or spend 10% of that and you get $2700 to pay towards the higher electric costs.

I call that an interesting trade-off. YMMV


You most likely don’t pay per call for your cellphone.

You most likely don’t pay per machine to use the gym.

You don’t pay per cup if they allow unlimited refills.

You are not supposed to go into an all-you-can-eat buffet and stuff steaks into your bag.

Sometimes not all of us want to do the math à la carte for everything we use in life. Don’t ruin it for us.


That’s called protecting a monopoly, not protectionism.

I'll admit to knowingly taking advantage of Google's pricing, but I had assumed it was within a gray area. No-warning bans are insane.

>move this line to the end of the scope

Where it is invisible! What is so hard about this to understand?

>operator overloading..

Yes, but if we go by your argument, you can say it gets executed exactly as it is written. It is just that it is written somewhere else, i.e. "at a distance"... just like a defer block that could be far from the end of the scope that is triggering it.


Seems like a mix of AI slop and right wing racism.

> But here's what most people don't know: their average salary is under $105,000 — nearly 40% below what tech giants pay for similar roles.

Why would you compare random software positions with “tech giants”? Would you compare the pay for a local race track driver with an F1 driver?

> This isn't a story about hard-working immigrants. It's a story about a business model that exploits the H1B system.

Is it being exploited? The article doesn’t prove that at all.

> When 28,000+ positions pay 40% below market, it drags down wages for everyone in the field.

No, you’re just describing the market. “When the average is X, the data points below the average are below the average”. See how useless this observation sounds?

> The $90,000 gap isn't because the work is different.

The software work at big tech is absolutely different than software work at a random client of consulting companies. The fact that they both have a similar degree requirement is completely irrelevant.

-

Conclusion: This is a propaganda website


Google, unlike all their competitors, actually gives Cloud API credits to all paying users of AI Pro and AI Ultra [1] - just use those for direct Gemini/Vertex API access instead of trying to hack the OAuth of Google's apps.

[1] https://blog.google/innovation-and-ai/technology/developers-...

