franciscop's comments | Hacker News

This is probably why I love using Zed for my hobby dev: it doesn't try to be too clever about AI. It's still there, and when I do want some AI help it can be seamlessly prompted, but for normal day-to-day work the AI steps back and I can just code. In contrast, using AI at work with VSCode, I feel like the tools get in the way too much, particularly in two categories:

- Fragile interaction. There are clickable popups everywhere in VSCode. Too often I try to hover over a particular spot and end up clicking one of them instead. The AI autocomplete also feels way too intrusive: press the wrong key combination and BAM, I get a huge amount of code I never intended to get.

- Train of thought disruption. Since the long AI autocomplete is sometimes useful (roughly a third of the time), I do end up reading it and getting distracted from my original "building up" thinking into "exploring" thinking, which kind of dismantles the abstraction castle.

I haven't seen either of those issues in Zed. It really brought back the joy of programming in my free time. I also think both of these issues are about the implementation more than the actual feature.


Yeah, I'm gonna have to agree with you 100% on Zed. I was previously hardcore into JupyterLab for playing with code, and Kate (KDE's default text editor) for writing code, until I tried Zed. I love how it's first and foremost an IDE/editor, and all the other features (AI, Jupyter kernels, etc.) are just there to enhance its main purpose when the user wants them, but those features rarely (if ever) seem to "get in the way" of that primary mission of editing code/text. They're just quietly waiting there for when you need them. So far the Zed team has really hit the balance just right.

This seems totally fine though? The XSLT 1.0 maintainer says supporting it is costing him heavily, and Chrome says removing support is fine, which seems to work for both of them.

It'd be much better if Google supported the maintainer, but given the apparent lack of use of XSLT 1.0 and the maintainer having already burned out, dropping XSLT support seems like the best available outcome:

> "I just stepped down as libxslt maintainer and it's unlikely that this project will ever be maintained again"


A surprising science fact that many people don't know: an animal egg (chicken, other birds, etc.) is a single cell, so there's huge variability in the weight of a cell.

There are some very large single celled organisms that aren’t eggs. E.g.:

https://en.wikipedia.org/wiki/Valonia_ventricosa

https://en.wikipedia.org/wiki/Foraminifera

Even some bacteria can grow to visible size:

https://en.wikipedia.org/wiki/Thiomargarita_magnifica

There are some other examples here:

https://en.wikipedia.org/wiki/Largest_organisms


I found this claim unbelievable, but it is mostly true. It isn't quite the whole egg, it is just the yolk. But that's still a very large cell!

http://cnet.com/home/kitchen-and-household/appliance-science... verifies this.


It's analogous to the mammalian egg, but a lot bigger. (And IIRC the egg is the largest cell in humans.)

And the smallest is the sperm.

Which, ironically, are both only haploid.


I guess if it's fertilized, then it will soon have more cells.

IMHO not really; supply is the limiting factor here, since the constraint is in licensing the work. The goal of the rights holders is not to maximize access to the work (or the goals stated by OP) but to maximize profit for the company, and when those are at odds, profit prevails.

e.g. someone calculated/believes that having a big catalog from Disney at X/month is worth more for Disney than sublicensing to Netflix at Y/month.


I really wish we had laws that producers of content cannot also be distributors. That just creates perverse incentives to use content to lock people into their distribution platform.

If they had to be separate, content producers would be able to cross-license, and those licenses would be better deals. We'd actually have competition among distribution companies, as distributors would compete on price, quality, convenience, and other things that matter, instead of locking content away.


> I really wish we had laws that producers of content cannot also be distributors.

We have laws like that for beer and cars, and they're disasters in both cases.

Why would we want to implement an incredibly stupid idea a third time?


I think you're going to have to back that up with a bit more than "it's stupid"

Here's a much more relevant precedent: https://en.wikipedia.org/wiki/United_States_v._Paramount_Pic....


Yes, I considered the same but decided to keep the point simple.

And I still can't help but think that if there really were a large market of people willing to pay a premium for a more permissive access model, then we might already see trends in that direction. My hunch is that most folks don't really care and price remains the dominant factor.

The essential point of the article was that it's higher prices that are pushing people towards piracy (either through price rises or fragmented subscriptions), not that the restrictive streaming model is pushing people towards piracy.

In fact, it was precisely this restrictive streaming model, at low prices, that finally beat piracy; that's already been proven. It's higher prices that are bringing piracy back.


Unpopular opinion here, but I wonder how much of the justification for piracy in this thread, broadly around what is perceived to be unfair business practices ("if only the terms were fairer, I would pay"), would actually stand up if the terms were actually fairer but the prices higher.

Or how much is really just the simple rational economic idea that piracy is better value for money.


I personally buy physical media (Blu-rays and/or DVDs), but I often feel too lazy to deal with ripping the content, so I just download it.

I like YouTube Premium and I'm gladly paying for it, although I'm considering switching to an alternative YouTube client because the official YT app is crap. But then the creators would lose the income from my subscription.

Sigh. I wish content providers just gave us an API to get the content in exchange for payment.


>> having a big catalog from Disney at X/month is worth more for Disney than sublicensing to Netflix at Y/month.

But sometimes that leads to really stupid things. At one time all the Star Trek TV shows were on Paramount+ while all the movies were only on Max. I believe they're all owned by Paramount, but apparently the shows are the big draw (the new series "Picard" was exclusive to Paramount+), and they could make more profit by putting the movies elsewhere and collecting a bit extra than if it were all on their own service. GAK!


I found details of Impatiens parviflora and its exploding properties on the Spanish Wikipedia. But digging deeper, it seems the general term for seeds dispersed by explosion is "balocoria" (English: "ballochory"), only found in a subsection:

https://es.wikipedia.org/wiki/Dispersi%C3%B3n_de_los_prop%C3...

https://grok.com/share/bGVnYWN5LWNvcHk%3D_e1c522fa-64ae-4864...


I found some footage of the seedpod exploding [1]. It seems that Impatiens parviflora is an invasive species that propagated from botanical gardens across Europe. One study [2] states it can shoot seeds up to 3.4 meters.

[1] https://www.youtube.com/shorts/QUzag5u7Pi0

[2] Über Impatiens parviflora DC. als Agriophyt in Mitteleuropa, L. Trepl, 1984


As a library author it's the opposite: while fetch() is amazing, ESM has been a painful but definitely worthwhile upgrade. It hits all the points the author describes.


Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc., so I can see how ESM would be the bigger story from your perspective.


I'm a small-ish time author, but it was really painful for a while since we were all dual-publishing in CJS and ESM, which was a mess. At some point some prominent authors decided to go full-ESM, and basically many of us followed suit.

The fetch() change has been big only for the libraries that needed HTTP requests; otherwise it hasn't been such a huge change. Even in those it's been mostly about removing some dependencies, which in a couple of cases let me reduce the library size by 90%, but this is still Node.js, where that isn't as big a deal as it would have been on the frontend.
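
To show what that dependency removal looks like (my sketch, with a made-up URL; fetch() has been a global in Node 18+):

    // Before: the library had to pull in its own HTTP client
    // const fetch = require('node-fetch');

    // After: the global fetch() is just there in Node 18+
    const res = await fetch('https://api.example.com/data');
    const data = await res.json();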

Now there's an unresolved one, Node.js streams vs WHATWG WebStreams, and that is currently a HUGE mess. It's a complex topic on its own, but it's made a lot more complex by having two different streaming standards that are hard to bridge.
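
For a taste of the bridging involved (a minimal sketch, assuming Node 18+, where the stream module ships toWeb()/fromWeb() adapters):

    import { Readable } from 'node:stream';

    // Node stream -> WebStream, e.g. to hand to a web-style API
    const nodeStream = Readable.from(['hello', ' ', 'world']);
    const webStream = Readable.toWeb(nodeStream);

    // WebStream -> Node stream, e.g. to pipe a fetch() body somewhere
    const res = await fetch('https://example.com');
    Readable.fromWeb(res.body).pipe(process.stdout);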


What a dual-publishing nightmare. Someone had to break the stalemate first. 90% size reduction is solid even if Node bundle size isn't as critical. The streams thing sounds messy, though. Two incompatible streaming standards in the same runtime is bound to create headaches.


The fact that CJS/ESM compatibility issues are going away indicates it was always a design choice and never a technical limitation (most CJS code can consume ESM and vice versa). So much time lost to this problem.
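
(For concreteness, a sketch of both directions; the package names are hypothetical, and the synchronous require() of ESM is the recent change, enabled by default only in newer Node releases:)

    // consumer.cjs: CJS consuming ESM
    import('some-esm-pkg').then((mod) => mod.helper()); // always worked, but async
    const esm = require('some-esm-pkg'); // sync require(esm), newer Node only

    // consumer.mjs: ESM consuming CJS, which has always worked
    import cjsPkg from 'some-cjs-pkg';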


It was neither a design choice nor a technical limitation. It was a big complicated thing which necessarily involved fiddly internal work and coordination between relatively isolated groups. It got done when someone (Joyee Cheung) actually made the fairly heroic effort to push through all of that.

Joyee has a nice post going into details. Reading this gives a much more accurate picture of why things do and don't happen in big projects like Node: https://joyeecheung.github.io/blog/2024/03/18/require-esm-in...


Node.js made many decisions that had a massive impact on ESM adoption, from forcing file extensions and dropping index.js to loaders and the complicated package.json "exports". In addition to Node.js steamrolling everyone, TC39 keeps making idiotic changes to the spec, like the `deferred import` and `with` syntax changes.


Requiring file extensions and not supporting automatic "index" imports was a requirement from browsers, where you can't just scan a file system, and people would be rightfully upset if their browser sent 4-10 HEAD requests to find the module it was looking for.

"exports" controls in package.json was something package/library authors had been asking for for a long time even under CJS regimes. ESM gets a lot of blame for the complexity of "exports", because ESM packages were required to use it but CJS was allowed to be optional and grandfathered, but most of the complexity in the format was entirely due to CJS complexity and Node trying to support all the "exports" options already in the wild in CJS packages. Because "barrel" modules (modules full of just `export thing from './thing.js'`) are so much easier to write in ESM I've yet to see an ESM-only project with a complicated "exports". ("exports" is allowed to be as simple as the old main field, just an "index.js", which can just be an easily written "barrel" module).

> TC39 keeps making idiotic changes to the spec, like the `deferred import` and `with` syntax changes

I'm withholding judgment on deferred imports until I figure out what use cases they solve, but `with` has been a great addition to `import`. I remember the bad old days of crazy string syntaxes embedded in module names in AMD loaders and Webpack (like the bang-delimited nonsense of `json!embed!some-file.json` and `postcss!style-loader!css!sass!some-file.scss`), how hard they were to debug at times, and how much they tied you to very specific file loaders (clogging your AMD config forever, or locking you to specific versions of Webpack for fear of an upgrade breaking your loader stack). Something like `import someJson from 'some-file.json' with { type: 'json', webpackEmbed: true }` is a huge improvement over that alone. The fact that it's also a single syntax, looking mostly like normal JS objects, for other very useful metadata-attribute tools (like bringing integrity checks to ESM imports without an importmap) is also great.


You're right. It wasn't a design choice or technical limitation, but a troubling third thing: certain contributors consistently spreading misinformation about ESM being inherently async (when it's only conditionally async), and creating a hostile environment that “drove contributors away” from ESM work, as the implementer themselves described.

Today, no one will defend ERR_REQUIRE_ESM as good design, yet it persisted for 5 years despite working solutions existing since 2019. The systematic misinformation in docs and discussions, combined with the chilling of conversations, suggests coordinated resistance (“offline conversations”). I suspect the real reason why “things do and don’t happen” is competition from Bun/Deno.


There were some legitimate technical decisions; that said, IMHO, Node should have just stayed compatible with Babel's implementation, and there would have been significantly less friction along the way. It was definitely a choice not to, for better and worse.

It's interesting to see how many ideas are being taken from Deno's implementations as Deno increases Node interoperability. I still like Deno more for most things.


I maintain a library also, and the shift to ESM was incredibly painful, because you still have to ship CJS, only now you have to work out how to write the code in a way that can be bundled either way, can be tested, etc. etc.


It was a pain, but Rollup can export both if you write the source in ESM. The part I find most annoying is exporting the TypeScript types. There's no tree-shaking for that!
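
A minimal sketch of that dual-output setup (paths are illustrative):

    // rollup.config.mjs: one ESM source, two output formats
    export default {
      input: 'src/index.js',
      output: [
        { file: 'dist/index.cjs', format: 'cjs' }, // CommonJS build
        { file: 'dist/index.mjs', format: 'es' }   // ESM build
      ]
    };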


For simple projects, you now needed to add Rollup or another build system that you didn't have or need before. For complex systems (with non-trivial exports), you had a mess, since it wouldn't work straight away.

Now, with ESM, if you write plain JS it just works again. If you use Bun, it also works with TS straight away.


This is where I actually appreciated Deno starting with a clean break from npm, and later its push for JSR. I'm mixed on how much of Node has come into Deno, however.


I also didn't like it when a new easing started _while_ another easing was still happening, which often felt very jerky. I had to do a bunch of calculus (derivatives) by hand and wrote a small library for it in JS:

https://github.com/franciscop/ola

Note: "ola" means (sea) wave in Spanish.


Thanks for sharing this library! Author of the post here, I'll definitely check out your implementation.


Thanks! From your article, you might not want/like my library, since it's based on a single easing function. I used a cubic function to work out the interpolation values, and the derivative to make sure it's always smooth. The equation looks like this; it's in the source code:

https://www.wolframalpha.com/input/?i=x+%3D+2+*+t+%5E+3+-+3+...
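
In case it helps, here is a minimal sketch of the general idea (not the actual ola source; the function names are mine): a cubic Hermite segment that starts at the current position with the current velocity, so retargeting mid-animation never jumps in value or in speed:

    // Ease from (x0, v0) at t=0 to (x1, velocity 0) at t=1 with a cubic
    // Hermite polynomial; position AND derivative are continuous, so a
    // new target can be set mid-flight without a visible jerk.
    function hermite(x0, v0, x1, t) {
      const h00 = 2 * t ** 3 - 3 * t ** 2 + 1; // weight of start position
      const h10 = t ** 3 - 2 * t ** 2 + t;     // weight of start velocity
      const h01 = -2 * t ** 3 + 3 * t ** 2;    // weight of end position
      return h00 * x0 + h10 * v0 + h01 * x1;
    }

    // Analytic derivative, used as the new v0 when retargeting:
    function hermiteVelocity(x0, v0, x1, t) {
      return (6 * t ** 2 - 6 * t) * x0
           + (3 * t ** 2 - 4 * t + 1) * v0
           + (6 * t - 6 * t ** 2) * x1;
    }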

If you want some more details, please feel free to ping me (email through my website's resume, or Twitter) and I'll dig up the handwritten equations.


I expect CVEs to be directly proportional to project usage and popularity, and inversely proportional to maturity, which makes things a lot more complicated.


And also directly proportional to the publicity of the CVE system. If you're creative enough in your writing, any bug in any program can be filed as a CVE, and filing CVEs is much more interesting career-wise than filing bug reports.

Any decently sized project has probably seen an increase in reported CVEs over the past 5 years, simply because the total number of CVEs has grown.


I'd also expect average CVSS severities to go down over time. While they definitely did get significantly lower in 2024, there's still some high-severity stuff in 2025.


How do other cities around the world make this work and make it safe then?


Typically in the UK, if a service is rowdy and feels unsafe, the guard will hide in their cabin and never emerge to do a ticket check.

In other words, there's no difference in personal safety between driver-only operation and driver-and-guard operation.

I'm not aware of any evidence of reduced safety in any category after introducing DOO (driver-only operation), and if there were any, I suspect unions would be screaming from the hills. The only measurable impact I've seen is on accessibility (which doesn't mean that's not a consideration).


Well, typically, they start with safer cities, something that's out of the purview of the MTA.


That totally flew over my head, sorry! Violence on the train didn't even occur to me while reading the comment; I thought they meant something along the lines of "the driver will have better visibility when opening/closing the doors of the front car, so it's safer to ride there since they can't accidentally close them on me" or something similar. Of course they meant there's less chance of violent behavior near the train driver, and the comment makes a lot more sense to me now.

Note: I do take the train daily, it's just that I live in Tokyo.


American cities are less safe, but the NYC subway is quite safe. Train operators will never leave their cabin no matter what (it's part of the rules of the job), and so they do not enhance safety at all.


I've been here 10 years on-and-off, and 10% sounds way too low _if we include_ Suica/Pasmo. Credit cards are another story, and there I'd agree.

