This. Technically there is nothing I don't like about the JVM right now; everything that seemed impossible 15 years ago is now solved. AOT used to be a bag of hurt with GCJ (I know I could have used Excelsior, though I'm not sure it was free back then), but now even that will be a supported option via Graal.
Java the language still isn't pretty, but it has been much improved.
OpenJDK is GPL, and apart from the trademark (?) with Java (I didn't read much into the JakartaEE problem), everything should be fine. I am just a little uneasy with Oracle lurking around; I just don't know what they are going to do next.
> Technically there is nothing I don't like about JVM right now, everything that seems impossible 15 years ago is now solved.
Value types are a major missing piece in the JVM stack right now. It's at least on the roadmap, but it keeps getting pushed back and back and back. I'd also argue runtime generics is another one, and perhaps more depressingly one that is unlikely to ever get fixed.
.NET has both of them and also has the same core strengths JVM does, so given the choice I'd go with .NET over JVM 100% of the time as a result. JVM's GC & JIT seem to be on a never ending improvement cycle, but the actual language & core libraries are incredibly slow to react to anything.
I'll give you value types, but reified generics in .NET were a mistake. It really makes interop and code sharing among languages hard, in exchange for a rather slight added convenience. This means that if you're a language implementor and you're targeting .NET, you'll get much less from the platform than you would if you target Java, which makes .NET not a very appealing common language runtime. And not only is Java a good platform for Kotlin and Clojure, but thanks to Truffle/Graal it's becoming a very attractive, and highly competitive, platform for Ruby, Python and JavaScript. All of that would have been much more difficult with reified generics.
Also, I don't think value types in Java are being "pushed back." The team is investing a lot of work into them as it's a huge project, but AFAIK no timeline has been announced.
> and also has the same core strengths JVM does
I don't think so. Its compilers are not as good, its GCs are not nearly as good, and its production profiling is certainly not as good (have you seen what JFR can do?); and its tooling and ecosystem are not even in the same ballpark.
On the one hand, reified generics means that it's .NET's object model or the highway.
On the other hand, .NET maintains a much higher standard of inter-language interoperability. When I was on .NET, I didn't have to worry much about the folks working in F# accidentally rendering their module unusable from C#. Now that I'm on the JVM, I've accepted that it's just a given that the dependency arrows between the Java and Scala modules should only ever point in one direction.
It's not so much .NET vs. Java as whether the development of the two languages you mention is coordinated. For Kotlin and Clojure, interop works both ways; Scala doesn't care much about that, so Scala developers write special facades as a Java-language API. There are lots of different languages on the JVM, and they care about different things. The Java team itself develops only one language, although at one time there was another (JavaFX Script). Some language developers (Kotlin, JRuby) work more closely with the Java team, and others (Scala) hardly ever do.
Dumb question: why do reified generics make interop challenging? Or is it reified generics plus value types that don't inherit System.Object? Couldn't the language implementations basically pass around ICollection&lt;Object&gt; in .NET, somewhat similar to how they do in Java?
> Why do reified generics make interop challenging?
Suppose you have types A and B, such that A <: B. What is, then, the relationship between List<A> and List<B>? This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.
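To make the variance question concrete: Java's answer is that generics are invariant, with variance chosen per use site via wildcards rather than baked into the runtime. A minimal sketch (class names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    static class Animal {}
    static class Cat extends Animal {}

    public static void main(String[] args) {
        List<Cat> cats = new ArrayList<>();

        // Invariance: a List<Cat> is NOT a List<Animal>, so this won't compile:
        // List<Animal> animals = cats;

        // Use-site covariance: a read-only view, chosen by the caller,
        // not fixed by the runtime's type system.
        List<? extends Animal> readable = cats;

        // Use-site contravariance: a write-only view.
        List<? super Cat> writable = new ArrayList<Animal>();
        writable.add(new Cat());

        System.out.println("compiles: " + (readable != null && writable.size() == 1));
    }
}
```

Because nothing about this choice survives erasure, a guest language on the JVM is free to pick a different variance policy without fighting the runtime.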
> Couldn't the language implementations basically pass around ICollection<Object> in .NET somewhat similar to how they do in Java?
They could, but then this adds significant runtime overhead at the interop layer. For example, a popular standard API may take a parameter of type List<int>. How do you then call it from, say, JavaScript, without an O(n) operation (and without changing JS)?
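One way to see why erasure eases this: on the JVM, every `List<T>` is the same runtime class, so a foreign list-like object can be handed across the boundary without an element-by-element conversion. A small illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // After erasure, both are plain ArrayList at runtime:
        System.out.println(strings.getClass() == ints.getClass());

        // So a guest language can pass its own list as a List with no O(n) copy;
        // the element-type check simply doesn't exist at this boundary.
    }
}
```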
> This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.
Which, realistically, is probably the only principled way to do things if you want to be doing much with variant generics in a cross-language way.
The Java way, "I pick my variance strategy, you pick yours, and we'll both pass everything around as a List&lt;Object&gt; at runtime and just hope that our differing decisions about what the actual contents are allowed to be never cause each other any nasty surprises at run time," is not type-safe. It's easier, sure, but easier is not necessarily better.
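The "nasty surprise" is concrete in Java: heap pollution through an unchecked cast compiles with only a warning, and the failure surfaces later, at the use site. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapPollutionDemo {
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();

        // Erasure lets this through with only a compiler warning:
        List raw = ints;
        raw.add("not an int");       // pollutes the list, no error yet

        try {
            Integer n = ints.get(0); // the surprise arrives here
            System.out.println(n);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at use site");
        }
    }
}
```

This is exactly the run-time cost of the erased model: the error is deferred from the boundary to an arbitrarily distant read.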
The problem is that your GenericClass<T1> and GenericClass<T2> are really more like GenericClass_T1 and GenericClass_T2 with their own distinct type definitions and interfaces. From the perspective of a different runtime/language trying to interop with these types, you have to somehow understand and work with this mapping game. It's much easier from inside the .NET runtime than outside.
The general solution is, like you suggested, to avoid using reified generics in the module interface where the interop happens.
The solution I remember from the last time I dealt with Python on .NET (which was admittedly a long time ago) was the opposite - you did use the reified generics, and there were facilities to create an instance of GenericClass<TWhatever> from within Python. There's a whole dynamic language runtime that is purpose-built for smoothing over a lot of that stuff.
What wouldn't work would be to, e.g., create a Python-native list and try to pass it into a function that expects a .NET IList<T>. Which doesn't feel that odd to me - they may have the same name, but otherwise they're very different types that have very different interfaces.
That said, the Iron languages never took off. My personal story there is that all the new dynamic features that C# got with the release of the DLR pretty much killed my desire to interact with C# from a dynamic language. The release that gave me Python on .NET also turned C# itself into an acceptable enough Python for my needs.
Yes, those value classes are still heap/GC allocated objects.
Value types are a generalisation of `int`, `long`, `float` (etc.), where values are stored inline rather than allocated on the heap. For instance, a `Pair<long, double>` that isn't a pointer but is instead exactly the size of a long plus a double, the same as writing `long first; double second;` inline.
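The closest you can get in Java today is a final class or record, which is still a heap object behind a pointer; the Valhalla-style intent (syntax not final, so the comments below describe the goal, not current behavior) is that such a pair could be flattened inline. A sketch of the status quo:

```java
public class PairDemo {
    // Today: a heap-allocated object. Each Pair costs an object header
    // plus a reference wherever it is stored.
    record Pair(long first, double second) {}

    public static void main(String[] args) {
        Pair p = new Pair(42L, 3.14);
        // With value types, p could instead be laid out inline as
        // 8 bytes (long) + 8 bytes (double): no header, no pointer,
        // and an array of Pairs would be one contiguous block.
        System.out.println(p.first() + " " + p.second());
    }
}
```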
.NET is tied to Microsoft, so I'd avoid it 100% of the time.
Yes, yes. I know that Microsoft theoretically open sourced and ported it. However, the way this always works is that there is a base subset you can write to, but anything non-trivial will have pulled in something that, surprise surprise, is Windows-only.
They didn't "theoretically" open source it - they actually open sourced it.
I get why people used to shit on Microsoft, but Microsoft has demonstrated over a number of years that it's changed under Satya.
> However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.
Outside of desktop GUIs, this is simply not true. I'm writing complex, cross-platform systems that work just fine on Windows and Linux (and would on MacOS if I chose to target it).
Hell, even a lot of the tooling is now cross-platform: Visual Studio Code, Azure Data Studio, even Visual Studio and Xamarin run on MacOS!
So because it's not the same code base, even though it's produced by the same company for the same purpose, it's not "Visual Studio Code"? Was Photoshop not Photoshop on Windows when the assembly-language optimizations differed between PPC and x86?
The .NET 5 announcement was very clear that .NET Core is the future, and it's been a while since you've needed anything Windows-only to build a non-trivial .NET Core application.
Yes, you can write a non-trivial .NET application on Linux. But if you take a non-trivial .NET application that runs on Windows, the odds are low that it can easily be ported to Linux. And there are almost no non-trivial .NET applications that weren't originally written for Windows.
The result is that if you work with .NET, you're going to be pushed towards Windows.
I think the wording of the announcement (taking them in good faith) applies to applications using .NET Framework. .NET Core should be 100% portable to Linux/Mac/wasm.
.NET 5 should supersede both Core and Framework IIRC
I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.
In the years since, I've encountered many stories that attempted to make use of that portability. All failed.
I've seen the promise of portability with other software stacks, and know how hard it is. I also know that taking software that was not written to be portable, and then making it portable, is massively harder than writing it to be portable from scratch.
So, based on both the history of .NET and general knowledge, I won't believe that .NET will actually wind up being portable until I hear stories in the wild of people porting non-trivial projects to Linux with success. And I will discount all announcements that suggest the contrary.
When you're trying to write new code, you run into a problem and look for a library that solves it. But all of the libraries that you find are Windows First, and it is not until after you're committed to them that you sometimes discover how they are Windows Only.
So yes, even in a new project, there will be a pull back to Windows. Because virtually nothing is truly written from scratch.
It’s no more of a problem with .Net Core than it is with Python modules that have native dependencies, or Node modules with native dependencies.
You’re not going to mistakenly add a non .Net Core Nuget package to a .Net Core project. It won’t even compile.
Of course you can find Windows only nuget packages for Windows only functionality like TopShelf - a package to make creating a Windows Service easy. But even then, I’ve taken the same solution and deployed it to both a Windows server and an AWS Linux based lambda just by configuring the lambda to use a different entry point (lambda handler) than the Windows server.
You can even cross compile a standalone Windows application on Linux and vice versa.
I use a Linux Docker container to build both packages via AWS CodeBuild.
Would you also criticize Python for not being cross platform because there are some Windows only modules?
Looking at the announcement it seems they're basically folding all of the Windows-specific stuff back into .net core. Isn't that just going back to a compatibility minefield?
That's very nice, but saying it's strictly superior to async/await is a stretch. Fibers/stackful coroutines are a different approach with its own tradeoffs.
On the plus side, fibers offer almost painless integration of synchronous code, while async/await suffer from the "colored functions" problem[1].
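The "colored functions" split is visible in plain Java today: the asynchronous version of a routine has a different signature, and the color propagates to every caller. A hedged sketch using CompletableFuture:

```java
import java.util.concurrent.CompletableFuture;

public class ColoredDemo {
    // "Blue" (synchronous): an ordinary signature that composes with plain code.
    static int fetchSync() { return 42; }

    // "Red" (asynchronous): the color leaks into the return type, and every
    // caller must itself become async, or block and give up the benefit.
    static CompletableFuture<Integer> fetchAsync() {
        return CompletableFuture.supplyAsync(() -> 42);
    }

    public static void main(String[] args) {
        int a = fetchSync();         // direct call
        int b = fetchAsync().join(); // must unwrap; blocking here defeats
                                     // the point of going async
        System.out.println(a + b);
    }
}
```

With fibers, `fetchSync` itself can park cheaply, so no second "color" is needed in the first place.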
The price you pay for that is the higher overhead of having to allocate stacks. If you don't support dynamic stacks that can be resized, your overhead is basically no better than that of native threads. There are two solutions I'm aware of, both of which Go has used at different times: segmented stacks, and copying and re-aligning the stack on resize. Both carry some memory overhead (unused stack space) and computational overhead (stack resizing).
> Fibers/stackful coroutines are a different approach with its own tradeoffs.
The only tradeoffs involved, as far as I'm aware, are effort of implementation. There are no runtime tradeoffs.
> The price you pay for that, is the higher overhead of having to allocate stacks.
You have to allocate memory to store the state of the continuation either way. Some languages can choose not to call the memory required for stackless continuations "stacks" but it's the same amount of memory.
> Both carry some memory overhead (unused stacks) and computational overhead (stack resizing). Their "advantage" is that, because they're inconvenient, people try to keep those stacks shallow.
Stackless continuations have the same issue. They use what amounts to segmented stacks. "Stackless" means that they're organized as separate frames.
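One way to see that "stackless" just relocates the frames: a hand-rolled stackless continuation stores each suspended frame as a heap object, which is the same state a stackful fiber would have kept contiguously on its stack. A deliberately toy sketch (the `Frame` type is illustrative, not any real runtime's representation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StacklessDemo {
    // In a stackless design, each suspended call frame is an explicit heap object.
    record Frame(String function, int resumePoint, int localSum) {}

    public static void main(String[] args) {
        // What a stackful fiber keeps as one contiguous stack,
        // a stackless design keeps as a chain of frame objects:
        Deque<Frame> continuation = new ArrayDeque<>();
        continuation.push(new Frame("outer", 1, 0));
        continuation.push(new Frame("inner", 2, 40));

        // "Resuming" walks the frames; the total memory held while
        // suspended is the same either way.
        int sum = 0;
        while (!continuation.isEmpty()) sum += continuation.pop().localSum;
        System.out.println("suspended state = " + sum);
    }
}
```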
Great article, thanks. Some perhaps-silly questions:
1. Is it possible to inspect the state of a 'parking' operation, the way you can in .Net with Task#Status?
2. So fibers run in 'carrier threads'. Is there a pool of carrier threads, or can any thread act as a carrier? I'm thinking of .NET's model, where this is configurable (ignoring that .NET 'contexts' aren't exactly threads) by means of Task#ContinueWith() and the Scheduler class. I take it from the following snippets that fibers can only run on the thread where they were created:
> starting or continuing a continuation mounts it and its stack on the current thread – conceptually concatenating the continuation's stack to the thread's – while yielding a continuation unmounts or dismounts it.
And also:
> Parking (blocking) a fiber results in yielding its continuation, and unparking it results in the continuation being resubmitted to the scheduler.
On a non-technical note, how do OpenJDK projects feed back into the Java spec and Oracle Java?
> OpenJDK projects feed back into the Java spec and Oracle Java?
OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license). Projects are developed in OpenJDK (for the most part by Oracle employees because Oracle funds ~95% of OpenJDK's development, but there are some non-Oracle-led projects from time to time[1]) and are then approved by the JCP as an umbrella "Platform JSR" for a specific release (e.g. this is the one for the current version: https://openjdk.java.net/projects/jdk/12/spec/)
Very neat. So it preserves the virtues of .Net's task-based concurrency, but is even less intrusive regarding the necessary code-changes to existing synchronous code.
Does it impact things from the perspective of the JNI programmer?
> OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license).
Ah, of course. I'd missed that.
> there are non-Oracle-led projects from time to time[1], and are then approved by the JCP as an umbrella "Platform JSR" for a specific release
> but is even less intrusive regarding the necessary code-changes to existing synchronous code.
Yes. All existing blocking IO code will become automatically fiber-blocking rather than kernel-thread-blocking, except where there are OS issues (file IO; Go has the same problem). Fibers and threads may end up using the same API, as they're just two implementations of the same abstraction.
> Does it impact things from the perspective of the JNI programmer?
Fibers can freely call native code, either with JNI or with the upcoming Project Panama, which is set to replace it, but a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.
> How do they handle copyright?
Both the contributors and Oracle own the copyright (i.e. both can do whatever they want with the code). This is common in large, company-run open source projects.
> a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.
Doesn't this boil down to the native function blocking the thread?
How about the C API/ABI of JNI? Will there be additions there for better supporting concurrency (i.e. not simply blocking)? Or can that be handled today, with something akin to callbacks?
If the native routine blocks the kernel thread, it blocks, and if not, it doesn't. While something could hypothetically be done about blocking native routines, we don't see it as an important use case. Calling blocking native code from Java is quite uncommon. We've so far identified only one common case, DNS lookup, and will address it specifically.
> Fibers are user-mode lightweight threads that allow synchronous (blocking) code to be efficiently scheduled, so that it performs as well as asynchronous code
Is that true? The build instructions are for a Posix-like environment, but I haven't actually looked to see if the actual implementation supports Windows yet.
As someone who runs Windows and Linux about equally, in differing proportions over time, I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet.
It's a prototype. It will support Windows when it's released, and probably sooner. We're literally changing it every day, and it's hard and not very productive to make these daily changes on multiple platforms, especially as none of the current developers use Windows (this is changing soon, though).
> I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet
Seems understandable though. Java is primarily a tool for heavyweight Unix servers, after all. (This is of course an empirical claim, and I have no source, but I'd be surprised if I turn out to be mistaken.)
Makes good sense to go with the strategy of building an industrial-strength technology before investing the time to handle Windows.
I like Java. Java 8 streams are particularly interesting. It's fast too. I took a Hadoop class (which taught Java 8 and, ironically, discouraged Hadoop use except in exceptional cases).
The hardest part that everyone struggled with was getting a Java environment up and running. Gradle, Maven, Ant... You almost need an IDE. It's almost like they don't want people using it. I stopped when I didn't have to.
Plus the acronyms. Ones I didn't know from your post:
Except web development is almost all bootstrapped from a simple npm library these days. You generally npm install and you've got all your dependencies whether it's Angular, Vue, React or pretty much any modern web frameworks. The time for a new developer to get the tooling out of the way and start looking at code is dramatically shorter for web apps than Java in my experience.
GCJ and Excelsior are really niche; even people familiar with the Java ecosystem might not know them, as they were mostly used for AOT (Ahead of Time) compilation of Java into a single redistributable binary in the early 2000s. I was writing an RSS web server application then and was looking into how to do client-side desktop apps... The UI toolkit was a bag of hurt for Java, and I gather that is still the case today.
I think JakartaEE is really just a rebranded JavaEE.
I know Graal only because I follow TruffleRuby; Graal is a new JIT compiler for the JVM, written in Java. And it has proved that optimising Ruby, once thought impossible due to the language's dynamic nature, is in fact possible.
How is this any different than python or javascript? NPM, Babel, Webpack, TSC, PIP, VENV, PyPy, CPython, etc. They all have their learning curves and if you weren't in the ecosystem you wouldn't know what they meant.
> I am just a little uneasy with Oracle lurking around, I just don't know what they are going to do next.
What do you mean by "lurking"? Oracle is the company developing OpenJDK, and it will continue to do so. All our projects are done in the open, with lots of communication.
By "lurking" people mean that the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.
You may not remember incidents like why Jenkins was forked from Hudson, but Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.
> the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.
I have no idea what Oracle may do tomorrow, but Oracle has been in control of Java for a decade, and what it has actually done is 1. significantly increase the investment in the platform and 2. open source the entire JDK. So I don't know about the next ten years, but the past ten years have been really good for Java (well, at least on the server).
> Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.
I don't share your romantic views of multinational corporations. Corporations are not our friends, and while they're made of people, they're not themselves people, despite what some courts may have ruled. But like people, different corporations have different styles, and it would be extremely hard to call any of them "good." I have certainly never heard of one that is driven by caring (although what you do when you don't care may differ; some may be aggressive with licensing, some are in the business of mass surveillance, some help subvert democracy, others drive entire industries out of business through centralization, and others still drive kids to conspiracy theories). When you look at what Oracle has actually done so far for Java, I think it has been a better, and more trustworthy, steward than Microsoft and Google have been for their own projects (Java's technical leadership at Oracle is made up of the same people who led it at Sun). And people who bet on Java ten years ago are happier now than those who bet on alternatives (well, at least on the server). This despite some decisions that made some people unhappy. You can like the good stuff and be disappointed about the bad stuff without some emotional attraction or rejection toward these conglomerate monsters.
My opinion of the company and its products is more consistently negative than any other large company. And while you think that the non-Java world is suffering, I think you have some tunnel vision.
Let's just say that I am personally happy with my decision to stay away from Java. And the brief periods where I had to work with Java were misery. Languages have personalities as well as companies, and there is a reason that the startup world stays away from Java in droves.
> and there is a reason that the startup world stays away from Java in droves.
You may have your own case of tunnel vision. I mean, sure, there are "droves" of startups that stay away from Java (many only to turn to it later), but there are also "droves" that adopt it from the get-go.
Both Ruby and Python are more popular than Java, AND are better correlated with how good the company is. Your odds of being in a successful startup are improved if you are in those languages INSTEAD OF Java.
What about from the individual programmer level? Triplebyte did an article about how programming language and hiring statistics correlate. My impression is that their programmers are mostly being hired into relatively good startups, so it is a pretty good view of the startup world. That article is at https://triplebyte.com/blog/technical-interview-performance-....
Long story short, Java was the #2 language that programmers chose, behind Python. Not so bad. But choosing Java REDUCED your odds of actually getting to a job interview by 23%. And for those who got to an interview, it reduced your odds of actually getting hired by 31%. By contrast, Python IMPROVED those same odds by 11% and 18% respectively.
Apparently the startup world doesn't like Java developers either. You'd be far better off with Python.
Now I'm sure that you can trot out every successful Java startup out there. And there will be quite a few. But based on available data, not opinions, I did NOT express tunnel vision when I said that the startup world stays away from Java in droves.
If you truly believe any of the conclusions you've drawn from the numbers in the links you posted, then your favorite programming language REDUCES statistics skills.
Startups don't use Java because Java is for large-scale stable long-lived enterprises, not for prototyping simple small web apps that might be thrown away in a couple of years.
You often hear this, but what does it actually mean? Why is Java for one but not the other.
Here is my understanding.
Java was designed to limit the damage that any developer could accidentally do, rather than maximize the potential for productivity. Which is an appropriate tradeoff for a large team.
It is hard to get good statistics on this, but the figures that I've seen in books like Software Estimation suggest that the productivity difference is around a factor of two.
This matters because it turns out that teams of more than 8 people have to be carefully structured based on the requirements of communication overhead. (A point usually attributed to The Mythical Man-Month.) This reduces individual productivity. Smaller teams can ignore this problem. The result is that measured productivity for teams of size 5-8 is about the same as a team of 20. But the throughput of large teams grows almost linearly after that. An army does accomplish more, but far less per person.
Limiting damage matters more for large teams. Which are more likely to be affordable for large enterprises. However being in such an environment guarantees bureaucracy, politics, and everything negative that goes with that.
By contrast startups can't afford to have such large teams. Therefore they are better off maximizing individual productivity so that they can get the most out of a small team. And using a scripting language is one way to do that.
Today, I go back and forth between three languages at work: .NET, JavaScript, and Python. For simple prototype web apps, or more realistically REST microservice APIs to feed front-end frameworks, I really don't see any of them being slower to develop in.
For larger applications with multiple developers working in the same code base, the compile-time checking of static languages is a godsend. I would at least move over to TypeScript instead of plain JavaScript.
Oracle has a long track record of Sales & Marketing tactics which we can use as a reliable benchmark to predict outcomes.
Oracle will likely pursue the most aggressive strategy they can get away with for Java.
I don't believe Sun would have sued Google, but Oracle did.
The fact that Google is switching to Kotlin is mostly a means to absolve themselves of the 'Oracle risk' - it's a big change surely, a decision not taken lightly.
The future of Java under Oracle is hard to predict but there's legit concerns Oracle will make things hard.
Kotlin uses the same VM and API, so it makes no difference in this regard. It's not a big change – it's fully interoperable with Java. You can easily take a single class in a Java application and rewrite it in Kotlin, and everything continues working just as before.
Google adopted it because, as they more or less said in the announcement, it was already being adopted by the community and it hugely improved development experience.
But you're splitting your developer base.
* There will be people better at Kotlin.
* There will be people better at Java.
This is a problem when you are looking at hiring new people, etc. This fragmentation is going to cause issues just because people are hedging against Oracle's future decisions.
In a perfect world Google should have bought Sun and the current version of Java would look at lot like Kotlin.
Kotlin is a light syntax for a coding style. It's as easy for a Java dev to learn Kotlin as it is to learn Spring or Hibernate or whatever library or framework the team at your new job uses.
Yes, thanks for that, it stirred my recollection as I actually bumped into Jon Swartz by accident just in that era.
I don't think it was money so much as the established culture at Sun (i.e. James Gosling: "Sun is not so much a company as a debating society"). A more aggressive CEO/leadership/culture would maybe have raised the money to take on Google, or taken another tack.
So while you are right - and thanks for the reference - the issue here is what Sun was about, vs. what Oracle is about.
Whatever Sun was "about", sadly, it didn't work, and it damaged some excellent technologies, like Java and Solaris, that Sun couldn't invest a lot of resources into because it no longer had them. Oracle managed to save one of them and make it thrive. Sun, as a big, impactful company, was a product of the dot-com bubble. It certainly made more lasting contributions than other bubble-era companies, but its strategy couldn't survive the crash. Maybe great ideas can be born in companies like Sun but need companies like Oracle to sustain them.
I've been saying for years that Pivotal Labs is a debating club that produces code as a by-product. But now I'm wondering if I read the Gosling quip and then forgot I had.
Google made billions from Java while Sun went nearly bankrupt, and Google is now among the top-10 wealthiest companies in the US. Oracle, in partnership with the former Sun, trying to get money from Google is a different issue from your company's risk.
I do understand Oracle is paying the bill. As well as the team working on Graal and TruffleRuby. so I am grateful for that. Thank You.
>What do you mean by "lurking"?
Referring to Copywriting API a while ago and the JakartaEE problem which has blown up on my twitter feeds. I understand why Oracle is trying to charge money, and I am perfectly fine with that, I just don't like they are using Copywriting API as the tool. And whatever problem it is with JakartaEE this time around I don't have time to follow.
In a lawsuit, Oracle pushed for API's to be copywritten, not just their implementation. They also have paid lobbyists. They're also greedy assholes. The combo of greedy assholes and the ability to rewrite the law is a dangerous one.
So, I don't use a language unless it's open with patent grants and has a non-malicious owner. At this point, Wirth's stuff is probably legally the safest.
I don't have any public projects to release right now. So, I don't have to worry about getting sued. Modula 2 was nice but you could use any of Wirth's with low risk. Although Lisp's had lots of companies involved, Scheme is probably safe with PreScheme aiming for low-level use. A Racket dialect with C/C++ features like ZL language had could be extremely powerful and safe.
Rust, with Mozilla backing it, is probably not going to get you sued. Nim has potential given their commercial interests are paid development and support so far. As in, less greedy they are the better. Languages with community-focused foundations, such as Python, controlling them are probably pretty safe. Although it was risky, the D language now has a foundation. Although no foundation or legal protections, the Myrddin and Zig languages are being run by folks that look helpful rather than predatory.
So, there's you a few examples you might consider if avoiding dependencies on companies likely to pull shit in the future. Venture-backed, publicly-traded, growth-at-all-costs, and/or profit-maximizing-at-all-costs are examples of what to avoid if wanting to future-proof against evil managers turning it from good dependency into bad one.
"Copywritten" probably means nothing, but if it did, it would have something to do with copywriting, the act of writing for publication (usually commercial, usually not long-form).
Added: FYI "copyrighting" is not a conscious decision, or an action you can take. Copyright emerges automatically when you create a work, what they've done is defend their copyright in court, and the courts have mixed opinions on the matter.
That is a gross mischaracterization of what Oracle did. They didn't just defend a copyright in court. They pushed to extend copyright to a mostly functional element that copyright law has not traditionally been thought to cover. It's a tremendously harmful viewpoint for interoperability.
Not just "not traditionally been thought to cover", but which existing precedent said DID NOT cover.
Does it surprise anyone that this case was decided by the Federal Circuit? The rogue court most consistently overturned by the Supreme Court, which also is responsible for most of the disastrous software patent cases out there.
The only bright light is that the Supreme Court has reopened the question. Given how often they overturn the Federal Circuit, we have real hope that we'll return to the previous precedent. Which is that since matching APIs is a functional part of how code works, and things that are functional are by law not copyrightable, APIs are not copyrightable.
Yes, copywritten isn't a word, but their point was that Oracle pushed for APIs to be copyrightable, which was not the case before their suit. It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.
> It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.
I mean, it has been big news, and it has already been nuclear war, with Oracle putting Google in a position to switch Android from Dalvik (and successors) to OpenJDK. I agree that it could become a pretty horrible precedent (imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place).
> imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place
The things you're talking about are already protected by patents, and the copyrightability of APIs has nothing to do with them. At the very least, for something to be copyrightable it must be some specific fixed expression (a piece of text, image, video or audio). So the O v. G ruling applies only to actual (code) APIs; not to protocols (or REST "APIs"), and certainly not to stuff that's already protected by patents (the distinction between the two may not always make sense to programmers, but it is what it is; for example, algorithms are patentable but not copyrightable, while programs are copyrightable but not patentable).
The licensing has gotten far better. First, Oracle has just open sourced the entire JDK for the first time ever, and second, instead of offering a mixed free/paid, open/proprietary JDK (with -XX:+UnlockCommercialFeatures flags), it now offers the JDK under either a completely free and open license (under the name OpenJDK) or a commercial license for Oracle support subscription customers (under the name Oracle JDK).
Support for native code is very bad. JNI is a pain to use and very slow; IPC is often faster. High-performance numerical code often suffers because of poor vectorization. Not to mention that tuning the JVM is often needed for critical tasks. Modern GC'd languages like Go have much better memory footprints, and the penalty for fast numerical code is much smaller.
Panama (https://openjdk.java.net/projects/panama/) will be replacing JNI very soon, and I don't think you're correct about vectorization. While I think Go has some good features, nothing about it is more "modern" than Java except in the most literal chronological sense; Java is more modern in almost every other meaningful sense. While you may need to tune the VM for critical tasks, in Go you don't need to tune, but rather just run slower.
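To illustrate how much simpler Panama-style native calls are than JNI, here is a minimal sketch calling libc's `strlen` from pure Java with no native glue code. It assumes the `java.lang.foreign` API as it was finalized in JDK 22 (Panama was still incubating when this thread was written, so method names like `Arena.allocateFrom` are from the later, stable API), and it maps C's `size_t` to `JAVA_LONG`, which holds on common 64-bit platforms:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    // Bind libc's strlen: size_t strlen(const char *s).
    // JAVA_LONG stands in for size_t, which matches typical 64-bit ABIs.
    private static final MethodHandle STRLEN = Linker.nativeLinker().downcallHandle(
            Linker.nativeLinker().defaultLookup().find("strlen").orElseThrow(),
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

    public static long nativeStrlen(String s) {
        // A confined arena frees the native copy of the string when closed.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment cString = arena.allocateFrom(s); // NUL-terminated C string
            return (long) STRLEN.invokeExact(cString);
        } catch (Throwable t) {
            throw new RuntimeException(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(nativeStrlen("hello")); // prints 5
    }
}
```

Compare that with JNI, where the same call would need a hand-written C stub, a generated header, and a separately compiled shared library to load at runtime.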