I was surprised to see that Java was slower than C++, but the Java code is run with `-XX:+UseSerialGC`, which is the slowest GC, meant to be used only on very small systems and to optimise for memory footprint rather than performance. Also, no heap size is specified, which means it's hard to know what exactly is being measured; Java allows trading off CPU for RAM and vice versa. The comparison would be meaningful if an appropriate GC were used (Parallel, for this batch job) and with different heap sizes. If the rules say the program should take less than 8GB of RAM, then it's best to configure the heap to 8GB (or a little lower). Also, System.gc() shouldn't be invoked.
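To make that concrete, this is roughly how I'd launch it (just a sketch; the main class name and exact heap value are placeholders, not taken from the benchmark):

```
java -XX:+UseParallelGC -Xms7g -Xmx7g -XX:+DisableExplicitGC Main
```

`-XX:+DisableExplicitGC` also turns any stray System.gc() calls into no-ops without touching the code.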
Don't know if that would make a difference, but that's how I'd run it, because in Java, the heap/GC configuration is an important part of the program and how it's actually executed.
Of course, the most recent JDK version should be used (I guess the most recent compiler version for all languages).
It's so hard to actually benchmark languages because so much depends on the dataset. I am pretty sure that with simdjson and some tricks I could write C++ (or Rust) that could top the leaderboard (see some of the techniques from the billion row challenge!).
tbh for silly benchmarks like this it will ultimately be hard to beat a language that compiles to machine code, due to jit warmup etc.
It's hard to do benchmarks right. For example, are you testing IO performance? Are OS caches flushed between language runs? What kind of disk is used, etc.? Performance does not exist in a vacuum of just the language or algorithm.
Why are you surprised? Java always suffers from abstraction penalty for running on a VM. You should be surprised (and skeptical) if Java ever beats C++ on any benchmark.
Conceptually, that’s true, but a compiler is free to do things differently. For example, if escape analysis shows that an object allocated in a block never escapes the block, the optimizer can replace the object by local variables, one for each field in the object.
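As a minimal sketch of that (the Point class and the loop are purely illustrative, not from any benchmark here):

```java
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

class EscapeAnalysisDemo {
    // 'p' is allocated on every iteration but never escapes the loop body.
    // If escape analysis proves that, the JIT can scalar-replace it: the two
    // fields become plain locals/registers and no heap allocation is emitted.
    static long sumOfSquares(int[] xs, int[] ys) {
        long total = 0;
        for (int i = 0; i < xs.length; i++) {
            Point p = new Point(xs[i], ys[i]);
            total += (long) p.x * p.x + (long) p.y * p.y;
        }
        return total;
    }
}
```

Whether the JIT actually does this depends on inlining and other heuristics, so it's an opportunity, not a guarantee.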
You're right that Java lacks inline types (although it's getting them really soon now), but the main cost of that isn't the lack of stack allocation (heap allocations in Java don't cost much more than stack allocations), but the cache misses due to objects not being inlined in arrays.
Even for flattened types, the "abstraction penalty", or, more precisely, its converse, the "concreteness penalty", in Java will be low, as you don't directly pick when an object is flattened. Instead, you declare whether a class cares about identity or not, and if not, the compiler will transparently choose whether and when to flatten the object, depending on how it's used.
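To illustrate the array-layout point (the class and variable names here are only for illustration):

```java
final class Sample {
    final double x, y;
    Sample(double x, double y) { this.x = x; this.y = y; }
}

class LayoutDemo {
    static final int N = 1_000_000;

    // Today: an array of objects is an array of references. Each element is a
    // separate heap object, so a linear scan chases pointers and eats cache misses.
    static final Sample[] boxed = new Sample[N];

    // The usual workaround until flattened value types arrive: parallel primitive
    // arrays, laid out contiguously, which scan with good locality.
    static final double[] xs = new double[N];
    static final double[] ys = new double[N];
}
```

Once value classes can be flattened, the first form should get something like the layout of the second without the manual transformation.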
No, Java's existing compiler is very good, and it generates code about as good as you could want. There is definitely still a cost from objects not yet being inlined in arrays (this will change soon) that impacts some programs, but in practice Java performs more or less the same as C++.
In this case, however, it appears that the Java program may have been configured in a suboptimal way. I don't know how much of an impact it has here, but it can be very big.
Even benchmarks that allow for jit warmup consistently show java roughly half the speed of c/c++/rust. Is there something they are doing wrong? I've seen people write some really unusual java to eliminate all runtime allocations, but that was about latency, not throughput.
Yes. The most common issues are heap misconfiguration (which matters more in Java than any compiler configuration does in other languages) and that the benchmarks don't simulate realistic workloads in terms of both memory usage and concurrency. Another big issue is that the effort put into each program is not the same. Low-level languages do allow you to get better performance than Java if you put in significant extra work. Java aims to be "the fastest" for a "normal" amount of effort, at the expense of losing some control that could translate to better performance in exchange for significantly more work, both at initial development time and especially during evolution/maintenance.
E.g. I know of a project at one of the world's top 5 software companies where they wanted to migrate a real Java program to C++ or Rust to get better performance (it was probably Rust because there's some people out there who really want to try Rust). Unsurprisingly, they got significantly worse performance (probably because low-level languages are not good at memory management when concurrency is at play, or at concurrency in general). But they wanted the experiment to be a success, so they put in a tonne of effort - I'm talking many months - hand-optimising the code, and in the end they managed to match Java's performance or even exceed it by a bit (but admitted it was ultimately wasted effort).
If the performance of your Java program doesn't more-or-less match or even exceed the performance of a C++ (or other low level language) program then the cause is one of: 1. you've spent more effort optimising the other program, 2. you've misconfigured the Java program (probably a bad heap-size setting), or 3. the program relies on object flattening, which means the Java program will suffer from costly cache misses (until Valhalla arrives, which is expected to be very soon).
In my experience, if your C++ or Rust code does not perform as well as Java, it's probably because you are trying to write Java in C++ or Rust. Java handles a large number of small heap-allocated objects shared between threads really well. You can't reasonably expect to match its performance on such workloads with the rudimentary tools provided by the C++ or Rust standard library. If you want performance, you have to structure the C++/Rust program in a fundamentally different way.
I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code. As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that. If you get the layout right, efficient code should be easy to write. Optimization is sometimes necessary, but it's often not very cost-effective, and it can't save you from poor design.
This criticism always forgets that Java is how most folks used to program in C++ARM, that 100% of the 1990s GUI frameworks were written in C++ in that style, and that the GoF book used C++ and Smalltalk, predating Java by a couple of years.
> it's probably because you are trying to write Java in C++ or Rust
Well, sure. In principle, we know that for every Java program there exists a C++ program that performs at least as well because HotSpot is such a program (i.e. the Java program itself can be seen as a C++ program with some data as input). The question is can you match Java's performance without significantly increasing the cost of development and especially evolution in a way that makes the tradeoff worthwhile? That is quite hard to do, and gets harder and harder the bigger the program gets.
> I was not familiar with the term "object flattening", but apparently it just means storing data by value inside a struct. But data layout is exactly the thing you should be thinking about when you are trying to write performant code.
Of course, but that's why Java is getting flattened objects.
> As a first approximation, performance means taking advantage of throughput and avoiding latency, and low-level languages give you more tools for that
Only at the margins. These benefits are small and they're getting smaller. More significant performance benefits can only be had if virtually all objects in the program have very regular lifetimes - in other words, can be allocated in arenas - which is why I think it's Zig that's particularly suited to squeezing out the last drops of performance that are still left on the table.
Other than that, there's not much left to gain in performance (at least after Java gets flattened objects), which is why the use of low-level languages has been shrinking for a couple of decades now and continues to shrink. Perhaps it would change when AI agents can actually code everything, but then they might as well be programming in machine code.
What low-level languages really give you through better hardware control is not performance, but the ability to target very restricted environments without much memory (one of Java's greatest performance tricks is trading spare RAM for CPU savings on memory management), assuming you're willing to put in the effort. They're also useful, for that reason, for things that are supposed to sit in the background, such as kernels and drivers.
> The question is can you match Java's performance without significantly increasing the cost of development and especially evolution in a way that makes the tradeoff worthwhile?
This question is mostly about the person and their way of thinking.
If you have a system optimized for frequent memory allocations, it encourages you to think in terms of small independently allocated objects. Repeat that for a decade or two, and it shapes you as a person.
If you, on the other hand, have a system that always exposes the raw bytes underlying the abstractions, it encourages you to consider the arrays of raw data you are manipulating. Repeat that long enough, and it shapes you as a person.
There are some performance gains from the latter approach. The gains are effectively free, if the approach is natural for you and appropriate to the problem at hand. Because you are processing arrays of data instead of chasing pointers, you benefit from memory locality. And because you are storing fewer pointers and have less memory management overhead, your working set is smaller.
I don't know what plb2 is, but the benchmark game can demonstrate very little because the benchmarks are small and uninteresting compared to real programs (I believe there's not a single one with concurrency, plus there's no measure of effort in such small programs), and they compare different algorithms against each other.
For example, what can you learn from the Java vs. C++ comparison? In 7 out of 10 benchmarks there's no clear winner (the programs in one language aren't faster than all programs in the other) and what can you generalise from the 3 where C++ wins? There just isn't much signal there in the first place.
The Techempower benchmarks explore workloads that are probably more interesting, but they also compare apples to oranges. As with the benchmark game, the only conclusion you could conceivably generalise (in an age of optimising compilers, CPU caches, and machine-learning branch predictors, all affected by context) is that C++ (or Rust) and Java are about the same: there are no benchmarks in which all C++ or Rust frameworks are faster than all Java ones or vice versa, so there's no way of telling whether a specific benchmark is helped by some language advantage or by particular optimisation work (you could try looking at variances, but given the lack of a rigorous comparison, that's probably also meaningless). The differences there are clearly within the level of noise.
Companies that care about and understand performance pick languages based on their own experience and experiments, hopefully ones that are tailored to their particular program types and workloads.
For the most naive code, if you're calling "new" multiple times per row, maybe Java benefits from out of band GC while C++ calls destructors and free() inline as things go out of scope?
Of course, if you're optimizing, you'll reuse buffers and objects in either language.
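Something like this is what "reuse buffers" looks like in Java (the row/emit helpers are made up, just to show the shape of it):

```java
import java.util.List;

class ReuseDemo {
    // One StringBuilder is allocated up front and cleared each iteration,
    // instead of allocating a fresh builder (or string) per row.
    static void processAll(List<String> rows) {
        StringBuilder sb = new StringBuilder(256);
        for (String row : rows) {
            sb.setLength(0);              // reuse the same backing array
            sb.append(row).append('\n');
            emit(sb);                     // consume without keeping a reference
        }
    }

    static void emit(CharSequence s) {
        // placeholder sink; a real program would write this out somewhere
    }
}
```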
benchmarks game uses BenchExec to take 'care of important low-level details for accurate, precise, and reproducible measurements' ….
BenchExec uses the cgroups feature of the Linux kernel to correctly handle groups of processes and uses Linux user namespaces to create a container that restricts interference of [each program] with the benchmarking host.
yes, but that's just one part of the equation. machine code from compiler and/or language A is not necessarily the same as the machine code from compiler and/or language B. the reasons are, among others, contextual information, handling of undefined behavior and memory access issues.
you can compile many weakly typed high level languages to machine code and their performance will still suck.
java's language design simply prohibits some optimizations that are possible in other languages (and also enables some that aren't in others).
> java's language design simply prohibits some optimizations that are possible in other languages (and also enables some that aren't in others).
This isn't really true - at least not beyond some marginal things that are of little consequence - and in fact, Java's compiler has access to more context than pretty much any AOT compiler because it's a JIT and is allowed to speculate optimisations rather than having to prove them.
It can speculate whether an optimization is performant. Not whether it is sound. I don't know enough about java to say that it doesn't provide all the same soundness guarantees as other languages, just that it is possible for a jit language to be hampered by this. Also c# aot is faster than a warmed up c# jit in my experience, unless the warmup takes days, which wouldn't be useful for applications like games anyway.
Precisely right, but the entire point is that it doesn't need to. The optimisation is applied in such a way that when it is wrong, a signal triggers, at which point the method is "deoptimised".
That is why Java can and does aggressively optimise things that are hard for compilers to prove. If it turns out to be wrong, the method is then deoptimised.
There's no aliasing in the messy C sense in Java (and no pointers into the middle of objects at all). As for other optimisations, traps are inserted to detect violations wherever speculation is used at all, but the main thrust of optimisation is quite simple:
The main optimisation is inlining, which, by default, is done to the depth of 15 (non-trivial) calls, even when they are virtual, i.e. dispatched dynamically, and that's the main speculation - that a specific callsite calls a specific target. Then you get a large inlined context within which you can perform optimisations that aren't speculative (but proven).
If you've seen Andrew Kelley's talk about "the vtable boundary"[1] and how it makes efficient abstraction difficult, that boundary does not exist in Java because compilation is at runtime and so the compiler can see through vtables.
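A rough sketch of what that means at a call site (the types here are illustrative):

```java
interface Shape {
    double area();
}

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class DevirtDemo {
    // 's.area()' is a virtual (interface) call in the source. If the JIT only
    // ever observes Circle at this call site, it can speculate on that, inline
    // the body, and guard the assumption with a cheap type check; if another
    // Shape implementation ever shows up, the method is deoptimised and recompiled.
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area();
        }
        return sum;
    }
}
```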
But it's also important to remember that low-level languages and Java aim for different things when they say "performance". Low-level languages aim for the worst case: some things may be slower than others (e.g. dynamic vs. static dispatch), but when you can use the faster construct, you are guaranteed a certain optimisation. Java aims to optimise something more like the "average case": when you write a program with the most natural and general constructs, it will be the fastest for that level of effort. You're not guaranteed certain optimisations, but you're not penalised for more natural, easier-to-evolve code either.
The worst-case model can get you good performance when you first write the program. But over time, as the program evolves and features are added, things usually get more general, and low level languages do have an "abstraction penalty", so performance degrades, which is costly, until at some point you may need to rearchitect everything, which is also costly.
I was very surprised to see the results for Common Lisp. As I scrolled down I just figured that the language was not included until I saw it down there. I would have guessed SBCL to be much faster. I checked it out locally and got: Rust: 9ms, D: 16ms, and CL: 80ms.
Looking at the implementation, just adding type annotations gave a ~10% improvement. Then changing the tag-map to use vectors as values, which is more appropriate than lists (imo), gave a 40% improvement over the initial version. By additionally cutting a few allocations, the total time is halved. I'm guessing other languages will have similar easy improvements.
D gets no respect. It's a solid language with a lot of great features and conveniences compared to C++ but it barely gets a passing mention (if that) when language discussions pop up. I'd argue a lot of the problems people have with C++ are addressed with D but they have no idea.
Tiny community, even tinier than when Andrei Alexandrescu published the D book (he is now back to C++ at NVidia); lack of direction (it is always chasing the next big thing that might attract users, leaving other features behind, not fully done); and since 2010, other alternatives with big-corp sponsoring came up, while languages like Java and C# gained AOT compilation and improved their low-level programming capabilities.
Thus, it makes very little sense to adopt D versus other managed compiled languages.
The language and community are cool, sadly that is not enough.
Ecosystem isn't that great, and much of it relies on the GC. If you're going to move out of C++, you might as well go all in on a GC language (Java, C#, Go) or use Rust. D's value proposition isn't enough to compete with those languages.
D has a GC and it’s optional. Which should be the best of both worlds in theory.
Also D is older than Go and Rust and only a few months younger than C#. So the question then becomes “why weren’t people using D when your recommended alternatives weren’t an option?” Or “why use the alternatives (when they were new) when D already exists?”
This is only true in the most technical sense: you can easily opt-out of the GC, but you will struggle with the standard library, and probably most third-party libraries too. It's the baseline assumption after all, hence why it's opt-out, not opt-in. There was a DConf talk about the future of Phobos which indicated increased support for @nogc, but this is a ways away, and even then. If you're opting-out of the GC, you are giving up a lot. And honestly, if you really don't want the GC, you may be better off with Zig.
Garbage collection has never been a major issue for most use cases. However, the Phobos vs. Tango and D1 vs. D2 splits severely slowed D’s adoption, causing it to miss the golden window before C++11, Go, and Rust emerged.
I don't really get the idea that LLMs lower the level of familiarity one needs to have with a language.
A standup comedian from Australia should not assume that the audience in the Himalayas is laughing because the LLM the comedian used 20 minutes before was really good at translating the comedian's routine.
But I suppose it is normal for developers to assume that a compiler translated their Haskell into x86_64 instructions perfectly, then turned around and did the same for three different flavors of Arm instructions. So why shouldn't an LLM turn piles of oral descriptions into perfectly architected Nim?
For some reason I don't feel the same urgency to double-check the details of the Arm instructions as I feel about inspecting the Nim or Haskell or whatever the LLM generated.
If the difference in performance between the target language and C++ is huge, it's probably not the language that's great, but some quirk of implementation.
The study seems to be “solve this the obvious way, don’t think too hard about it”. Then the systems languages (C, Zig, C++) are pretty close, the GC languages are around an order of magnitude slower (with C# and Java doing pretty well at ca. 3x), and the scripting languages around two orders of magnitude slower.
But note the HO-variants: with better algorithms, you can shave off two orders of magnitude.
So if you’re open to thinking a bit harder about the problem, maybe your badly benchmarking language is just fine after all.
C# is very fast (see the multicore rating). An implementation based on SIMD (Vector), memory spans, stackalloc, source generators and what have you: modern C# allows you to go very low-level and very fast.
Probably even faster under .net 10.
Though using Stopwatch for the benchmark is killing me :-) I wonder if multiple runs via BenchmarkDotNet would show better times (also due to JIT optimizations). For example, the Java code had more warm-up iterations before measuring.
This entire benchmark is frankly a joke. As other commenters have pointed out, the compiler flags make no sense, they use pretty egregious ways to measure performance, and ancient versions are being used across the board. Worst of all, the code quality in each sample is extremely variable and some are _really_ bad.
Provided the correct result is generated I don't get the rationale for this one. As long as you obey the other rule for UTF-8 compatibility, why would it be a problem to represent as bytes (or anything else)?
Seems like it would put e.g. GC'ed languages where strings are immutable at a big disadvantage
About the C++ version: You have to be an absolute weirdo to (sometimes) put the opening brace of functions on the same line, but on the next line for if and for bodies.
I think there was a name for that brace style? It seems silly, but having left C++ development after decades for a variety of reasons, I found that a standard formatting tool turned out to be one of my favorite features.
Totally agree. I found the results surprising because a bunch of languages are faster than C++. Then I looked closer. The requirements are self-conflicting: no SIMD, but must be production-ready. No one would use the unoptimized version in production. Also, looking at the C++ implementation, it is not optimized at all. This makes the benchmark literally pointless.
I mean, this is only meant to be an iteration, if I understand correctly. It's not like someone is going around citing this benchmark yelling "rewrite everything in Julia / D". Imo this is a good starting point if you are doubtful or fall into the trap of thinking Java is not fast. For most workloads we can clearly see that Java trades off the control of C++ for "about the same speed" and a much, much larger and well-managed ecosystem. (Except for the other day, when someone's OpenJDK PR was left hanging for a month, which I am not sure why.)
Quality does vary wildly because the languages vary wildly in terms of language constructs and standard libraries. Proficiency in every.single.language. used in the benchmark perhaps should not be taken for granted.
But it is a GitHub repository, and the repository owner appears to accept PRs and allows people to raise an issue to provide their feedback, or… it can be forked and improved upon. Feel free to jump in and contribute to make it a better benchmark that will not be «frankly a joke» or «_really_ bad».
I'm completely alright with just having fun and hosting your own little sandboxes online, but what good does it do to post and share this with others in its current state? The picture it paints is certainly not representative, and this sort of thing has been done a million times over with much better consistency. Again, I think it's great to hack around in every language and document your journey all the way, but sharing this is borderline misinformation. It's certainly not my duty to right the wrongs of this benchmark.
The fact that Julia “highly optimized” is 30x faster than the normal Julia implementation, yet still fails to reach for some pretty obvious optimizations, and uses a joke package called “SuperDataStructures” tells me that maybe this benchmark shouldn’t be taken all that seriously.
Benchmarks like this can still be fun and informative
That doesn't require the strings that represent the tags to be the actual tag strings. So one can bend the rules by representing tags with single-character strings or, alternatively, with fixed strings of length 0 through 99, and then doing the tag comparisons only on the first character of each string or, alternatively, on the length of the string (if obtaining that is fast).
Especially when tags have large common prefixes, that could speed up things tremendously.
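A sketch of that rule-bending in Java (the helper is hypothetical, it only works if nothing else ever inspects the full tag text, and it's limited to 65,535 distinct tags):

```java
import java.util.HashMap;
import java.util.Map;

class TagInterner {
    // Map each distinct tag to a unique single-character stand-in string, so
    // every later tag comparison only ever touches one character.
    private final Map<String, String> internTable = new HashMap<>();
    private char next = 0;

    String intern(String tag) {
        return internTable.computeIfAbsent(tag, t -> String.valueOf(next++));
    }
}
```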
It's not an issue of warmup time, it's an issue of jit compilation.
On my server (AMD EPYC 7252):
1) the base time of the java program from the repo is 3.23s (which is ~2x worse than the one on the linked page, so I assume my cpu is about 2x slower, and the corresponding best c++ result would be ~450ms)
2) if you measure from inside the java program you get 3.17s (so about 60ms of overhead)
3) but if you run it 10 times (inside the same java program) you cut the time to 1570ms
It's still much slower than the c++ version, but it's between rust and go. And this is not me optimizing anything; it's only measuring things correctly.
update: running the vectorized version of the java code from the same repo brings the runtime to 392ms, which is literally the fastest of all solutions, including c++.
update2: ran the c++ version on the same hardware; it takes 400ms, so I'd say it's fair to say c++ and vectorized java are on par (and given the "allows vectorization" comment in the cpp code, I assume that's the best one can get out of it).
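For reference, the "run it N times inside the same JVM" measurement is just something like this (runBenchmark() stands in for the actual program; it is not from the repo):

```java
public class Timing {
    public static void main(String[] args) {
        // warmup iterations so the timed run sees JIT-compiled code
        for (int i = 0; i < 9; i++) {
            runBenchmark();
        }
        long start = System.nanoTime();
        runBenchmark();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("run took " + elapsedMs + " ms");
    }

    static void runBenchmark() {
        // placeholder for the benchmark's actual work
    }
}
```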
The quality of the benchmark code is... not great. This seems like Zig written by someone who doesn't know Zig or asked Claude to write it for them. Hell, actually Claude might do a better job here.
In short, I wouldn't trust these results for anything concrete. If you're evaluating which language is a better fit for your problem, craft your own benchmark tailored for that problem instead.
Modern C# has many low-level knobs (still in a safe way, though it also supports unsafe) for zero allocation, hardware intrinsics, devirtualization of calls at runtime, etc.: SIMD (Vector), memory spans, stackalloc, source generators (which help with very efficient JSON handling), and so on.
Most of all: C# has a very nice framework and tooling (Rider).
It's not really surprising given the implementations. The C# stdlib just exposes more low-level levers here (quick look, correct me if I'm wrong):
For one, the C# code is explicitly using SIMD (System.Numerics.Vector) to process blocks, whereas Go is doing it scalar. It also uses a read-only FrozenDictionary which is heavily optimized for fast lookups compared to a standard map.
Parallel.For effectively maps to OS threads, avoiding the Go scheduler's overhead (like preemption every few ms) which is small but still unnecessary for pure number crunching. But a bigger bottleneck is probably synchronization: The Go version writes to a channel in every iteration. Even buffered, that implies internal locking/mutex contention. C# is just writing to pre-allocated memory indices on unrelated disjoint chunks, so there's no synchronization at all.
If you're referring to the SIMD aspect (I assume the other points don't apply here): It depends on your perspective.
You could say yes, because the C# benchmark code is utilizing vector extensions on the CPU while Go's isn't. But you could also say no: Both are running on the same hardware (CPU and RAM). C# is simply using that hardware more efficiently here because the capabilities are exposed via the standard library. There is no magic trick involved. Even cheap consumer CPUs have had vector units for decades.
C# is great, but look at the implementations. The JVM is set up wrong, so Java could perform better than what is benchmarked. Hell, with Python you'd probably use Celery or numpy or ctypes to do this much faster.
If people don't find their preferred language on top, they will claim the benchmark is flawed. They will find a condition that is not satisfied by the benchmark. But if we operate outside of the benchmark's assumptions, all benchmarks are flawed, since none can satisfy all possible conditions.
(Credits are given to both sources in the description of this repo.)
(Also, fair disclosure: it was generated just out of curiosity about how this benchmark data might look on benjdd's UI, and I used LLMs for prototyping purposes. The result looks pretty similar imo for visualization, so full credit to benjdd's awesome visualization; I just wanted to see this data in that UI myself, but ended up putting it open source / on GitHub Pages.)
I think benjdd's on Hacker News too, so hi ben! Your website's really cool!
Someone replied to me in an old comment that for fast Python you have to use numpy. In the folder there is a program in plain python, another with numpy and another with numba. I'm not sure why only one is shown in the data.
Disclaimer: I used numpy and numba, but my level is quite low. Almost as if I just type `import numpy as np` and hope for the best.
For what it's worth, I've ported a lot of heavily optimized numpy code to Julia for work, and consistently gotten 10x-100x speedups, largely due to how much easier it is to control memory allocations and parallelize more effectively.
I wrote a script (now an app basically haha) to migrate data from EMR #1 to EMR #2 and I chose Nim because it feels like Python but it's fast as hell. Claude Code did a fine job understanding and writing Nim especially when I gave it more explicit instructions in the system prompt.
Genuine question: are GitHub workflows stable enough to be used for benchmarking? For example, is CPU time-quantum scheduling guaranteed to be the same from run to run?
I will have a look, but R has much better data structures than Python for data processing (everything is a vector in R)
EDIT: they have one script, related.R, in their repo, which is 3 years old and uses the jsonlite package, which is notoriously slow. Using a package such as yyjsonr yields 10x the performance, so something tells me that whoever wrote this piece of code has never heard of R before.
That only applies in an apples-to-apples comparison, i.e., same data structures, same algorithm, etc.
You can't compare sorting in C and Python, but use bubble sort in C and radix sort in Python.
In here there are different data structures being used.
> D [HO] and Julia [HO] footnote: Uses specialized data structures meant for demonstration purposes
You're right of course, but it also depends on how long you want to spend on it. If Python gives you radix sort directly, and the C implementation you can have in the same amount of time is bubble sort because you spent that time setting up the project and finding the right libs, it kinda makes sense.