this post was submitted on 29 Jul 2025
56 points (88.9% liked)

A bit old but still interesting

[–] SuperFola@programming.dev 2 points 4 days ago (2 children)

I find this paper false/misleading. They just translated one algorithm into many languages, without using the language constructs or specificities that would make the algorithm decently performant.

Also, it doesn’t mean anything on its own, as you aren’t just running your code. You are compiling/transpiling it, testing it, deploying it… and all those operations consume even more energy.

I’d argue that C/C++ projects use the most energy in terms of testing, due to the number of bugs they can present and the amount of CPU time needed just to compile your 10-20k-line program. Just my 2 cents.

[–] KRAW@linux.community 15 points 4 days ago* (last edited 4 days ago)

The amount of CPU time spent compiling code is usually negligible compared to CPU time at runtime. Your comparison only really works if you are comparing against something like Rust, where fewer bugs are introduced thanks to certain guarantees made by the language.

Regarding "language constructs", it really depends on what you mean. For example, using numpy in Python is kind of cheating, because numpy is implemented in C. However, using something like the algorithm libraries in Rust would be considered fair game, since they are likely written in Rust itself.
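
To make that concrete, here is a minimal sketch (my own illustration, not code from the paper or from any benchmark suite) that times a pure-Python loop against the equivalent numpy call; the array size and repeat count are arbitrary:

```python
import timeit

import numpy as np

N = 1_000_000
data = list(range(N))
arr = np.arange(N, dtype=np.int64)

def sum_of_squares_python(xs):
    # Every iteration runs in the interpreter, boxing and unboxing integers.
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_numpy(a):
    # One call; the loop itself runs inside numpy's compiled C kernels.
    return int(np.dot(a, a))

print("pure Python:", timeit.timeit(lambda: sum_of_squares_python(data), number=10))
print("numpy      :", timeit.timeit(lambda: sum_of_squares_numpy(arr), number=10))
```

The numpy version is typically far faster, but almost none of that work happens in the Python interpreter itself, which is the "cheating" being described.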

[–] atzanteol@sh.itjust.works 11 points 4 days ago (1 children)

I find this paper false/misleading.

They presented their methodology in an open and clear way and provide their data for everyone to interpret. You can disagree with the conclusions, but it's pretty harsh to call the paper "misleading" simply because you don't like the results.

They just translated one algorithm into many languages, without using the language constructs or specificities that would make the algorithm decently performant.

They used two datasets, if you read the paper... It wasn't "one algorithm", it was several, taken from publicly available implementations of those algorithms. They chose an "optimized" set of algorithms from "The Computer Language Benchmarks Game" to produce results for well-optimized code in each language. They then used implementations of various algorithms from Rosetta Code, which contained more... typical implementations that don't have a heavy focus on performance.

In fact, using "typical language constructs or specificities" hurt the Java implementations, since List is slower than plain arrays. Java performed much better (surprisingly well, actually) in the optimized tests than in the Rosetta Code tests.

[–] FizzyOrange@programming.dev 1 points 4 days ago (1 children)

They chose an “optimized” set of algorithms from “The Computer Language Benchmarks Game” to produce results for well-optimized code in each language.

Honestly that's all you need to know to throw this paper away.

[–] atzanteol@sh.itjust.works 2 points 4 days ago (1 children)
[–] FizzyOrange@programming.dev 1 points 3 days ago (1 children)

It's a very heavily gamed benchmark. The most frequent issues I've seen are:

  • Different uses of multi-threading - some submissions use it, some don't (see the rough sketch at the end of this comment).
  • Different algorithms for the same problem.
  • Calling into C libraries to do the actual work. Lots of the Python submissions do this.

They've at least finally started labelling stupid submissions as "contentious", but that wasn't the case when this study was done.
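
To illustrate the multi-threading point above, here is a minimal sketch (my own, not an actual Benchmarks Game submission) of the same workload written once single-threaded and once with multiprocessing; two such "Python" submissions would report very different CPU-time and energy profiles:

```python
from multiprocessing import Pool

def count_primes(bounds):
    # Naive trial-division prime count over [lo, hi); workload chosen only for the demo.
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def single_threaded(limit):
    # One process does all the work.
    return count_primes((0, limit))

def multi_process(limit, workers=4):
    # Split the range into chunks and count them in parallel worker processes.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # cover any remainder
    with Pool(workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(single_threaded(100_000))
    print(multi_process(100_000))
```

Neither version is wrong, but comparing a parallel submission in one language against a single-threaded one in another says little about the languages themselves.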

[–] atzanteol@sh.itjust.works 2 points 3 days ago (1 children)

They provide the specific implementations used here: https://github.com/greensoftwarelab/Energy-Languages

I dislike the "I thought of something that may be an issue therefore just dismiss all of the work without thinking" approach.

[–] FizzyOrange@programming.dev 1 points 2 days ago (1 children)

I agree, but if you take away the hard numbers from this (which you should), all you're left with is what we already knew from experience: fast languages are more energy efficient; C, Rust, Go, Java, etc. are fast, while Python, Ruby, etc. are super slow.

It doesn't add anything at all.

[–] atzanteol@sh.itjust.works 1 points 2 days ago* (last edited 2 days ago)

Well... No. You're reading the title. Read the document.

"We all know" is the gateway to ignorance. You need to test common knowledge to see if it's really true. Just assuming it is isn't knowledge, it's guessing.

Second, it's not always true:

for the fasta benchmark, Fortran is the second most energy efficient language, but falls off 6 positions down if ordered by execution time.

Third, they also tested memory usage to see whether it was a factor in energy usage.