I had not seen Elevate Your Rust Code: The Art of Separating Actions and Calculations in my feed before. It was a good read.
It seems to be a lot of work but could also be a good idea.
Something I would also like is a statement on the Rust blog saying that lemmy instance X is the main Rust lemmy instance and that discussion should mostly happen here, so that the migration path is clear for reddit users.
That's a very nicely written article.
Just a quick question: isn't point 8 (the misconception “the Rust borrow checker does advanced lifetime analysis”) outdated due to the introduction of NLL (non-lexical lifetimes) in Rust 2018?
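For concreteness, here is a toy example of my own (not from the article) showing what NLL changed: the immutable borrow now ends at its last use instead of at the end of the scope.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];             // immutable borrow of `v`
    println!("first = {}", first); // last use of the borrow: NLL ends it here
    v.push(4);                     // rejected by the old lexical borrow checker,
                                   // accepted since NLL (stabilized with Rust 2018)
    println!("{:?}", v);
}
```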
Now that std::unique_ptr and std::variant have been introduced in C++, it is possible to use the default destructor, copy and move constructors…
That article would have been useful 15 years ago, but not anymore.
That being said, if you access the database directly from the GUI code, there is a high chance that you will repeat yourself, making the whole program bigger.
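A toy sketch of what I mean (all names made up): keep the query behind one small data-access type and let every screen call it, instead of rebuilding it in each GUI handler.

```rust
// Hypothetical sketch: the query lives in one place and the GUI only calls it.
struct User {
    id: u64,
    name: String,
}

struct UserRepo; // in a real program this would hold a connection or pool

impl UserRepo {
    // The single place that knows how to fetch a user.
    fn user_by_id(&self, id: u64) -> Option<User> {
        // placeholder for the real `SELECT … FROM users WHERE id = ?`
        Some(User { id, name: format!("user-{}", id) })
    }
}

// Two different screens reuse the same access path instead of
// each re-implementing the query.
fn profile_screen(repo: &UserRepo, id: u64) -> String {
    repo.user_by_id(id).map(|u| u.name).unwrap_or_default()
}

fn admin_screen(repo: &UserRepo, id: u64) -> String {
    repo.user_by_id(id)
        .map(|u| format!("#{} {}", u.id, u.name))
        .unwrap_or_default()
}

fn main() {
    let repo = UserRepo;
    println!("{}", profile_screen(&repo, 1));
    println!("{}", admin_screen(&repo, 1));
}
```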
On reddit, someone was always linking to the donation page for antonio. I liked that tradition and hope that someone will continue it on lemmy.
The devil really is in the details. One year ago, progress was moving so fast that I had the feeling it was close to being finished, but it seems that it's much more complex than I thought. Nonetheless, congratulations to all the contributors!
As far as I know, adding support for restrict didn't trigger any bugs in GCC.
That's very impressive for gcc. IIRC adding restrict to LLVM triggered major bugs and miscompilations, at least for the first two attempts. As they said, they need to do a crater run to be sure, but even passing the initial smoke test is an achievement for gcc.
However, I'm surprised the code is “only” 3% faster with the restrict annotations. IIRC the speed-ups were about 5% for LLVM, so maybe there is still some performance to gain on the gcc side?
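To make this concrete (my own toy example, assuming the thread is about the non-aliasing information rustc emits for references), this is the kind of loop the restrict/noalias metadata helps with:

```rust
// `dst: &mut [f32]` and `src: &[f32]` cannot alias, and rustc can pass that
// fact to the backend (LLVM `noalias`, or restrict-style information on the
// gcc side). Without it, the compiler would have to assume that writing
// `dst[i]` might change `src[i + 1]` and reload it on every iteration.
pub fn saxpy(dst: &mut [f32], src: &[f32], a: f32) {
    let n = dst.len().min(src.len());
    for i in 0..n {
        dst[i] += a * src[i];
    }
}

fn main() {
    let mut y = vec![1.0_f32; 8];
    let x = vec![2.0_f32; 8];
    saxpy(&mut y, &x, 0.5);
    println!("{:?}", y);
}
```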
I think you don't understand what @CasualTee said. Of course dynamic linking works, but only when properly used, and in practice dynamic linking is a few orders of magnitude more complex to use than static linking. Of course you still have ABI issues when you statically link pre-compiled libraries, but in a statically linked workflow you are usually building the library yourself, which removes all ABI issues. And of course, if a library uses a global and you statically link it twice (with two different versions) you will have an issue, but at least you can easily check that a single version is linked.
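To illustrate the global issue I mean, here is a small self-contained sketch where the two modules stand in for two statically linked copies (or versions) of the same library: each copy gets its own global, so state that should be shared ends up duplicated.

```rust
use std::sync::atomic::Ordering;

// Sketch: pretend `lib_v1` and `lib_v2` are two versions of the same library,
// both statically linked into one binary. Each copy carries its own global,
// so state that "should" be shared is silently duplicated.
mod lib_v1 {
    use std::sync::atomic::{AtomicU64, Ordering};
    pub static INIT_COUNT: AtomicU64 = AtomicU64::new(0);
    pub fn init() {
        INIT_COUNT.fetch_add(1, Ordering::SeqCst);
    }
}

mod lib_v2 {
    use std::sync::atomic::{AtomicU64, Ordering};
    pub static INIT_COUNT: AtomicU64 = AtomicU64::new(0);
    pub fn init() {
        INIT_COUNT.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    lib_v1::init();
    lib_v2::init();
    // Each copy only sees its own counter: both print 1, never 2.
    println!("v1 sees {}", lib_v1::INIT_COUNT.load(Ordering::SeqCst));
    println!("v2 sees {}", lib_v2::INIT_COUNT.load(Ordering::SeqCst));
}
```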
If it were solved, “DLL hell” wouldn't be a common expression and Docker would never have been invented.
@CasualTree was talking specifically about UB related to dynamic linking, which simply does not exist when statically linking.
Yes, dynamic linking works in theory, but in practice it's hell to make it work properly. And what advantage does it have compared to static linking?
To sum up, are all the complications specifically introduced by dynamic linking, compared to static linking, worth it for a non-guaranteed gain in RAM, a change in the tooling of Linux maintainers, and extra download time?