BREAKING NEWS: Girl gets home safely after night out. More at 11.
Yes. They can. But they do not mix well with required checks. From GitHub's own documentation:
If a workflow is skipped due to path filtering, branch filtering or a commit message, then checks associated with that workflow will remain in a "Pending" state. A pull request that requires those checks to be successful will be blocked from merging.
If, however, a job within a workflow is skipped due to a conditional, it will report its status as "Success". For more information, see Using conditions to control job execution.
So even with GitHub Actions you cannot mix a required check with path/branch filtering (or any other filtering) on a workflow, as the checks will stay pending forever and you will never be able to merge the branch in. You can do one or the other, but not both at once, and for larger, complex projects you tend to want both. Instead you need complex workflows, or workflows that always start and do internal checks to detect whether they actually need to run or not. And this is with GitHub Actions - it is worse for external CI/CD tooling.
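To make that last workaround concrete, here is a rough sketch (in Python, since it is CI-agnostic) of the "always start, then decide internally" pattern. The script name, the origin/main base ref, the path prefixes and the use of pytest are all assumptions for illustration, not anything GH prescribes:

```python
#!/usr/bin/env python3
# ci_gate.py - hypothetical first step of a job that always starts.
# Instead of filtering the workflow itself (which leaves a required check
# stuck in "Pending"), the job always runs, looks at what changed, and
# exits successfully without doing any real work when nothing relevant
# changed. Base ref, path prefixes and test runner are assumptions.
import subprocess
import sys

BASE_REF = "origin/main"                 # assumed merge target
RELEVANT_PREFIXES = ("src/", "tests/")   # paths that should trigger the real tests

def changed_files(base_ref: str) -> list[str]:
    """List files changed between the base branch and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    changes = changed_files(BASE_REF)
    if not any(path.startswith(RELEVANT_PREFIXES) for path in changes):
        print("No relevant changes - skipping tests but reporting success.")
        return 0  # the check goes green, the PR is not blocked
    print(f"{len(changes)} changed file(s) touch relevant paths - running tests.")
    return subprocess.run([sys.executable, "-m", "pytest"]).returncode

if __name__ == "__main__":
    sys.exit(main())
```

Because the job always starts and always reports a status, the required check never sits in "Pending" - it either runs the tests or goes green immediately.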
I don't think any game has the same support Doom has. Doom has become a benchmark of sorts, so it gets ported to the strangest of places - not normally places you would bother to port any game to. It is done for the challenge of the port rather than for any practical reason.
There are tonnes of games that could run in the same places as Doom, and many could run in far more places. But Doom is complex enough to be an interesting challenge while being simple enough to run on very limited hardware. And it has been open-sourced while being a classic icon, which makes it an attractive benchmark for getting to run in the weirdest of places.
If you have folderA and folderB, each with their own set of tests, you don't need folderA's tests to run with a change to folderB. Most CI/CD systems can do this easily enough with two different reports. But you cannot mark them both as required, as they both won't always run. Instead you need complicated fan-out pipelines in your CI/CD system so that only one report goes back to GH, or you need to always spawn a job for both folders and have the ones that don't need to run return success. Neither of these is very good, and it becomes very complex when you are working with large monorepos.
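For what it is worth, the "only one report back to GH" variant tends to end up as a single aggregator job along these lines - again just a sketch, with the folder-to-test-suite mapping, the base ref and pytest all made up for illustration:

```python
#!/usr/bin/env python3
# run_relevant_suites.py - hypothetical aggregator job that is the only
# required check on the PR. It decides internally which folders' tests to
# run and reports one combined pass/fail status, so GH never waits on a
# check that was never going to start. Mapping and base ref are assumptions.
import subprocess
import sys

BASE_REF = "origin/main"  # assumed merge target
SUITES = {
    "folderA/": [sys.executable, "-m", "pytest", "folderA/tests"],
    "folderB/": [sys.executable, "-m", "pytest", "folderB/tests"],
}

def changed_files(base_ref: str) -> list[str]:
    """List files changed between the base branch and HEAD."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    changes = changed_files(BASE_REF)
    failed = []
    for prefix, command in SUITES.items():
        if not any(path.startswith(prefix) for path in changes):
            print(f"{prefix}: no changes, counting as passed.")
            continue
        print(f"{prefix}: changes detected, running its tests.")
        if subprocess.run(command).returncode != 0:
            failed.append(prefix)
    if failed:
        print("Failed suites:", ", ".join(failed))
        return 1  # the single required check goes red
    return 0      # everything that needed to run passed

if __name__ == "__main__":
    sys.exit(main())
```

You mark only this aggregator as required; it skips suites with no relevant changes and still returns a single pass/fail. That is exactly the kind of plumbing you end up maintaining yourself because GH cannot be told which checks matter for a given PR.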
It would be much better if the CI/CD system, which knows which pipelines it needs to run for a given PR, could tell GH which tests are required for that particular PR, and if you could configure GH to wait for that report from the CI/CD system. Or, at the very least, if auto-merge was blocked by any failed check while the manual merge button was only blocked on required checks.
One problem is GH's "merge when ready" auto-merge button. It will merge while there are still tests running, unless they are required. It would be much better if auto-merge took all checks into account, not just the required ones.
Yeah, there are ways to run partial tests on modified code only, but they interact poorly with GH required checks. https://github.com/orgs/community/discussions/44490 goes into a lot more detail on similar problems people are having with GH Actions - though our problem is with external CI/CD tools that report back to GH. It does look like they have updated the docs linked in that discussion, so maybe something has recently changed with GH Actions - but I bet the problem still exists for external tooling.
Even if true, SIMD is doing the heavy lifting here. Probably followed by the fact that almost any rewrite of a code base will result in performance improvements, simply from being more familiar with the domain and where the bottlenecks are. I would be surprised if the assembly was responsible for more than about 1% of the gains here. So why highlight the fact that assembly was used? It is just misleading. If you want to show how ASM is so much better you need a much better example than this. For all we know the use of ASM could have made things slower and harder to develop. There are just no details at all as to why ASM is beneficial here, except that the author seems to love it.
We have a few non-required checks here and there - mostly because you need an admin to mark a check as required, and that can be annoying to arrange. And we still get code merged in occasionally that fails those checks. Hell, I have merged in code that fails the checks. Sometimes checks take a while to run, and there is this nice "merge when ready" button in GH. But it will gladly merge your code in once all the required checks have passed, ignoring any non-required checks.
And it is such a useful button to have, especially in a large codebase with lots of developers - just merge in the code when it is ready, rather than forgetting about it for a few hours and possibly having to rebase and run all the checks again because of some minor merge conflict...
But GH required checks are just broken for large codebases as well. We don't always want to run every check on every code change. We don't need to run all unit tests when only documentation has changed. But required checks are all or nothing. They need to report something or else you cannot merge at all (though this might apply more to external checks than to GH Actions). I really wish there were a "require all checks that run to pass" option and an "at least one check must run" option. Or that external checks could tell GH when they are required or not. Either way, there is a lot of room for improvement in GH PR checks.
This is such a misleading claim. You cannot get a 100x speedup with handwritten assembly unless the code before was utter shit to start with. Assembly is not the cause of the 100x speedup here; the fact that it was written to use SIMD instructions is the bigger reason. You don't need asm to make use of SIMD instructions.
has developed a new type of nuclear battery called a perovskite betavoltaic cell (PBC), which could power small devices for decades without the need for recharging
The title lacks that key bit of information: small devices. You are still going to need to charge your phone. And small devices can already last for years on conventional batteries. So the win here is minor at best, if it ever actually sees the light of day.
Also worth pointing out:
There are other issues too. All of Law and Blair’s tests were done with one kind of 3D printer—a Prusa MK4S. There’s hundreds of different devices on the market that all behave differently. Law also pointed out that brass nozzles warp over time themselves and may produce different results after hundreds of prints and that different nozzles made from different materials may work very differently. Law would also want an examiner rate study—a formal scientific inquiry into false positives and examiner bias.
Go to work, work, go home. Wait for the end of the week/month for the next paycheck.
Most people already have more than a dollar and more than 24 hours to earn money. If schemes like that really worked, then why would so many people do anything else?