I'm confused why people are voting down an article about AI in an AI community, especially one discussing small language models, which consume far less energy and are better for the environment.
I stumbled upon this article after reviewing a pull request in which someone was unit testing an abstract base class. I'm of the opinion that base classes should not be tested. We don't want to be testing the architecture of an application; we want to be testing the behaviour. The author sums this up nicely with this point:
For tests, though, it shouldn’t matter whether the classes under test share the domain logic or duplicate it. Tests should view all production code as a black box, and approach verifying it with a blank slate. Otherwise, such tests will start coupling to the code’s implementation details.
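To make that concrete, here's a minimal sketch (the `Shape` hierarchy is hypothetical, not the code from the pull request in question): the tests exercise each concrete class through its public behaviour, so whether the logic lives in the base class or is duplicated is invisible to them.

```python
import math
from abc import ABC, abstractmethod


class Shape(ABC):
    """Abstract base class: no tests target this directly."""

    @abstractmethod
    def area(self) -> float: ...


class Circle(Shape):
    def __init__(self, radius: float) -> None:
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2


class Square(Shape):
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side ** 2


# Behavioural tests: each concrete class is treated as a black box.
# Whether Circle and Square share code via Shape is an implementation
# detail the tests never see.
def test_circle_area() -> None:
    assert math.isclose(Circle(2.0).area(), math.pi * 4.0)


def test_square_area() -> None:
    assert Square(3.0).area() == 9.0
```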
I am wondering about this too. The article, ActivityPub on a (mostly) static website, goes into detail about what is involved.
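For anyone curious what's involved: one of the first pieces of ActivityPub discovery is WebFinger, which a static site can handle by serving a fixed file at /.well-known/webfinger. A minimal sketch of such a document, assuming a single account (the name and URLs are hypothetical):

```json
{
  "subject": "acct:alice@example.com",
  "links": [
    {
      "rel": "self",
      "type": "application/activity+json",
      "href": "https://example.com/actor.json"
    }
  ]
}
```

A static file can't vary with the ?resource= query parameter the way a real WebFinger endpoint would, which hints at why the setup is only "mostly" static.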
I also make use of ‘⚠’ to mark significant/blocking comments and bullet points. Other labels, similar to conventional comment prefixes like “thought:” or “note:”, can indicate the priority and significance of other comments.
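For example, in the conventional comments format (a label, optional decorations in parentheses, then the comment itself), that might look like:

```
⚠ issue (blocking): the endpoint returns the raw exception message to the client
suggestion (non-blocking): this mapping could be extracted into a helper
thought: we may want to cache this lookup if the list grows
note: this mirrors the pattern used in the billing module
```

The ⚠ prefix here is my own addition for visual scanning; the labels and decorations come from the conventional comments spec.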
Thank you for introducing me to conventional comments! I hadn't heard of them before, and I can see how they'd be really useful, particularly in a neurodiverse team.
The issue was that they changed their server URL and added www, so I've updated the link accordingly.
How strange. It was definitely working when I shared it.
Vulkan?
I'm not too familiar with either, but this article goes into more detail: A Comparison of Modern Graphics APIs
Ollama uses the Metal API on Apple Silicon Macs for GPU acceleration.
How does one measure code quality? I'm a big advocate of linting, and have used rules such as cyclomatic complexity limits, but is that, or tools such as SonarQube, an effective measure of quality? You can write code that passes those checks, but what if it doesn't address the acceptance criteria - is it still quality code then?
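To make that concern concrete, here's a sketch with a hypothetical function. Cyclomatic complexity counts independent paths through the code, so each branch adds one; a linter such as flake8 (via its bundled mccabe plugin and the --max-complexity option) would flag the first version and accept the second, yet neither tells us whether the prices match the acceptance criteria:

```python
def shipping_cost(order):
    # Cyclomatic complexity 5: one, plus one per branch point below.
    # `flake8 --max-complexity=4` would flag this function.
    if order.total > 100:
        if order.is_priority:
            return 0
        return 5
    elif order.is_priority:
        return 10
    elif order.weight > 20:
        return 25
    return 15


def shipping_cost_v2(order):
    # Data-driven rewrite with far lower complexity. The linter is
    # satisfied, but it cannot tell us whether these prices are the
    # right ones - that still needs behavioural tests.
    rules = [
        (lambda o: o.total > 100 and o.is_priority, 0),
        (lambda o: o.total > 100, 5),
        (lambda o: o.is_priority, 10),
        (lambda o: o.weight > 20, 25),
    ]
    for matches, cost in rules:
        if matches(order):
            return cost
    return 15
```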
What I took from the article is an example of how generative AI can fix a bug if you provide it with a reproducing case. Funnily enough, though, the AI had introduced the bug in the first place by using an older version of a dependency.
I know what you mean. Quite often, when I've worked on a project with a pull request template, people don't bother to fill it out. In an ideal world, though, people would be proud of the work they've delivered and take the time to describe their changes when raising a pull request.
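A template at least lowers the effort of doing that well. A sketch of the sort of thing I have in mind (on GitHub it would live at .github/PULL_REQUEST_TEMPLATE.md; the headings are just an example):

```markdown
## What does this change?

<!-- A sentence or two on the behaviour added or fixed. -->

## Why?

<!-- Link the issue or acceptance criteria this addresses. -->

## How was it tested?

<!-- Unit tests, manual steps, screenshots, etc. -->
```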