This is a great suggestion because it focuses directly on tracking the outcome (did the software work?) and it gives a fair chance to the folks who don’t want to test - maybe their code really is perfect!
Another similar metric I would add is the number of rollbacks of newly released code, if the CD system supports rollbacks via a method like canary or blue-green rollouts.
Focusing on code coverage (which doesn't distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.
From my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”
Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them. And as a developer I hate feeling “forced” and prefer if at all possible to use consensus to decide on team practices.
One aspect that does work is framing the need for tests as assurance that specific invariants are verified and preserved
Agreed - this is the specific aspect which I hoped would be communicated by studying TDD a bit!
The team is afraid that making changes will be more difficult when tests exist, but TDD (or maybe a more specific concept like you mentioned) demonstrates that tests make future changes easier.
And I specifically advocated not to follow “write tests first”.
Name-dropping concepts actually contributes to losing credibility for any code quality effort, and works against you.
OK. If I were having an in-depth discussion with my team of fellow developers to convince them to start writing tests, I don’t think that’s name-dropping.
We can’t test yet, we’re going to make changes soon
This could be a good opportunity to introduce the concept of test-driven development (TDD), without insisting on “write tests first”. I think it can help illustrate why having tests is better when you are expecting to make changes, because of the safety they provide.
“When we make those changes, wouldn’t it be great to have more confidence that the business logic didn’t break when adding a new technical capability?”
You shouldn’t have to refactor to test something
This seems like a reasonable statement and I sort of agree, in the sense that for existing production code, a change which only adds new tests yet also requires refactoring existing functionality might feel a bit risky. As other commenters mentioned, starting with writing tests for new features or fixes might help prevent folks from feeling like they are refactoring just to test. Instead, they’re refactoring and developing for the feature, and the tests feel like they contribute to that feature as well.
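To make that framing concrete, here is a minimal sketch of what a feature-level or regression test might look like (pytest; Invoice and calculate_invoice_total are hypothetical stand-ins, not anything from the discussion above):

```python
# Minimal sketch (pytest). Invoice and calculate_invoice_total are hypothetical
# stand-ins for whichever component the team wants assurances about.
from dataclasses import dataclass, field


@dataclass
class Invoice:
    items: list[float] = field(default_factory=list)
    discount: float = 0.0


def calculate_invoice_total(invoice: Invoice) -> float:
    # Business rule under test: discounts can never push a total below zero.
    return max(sum(invoice.items) - invoice.discount, 0.0)


def test_total_is_never_negative():
    # An invariant we want verified now and preserved across future changes.
    assert calculate_invoice_total(Invoice(items=[10.0, 5.0], discount=100.0)) >= 0.0


def test_empty_invoice_returns_zero():
    # Regression guard for a bug that bit us once: empty input must not fail.
    assert calculate_invoice_total(Invoice()) == 0.0
```

Tests written this way read as statements about the feature (“totals never go negative”) rather than as refactoring overhead, which is the framing that tends to land with reluctant teams.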
those tools eventually became what Airflow and other orchestration tools are: defining DAGs and running scripts
Definitely. It is much more pleasant to work with better tools for the same functionality.
Airflow got a lot of things right. For example, in Luigi a runnable “task” is a Python class that gets implicitly executed, whereas in Airflow tasks are made from functions that get called in a more straightforward, imperative manner. This makes DAGs much easier to read and write in Airflow.
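For illustration, a rough side-by-side sketch of the two styles (task and DAG names are made up; the Airflow part uses the 2.x TaskFlow API, and exact arguments may differ between versions):

```python
# Luigi: a runnable "task" is a class; the scheduler decides when run() executes,
# and the DAG is declared indirectly through requires()/output().
import luigi


class CleanData(luigi.Task):
    def output(self):
        return luigi.LocalTarget("clean.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("cleaned\n")


# Airflow (2.x TaskFlow API): tasks are plain functions, and the DAG is built by
# calling them in order, which reads like ordinary imperative code.
import pendulum
from airflow.decorators import dag, task


@dag(schedule=None, start_date=pendulum.datetime(2024, 1, 1), catchup=False)
def clean_pipeline():
    @task
    def extract():
        return ["raw row"]

    @task
    def clean(rows):
        return [r.upper() for r in rows]

    clean(extract())


clean_pipeline()
```

The Luigi class would be kicked off with something like luigi.build([CleanData()], local_scheduler=True), while the Airflow DAG is picked up by the scheduler once the file lands in the dags folder.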
That seems like a good idea, since it removes ambiguity about what the necessary skills are.
When joining a new company, I once asked a wise colleague “are you a data engineer or a backend engineer?”. They replied “I’m a software engineer”, and ever since I have given the same answer, for reasons similar to those in your post.
I have also seen “data engineer” used at Facebook to indicate someone who writes SQL but not other programming languages, another potential reason not to use it as a job title IMO.
Good point and an interesting take. As you said this could be a good signal, when taken in context with the other limited information we get as employment candidates, about internal development practices.
It’s probably not going to work as a defense against training LLMs (unless everyone does it?) but it also doesn’t have to — it’s an interesting thought experiment which can aid in understanding of this technology from an outside perspective.
Love to see such well-regulated output, thanks for recording and sharing.
Related: TS10 aluminum silver, red, and green hosts are being discontinued according to Wurkkos, but not black, so it might be a good time to stock up.
Aside: linking to Lemmy posts with federation doesn’t seem to be supported properly at the moment, unlike community links, which keep you on your own instance; e.g. !flashlight@lemmy.world doesn’t transport me to a different Lemmy instance when using this account from programming.dev.