this post was submitted on 14 Aug 2023
37 points (100.0% liked)

Experienced Devs

I'm like a test unitarian. Unit tests? Great. Integration tests? Awesome. End to end tests? If you're into that kind of thing, go for it. Coverage of lines of code doesn't matter. Coverage of critical business functions does. I think TDD can be a cult, but writing software that way for a little bit is a good training exercise.

I'm a senior engineer at a small startup. We need to move fast, ship new stuff quickly, and keep things moving. We've got CI/CD running mocked unit tests, integration tests, and end-to-end tests, with patterns and tooling for each.

I have support from the CTO for getting more testing in, I'm able to use tests to cover bugs and regressions, and there's solid testing on a few critical user-path features. However, I get resistance from the team when it comes to adding enough testing to prevent regressions going forward.

The resistance is usually along lines like:

  • You shouldn't have to refactor to test something (see the sketch at the end of this post for the kind of refactor in question)
  • We shouldn't use mocks, only integration testing works.
    • Repeat for test types N and M
  • We can't test yet, we're going to make changes soon.

How can I convince the team that the tools available to them will help, will improve their productivity, and will cut down the time they spend firefighting?
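
To be concrete about the first two points, here's roughly the kind of refactor I'm asking for. Hypothetical names, not our actual code, using plain JUnit 5 and a hand-rolled fake rather than a mocking framework:

```java
import java.util.List;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Hypothetical example: extracting the payment gateway behind an interface
// lets a unit test inject a fake instead of the real HTTP client.
interface PaymentGateway {
    boolean charge(String customerId, long amountCents);
}

class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway; // injected, so tests can swap it out
    }

    boolean checkout(String customerId, List<Long> itemCents) {
        long total = itemCents.stream().mapToLong(Long::longValue).sum();
        return gateway.charge(customerId, total);
    }
}

class CheckoutServiceTest {
    @Test
    void checkoutChargesTheSummedTotal() {
        long[] charged = new long[1];
        // Hand-rolled fake: records the amount instead of calling a real service.
        PaymentGateway fake = (customerId, amountCents) -> {
            charged[0] = amountCents;
            return true;
        };

        boolean ok = new CheckoutService(fake).checkout("c-1", List.of(300L, 700L));

        Assertions.assertTrue(ok);
        Assertions.assertEquals(1000L, charged[0]);
    }
}
```

The production change is just constructor injection; the test itself needs no mocking framework at all.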

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

Mandatory test coverage used to be a thing, but many teams later found that if you're forced to hit something like 80% coverage, you end up testing the framework, or even writing getter and setter tests (at least in the Java world). Tests written only for the sake of tests tend to be pointless.
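
To make that concrete, this is the sort of test such mandates tend to produce. A made-up `User` POJO with plain JUnit 5, bumping the coverage number without checking anything interesting:

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Hypothetical POJO standing in for any Java bean.
class User {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

class UserTest {
    // Executes lines (so coverage goes up) but only re-checks
    // what the compiler already guarantees.
    @Test
    void getterReturnsWhatSetterSet() {
        User user = new User();
        user.setName("Alice");
        Assertions.assertEquals("Alice", user.getName());
    }
}
```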

That said, testing as a skill and a habit is also necessary to some extent. Say you set a mandatory 20% coverage target (for new code). That shouldn't lead to pointless tests (as long as the code isn't just configuration, a handful of quality tests will clear that bar easily), but if you have people on your team who don't write tests at all, the requirement will get them used to writing them.

Once we no longer have to spend energy just remembering to do something (in this case, to write tests), we can spend that energy on doing it better or faster.

Once everyone on the team is used to writing tests, that's the time to start a conversation about the quality of those tests: idioms, preferred patterns, readability.

PS At the beginning of my career, the most important part of automated testing was writing unit tests. Sometimes people also wrote a few integration tests. E2E tests? What are those? …Only now do I find myself swinging more and more to the opposite view: E2E tests are the most important ones. Even in a legacy codebase you may not find a single unit test, yet a few E2E tests cover a lot of ground. You'll hate a codebase without unit tests just as much whether or not it has E2E tests, but if you're new to the project, those E2E tests at least have a chance of stopping you from merging code to master that breaks core functionality. If the time spent writing them had gone into unit tests instead, you wouldn't have that assurance. It doesn't matter that some shared technical component has 100% test coverage if it's only a tiny part of the core functionality.
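
As a sketch of what I mean by "a few E2E tests covering core functionality": a made-up login flow driven over HTTP with nothing but the JDK's HttpClient and JUnit. The endpoint, payload, and APP_BASE_URL variable are all hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Hypothetical E2E smoke test: drives the deployed app over HTTP and fails
// the pipeline if the core login flow breaks, no matter which internal
// class caused the breakage.
class LoginFlowE2eTest {
    @Test
    void userCanLogIn() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(System.getenv("APP_BASE_URL") + "/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"user\":\"demo\",\"password\":\"demo\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // One assertion on the core flow guards the whole stack.
        Assertions.assertEquals(200, response.statusCode());
    }
}
```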

PPS If you do decide to make X% code coverage mandatory, take a look at mutation testing frameworks and tools:

Mutation testing (or mutation analysis or program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. Each mutated version is called a mutant and tests detect and reject mutants by causing the behaviour of the original version to differ from the mutant. -- Wikipedia
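
To illustrate what a "mutant" looks like in practice, here's a hand-written sketch (tools like PIT generate and run mutants automatically for Java):

```java
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Code under test. A mutation tool might flip `isPremium` to `!isPremium`
// or turn `* 90 / 100` into `* 90 * 100`; each such change is a "mutant".
class Discounts {
    static long discountedCents(long priceCents, boolean isPremium) {
        return isPremium ? priceCents * 90 / 100 : priceCents;
    }
}

class DiscountsTest {
    // Kills the condition-flip mutant: with the mutant applied,
    // one of these assertions fails, so the mutant is detected.
    @Test
    void premiumUsersGetTenPercentOff() {
        Assertions.assertEquals(900, Discounts.discountedCents(1000, true));
        Assertions.assertEquals(1000, Discounts.discountedCents(1000, false));
    }

    // Reaches full line coverage of discountedCents but kills no mutants,
    // which is exactly the kind of hollow test mutation testing exposes.
    @Test
    void coverageWithoutAssertions() {
        Discounts.discountedCents(1000, true);
        Discounts.discountedCents(1000, false);
    }
}
```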

[–] [email protected] 2 points 1 year ago

We use a little bit of property testing to test invariants with fuzzed data. Mutation testing seems like a neat inverse.
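
Something along these lines, for a sense of it: a toy invariant with jqwik (our real properties are domain-specific, this one is just illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

// Toy property test: jqwik generates many random lists and checks that
// the invariant "reversing twice gives back the original" holds for each.
class ReverseProperties {
    @Property
    boolean reversingTwiceIsIdentity(@ForAll List<Integer> original) {
        List<Integer> copy = new ArrayList<>(original);
        Collections.reverse(copy);
        Collections.reverse(copy);
        return copy.equals(original);
    }
}
```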