this post was submitted on 09 Jul 2023
83 points (97.7% liked)

Programming


Are there formal ways of quantifying potential flaws, or risk, and ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
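One concrete instance of the "complexity measure" idea is McCabe's cyclomatic complexity, which is roughly one plus the number of branch points in a piece of code. A minimal sketch of the idea using Python's `ast` module (a simplified illustration, not a production metric; real tools handle more node types):

```python
import ast

# Node types treated as decision points in this simplified view of
# McCabe's cyclomatic complexity: 1 + number of branch points.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity of a snippet of Python source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))
```

For example, straight-line code scores 1, and each `if`, loop, or exception handler adds one; higher-scoring functions arguably deserve proportionally more test cases.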

raspberry_confetti 22 points 1 year ago

80%. Much beyond that and you get diminishing returns on the investment of writing the tests.

[email protected] 11 points 1 year ago (last edited 1 year ago)

I think this is a good rule of thumb in general. But I think the best way to decide on the correct coverage is to go through the uncovered code and make a conscious decision about it. In some classes it may be OK to have 30%; in others you want to go all the way up to 100%. That's why I'm against having a coverage percentage as a build/deployment gate.
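For reference, the kind of blanket gate being argued against here is typically a one-line config setting. The thread doesn't name a tool, but with Python's coverage.py it might look like this hypothetical fragment:

```ini
# .coveragerc -- hypothetical blanket coverage gate
[report]
fail_under = 80
# Deliberately untestable lines can be excluded explicitly, which
# keeps the number honest instead of padding it with trivial tests:
exclude_lines =
    pragma: no cover
    if TYPE_CHECKING:
```

Note the gate is global: it can't express "30% is fine for this module, 100% is required for that one", which is the per-class judgment the comment is advocating.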

raspberry_confetti 6 points 1 year ago

Bingo, exactly this. I said 80 because that's typically what I see our projects get to after writing actually useful tests. But if your coverage is 80% and it's all just tests verifying that a constant is still set to whatever value, then yeah, that's a useless metric.
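As an illustration of that distinction (a hypothetical sketch, not from the thread): both tests below raise the coverage number, but only the second verifies any behaviour.

```python
# Hypothetical module under test.
DISCOUNT_PCT = 20  # percent

def discounted_cents(price_cents: int) -> int:
    """Apply the flat discount; prices kept in integer cents."""
    return price_cents * (100 - DISCOUNT_PCT) // 100

# Coverage-padding test: pins a constant, checks no behaviour at all.
def test_discount_constant():
    assert DISCOUNT_PCT == 20

# Useful test: exercises the actual computation, including an edge case.
def test_discounted_cents():
    assert discounted_cents(10_000) == 8_000
    assert discounted_cents(0) == 0
```

A line-coverage report can't tell these apart, which is why the raw percentage needs a human look at what the tests actually assert.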
