Is there some formal way(s) of quantifying potential flaws, or risk, and ensuring there's sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

[–] [email protected] 3 points 1 year ago

On top of mutation testing, or better, in combination with it, some amount of property-based testing is always great where it counts.
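
For instance, with a property-based testing library such as jqwik, a single property gets checked against lots of generated inputs instead of a handful of hand-picked ones. A minimal sketch (the `MyCodec` class is just a placeholder for whatever encode/decode pair you're actually exercising):

```java
import net.jqwik.api.ForAll;
import net.jqwik.api.Property;

class CodecProperties {

    // Invariant: decoding whatever we encoded must give back the original input.
    // jqwik generates many arbitrary strings, including edge cases like the
    // empty string and unusual Unicode.
    @Property
    boolean decodeInvertsEncode(@ForAll String input) {
        return MyCodec.decode(MyCodec.encode(input)).equals(input);
    }
}
```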

[–] [email protected] 2 points 1 year ago

Additionally, I like round-trip tests.

For example, we have two data formats and support conversion between them.

So I have tests that convert from A to B and back to A, and then call assertEquals on the original and the round-tripped value.

It's a very cheap test that covers the whole conversion path in one go.
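
Sketched with JUnit 5, assuming the names `FormatA`, `FormatB` and `Converter` as placeholders for the real conversion code:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RoundTripTest {

    @Test
    void convertingToBAndBackYieldsTheOriginal() {
        // FormatA/FormatB/Converter are stand-ins for the real types.
        FormatA original = new FormatA("some representative payload");

        FormatB converted = Converter.toB(original);
        FormatA roundTripped = Converter.toA(converted);

        // Relies on FormatA having a meaningful equals() implementation.
        assertEquals(original, roundTripped);
    }
}
```

The one thing to watch is that a round trip can pass even if both directions drop the same information, so it's worth pairing with a few direct A-to-B assertions.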