
My initial reaction was to downvote you for your last sentence - I think OP has obviously worked on medium-large systems, and is just strongly opinionated.

However, I agree with your first sentence.

> If fixing a bug breaks a functional test, then either the test was correct and you changed your expected system behavior and the test saved your bacon, or the test was a change detector test not a functional test.

And I think this is _hugely_ valuable for medium-to-large systems, which typically have many, many layers of abstraction.

I used to work on one such system that had a lot of what this author would call functional tests. Sometimes, changing a couple of lines in one of the core modules would cause 50 tests to fail. Even though that looks like a "change detector", it's still really valuable: of those 50 changes in behavior, maybe 20 are unexpected but acceptable, 5 are actually intended, and 25 are genuine regressions in ideal behavior. When that happens, the layer you made the change at is just wrong, or you didn't put sufficient nuance into your change.
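To make the distinction concrete, here's a minimal sketch in Python. The function and both tests are invented for illustration, not taken from the system I worked on:

    # A "change detector" pins the implementation; a functional test
    # pins the behavior callers actually rely on.
    from unittest import mock

    def lookup_rate(coupon):
        return {"SAVE10": 0.10}.get(coupon, 0.0)

    def apply_discount(order_total, coupon):
        return round(order_total * (1 - lookup_rate(coupon)), 2)

    # Change detector: fails if apply_discount stops calling lookup_rate,
    # even when the total it returns is still correct.
    def test_apply_discount_change_detector():
        with mock.patch(__name__ + ".lookup_rate", return_value=0.10) as m:
            apply_discount(100.0, "SAVE10")
            m.assert_called_once_with("SAVE10")

    # Functional: fails only if the observable result changes.
    def test_apply_discount_functional():
        assert apply_discount(100.0, "SAVE10") == 90.0
        assert apply_discount(100.0, "BOGUS") == 100.0

When a core change breaks the first kind of test you learn nothing about users; when it breaks the second kind, you get exactly the "saved your bacon" case from the quote.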

It was a big time sink for sure, and it made many new developers angry, but honestly the payoff was that the overall product had fewer regressions.

I'm somewhat unsure what I would replace these "functional tests" with.



