The best ROI I get when writing tests is from thorough unit testing of the most fundamental, most heavily used low-level components. That's because components like "StreamReader" or "Lock" or "Response" tend not to change much over time.
In contrast, higher-level tests often break because the high-level components they test change frequently in response to business needs.
I don't get this meme that unit tests are "brittle". Are the interfaces to your low-level components changing all the time? Perhaps they are not written with "loose coupling" in mind and are exposing needlessly fragile and complex APIs?
The earliest proponents I know of - the ones writing XP books back in the '90s - had a very flexible definition of "unit" and assumed that you would vary the scale according to your needs. So some units might be low level components like data structures, but others might be large modules that included a lot of moving parts, up to and including actual databases.
The world has moved on now, and "agile" no longer means "use your own best judgment." It's become a carefully catalogued list of rules and prescriptions and definitions and dictums.
Which, come to think of it, is an excellent example of memetic natural selection. "Use your own best judgment" is a meme that is uniquely poorly adapted to survival in an ecosystem that consists largely of people jabbering back and forth on the Internet like we are right now. You can't base a rollicking good fun argument or a strongly worded blog post on something like that.
> The earliest proponents I know of - the ones writing XP books back in the '90s - had a very flexible definition of "unit" and assumed that you would vary the scale according to your needs. So some units might be low level components like data structures, but others might be large modules that included a lot of moving parts, up to and including actual databases.
It's actually a fairly strict definition, it's just not from the angle that people like to go with (as shown even by your description here): It's a semantic/conceptual unit, not a syntactic unit.
One of my co-workers eventually got it when I told him to completely forget about all the code he's written, then pretend there was a library he could include that did exactly what he wanted. He came up with a single function with a simple set of arguments, that would be used twice, and I stopped him there with something along the lines of: "Okay, that's your API. When you write the code that actually does this, put this function in one separate file and only import this one function in the code and the tests. Now it doesn't matter how you break out the internals into other functions or classes - it's a conceptual unit, not necessarily a single function."
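To make that concrete, here's a minimal sketch of what I mean (the module name, function, and helpers are all hypothetical): one public function in its own file, with internals that are invisible to callers and tests.

```python
# slugify.py -- a "conceptual unit": one public function is the API.
# Only slugify() gets imported by application code and by tests;
# the helpers below can be split, merged, or renamed freely.
import re

def slugify(title: str, max_length: int = 60) -> str:
    """Turn a title into a URL-safe slug."""
    return _truncate(_normalize(title), max_length)

def _normalize(text: str) -> str:
    # Lowercase, collapse non-alphanumeric runs into single dashes.
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")

def _truncate(slug: str, limit: int) -> str:
    return slug[:limit].rstrip("-")
```

The tests exercise `slugify()` through the public API only, so refactoring `_normalize` and `_truncate` never breaks them.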
> The world has moved on now and most programmers are duct taping parts together so unit testing no longer has a place in this world.
Even on business-domain level components there are often simple routines which can and should be tested. Larger routines should be built up from smaller ones. Divide-and-conquer is a fundamental engineering principle.
If there are no such small-enough-to-be-tested routines, then the code is not being written with testing in mind. If all your functions are hundreds of lines long, then yeah it's hard to write unit tests!
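As a sketch of what "built up from smaller ones" can look like (a made-up billing example, not from any real codebase): the business-level routine is a thin composition of small functions, each trivially testable on its own.

```python
# Hypothetical example: each small routine is independently testable,
# and the business-level routine just composes them.

def line_total(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

def apply_discount(total: float, percent: float) -> float:
    return total * (1 - percent / 100)

def invoice_total(lines, discount_percent: float = 0.0) -> float:
    """lines is an iterable of (quantity, unit_price) pairs."""
    subtotal = sum(line_total(q, p) for q, p in lines)
    return apply_discount(subtotal, discount_percent)
```

Unit tests can pin down `line_total` and `apply_discount` exhaustively, leaving only the composition to be checked at the higher level.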
I agree. Your unit tests should focus on testing individual pieces, the fundamental building blocks (classes and functions) that you have in your system. Changes should be isolated, so that when a class's API changes, only that class's unit tests need to change.
I find the key thing here to be separation of concerns. Aim for testable classes and functions, and that will also make you think about the dependencies.
For example, I have a class implementing the NNTP protocol. Initially one might think that this class needs to deal with IO, or even worse, with socket-based data transfer. But if you think about it, it doesn't. The API can be based on the idea of getting a buffer full of data in and producing a buffer full of data out. A class coupled with IO/socket functionality is hard to test; without that coupling, it's trivial to unit test. (The reduced-functionality class only maintains the protocol state and doesn't concern itself with where the data comes from or goes.)
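A minimal sketch of that "buffers in, buffers out" shape (the class name, states, and commands here are simplified stand-ins, not real NNTP):

```python
# IO-free protocol class: it only maintains protocol state and
# transforms incoming bytes into outgoing bytes. The caller owns
# the socket (or pipe, or test harness) and just shuttles buffers.

class ProtocolSession:
    def __init__(self):
        self.state = "connected"
        self._outbox = b""

    def receive_data(self, data: bytes) -> None:
        """Feed bytes that arrived from the peer, however they arrived."""
        for line in data.split(b"\r\n"):
            if line:
                self._handle_line(line)

    def data_to_send(self) -> bytes:
        """Bytes the caller should write out -- to a socket or anywhere."""
        out, self._outbox = self._outbox, b""
        return out

    def _handle_line(self, line: bytes) -> None:
        # Toy state machine: a greeting moves us to "ready" and
        # queues a follow-up command.
        if line == b"200 ready":
            self.state = "ready"
            self._outbox += b"CAPABILITIES\r\n"
```

A unit test just feeds `receive_data()` a crafted buffer and asserts on `state` and `data_to_send()`; no sockets, no mocks, no event loop.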
It's frequently the domain model objects themselves changing that causes all the spurious breakage.
This is not nearly as much a problem in dynamic languages, especially if you tend to do data-level programming instead of defining a lot of custom types for your domain modeling. And I suspect that this is exactly why classical TDD was first popularized among Java developers, while the London school first gained traction in the Ruby and Python communities.