Are you defining unit tests as per-class tests? This is the biggest mistake.
You should be testing a unit of behaviour, like a business rule or an expected effect. If this involves multiple objects collaborating, then so be it. But they should not cross architectural boundaries like HTTP calls or database calls. If you can, make only the public API classes accessible, and make all helper classes inaccessible to users. That also forces you to test things from a public API standpoint only.
Tests that just check an object calls another object, but exhibit no desired behaviour in themselves, are pointless and just couple everything to the implementation, i.e. certain methods are called in a certain order with specific parameters. When you do that, you make it really hard to change things (just design changes, not desired-behaviour changes) without breaking every test.
This is part of the difference between the classicist and the mocking styles of testing. The argument for the latter goes that if (big if!) your collaborators are well designed and have good interfaces, then asserting that something is called is a meaningful assertion in the language of your domain (e.g. "check that an email is sent with exactly this content" is a meaningful assertion, even when you mock the mailer).
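To make the mockist argument concrete, here is a minimal sketch of that kind of "meaningful" assertion using Python's `unittest.mock`. The `Mailer` and `WelcomeService` names are invented for illustration:

```python
from unittest.mock import Mock

class WelcomeService:
    """Hypothetical service: sends a welcome email when a user signs up."""
    def __init__(self, mailer):
        self.mailer = mailer

    def sign_up(self, email):
        self.mailer.send(to=email, subject="Welcome!",
                         body="Thanks for signing up.")

mailer = Mock()
service = WelcomeService(mailer)
service.sign_up("alice@example.com")

# The assertion reads in the language of the domain:
# "check that an email is sent with exactly this content".
mailer.send.assert_called_once_with(
    to="alice@example.com", subject="Welcome!",
    body="Thanks for signing up.")
```

Because `Mailer.send` is (by assumption) a well-designed domain interface, asserting the call is asserting an observable effect, not an implementation detail.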
I agree that in many cases APIs are badly designed and expose implementation details that you shouldn't couple your tests to, and many tests still do exactly that. Mockist proponents would claim that this is not the right way to do it (for example, they argue you should wrap bad APIs in better ones and write your assertions against the latter; see also "don't mock what you don't own").
I can understand the point of view that even then you wouldn't like these kinds of tests, but for me they do provide some assurance that "the right stuff happens" (of course, you need integration tests to test the wiring etc.), at a still lower cost than "you have to separate pure from impure code extremely strictly" (which has other benefits, yes, but is also really hard to enforce, especially in a team).
I mean, I am a functional programmer, so I naturally make code pure up to the edges anyway. In a functional language I will just replace the edge function call with a fake version.
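That style translates to almost any language: keep the logic pure and pass the impure edge in as a function. A sketch (all names invented) with the edge replaced by a fake in the test:

```python
def total_in_usd(amounts, fetch_rate):
    """Pure core: convert (currency, amount) pairs to USD.
    fetch_rate is the impure edge (e.g. an HTTP call in production)."""
    return sum(amount * fetch_rate(currency) for currency, amount in amounts)

# In a test, the edge is just a dictionary lookup instead of a network call:
fake_rates = {"EUR": 1.5, "GBP": 2.0}
result = total_in_usd([("EUR", 100), ("GBP", 10)], fake_rates.__getitem__)
assert result == 170.0
```

No mocking framework is needed, and the test asserts a property of the behaviour, not which calls were made.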
But the problem with mocking is that you couple your objects to one another. You expect certain methods to be called, with certain parameters.
What if you want to add an extra layer, or remove a layer, or add a helper? All your tests are now broken, yet the overall behaviour has not changed.
You also have issues around mocks not actually being set up to behave like the real object. If your object under test sends a null, the mock may pass while the real thing would fail.
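A tiny sketch of that failure mode (the `RealFormatter` class is invented for illustration): the mock happily accepts `None`, while the real object crashes on it.

```python
from unittest.mock import Mock

class RealFormatter:
    def format(self, name):
        return name.upper()  # raises AttributeError if name is None

mock_formatter = Mock()
mock_formatter.format.return_value = "BOB"

# The mocked collaborator "works" when handed None...
assert mock_formatter.format(None) == "BOB"

# ...but the real object would fail on exactly the same input:
try:
    RealFormatter().format(None)
    crashed = False
except AttributeError:
    crashed = True
assert crashed
```

So a test suite built on mocks can stay green while the real composition is broken.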
Instead, create an architectural boundary with a clear public interface. Exercise the interfaces in tests and then check for results on the other side of the boundary. You might do that with a mock or a fake.
You are now free to refactor everything between the start and the end of that architectural boundary without breaking tests.
This is just fancy talk for only testing your public API. But defining those boundaries and public interfaces is where the skill is.
If you're building a web app using hexagonal architecture, I might say: drive your primary ports from your tests; mock and fake your secondary ports.
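A minimal sketch of that shape, with all names invented: the test drives the primary port (`place_order`) and checks results through a faked secondary port (the repository), so everything in between is free to change.

```python
class InMemoryOrderRepo:
    """Fake secondary port: stands in for the database adapter."""
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)

class OrderService:
    """Primary port: the application's public entry point."""
    def __init__(self, repo):
        self.repo = repo

    def place_order(self, item, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.repo.save({"item": item, "qty": qty})

# Drive the primary port, assert on the far side of the boundary:
repo = InMemoryOrderRepo()
OrderService(repo).place_order("book", 2)
assert repo.saved == [{"item": "book", "qty": 2}]
```

The test never mentions how `OrderService` is structured internally, only what crosses the boundary.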
If I expect some group of objects to be used in multiple places, like a library, I will test those as if they're a boundary too. For example, if I've built a money exchange-rate module.
> Instead, create an architectural boundary with a clear public interface. Exercise the interfaces in tests and then check for results on the other side of the boundary. You might do that with a mock or a fake.
I think mockist (test-doublist, really) TDDers would suggest that the architectural boundaries derived from the design pressure are the "correct" public interfaces you're describing here; they just often happen to coincide with class (or your favourite language's equivalent) boundaries.
The pattern often ends up with many one-function role interfaces, orchestrated by collaborators down the dependency tree until you hit value structures, pure functions or external integrations at the edge of the system. In many ways, mockist TDD is a gateway to functional programming.
Hm, yes and no. I do feel that mockist TDD is still heavily emphasising the OOP idea (more in the original sense than in the Java sense) of having separate "collaborators" with their own internal state and side effects that exchange messages (in a way similar to the Actor pattern).
A functional approach (at least in a pure functional language) wouldn't necessarily emphasise this sort of interaction pattern between independent components, and would try to isolate state and side effects much more.
If you do functional programming, obviously many OOP techniques just don't apply. This is more of a difference in philosophy. I'm sympathetic to functional programming, and do try to use immutability, value objects, referential transparency, explicit state handling etc., whenever possible, but I'm still constrained by the languages, frameworks and teams that I work with, and I think many others are as well.
Of course, the question of whether we should all just program in Haskell (or Lisp, or Erlang, ...) can be debated, but for a variety of reasons that is not currently the case, so I think mocks are still a valid answer for OOP, if (!) you use them correctly (and I agree that many may be too cavalier about mocking).
But to answer your question about your "extra layer": if your code is written in a domain-driven style, then adding a new layer should potentially be considered a change in behaviour, so changing the tests makes sense. If it's a purely technical thing, then there are IMHO often ways of not exposing it to surrounding code. As an example, if you're introducing some (e.g. logging) layer to your BillingService, instead of pasting that layer into the original code that calls the billing_service, you could decorate your BillingService with a logging wrapper class and just change the injected dependency. Nothing about the tests using the billing_service would have to change. This is a stupid example, but I hope it gets the point across.
> What if you want to add an extra layer, or remove a layer, or add a helper?
You make the extra layer implement the same interface and delegate to the original object. Classes don't depend directly on other implementation classes in this style.
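That decorator idea can be sketched in a few lines; `BillingService` and the logging wrapper here are invented stand-ins for the example discussed above:

```python
class BillingService:
    """Original collaborator (hypothetical)."""
    def charge(self, customer, amount):
        return f"charged {customer} {amount}"

class LoggingBillingService:
    """Extra layer: implements the same interface and delegates."""
    def __init__(self, inner, log):
        self.inner = inner
        self.log = log

    def charge(self, customer, amount):
        self.log.append(f"charging {customer} {amount}")
        return self.inner.charge(customer, amount)

log = []
service = LoggingBillingService(BillingService(), log)

# Callers (and their tests) see exactly the interface they saw before:
assert service.charge("alice", 10) == "charged alice 10"
assert log == ["charging alice 10"]
```

Only the injected dependency changes; no existing test that talks to the billing interface has to know the layer exists.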
Absolutely. I've worked in places which enforced this decouple-everything-from-everything approach, and it just grinds work to a halt and makes test failures commonplace and uninformative.
How do you avoid external dependencies in this situation? If you're only testing the public interface, then you have to expose some kind of public dependency injection pattern. Now you've introduced a level of indirection into the production code specifically to make testing easier. This feels like an antipattern.
It's how I do it, but I'm not satisfied with it. It makes the code harder to read, I sometimes miss bugs because my injected mocks don't handle an edge case correctly, and the effort of maintaining the mocks themselves is non-trivial.
I agree, fake objects are a good alternative to mocks, especially for complicated interfaces (databases, external services, etc.). But they require more setup, which is why I still think mocks can be useful in some situations.
They can require more setup in that you have to write an object to represent the thing.
They require less setup in that mocks produce tons of visual noise when you set them up inside the test, whereas a fake will just be created with a standard constructor.
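The contrast looks something like this (a hypothetical clock dependency): the mock's behaviour has to be scripted call by call inside the test, while the fake is one constructor call with real behaviour.

```python
from unittest.mock import Mock

# Mock version: each return value scripted inside the test body.
mock_clock = Mock()
mock_clock.now.side_effect = [100, 105]
assert mock_clock.now() == 100
assert mock_clock.now() == 105

# Fake version: a real (if simplified) implementation, ordinary constructor.
class FakeClock:
    def __init__(self, start):
        self.t = start

    def now(self):
        return self.t

    def advance(self, dt):
        self.t += dt

fake = FakeClock(100)
assert fake.now() == 100
fake.advance(5)
assert fake.now() == 105
```

The fake costs a one-time class definition, but every test that uses it stays short and reads like normal code.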
I hate a lot of the mocking I see in real code. People go overboard with it, and the test ends up 80% mock setup, making it hard to see the real purpose of the test.
Tests should be short, simple, to the point, and easy to read. When most of the test is setting up a mock, you've lost that.
But they're not too bad if it's a very simple one-line setup, which is how they should be used.
But that would not tell you that passing a null object into the real thing would crash it, whereas a fake, together with the tests, will.
I think it really depends. If you keep the number of collaborators in a class small and your interfaces are well designed, mocking is OK (and really just a limited form of a fake).
I don't agree that dependency injection is necessarily bad, but if you really want to avoid it/mocking, there are also other ways, e.g. structuring your code into pure and impure parts, only unit testing the pure stuff and relying on integration tests for the "wiring".
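A sketch of that pure/impure split, with invented names: the pure pricing rule is unit tested directly, and the thin impure wiring is left to integration tests rather than mocked.

```python
def discounted_total(prices, discount):
    """Pure: no I/O, trivially unit-testable."""
    return sum(prices) * (1 - discount)

def checkout(cart_id, load_cart, save_receipt, discount=0.5):
    """Impure wiring: load_cart/save_receipt would hit the database in
    production. Kept thin and covered by integration tests, not mocks."""
    total = discounted_total(load_cart(cart_id), discount)
    save_receipt(cart_id, total)
    return total

# Unit test only the pure part:
assert discounted_total([10, 20], 0.5) == 15.0
assert discounted_total([], 0.5) == 0
```

Because all the interesting logic lives in `discounted_total`, the wiring in `checkout` has almost nothing left to get wrong.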
You’re talking about integration tests, not unit tests. Unit tests should focus on a single method. Integration tests can combine methods/objects/APIs/etc for business rules.
Your best bet is a combination of unit tests, integration tests, and e2e tests (there’s some old advice about a 70/20/10 split but this is pretty arbitrary).
Yes, this is what has spread around and what people think of as unit tests. But it's an awful way of splitting tests. Code bases following this end up with brittle tests that break at the slightest design change with no functional change.
I define unit tests as fast, not crossing architectural boundaries, operating on a well-defined public interface, and testing some actual property you care about. I don't follow "strict" rules like one unit test per class, per function, etc.
A lot of classes just extract some logic to a helper class; the two operate well together, and the helper is not likely to be used anywhere else. Just make the helper class private and test them as a unit against some actual desired property.
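For example (names invented): a "private" helper exercised only through the public class, so the test asserts a property of behaviour rather than the wiring between the two.

```python
class _PriceRounder:
    """Helper: leading underscore marks it as non-public."""
    def round_up(self, cents):
        # Round up to the nearest 5 cents.
        return cents + (-cents % 5)

class PriceTagger:
    """The public unit under test; uses the helper internally."""
    def __init__(self):
        self._rounder = _PriceRounder()

    def tag(self, cents):
        return f"${self._rounder.round_up(cents) / 100:.2f}"

# Test the desired property through the public API only:
assert PriceTagger().tag(98) == "$1.00"
assert PriceTagger().tag(95) == "$0.95"
```

If the helper is later inlined, renamed, or split in two, these tests keep passing as long as the tagged prices stay correct.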
Integration tests are when you bring external things into the mix, like databases or HTTP calls.
What I would encourage you to do is define the terms you're using, and don't assume that others are using certain words in the way that you mean them. I like that you've been explicit about your definitions here (although it may be better to describe rather than assert, e.g. "I treat unit tests as focusing on a single method", rather than "Unit tests should focus on a single method").
Indeed, especially having an axis extending from an individual function/method up to third-party services would presumably stop the "everything is a class" creep that's happened to so much testing terminology already.