
I disagree that ordered tests are bad. See, I don't even have mutable state in my program -- but I still have ordered tests, for another reason: ease of debugging. Say module A depends on module B: then B should be tested first, and only then A, since if you test in the other direction, you might have a hard time figuring out whether A or B is misbehaving.
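As a minimal sketch of the idea (module and test names are hypothetical), Python's stdlib `unittest.TestSuite` runs tests in insertion order, so B's tests can be pinned to run before A's:

```python
import unittest

# Hypothetical test cases for modules A and B, where A depends on B.
class TestB(unittest.TestCase):
    def test_b_works(self):
        self.assertEqual(1 + 1, 2)  # stand-in for a real test of B

class TestA(unittest.TestCase):
    def test_a_works(self):
        self.assertEqual(2 * 2, 4)  # stand-in for a real test of A

# TestSuite preserves insertion order, so adding B's tests first
# guarantees they run (and fail) before A's.
suite = unittest.TestSuite()
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestB))
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestA))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)
```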


This doesn't make sense to me. If you know that A depends on B and you run your test suite while B is broken, then both A's suite and B's suite will fail. But the order is irrelevant: you'll know that B (the one A depends on) is broken, and that should be the focus of your investigation. Why would there be any confusion if the tests executed in a random order?


I've answered it in the other comment -- because I might not know the entire module dependency graph. For example, if the code is not mine.


If the code isn't yours then you have to hope they ordered the tests correctly or, you know, figure out the dependency order. Which isn't hard:

- Use a static analysis tool that kicks out a dependency diagram. That literally shows the dependency order.

- Look for import/include/using statements. That shows the dependency order.

- Look for inherits/implements part of class definitions. That shows the dependency order.

- Look at the build commands. It'll tell you the dependency order.

- When you see a function, constructor, whatever being used, you have a dependency.

And if the code is yours, and you don't know the dependency order, then fix that. Write it down or something.
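For the import-statement approach above, a rough sketch of extracting a Python module's direct dependencies with the stdlib `ast` module (the module names here are made up):

```python
import ast

# Hypothetical module source; in practice, read the file's text.
source = """
import b
from c import helper
"""

# Walk the syntax tree and collect every Import/ImportFrom target,
# i.e. this module's direct dependencies.
tree = ast.parse(source)
deps = set()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        deps.update(alias.name for alias in node.names)
    elif isinstance(node, ast.ImportFrom):
        if node.module:
            deps.add(node.module)

print(sorted(deps))  # → ['b', 'c']
```

Run this over every module and you have the edges of the dependency graph.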


Well yes. But this all requires additional effort on my side. On the other hand, if the tests are always ordered such that dependencies are tested first, then I won't have to deal with the approaches you've mentioned.

Ideally, a test framework would enforce the order based on the project's dependency structure. Frameworks don't usually do that, though, because they cannot easily examine the code under test to extract the necessary dependency information.
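If a framework did have the dependency graph, ordering is just a topological sort. A minimal sketch with Python's stdlib `graphlib` (the module names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each key maps to the modules it
# depends on, whose tests should run first.
deps = {
    "a": {"b"},      # module a depends on b
    "b": {"utils"},  # module b depends on utils
    "utils": set(),
}

# static_order() yields every node after all of its dependencies --
# exactly the "test dependencies first" order being argued for.
order = list(TopologicalSorter(deps).static_order())
print(order)  # → ['utils', 'b', 'a']
```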


Doesn't that only apply if you stop the entire suite upon receiving one failed test?


Even if I can have multiple failing tests, unordered testing might get tricky. For example, as I said, A depends on B, and suppose that both A's and B's tests are failing. If I (or really anyone else on the team) forget that A depends on B, they might be tempted to debug the A test first, only later figuring out that it is B that needs fixing.


Probably the best approach would be to run the tests in a random order, but always sort the output the same way.
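That split is easy to sketch: shuffle the execution order, then report in a fixed order. A toy version in Python (test names and bodies are made up):

```python
import random

# Hypothetical tests: name -> callable returning True on pass.
tests = {
    "test_a": lambda: True,
    "test_b": lambda: 1 + 1 == 2,
    "test_c": lambda: True,
}

# Run in random order, to flush out hidden inter-test dependencies...
names = list(tests)
random.shuffle(names)
results = {name: tests[name]() for name in names}

# ...but report in a fixed (alphabetical) order, so the output is
# stable and diffable regardless of the execution order.
for name in sorted(results):
    print(name, "PASS" if results[name] else "FAIL")
```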



