Handling Intermittence: How to Survive Test Driven Development (joblivious.wordpress.com)
21 points by jmathes on Feb 20, 2009 | hide | past | favorite | 17 comments


As much as TDD irks me to think about when I'm in the middle of a program, it's a very useful thing.

It's saved me from making countless mistakes.

JUnit isn't good for beginners, though, because you have to define an equals method for each class. Not only is that completely repetitive, but it is also very difficult for beginning programmers to write good tests. If you haven't been programming for very long, how do you know how to define equality? Do you consider two objects that have the same fields (or only the fields you're testing) to be equal? Or are you checking whether they are the very same object in memory? What happens when you have superclasses with fewer fields than their subclasses?
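For instance, a field-by-field equals override makes that choice explicit. A minimal sketch (the Point class and its fields are invented here for illustration):

```java
// Hypothetical example: "structural" equality for a simple Point class,
// i.e. two Points are equal when their fields match -- as opposed to
// reference equality, where only the very same object compares equal.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;            // same object: trivially equal
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;           // same fields: structurally equal
    }

    @Override
    public int hashCode() {
        // equals and hashCode must agree, or hash-based collections misbehave.
        return 31 * x + y;
    }
}
```

A subclass that adds fields would have to decide all over again whether a Point and a ColoredPoint with matching coordinates are "equal", which is exactly the ambiguity beginners stumble over.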

Anyway, one of my professors is working on a new tester for Java (http://www.ccs.neu.edu/javalib/Tester/index.html). It's very elegant; the syntax is very clean:

    public void testGreaterThan(Tester t) {
        t.checkExpect(7 > 4, true);
    }


Although my projects are small and not currently using TDD, I found this analysis useful for systematizing development strategies. It presents five ways you can set yourself up for intermittent failures during development:

Hidden data dependence

Race conditions

Time sensitive tests

Unstable 3rd party software

Intentional randomness in code
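The time-sensitive case is a good illustration of how such intermittent failures get designed out. A minimal sketch, with an invented Session class, showing the usual fix of injecting a clock rather than reading real time directly:

```java
import java.time.Clock;
import java.time.Instant;

// Hypothetical example: a time-sensitive check made deterministic by
// injecting a Clock. Code that calls Instant.now() directly races real
// time and fails intermittently; an injected Clock can be fixed in tests.
public class Session {
    private final Clock clock;
    private final Instant expiresAt;

    public Session(Clock clock, Instant expiresAt) {
        this.clock = clock;
        this.expiresAt = expiresAt;
    }

    public boolean isExpired() {
        // "Now" comes from the injected clock, so a test can pin it down.
        return clock.instant().isAfter(expiresAt);
    }
}
```

In production you pass Clock.systemUTC(); in a test you pass Clock.fixed(...), and the test gives the same answer every run.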


TDD means you write your test before you write your code.

Test code is code. You must write test code for your test code before you can write your test code.

Test code for test code is code. You must write test code for your test code for your test code before you can write your test code for your test code.

TDD is trapped in an infinite regress. This means that TDD never produces actual application code. In fact, it can't even produce test code. How do you define a test of a test of a test ... of a test, so you can write it? You can't define it, so you can't write it. You have no starting point.

The answer must be that you really cannot do TDD. You just call it TDD. At some level you rely on the old standby of clean simple design, incremental implementation, inspection, and manual testing. Why not cut to the chase and do that on your application code?


I think you've missed the point. The real utility of coding to pass a test is not to verify the current state of the application. You can easily cross check that you're getting the right return values from any module manually. The real beauty of TDD comes a few months down the road when you've forgotten the current module, and you make some change that affects it. With a unit test framework in place, regression problems like these are found immediately.
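Concretely, the kind of regression guard being described looks something like this. A hedged sketch, with PriceCalculator and its discount rule invented for illustration (with JUnit, the assertion would sit inside a @Test method):

```java
// Hypothetical example of a module protected by a regression test.
// The business rule here is invented: orders of 10+ units get 10% off.
public class PriceCalculator {
    public double total(int units, double unitPrice) {
        double gross = units * unitPrice;
        // If someone later "simplifies" this and drops the discount,
        // the assertion below fails immediately instead of months later.
        return units >= 10 ? gross * 0.9 : gross;
    }
}
```

Months later, a programmer who has forgotten this module can change it freely; the test suite, not a user, is what catches the break.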


Each test you write is a value-add. You get increased safety that what you're testing works the way you think it does, and that it will continue to do so as people modify it. Not all cases need to be tested; "test to your level of paranoia" is our motto. In the case of tests themselves, the value add for testing individual tests has never been worth the ROI. There are, however, tests for the testing _system_, insofar as that's possible (which it is, for the most part).

As for the "I happen to view TDD as a completely obnoxious and unnecessary exercise" objection, I'd be willing to bet you've never tried TDD. It doesn't take long for proper TDD to _save_ you time. Each iteration cycle is slower, but the marginal cost for adding new code to a complex codebase doesn't increase nearly as much.

Without TDD, you soon have to spend time on manual testing, and then you've got humans in QE basically practicing TDD for you, as they walk through script after script doing things that could be done faster by a computer. The only time it's appropriate to consider skipping TDD, IMO, is on tiny one-man projects.


I see. TDD works on poorly designed, poorly implemented, basically lousy code. Code that is highly coupled, with low cohesion, and is inappropriately modularized. This necessitates that all code must be tested every time there is a "modification". Add to that the random unthoughtful modifications made by careless programmers who have identified the wrong problem to fix and you get constant disaster.

I say, fix the fundamental problem by not writing lousy code in the first place. THAT would really be adding value rather than simply adding job security busy work. Automated testing cannot fix lousy code. Most of the time, it can't even discover it.

There is no magical way that gets good results without knowledge, skill, understanding, thought, discipline, and effort. With those things, all you need are good enough tools. TDD may be one of those good enough tools in some limited circumstances. It is not a Silver Bullet that cures all, most, or even many software flaws.

Keep in mind that one really good programmer can out produce a team of twenty poor programmers. Also keep in mind that one poor programmer can keep twenty really good programmers busy cleaning up the trash they create.

TDD is a very poor band aid used to cover up an even more fundamental problem: most programmers are lousy at programming. That is the problem that needs fixing.


TDD is like bumper lanes at a bowling alley. If your goal is to knock down the pins, it can help prevent problems, like falling in the gutter.

Of course, if you're incredibly good, you don't need the bumpers. You might even forego bumpers to show off that you are -that good-.

Your solution is to make all the bowlers really good. Here's the issue - historically, that doesn't scale across a growing dev team.

Imagine 5+ bowlers all using the same lane. The more bowlers you have, the more you will have balls bumping into each other, getting in the way, crossing paths, and falling into the gutter.

As you increase the number of developers, the more likely it is that tools like TDD and Source Control can help mitigate issues that inevitably arise on a larger team. They aren't silver bullets to great programming, and they aren't a bandaid to cover up bad programming. They are time-tested, proven ways to get people going the right direction.


"Imagine 5+ bowlers all using the same lane. The more bowlers you have, the more you will have balls bumping into each other, getting in the way, crossing paths, and falling into the gutter."

I suggest THAT is the problem. TDD is a band aid to cover the fact that such a team cannot function. The right team of <=4 can blow the roof off of ANY team of 5+ based upon communication network overhead alone. If you don't have the right team, you are hosed no matter what.


It's much easier to arrange for your test code to be extremely modular, with different bits not affecting one another, than it is for the actual application code. And for most problems it's much easier to get evidence that you've solved them right than it is to solve them right. (It's a bit like the difference between P and NP.) Therefore, writing good test code is much much easier than writing good application code. This is the answer to your last question, and the reason why the rest of what you said is wrong.


How do you know your test code is testing what you need it to test without actually testing it? If not by TDD, then are you really using TDD or simply saying that you are?

Like I say. Cut to the chase. Incrementally write good, clean, well designed, application code. Then inspect and manually test. THAT is the ONLY thing that really works. Stop pretending you are doing something that you aren't. Especially don't use it as an excuse for writing lousy, poorly designed, and poorly implemented application code.

TDD is still one more attempt to create a sacred Silver Bullet that gets good results without knowledge, skill, understanding, thought, discipline, or effort. It's at best a very feeble tool in a quiver of feeble tools. It takes a good, well trained brain, paying attention, to get good results. Without that, nothing will work.

PS: Writing good code is never easy. It only looks easy when done by someone who really knows what they are doing.


Who here, please, is claiming that TDD gets good results without knowledge, skill, understanding, and all the rest of it? Or that writing good code is easy?

Your infinite-regress argument would be entirely sensible if someone were claiming that all code written without testing it is entirely broken. Fortunately, no one is claiming that. In view of which, what you say is just like this: "How do you know that your reasoning about what the code needs to do is right unless you've checked your reasoning? And then how do you know your checking of your reasoning is correct unless you've checked that? Etc., etc., etc. Therefore, no one is really developing software by thinking about it."

TDD is not supposed to be an alternative to writing good, clean code, inspecting it and testing it. It's supposed to be a way of writing good, clean code. (No, that doesn't mean it's supposed to enable idiots to do that, or supposed to make it effortless.) Maybe it works well -- i.e., enables a person with a given amount of brainpower and a given amount of training and experience to expend a given level of effort and get better code -- and maybe it doesn't; but the objections you've made here don't make any sense.


Sorry. I was attempting to argue with a religion. My bad.

The basic premises cannot be questioned. The words of the sacred order must be followed to the letter. They must be swallowed whole without modification. They are to be held to be applicable to all problems, all contexts, forever, amen. That is except when they don't apply according to some mysterious unwritten set of rules held by the sacred order.

Gad. I can't count the times I have seen this crap since I started developing software back in the mid-1960s. It started with designing software using an IBM flowchart template. Each "method" had its very narrow range of applicability. None were the cure-all or ultimate solution they were touted to be.

Automated testing has a place. The place is small, narrow, and limited. I seriously question that it can make up for lousy programmers and lousy code. Especially since they are also usually the programmers of the test code.

This is an issue that is indistinguishable from my identified problem of infinite regression. How are you going to test the testers to make sure they are testing what needs to be tested the way it needs to be tested?

Ultimately, it's about doing the right things correctly. What that is depends much more on the problem and its context than on some sacred text and holy method.


I am not a TDD zealot, nor even a TDD advocate. (I haven't written enough stuff that way to know whether it works well for me, never mind anyone else.) I was just pointing out some flaws in your reasoning. I don't think I'm the one being religious here.

I don't agree with jmathes's diagnosis that you're trolling; I'm sure you really believe that I am a fanatical TDD devotee who thinks TDD is the right thing for every situation, that I regard everything to do with it as sacred and holy, and that I think TDD can make up for lousy programmers and lousy code. Since none of that bears any resemblance to the truth, however, I'm going to leave it here, and just suggest that you go back and re-read what I said, and see whether you can find any actual evidence of zealotry there. Then, when you don't, you might want to work out what's got your zealotry detectors so oversensitive and made you draw such stupid conclusions, and maybe think about fixing whatever it is.

Anyway: enough.


This guy is trolling. Please do not feed him.


I think your troll detectors are miscalibrated, though not as badly as lgriffith's zealot detectors. But I think you're right that there's not much point in trying to carry on a reasoned discussion with him on this topic. Sorry, all.


Because a failure in your test code means some application code isn't tested as well as it should be. It doesn't mean the application code will fail.

A failure in your application code can be much more serious.

Hence, in general, you test your application code, but not your test code.


You can put me in the downvote column, too. I happen to view TDD as a completely obnoxious and unnecessary exercise.



