"What Works for Whom, Where, When, and Why? On the Role of Context in
Empirical Software Engineering" (ESEM’12, September 19–20, 2012,
A systematic review that aggregated the available evidence on the
effectiveness of TDD found that TDD has no consistent effect on
productivity. Evidence from controlled experiments suggests a
productivity improvement when TDD is used, but the pilot studies are
mixed, some favoring TDD and others not. In the industrial studies,
the evidence suggests that TDD yields worse productivity. Even when
only the more rigorous studies are considered, the evidence is split
evenly for and against a positive effect: ten studies found higher
productivity with TDD, nine found worse productivity, and six more
found no significant effect on productivity at all.
 Turhan, B., Layman, L., Diep, M., Shull, F., and Erdogmus, H.
(2010) How Effective Is Test-Driven Development?, in A. Oram &
G. Wilson (Eds.), Making Software: What Really Works, and Why We
Believe It, O'Reilly Media, pp. 207-219.
- Tests are weighted more heavily than the code
Writing the tests before the actual application code places more
weight on the tests than on the code itself. This is another reason
the tests are so commonly accepted as gospel in spite of evidence to
the contrary. Weighting the tests more heavily than the actual code
of the application is wrong in every way, shape, and form.
Think about it: what is the ultimate goal when you develop software?
Is it to build a large testing framework that checks preconceived
notions about the software, or is it to actually build the software?
Placing so much emphasis on the tests shifts the focus of the
development process onto something that matters to the developer,
and the developer alone.