Reading the GOOS book by Steve Freeman and Nat Pryce these days reminded me of a project I worked on a while ago. It was a one-year-old system, poorly tested, integrating with a handful of other systems, and the code-base... well, I prefer not to remember. Despite this scenario, I joined the team to help them implement some new features.
I remember it was sometimes difficult to write tests: the classes were tightly coupled, with no clear responsibilities, several attributes, bloated constructors, etc. And despite our best effort, working around the bits that were preventing us from writing the tests, we felt we were going down the wrong road, trying to do it in such a crappy code-base. As a result, some of our tests were massive! Line after line of mocks, stubs, and expectations, making it impossible to understand their purpose.
What have I learned?
Reading one of the book's chapters, I learned that the same qualities that make an object easy to test also make the code responsive to change. In my situation, the tests were telling me how clumsy the code was and how difficult it would be to extend it.
I also learned that when we come across a functionality that is difficult to test, asking ourselves how to test it is not enough; we also have to ask why it is difficult to test, and check whether it's an opportunity to improve our code. The trick is to do it driven by tests, so we can get rapid feedback on the code's internal qualities and on whether it's doing what it's supposed to do.
So the authors introduce a variation on the well-known TDD cycle: "Write a failing test" => "Make the test pass" => "Refactor". As described in the figure below (extracted from the book), if we're finding it hard to write the next failing test for our application, we should look again at the design of the production code and often refactor it before moving on, until we get to the point where we can write tests that read well.
Extracted from Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce
An Example of a Smell Your Tests Might Be Telling You About
Referencing data rather than behavior
When applying "Tell, Don't Ask" or the "Law of Demeter" consistently, we end up with a coding style where we tend to pass behavior into the system instead of pulling values up through the stack. Picking up the famous Paperboy example, before refactoring the code to apply the "Law of Demeter", the code and test would look something like the snippet shown below.
class Paperboy
  def initialize
    @total_collected = 0
  end

  attr_reader :total_collected

  def collect_money(customer, due_amount)
    if due_amount > customer.wallet.cash
      raise InsufficientFundsError
    else
      customer.wallet.cash -= due_amount
      @total_collected += due_amount
    end
  end
end
it "should collect money from customer" do
  customer = Customer.new :wallet => Wallet.new(:cash => 200)
  paperboy = Paperboy.new
  paperboy.total_collected.should == 0
  paperboy.collect_money(customer, 50)
  customer.wallet.cash.should == 150
  paperboy.total_collected.should == 50
end
We can easily see that the test is telling us it knows too much detail about the Customer class implementation. It can see Customer's internals and which objects it's related to, and even worse, it also exposes implementation details of Customer's peers. So it's clear to me that this needs some design improvement. My main goal here is to hide Customer's implementation details from the users of the Paperboy class, which means I don't want to see anything but the Customer class and plain amounts referenced in the test!
class Paperboy
  def initialize
    @total_collected = 0
  end

  attr_reader :total_collected

  def collect_money(customer, due_amount)
    @total_collected += customer.pay(due_amount)
  end
end
it "should collect money from customer" do
  customer = Customer.new :total_cash => 200
  paperboy = Paperboy.new
  paperboy.total_collected.should == 0
  paperboy.collect_money(customer, 50)
  customer.total_cash.should == 150
  paperboy.total_collected.should == 50
end
The method customer.pay(due_amount) wraps all the implementation detail up behind a single call. Paperboy no longer needs to know anything about the types inside Customer, and we've reduced the risk that a design change might cause ripples in remote parts of the codebase.
As well as hiding information, there's a more subtle benefit from "Tell, Don't Ask." It forces us to make explicit and so name the interactions between objects, rather than leaving them implicit in the chain of getters. The shorter version above is much clearer about what it's for, not just how it happens to be implemented.
All the logic necessary to collect the money is inside the Customer object, so it doesn't have to expose its state to its peers.
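The book's example stops at the Paperboy side, so here is a minimal sketch of what the refactored Customer might look like. The attribute name (:total_cash) comes from the test above; the InsufficientFundsError class and the choice of having pay return the amount paid (so Paperboy can tally it) are illustrative assumptions.

```ruby
class InsufficientFundsError < StandardError; end

# A minimal sketch: Customer now owns the payment logic, so callers
# never reach through it into a Wallet.
class Customer
  attr_reader :total_cash

  def initialize(attributes)
    @total_cash = attributes[:total_cash]
  end

  # Deducts the amount and returns it, letting the caller record
  # what was collected without seeing Customer's internals.
  def pay(amount)
    raise InsufficientFundsError if amount > @total_cash
    @total_cash -= amount
    amount
  end
end
```

Note that the guard clause that used to live in Paperboy moves here too: deciding whether the customer can afford the payment is the customer's business, not the paperboy's.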
Now it's safer to continue writing new failing tests for our objects.
Remember, listen to the tests!
The agile software development practice I like the most, and at the same time the one I find the most difficult, is pair programming. Each individual has his or her own way of working, and characteristics such as motivation, engagement, habits, open-mindedness, and coding/design style vary a lot from person to person. Sometimes, striking a balance between these differences is quite hard. I am still not an expert in pair programming coaching, but I've been learning a lot on my current assignment.
And from this experience, it seems that clients are definitely more involved and amused when pairing follows the ping-pong pattern.
Ping Pong Pattern
It works like this: developer 1 of a pair implements a test for a given feature and sees it fail, then passes the keyboard to developer 2, who makes the test pass, does some refactoring on the code, and implements another test, passing the keyboard back to developer 1 to do the same thing, and so on until the feature is done.
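A round of the pattern might look like this. The toy Calculator kata is purely hypothetical, and plain assertions stand in for a test framework:

```ruby
# Developer 1 writes a small failing test and hands over the keyboard:
def test_adds_two_numbers
  raise "expected 5" unless Calculator.new.add(2, 3) == 5
end

# Developer 2 writes just enough code to make it pass...
class Calculator
  def add(a, b)
    a + b
  end
end

test_adds_two_numbers  # green

# ...refactors if needed, then writes the NEXT failing test
# (add doesn't take a list yet) and hands the keyboard back:
def test_adds_a_list_of_numbers
  raise "expected 10" unless Calculator.new.add(1, 2, 3, 4) == 10
end
```

Each hand-over happens at a natural boundary (a freshly failing or freshly passing test), which is what keeps both developers engaged.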
Why Do We Like It
- Challenge - Each test your pair writes for you to make pass feels like a challenge; you meet it and write another one, challenging them back.
- Dynamics - The worst thing is a developer who just hogs the keyboard, making you feel useless. Ping-pong pairing makes you swap the keyboard more frequently.
- Engagement - Developers are much more engaged because they are constantly coding, not only observing.
- Fun - It is so much fun when you have all the above items together!
One of the most difficult tasks for consultants is to influence business people to embrace and support test-driven development. They seem to "understand" the value and to "agree" with it, but when it comes to putting it into practice, the picture is generally a bit different. By putting it into practice, I mean sticking with it steadily, even when dealing with unexpected situations. A typical one is a project with delivery delays, a tight deadline, and fixed scope. In my experience, when such a situation happens, the first decision made is to cut test development and sacrifice code quality in order to deliver faster. No matter how hard you try to reverse it by showing them the bad outcomes of this decision, they simply ignore them and take the risks, because there is no concrete risk to point at, other than not delivering the software.
Not having a way to show managers that not writing tests, at least for the most critical functionalities, is indeed a concrete risk has always puzzled me. One day, while talking to Kristan Vingrys about this, he showed me a risk matrix he has been using to help him influence people to understand the value of tests. See the image:
Basically, it measures the rate of test coverage required and tells you which types of tests (unit, functional) to implement, based on the impact of the functionality on the business stakeholders and the amount of new code needed to implement it (you may be reworking existing code). The greater a feature's business impact and the more new code it requires, the more testing effort it deserves.
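The matrix itself isn't reproduced here, but the idea can be sketched as a function of the two axes. The axis names, the 0-to-1 scoring, and the thresholds below are all illustrative assumptions, not Kristan's actual matrix:

```ruby
# Illustrative sketch of a test-effort risk matrix: both inputs are
# scores from 0.0 to 1.0, and the buckets are made-up thresholds.
def required_testing(business_impact, new_code_ratio)
  risk = business_impact * new_code_ratio
  if risk > 0.6
    "high coverage: unit + functional tests"
  elsif risk > 0.3
    "medium coverage: unit tests on the core paths"
  else
    "low coverage: smoke tests only"
  end
end
```

The point is not the exact formula but that the decision becomes explicit: a high-impact feature built mostly from new code visibly demands more testing than a low-impact tweak to proven code.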
Ideally, for each feature, the team evaluates and decides how much testing effort to put into the story. The best time to do this is during the iteration planning meeting, so the final output is both the iteration goal/features and the minimum test effort for each of them.
And since almost every feature has at least some significant business value, you always have the guarantee that these will be tested.