I've always been a huge fan and advocate of using tests when developing applications. For me, working on software without a decent suite of tests is like walking on eggshells: each modification carries the risk of breaking something in the system. To mitigate this risk I always make sure I have a minimum set of unit, integration and acceptance tests covering my application.
But does all that give us the confidence that the system will work perfectly when it's deployed to any of the environments on its way through the release pipeline? I thought so, until I worked with this guy and read this book. Tom Czarniecki first introduced me to the concept of smoke tests, and then, reading Jez Humble and David Farley's Continuous Delivery, I grokked the real value of using them in conjunction with a build pipeline.
What are smoke tests?
Deployment smoke tests are quite handy because they give you the confidence that your application is actually running after being deployed. They use automated scripts to launch the application and check that the main pages come up with the expected contents, and also that any services your application depends on (database, message bus, third-party systems, etc.) are up and running. Alternatively, you can reuse some acceptance or integration tests as smoke tests, given that they exercise critical parts of the system. The name comes from electronics: you check each component in isolation and see whether it emits smoke when powered up.
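A deployment smoke test can be as simple as a script that probes each dependency and reports what failed. Here is a minimal sketch in Python; the service names and ports are hypothetical, and a real smoke test would also hit the application's main pages over HTTP:

```python
import socket

def check_service(name, host, port, timeout=2.0):
    """Try to open a TCP connection to a dependency and report the result."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (name, True, "up")
    except OSError as exc:
        return (name, False, f"cannot reach {host}:{port} ({exc})")

def smoke_test(services):
    """Run every check and collect diagnostics for anything that failed."""
    results = [check_service(*svc) for svc in services]
    return [f"{name}: {detail}" for name, ok, detail in results if not ok]
```

You would run this right after each deployment with entries like `("database", "db-host", 5432)`: an empty failure list means the environment looks healthy, and anything else is your diagnostic.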
Provide clear failure diagnostics
If something goes wrong, your smoke tests should give you some basic diagnostics explaining why your application is not working properly. In our current project at Globo.com, we are moving towards using Cucumber to write our smoke tests, giving us a set of meaningful and executable scripts, like the one below.
Feature: database configuration
  System should connect to the database

  Scenario: should connect to the database
    When I connect to "my_app" database as root
    Then it should contain tables "users, products"
For those who like using Nagios for monitoring infrastructure, Lindsay Holmwood wrote a program called cucumber-nagios which lets you write Cucumber tests whose output follows the format expected of Nagios plugins, so you can write BDD-style tests in Cucumber and monitor the results in Nagios.
Knowing quickly whether you are ready or not!
Clearly rapid feedback and safety are the two major benefits of introducing smoke tests as part of a release process.
In our project we implemented a deployment pipeline, so each new commit to the source repository is a potentially deployable version for any environment, even production. In the commit stage we run all the quick tests; as soon as they all pass, the acceptance-test stage is automatically triggered and the longer tests (integration and acceptance) are run. Once those have also passed, the application is automatically deployed into the dev environment. Getting a green at this stage means it has been successfully deployed and smoke tested. But there is still some exploratory testing to be performed before releasing this version into the staging environment; in our team this is done by the product owner together with a developer. As soon as they are ready to sign the story off, all they have to do is click the manual button, which in turn deploys the application into the qa1 (UAT) environment. If it goes green they can proceed; otherwise they pull the cord, because something is malfunctioning, as you can see in the picture.
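The gating described above can be sketched as a small piece of orchestration logic. This is a hypothetical illustration, not our actual pipeline configuration; only the stage names and ordering mirror the text:

```python
# Each stage runs only after the previous one passes; the UAT deployment
# additionally requires a manual trigger (the sign-off button).
STAGES = [
    ("commit", False),           # fast unit tests
    ("acceptance-test", False),  # slower integration/acceptance tests
    ("deploy-dev", False),       # deploy + smoke tests in dev
    ("deploy-qa1", True),        # manual gate before the UAT deployment
]

def run_pipeline(run_stage, approved=False):
    """Run stages in order, stopping at the first failure or an unapproved
    manual gate. `run_stage` maps a stage name to True (pass) or False."""
    completed = []
    for name, manual in STAGES:
        if manual and not approved:
            break  # wait for sign-off before deploying to qa1
        if not run_stage(name):
            break  # red build: stop the line
        completed.append(name)
    return completed
```

A commit that passes everything reaches dev automatically, and only the sign-off pushes it on to qa1.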
Don't let the application deceive you
It's quite frustrating when all you need is for the system to work as expected, because you are about to showcase it to your customers, and on the first thing you click all you see is a big, ugly error screen instead of the page they were expecting. Later on you find out it was due to a database outage. What an embarrassing situation, and one that could have been avoided by simply checking the smoke test diagnostics before showcasing.
One of the big challenges faced by distributed teams is getting over the communication gap created by the physical distances that separate them. We all know that communication, verbal or non-verbal, is fundamental for any project to be delivered successfully. When a team is good at communicating, it cultivates a stronger sense of collectivity and cooperation, getting faster feedback by sharing information (knowledge) and having valuable discussions.
But this is not quite the real world for distributed development teams. It's much harder, not to say almost impossible, to know exactly what is happening in each other's minds. What problems and technical challenges are they facing? What are they doing now? What points are they considering when designing a new feature? How important is it for them to write tests? Are they following the project's development standards?
Blame the "Bandwidth Limited" Communication Tools!
Software development teams, by the nature of their work, need to discuss and assess different ideas to solve complex problems. These ideas are very difficult to communicate using tools such as email or telephone, which the book calls "bandwidth limited", and those are exactly the ones available to most distributed teams. Face-to-face communication suits this kind of discussion much better, with the assistance of diagrams or sketches, not to mention body language. It gives you immediate feedback: just by looking into the other person's eyes you can tell whether they understand.
* Extracted from The Organization and Architecture of Innovation: Managing the Flow of Technology (with some modifications).
And as you can't always minimize distances to allow verbal communication, you have to look for other ways, and maximizing non-verbal communication is definitely a road worth going down.
Some Bad Outcomes
Poor Code Quality
- Code Duplication
- Reinventing the wheel
- Code For The Others (and for yourself)
- Broken Builds
It's quite common. For example, a developer wants to load an XML file as a String so that he can perform some assertions over the result. He implements something like a FileLoader class. What he doesn't know is that another developer has already implemented a class with this behaviour.
This is partially caused by lack of communication and partially a matter of the programmer's discipline. When adding a new library to the project, the team should discuss it and weigh the benefits of using it. Before adding the XML parsing library you're used to, a quick chat with the team will tell you whether another parsing library is already in use. Maybe someone could walk you through it. But it is your responsibility to know how to use it afterwards.
When coding, you should always ask yourself whether your peers would be able to understand what you are producing. Better still, ask whether you would easily understand it yourself a couple of weeks from now. When coding, you are immersed in the context of what you need to do to deliver that functionality. This context gets lost after you finish, unless you share it with the others or document it. There are some good materials out there that show you how to write clean and readable code.
In a distributed team, a broken build doesn't just affect the people in your room; it also affects people in rooms in other cities. So reverting a broken build should be considered, especially when you have a slow build, or the committer will get himself into big trouble. Imagine a build that takes about 30 minutes, and someone commits something broken. Even if he fixes it really quickly, it may still take an hour before the other team is able to commit its changes, and consequently an hour and a half of productivity is lost in the other cities. It's all about communication: the quicker the build, the quicker the feedback. A fast and successful build is mandatory!
Fear of Refactoring
Poor code quality results in fear of refactoring. Who hasn't been in a situation, working on a tightly coupled system, where it was quite hard to do any refactoring? Any attempt would propagate changes deep into the source code, and you'd end up yak shaving, getting nowhere.
Absence of Trust
I see this one as a result of the two problems I mentioned above. When a team has a history of going off the tracks while trying to comply with code standards, precautionary measures are generally created to avoid the worst.
I've seen a case where a pair assigned to implement a story, already close to completing the development, realised that another pair was also working on it. Don't ask me why!
I've also seen people creating triggers on the version control system so that for each commit from one team, the other received an email with all the commit information. On one hand this is good, because you can easily identify cowboy committers who don't write tests. But it was also used to check whether the code was acceptable, and to revert it if not!
My Current Experience
The team I'm currently working with is facing some of these problems, and during the whole of last week, when I was on the other side of the fence visiting the other part of our team in Tasmania, this became even more apparent. Although we were having daily stand-up meetings, I felt like I was missing something, especially because there was another team in Melbourne joining us and I still didn't know how it was going to work out. Chatting with my friend Mark Needham about it, he recommended a book called The Organization and Architecture of Innovation: Managing the Flow of Technology, which has a chapter dedicated exclusively to this point, and from which I could probably get some ideas on how to overcome the problem.
It seems obvious that an organization that wants its technical staff members to communicate needs to ensure the distances among them are minimized. Unfortunately, the traditional and most common form of office configuration does just the opposite. Not to mention when they are in separated buildings.
The quote above, also extracted from the book, doesn't say anything new, and that's exactly one of the issues we wanted to fix. Now, with three teams, we agreed that we would need them communicating face-to-face more often. So the rule is that every week at least one person from each team should visit a different one. Apart from that, we are continuing with our daily stand-up meetings, each team separately, followed by another daily meeting between teams (in the Scrum world called Scrum of Scrums). This one involves, by default, only the iteration manager and the tech lead, but everyone else is welcome to attend.
We also had to put more effort into improving non-verbal communication, since distributed teams rely on it more. With this separation, teams have to be even stricter about what they permit in the codebase. We introduced development tools such as Checkstyle and Compile With Walls to enforce this. Checkstyle acts as a hammer on misbehaving committers, and Compile With Walls ensures that the project structure is being respected. Sometimes quite good threads (over IM or email) are started by people trying to understand why a Checkstyle rule has failed.
(Thanks to Tom Czarniecki for helping me with this one.)
Last week I attended the Lean Thinking And Practices For IT Leaders workshop organised by ThoughtWorks. Mary and Tom Poppendieck were there, along with my colleague Jason Yip and two consultants from KM&T. One of the things I really liked about it was that it wasn't driven only by presentations but also by a lot of practical exercises, so we could get a better feeling for the benefits of applying this thinking and these practices. One of the exercises we did was the Go-Kart game.
How does it work?
Two teams are created (alpha and beta), and each one has to split into five groups with the given responsibilities: disassembly, transportation, assembly, observation and time-keeping. They are given the task of completely disassembling, transporting and re-assembling a Go-Kart as quickly as possible, in a safe manner, while the observers take notes about problem points. The whole process is done twice, so that you can run it once, analyse the process used based on the feedback provided by the observers, and think of ways to improve it before running it a second time.
In our first attempt, all we knew was that we had to split the team into five groups. We had no idea we would need a detailed process; we just wanted to do all the phases as fast as possible. Vikky, our team leader, proposed creating a manual with the detailed steps needed to assemble the kart, to be used by the assembly team. And that's what we did!
Planning time: 10 minutes
Disassembling time: 5 minutes
Assembling time: 12 minutes
Total time: 14 minutes 20 seconds
Quality of delivered product: OK
Problem Points (Gathered by observers)
- The team took seven minutes to get organised and start doing something.
- No leadership nomination. Vikky, one of the team members, had to nominate herself as team leader.
- The disassembly group didn't notice differences in the washers and bolts, causing uncertainty and wasted time in the assembly group.
- There was a bottleneck in the transportation of parts from one station to another. No one in the disassembly group was assigned to hand parts to the transporter, who ended up holding them and stopping the process flow.
- The components needed to assemble specific parts of the car were not delivered together, making the assembly group wait for the remaining ones.
- Some members of the assembly group were in a rush to finish and ignored the manual, resulting in some mistakes.
Before starting the second attempt we got together to discuss the problem points, coming up with some ideas of improvements. Here they are:
- We nominated people on both disassembly and assembly groups to be in charge of handing and picking up parts from the transporter.
- We decided to hand the parts related to each other in chunks, so that they could be assembled straight away, eliminating the time wasted waiting for remaining parts.
- We nominated specialists for roles such as assembling the wheels, etc.
- We added one more member to the transportation group, to get rid of the bottleneck.
Instead of spending a long time planning, we did it the agile way: highlighting only the things we knew at the time, very quickly running through them, spiking and checking whether we were actually carrying out the improvements, before the official attempt. We found some problems, adjusted to them and immediately got organised for the second attempt.
Planning time: 10 minutes
Disassembling time: 1 minute 50 seconds
Assembling time: 2 minutes 33 seconds
Total time: 3 minutes 45 seconds
Quality of delivered product: OK
Click here to see some photos of our team during the exercise.
Lean advocates that you should pursue perfection when improving your process, aiming to reduce effort, time, space, cost and mistakes, and I learnt that this applies to any organisation of any size. In this game, collaboration, self-organisation and rapid feedback contributed a lot to our improvement, helping us eliminate waste.
So, what could you do for your organisation?
Take a step back, look at the big picture of how things work in your company, and ask yourself questions such as: How do we deliver? Does it take longer to test and deploy our system than to develop it? Who do we depend on to put the system into production? What is causing a bottleneck? What could I do to change this scenario? Answer these questions (or others you make up) and think of improvements.
One day, while reading Esther Derby's book, preparing for a retrospective session, I came across a great analogy between retrospective and development life-cycle:
While continuous builds, automated unit tests, and frequent demonstrations of working code are all ways to focus attention on the product and allow the team to make adjustments, retrospectives focus attention on how the team does their work and interacts.
Indeed it helps people improve practices and focus on teamwork. That's why it is one of my favorite meetings.
The agile software development practice I like the most, and at the same time the one I find most difficult, is pair programming. Each individual has his or her own way of working, and characteristics such as motivation, engagement, habits, open-mindedness and coding/design style vary a lot from person to person. Sometimes striking a balance between these differences is quite hard. I am still not an expert in pair programming coaching, but I've been learning a lot on my current assignment.
And from this experience, it seems that clients are definitely more involved and amused when pairing follows the ping-pong pattern.
Ping Pong Pattern
In this pattern, developer 1 of a pair implements a test for a given feature and sees it fail, then passes the keyboard to developer 2, who makes the test pass, does some refactoring on the code and implements another test, passing the keyboard back to developer 1 to do the same. This continues until the feature is done.
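The rhythm looks like this in practice. The snippet below is a hypothetical illustration in Python (the `cart_total` function and its tests are invented for the purpose); each comment marks whose turn it was at the keyboard:

```python
# Developer 1: write a failing test, then hand over the keyboard.
def test_empty_cart_total_is_zero():
    assert cart_total([]) == 0

# Developer 2: make it pass with the simplest thing that works,
# refactor, then write the next failing test and hand back.
def cart_total(items):
    # items is a list of (price, quantity) pairs
    return sum(price * quantity for price, quantity in items)

def test_total_sums_price_times_quantity():
    assert cart_total([(10, 2), (5, 1)]) == 25

# Developer 1: make this one pass, refactor, write another test...
```

Each handover is triggered by a red test, which keeps both developers at the keyboard at a steady cadence.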
Why Do We Like It
- Challenge - Each time a developer writes a test for you to make pass, it feels like a challenge; you do it and write another one, challenging him back.
- Dynamics - The worst thing is a developer who hogs the keyboard, making you feel useless. Ping-pong pairing makes you swap the keyboard more frequently.
- Engagement - Developers are much more engaged because they are constantly coding, not only observing.
- Fun - It is so much fun when you have all the above items together!
These are tasks or features representing something that needs to be fixed because it poses a risk to the system in production (or about to go there). Generally they represent acceptance criteria that were not defined during development. They should have higher priority than the other stories to be implemented.
Last Wednesday, 5th of November, we ran our first Coding Dojo session at the Sydney office. We had a reasonable number of attendees, and the experience was fantastic, although we still have some points to improve.
The idea originally came from my friend and flatmate Mark Needham. Since we moved into our new place, he had been suggesting we get together every week to solve CodeKatas at home, exploring a different language. We decided it would be more interesting to broaden the idea, and organised a session at the Community College in the ThoughtWorks office.
How we ran it:
There were six people attending, so we decided to split it into three pairs, each with their own solution, rotating every ten minutes.
We had three design discussion breaks: one at the beginning, one in the middle and another at the end of the session. Since the focus was on object-oriented design, we chose to implement the bowling game, extracted from Uncle Bob Martin's book. We followed the Object Calisthenics rules, which are:
- Use only one level of indentation per method
- Don't use the else keyword
- Wrap all primitives and strings (strong types)
- Use only one dot per line
- Don't abbreviate
- Keep all entities small
- Don't use any classes with more than two instance variables
- Use first-class collections
- Don't use any getters/setters/properties
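To give a flavour of what these rules look like in code, here is a hypothetical Python sketch of a heavily simplified bowling score (no spare or strike bonuses, so it's not the full kata): the primitive roll value is wrapped, the collection lives in its own first-class class, and objects are told what to do rather than exposing getters.

```python
class Pins:
    """Wraps the primitive roll value (rule: wrap all primitives)."""
    def __init__(self, knocked_down):
        if not 0 <= knocked_down <= 10:
            raise ValueError("a roll knocks down between 0 and 10 pins")
        self._count = knocked_down

    def add_to(self, total):
        # Tell, don't ask: no getter exposing _count.
        return total + self._count

class Rolls:
    """First-class collection: the list of rolls gets its own class."""
    def __init__(self):
        self._rolls = []

    def record(self, pins):
        self._rolls.append(pins)

    def score(self):
        total = 0
        for pins in self._rolls:
            total = pins.add_to(total)
        return total
```

Notice there is no `else`, only one level of indentation per method, no getters, and each class has at most one instance variable.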
What was cool:
Apart from being an amusing experience, it was quite interesting to see the different approaches people take to solve the same problem: the design, the way they write tests, the code style. The pair swapping was also a nice experience. It was gratifying to pair with ThoughtWorkers other than the ones on my current project, like David Cameron and Nick Carroll.
For the next session, I would like to experiment with another approach.
Restrict the number of participants to between seven and ten developers. Instead of splitting them into as many pairs as possible, have everyone seated around a table with only one pair working on the solution while the others watch through a projector. The watchers are free to help whenever they want, offering suggestions and ideas for the design, the algorithm, and so on. The developers pairing would be swapped every ten minutes with other participants. Although the number of developers participating is restricted, anyone is welcome to attend as a watcher.
I reckon we would be much more productive this way, with everyone working on the same thing, centralizing the focus and learning even more from other developers.
One of the most difficult tasks for consultants is influencing business people to embrace and support test-driven development. They seem to "understand" the values and "agree" with them, but when it comes to putting them into practice the picture is generally a bit different. By putting into practice, I mean sticking with it steadily, even when dealing with unexpected situations. A typical one is a project with delivery delays, a tight deadline and invariant scope. In my experience, when such a situation happens, the first decision made is to cut test development and sacrifice code quality in order to deliver faster. No matter how hard you try to revert it by showing the bad outcomes of this decision, they simply ignore them and take the risks, just because there are no concrete risks other than not delivering the software.
Not having a way to show managers that not writing tests, at least for the most critical functionality, is indeed a concrete risk has always puzzled me. One day, while talking to Kristan Vingrys about this, he showed me a risk matrix he has been using to help him influence people to understand the value of tests. See the image:
Basically, it measures the rate of test coverage required and tells you what types of tests (unit, functional) should be implemented, based on the impact of the functionality on the business stakeholders and the amount of new code needed to implement it (you may be re-implementing it from existing code). The greater the impact and the more new code the feature needs, the more testing it should have.
The ideal approach would be, for each implemented feature, for the team to evaluate and decide how much test effort they want to put into the story. The best time to do this is during the iteration planning meeting, so that the final output is both the iteration goals/features and the minimum test effort for each of them.
And as generally all features have at least some significant business value, you will always have the guarantee of having these tested.
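A matrix like this is easy to encode so the team can apply it consistently during planning. The sketch below is a hypothetical encoding; the axes and thresholds are illustrative, not Kristan's actual figures:

```python
def required_testing(business_impact, new_code_ratio):
    """Map a feature's risk to a test effort recommendation.

    business_impact: 'low', 'medium' or 'high' impact on stakeholders.
    new_code_ratio: fraction of the feature that is brand-new code (0..1).
    """
    # High impact AND mostly new code: test everything.
    if business_impact == "high" and new_code_ratio > 0.5:
        return {"unit": "full", "functional": "full"}
    # High on one axis only: full unit coverage, functional happy paths.
    if business_impact == "high" or new_code_ratio > 0.5:
        return {"unit": "full", "functional": "happy paths"}
    # Low risk: cover the critical paths and rely on smoke tests.
    return {"unit": "critical paths", "functional": "smoke only"}
```

During iteration planning the team could run each story through this function and record the result next to the iteration goal.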
This Monday and Tuesday, as I mentioned in the previous post, I attended the JAOO Conference here in Sydney. The event was fantastic, I had the opportunity to meet some great developers that I previously knew from the web. ThoughtWorks had a stand there, where people could catch up during intervals and play some Wii! See photos here!
The first day:
Erik Meijer gave a great talk on functional programming entitled "Why Functional Programming (Still) Matters"; his advocacy of this style of programming was interesting and funny at the same time. For him, any language with side-effects is not functional.
Rebecca Wirfs-Brock talked about "Lessons Learned from Architecture Reviews", explaining the real purpose of these reviews. The common behaviour in most projects is people proposing an architecture made of trendy technologies as a candidate to "address" an issue, just because people say it's nice. She argues that instead you should think about the outcomes that would show the issue being addressed, evaluate whether they can be achieved with the chosen architecture, and only then present it to the client. Pretty obvious, but generally people don't put it into practice.
Martin Fowler talked about "Patterns Of Enterprise Application Architecture". It's impressive how, even when talking about a subject that is not new (for those who read the book), he can still come up with useful information through his insights. One example was the case of when to use Transactional Script and when to use Domain Model.
Martine Devos presented "Agile Coaching"; she was also the Certified Scrum Master instructor in the training I attended. What I thought would be a valuable discussion about coaching and organizational transformation activities, the type of work commonly requested by companies nowadays, turned into an explanation of randomly selected agile concepts and practices. A wealth of information but a lack of conclusion!
Then I attended the "Enterprise Systems Panel", with Jim Webber, Patrick Linskey, Thilo Frotscher, Martin Fowler, Ben Alex (in the photo) and Dave Thomas (as the moderator). There were lots of discussions on architectural issues such as transaction propagation, distributed architecture, etc.
The day finished with an awesome keynote of Erik Dörnenburg and Martin Fowler about Simplicity in Design.
The second day:
No comments required for Robert Martin! He has always inspired me, and watching him live was one of the main reasons I wanted to attend JAOO. He gave two good-humored presentations about "Clean Code". If you don't know Uncle Bob, go and search for his books!
Steve Vinoski gave a great talk on "Building RESTful Services with Erlang and Yaws". I left this presentation having decided that Erlang is gonna be my language to learn this year. It has a feature called pattern matching that is quite interesting; I'm probably gonna post something about it in the near future. Regarding Yaws (Yet Another Web Server), written in Erlang: for those building highly concurrent systems who are considering Apache as the web server, one data point: in a benchmark, Yaws was still handling 80k concurrent connections while Apache died at around 4k. I don't know about you, but that's enough for me to take a look at it!
Gregor Hohpe talked about Google's GData application. I don't know if I'm the only one feeling this, but Google's presentations nowadays seem quite similar to those given by Adobe, BEA, etc.: teaching you how to use their products in order to make them richer!
Michael Feathers talked about "Working Effectively with Legacy Code"; I've read part of his book and found it quite useful. During the presentation he brought up some examples of really bad code (coupled, with no tests) and, in real time, made the code testable through refactoring and wrote unit tests that passed against it. A nice presentation, and a cool guy as well!
My friend Erik Dörnenburg closed the event talking about "Software Quality", showing the two perspectives for measuring it: the external one, whether the software adds value to the customer, and the internal one, whether it is easy to maintain, understand and extend, and whether it was designed properly. He talked about tools to help you check code metrics such as duplication, coverage and testability, among others, some with fancy graphics. He explained the purpose of these tools and why you should keep checking the metrics: they help you measure tech debt and the effectiveness of training (in a management role), and they guide you to spot code that needs refactoring or improvement.
For those interested, you can download some slides here.
Next week (June 2-4) I'm gonna attend the JAOO Conference here in Sydney. ThoughtWorks is sponsoring the event and I'm very happy that they chose me to be one of the attendees. It's quite exciting to have the opportunity to catch up with professionals who have influenced me, such as Robert Martin, Rod Johnson, Martin Fowler, Jim Webber and Gregor Hohpe, and also with Erik Dörnenburg, who has been away from the Sydney office for a while.
I will also be attending the Scrum Master training, which is an extension of the JAOO Conference. Martine Devos is going to run it and I'm really excited, because I've wanted to attend it since I was in Brazil, and now I'm gonna be able to do it.
Will post updates of the Conference here soon!