Monday, September 27, 2010

As A User I Want To...

User Stories, especially when used in a very formal way, are much criticized by my colleagues. One point is that the user (in any role) most probably does not want to use your system but just to get things over with. So, whenever someone says "As a (role) I want to...", it has become somewhat of a meme to reply:


As a user I want to drink piña colada in a swimming pool

I assumed this was true, but I wanted to be sure, so I did some empirical research. It is indeed so.

Photo: Heikki Pora

Tuesday, September 14, 2010

Rituals of Hypocritical Scrum

The Certified Scrum Master training does not make anyone a great Scrum Master. News at 11. But one thing it does make is better agile team members. I like to suggest CSM training to each team member I work with. The two days of Scrum training usually do not give the attendees much that is new about the roles, artifacts or ceremonies of Scrum. But it still makes a difference.

As the Scrum Alliance introduction to Scrum says, "The Scrum framework is deceptively simple." In fact it is quite possible to start doing a sort of hypocritical Scrum with all the ceremonies in place: plannings, dailies, sprint reviews and perhaps even retrospectives. And fail. By hypocritical I mean Scrum that never touches the core of Scrum: its principles. The ceremonies of Scrum become mere empty rituals without some understanding of the principles beneath them.

The principles of Scrum are in the Agile Manifesto and in the Toyota Product Development System. On a higher level they lie in embracing change and concentrating on producing value. On the method level they lie in using pull systems and JIT, removing waste and continuously improving the system.

Someone just tweeted that when you write code you should know at least one level of abstraction below the level at which you operate. For instance, when writing C it is good to understand the call stack, dynamic and static linking to libraries, and compile-time optimizations. The same goes for Scrum. It is not enough to know the secret handshake. You must understand the things that make your process work. And the same goes for Kanban, XP and other agile process models.

The CSM training does not teach you all the principles of Scrum. But it does one thing that is very important: it demonstrates some of those principles at work with some very simple exercises, and thus usually gives some kind of incurable agile brain tumor to most of the participants.

I often find myself in the empty rituals mode. I envy colleagues like Kaira, who seems to be able to work on the principle level all the time in his daily work. I just have to train my nose to smell my own odors to keep up.

Thursday, September 2, 2010

Our Continuous Delivery Process

A colleague of mine asked if I and a teammate from my previous project could organize a small openspaceish tech talk on continuous delivery after our company's monthly meeting. To advertise it I drew a diagram of our continuous delivery process:


  1. Teamsters pick up a new feature to implement from the features on our sprint board.
  2. They create a local or remote branch for the feature.
  3. They edit the text files in the source repository. This is also known as "programming".
  4. When the feature is ready the branch is pushed to the master branch in our git master repository.
  5. Hudson builds and tests a new version on every commit to master, including all integration tests and WebDriver-based browser tests. If the build is successful, a new version of our application is deployed to a repository.
  6. Hudson is also used to deploy the latest successfully built version to production very early in the morning.
In my haste, and looking at the process through tool glasses (mainly git glasses), I forgot our step #3.5, which contains all the human parts of our Definition of Done occurring between stages three and four: GUI reviews, exploratory testing, a mini-demo to the customer and customer acceptance of the feature.

So, compared to the well-described and indeed very good git flow branching model, we have a much simpler model with just master plus feature and fix branches, and no develop or release branches, as they have no meaning for us. This is mainly because we only have one installation and one release of the application running at a time, and we try to keep the number of features under development to one or two at any given time.
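The branchy part of the flow can be sketched with plain git commands (names are illustrative, and a throwaway repo stands in for our real master repository):

```shell
# Throwaway repo standing in for our git master repository
cd "$(mktemp -d)"
git init -q && git config user.email dev@example.com && git config user.name dev
echo 'v1' > app.txt && git add app.txt && git commit -qm 'initial version'

git checkout -qb feature/shopping-cart            # step 2: a branch per feature
echo 'v2' >> app.txt && git commit -qam 'edit'    # step 3: "programming"

git checkout -q - && git merge -q feature/shopping-cart  # step 4: back to master
# step 5: a push would now trigger the Hudson build on master
```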

Thursday, March 4, 2010

My First ATDD Project?

I remember how around the year 2000 we were looking for a better way to make software in our small dotcom-bubble startup. Somehow the traditional project model did not work. I do not think we knew the word "waterfall" then. First we found the Unified Software Process and its commercial friend RUP. The idea in RUP that design, development and testing are not sequential phases but tasks you do repeatedly in a cycle during a project was just super. Of course the bureaucracy of zillions of artifacts and roles and the model-driven madness soon turned us away from it to look for something else. What we found was XP.

And XP did sound like a great idea! In RUP the actual software development was considered a triviality that just happened between the pile of UML diagrams and the test plans. In XP the programming was the centerpiece, with all its great practices like unit test first, pair programming, refactoring, collective code ownership and so on. And we tried to follow them all quite strictly. Unit test first rewired my brain forever. So did pair programming and the thoughts about simplicity.

There were not many tools around. Refactoring support came in some version of Eclipse and I stopped using emacs. Builds we made with make. Continuous integration was done manually by committing a task to a branch in cvs and merging that to the trunk at the integration machine. The radiator was a printout of the number of unit tests in the code base, in a font so large that it filled the whole sheet. But we did all those things. And we were good at it. At least a lot better than before.

The thing we did not manage to do well was measuring velocity, and the thing we did not manage to do at all was acceptance testing. It just did not happen.

Our bubble burst and I went on to a company an order of magnitude larger, with its own waterfall method for software development. I was in the trenches, and as the management could not see me I secretly did TDD and CI (manually) until I got bored of it and started my work at Reaktor, which already in 2005 was full of very agile colleagues doing Scrum and all that TDD and whatever. Still, none of the projects I was in really had any process for doing acceptance tests. Other projects here possibly did, but the ones I was in certainly did not.

There were some system integration projects with no GUI, where the integration tests were more or less the acceptance tests. And in the GUI projects tools like Fit did not look so suitable, as our problem was not so much testing rules as testing GUI interaction. We might have had occasional ad hoc or even some more careful testing by the customer at the end of each sprint, but there was no process, automation or prevention of regression built into our way of working on this level. Surely there were a lot of unit tests, but as you know they are not for acceptance or even for testing purposes but design tools.

During my five years in my current job, the role of functional design has increased in our projects thanks to our world-class specialists in that domain and the simulation-based GUIDe method. With GUIDe we get better product backlogs and realistic usage scenarios to test the software with: cases that have been used for simulation in the GUI design process.

The number of test specialists has also increased at Reaktor during my stay, and they can support more projects, if not by joining them all then at least by doing a lot of valuable knowledge transfer. Their ATDD third-degree interrogation method, combined with well-designed GUIs and realistic usage scenarios, has for the first time given me confidence that we can really do acceptance tests and, even more than that, that we can do ATDD: acceptance tests in the test-first way (secret: it is easier to do test first than to add the tests later).

We also found a tool set that we feel quite comfortable with. Selenium 2 lets us write the tests in a way that does not make them break if we change the implementation from javascript to java or to actually anything producing a web GUI. JDave lets us use BDD vocabulary in the tests. Hudson runs the tests nicely with the different browsers on our browser CI machine. The project is young and the implementation has changed drastically, but the acceptance tests have required very little or no maintenance after such modifications. Au contraire, the tests have saved us several times by finding that we broke something in the system while doing our heavy lifting.

So, now, ten years later, I find myself in my first ATDD project. And I am happy. And no way, I am not moving away.

Monday, February 22, 2010

Writing Selenium 2 Acceptance Tests Using Behaviour Driven Vocabulary

Our project is using Selenium 2 to automate acceptance tests on our web app. We try to write the tests browser- and implementation-agnostic. This is because we want to use the same test code to run the tests with Internet Explorer, Firefox and Chrome, and because parts of our application are written in jQuery and other parts using Wicket, and we want the tests to work even if we change the implementation of some part for any reason.

We first tried to write the actual tests with JUnit, but somehow the TDD vocabulary of JUnit was not satisfactory when we had been using the JDave BDD framework for our BDD specifications.

In JDave the specifications for one type are put into one class using the naming convention TypeSpec:
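Roughly, assuming the JDave 1.x API (ShoppingCart and its spec are illustrative names, matching the natural-language reading further below):

```java
import jdave.Specification;
import jdave.junit4.JDaveRunner;
import org.junit.runner.RunWith;

// All specifications for ShoppingCart live in ShoppingCartSpec
@RunWith(JDaveRunner.class)
public class ShoppingCartSpec extends Specification<ShoppingCart> {
}
```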



Then you create a context, representing some state of an object of the type being specified, as an inner class, and write specifications into that:
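A sketch of such a context, again from memory of the JDave 1.x API (the ShoppingCart type is illustrative):

```java
@RunWith(JDaveRunner.class)
public class ShoppingCartSpec extends Specification<ShoppingCart> {

    // One context per state of the object being specified
    public class EmptyShoppingCart {
        private ShoppingCart cart;

        public ShoppingCart create() {   // JDave calls create() before each behaviour method
            cart = new ShoppingCart();
            return cart;
        }

        public void isEmpty() {          // reads: "Empty shopping cart is empty"
            specify(cart.isEmpty(), does.equal(true));
        }
    }
}
```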



And in natural language you read the specifications like this:

Shopping cart specification:
Empty shopping cart:
- is empty
...

We decided that in a behaviourish acceptance test language the context is the state of the browser, and we wanted to write something like the example above. So we created a class called WebSpecification. It also has an inner class, WebSpecificationContext, that is extended to create the contexts:
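A hypothetical sketch of the shape (the real class surely differs in details; startUrl() is an invented helper):

```java
import jdave.Specification;
import org.openqa.selenium.WebDriver;

// Hypothetical sketch: the browser is the object being specified
public abstract class WebSpecification extends Specification<WebDriver> {

    // Extended by concrete contexts, e.g. "FrontPage" or "EmptyShoppingCart"
    public abstract class WebSpecificationContext {
        public WebDriver create() {
            WebDriver driver = getWebDriver();
            driver.get(startUrl());   // bring the browser to the state this context represents
            return driver;
        }

        // Assumption: each context knows the page where its state begins
        protected abstract String startUrl();
    }
}
```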



The "specification class" is the web browser, which is represented in Selenium 2 by the interface WebDriver. The getWebDriver() method is used to get an instance of the currently active driver:
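A minimal holder for the active driver, assuming the runner installs it into a static field before the specification runs (field and setter names are assumptions):

```java
// Inside WebSpecification; one active driver per run
private static WebDriver currentDriver;

public static void setWebDriver(WebDriver driver) {  // called by the test runner
    currentDriver = driver;
}

public static WebDriver getWebDriver() {
    return currentDriver;
}
```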



The driver is set by our version of the JDaveRunner, Selenium2JDaveRunner, which chooses the driver class based on a system property and sets it on the specification before running it:
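A rough sketch of the idea; the constructor signature, the property name and the JDaveRunner internals are all assumptions, not the real code:

```java
import jdave.junit4.JDaveRunner;
import org.junit.runner.notification.RunNotifier;
import org.openqa.selenium.WebDriver;

public class Selenium2JDaveRunner extends JDaveRunner {

    public Selenium2JDaveRunner(Class<? extends jdave.Specification<?>> spec) throws Exception {
        super(spec);
    }

    @Override
    public void run(RunNotifier notifier) {
        // e.g. -Dwebdriver.class=org.openqa.selenium.ie.InternetExplorerDriver
        String driverClass = System.getProperty("webdriver.class",
                "org.openqa.selenium.firefox.FirefoxDriver");
        try {
            WebDriver driver = (WebDriver) Class.forName(driverClass).newInstance();
            WebSpecification.setWebDriver(driver);   // hypothetical setter on the spec base class
            super.run(notifier);
            driver.quit();
        } catch (Exception e) {
            throw new RuntimeException("Could not start " + driverClass, e);
        }
    }
}
```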

Thursday, February 18, 2010

Setting up Selenium 2.0 test engine with Hudson and Maven

In our project we as a team wanted to implement a strict ATDD practice where we write functional test cases for features before implementing them. We also wanted to write some of these tests as automated browser-based tests and decided to use Selenium 2.0 (currently in alpha).

Our Continuous Integration server is Hudson, but it runs on Linux so it cannot run the Internet Explorer driver. So we decided to set up another instance of Hudson on a Windows box to run the tests with IE, Firefox and Chrome. After a very easy Hudson installation (Java Web Start + setting it to run as a Windows service from the Hudson web GUI with one click!), a lengthy maven configuration and exploiting one Windows vulnerability, we now have a setup that runs the tests in three browsers and publishes the results to our main Hudson CI server.

Here is a short summary of the Hudson + maven setup we have:
  • Hudson has separate jobs for each browser. The WebDriver class name is passed in MAVEN_OPTS using -D to our custom test runner class, which runs the functional tests with the specified driver.
  • Maven has a separate profile called functional-test used to run the tests.
  • Functional tests are run in the integration-test phase. Integration tests are not run in the functional-test profile.
  • The Jetty plugin is started in the pre-integration-test phase of the maven build to run our application.
  • Hudson runs the jobs one at a time because we have not yet (Note to self: //TODO) configured different Jetty ports for different jobs.
  • Maven arguments to run the individual jobs, build the correct reports and fail if tests fail (not the default in failsafe...): "-P functional-test clean integration-test surefire-report:report failsafe:verify".
Below are some relevant excerpts from our pom.xml.

First the profile for functional tests. We have a naming convention for them, "*FunctionalTest". It does not overlap with our unit tests, because those are named in the BDD convention "*Spec" (we use JDave instead of JUnit). Integration tests are named "*IntegrationTest".
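A hypothetical reconstruction of the shape of that profile; the jetty goals, include patterns and other details are assumptions:

```xml
<!-- Hypothetical sketch, not our literal pom.xml -->
<profile>
  <id>functional-test</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>maven-jetty-plugin</artifactId>
        <executions>
          <execution>
            <id>start-jetty</id>
            <phase>pre-integration-test</phase>
            <goals><goal>run</goal></goals>
            <configuration>
              <daemon>true</daemon>
            </configuration>
          </execution>
          <execution>
            <id>stop-jetty</id>
            <phase>post-integration-test</phase>
            <goals><goal>stop</goal></goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <configuration>
          <includes>
            <include>**/*FunctionalTest.java</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```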



Integration test plugin (failsafe) config in the build section of the pom:
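Again a hypothetical reconstruction of the shape; note that per the argument list above, failsafe:verify is invoked from the command line rather than bound here:

```xml
<!-- Hypothetical sketch, not our literal pom.xml -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/*IntegrationTest.java</include>
    </includes>
  </configuration>
  <executions>
    <execution>
      <goals><goal>integration-test</goal></goals>
    </execution>
  </executions>
</plugin>
```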

Wednesday, February 17, 2010

Selenium 2.0, Chrome, Hudson, Maven and Windows

I finally got our functional test server to run our functional tests also with the ChromeDriver of Selenium 2.0 (WebDriver). Our functional test server is a Windows machine, because that is required to run Internet Explorer and our Selenium 2.0 tests with it. This has caused quite a few problems, and the latest one was that our Hudson continuous integration server did not want to run the Chrome tests.

After a lot of blood, sweat and tears I figured out the problem: the Chrome browser seems to install only for the user that ran the installation program. The binary just does not start if you run it under another account. The Hudson CI is run as a Windows service, and Windows services run under a special system account. You can change the account to another one with the Windows admin tools, but then the service cannot interact with the desktop. Like start a browser. Which is quite a showstopper when trying to do browser-based testing.

So I would just have to install the Chrome browser using the system account. Easier said than done: one cannot log in as the system account. Some very dark Windows magic: I started cmd.exe and then Firefox under the system account using the "at" command, downloaded the Chrome installation program with the Firefox instance just started, and then installed another instance of Chrome as the system user. I also copied the installation from C:\Documents and Settings\blahblahblah to C:\GoogleSys to have it in a directory without a space in the name - a major menace when passing paths in environment variables like MAVEN_OPTS.

This just so worked, and now we have Hudson running a maven build running Selenium 2.0 functional tests with Internet Explorer, Firefox and Chrome.