Monday, November 21, 2011

Business Process Orchestration - The Original Soundtrack


The Mad Men had their presentation. The CIO was impressed and decided to buy the Orchestration System™. After all, it promised to remove programming tasks from the organization, leaving mere process configuration to do whenever changes were required. And the business changes constantly. Everyone knows that. The deal was signed and a license fee resembling a phone number was paid to the vendor. The system was installed from the DVD. The Business Managers configured the business processes. Everything was ready to just switch it all to production.

But the slideware promises did not quite hold. The orchestration system was just a tabula rasa: an empty development environment and a runtime engine. It was also a closed system that only a few people in the world knew how to master, unlike standard open technologies such as the ones behind REST interfaces.

The configuration task that was the responsibility of the Business Managers turned out to be a programming task they had no skills to handle. And it was a programming task to be accomplished with a visual programming language inferior not only to any contemporary development platform but also to anything from the 1970s. A development platform offering no mechanisms for proper composition into modules or layers of abstraction. And therefore there were no third-party libraries of any kind whatsoever either.

The source code was saved as XML and was thus incomprehensible to any version control system around. All testing had to be done manually in a full-blown live environment, as there was no support for unit testing of components. Remember: the system did not support any composition, so there were no units to test. No automated tests also means no continuous integration.

But now it is the date on which we have to go to production. So we can read from the Project Manager's Gantt chart. It is also the milestone for the fat bonus tied to the project. So we turn on the system. And this is what we hear: Business Process Orchestration - The Original Soundtrack.


I hope that whenever anyone hears the words "Business Process Orchestration", this "Business Process Orchestration - The Original Soundtrack" starts to play in their head. Many lives would be saved. Please help me accomplish this altruistic task by playing this soundtrack whenever someone mentions the O word!

Monday, September 27, 2010

As A User I Want To...

User Stories, especially if used in a very formal way, are much criticized by my colleagues. One point is that the user (in any role) most probably does not want to use your system but just to get it over with. So, whenever someone says "As a (role) I want to" it has become somewhat of a meme to say:


As a user I want to drink piña colada in a swimming pool

I assumed this was true, but I wanted to be sure, so I made some empirical studies. It is in fact so.

Photo: Heikki Pora

Tuesday, September 14, 2010

Rituals of Hypocritical Scrum

The Certified Scrum Master training does not make anyone a great Scrum Master. News at 11. But one thing it does make is better agile team members. I like to suggest CSM training to every team member I work with. The two days of Scrum training usually do not teach attendees much new about the roles, artifacts or ceremonies of Scrum. But it still makes a difference.

As the Scrum Alliance introduction to Scrum says, "The Scrum framework is deceptively simple." In fact it is quite possible to start doing a sort of hypocritical Scrum with all the ceremonies in place: plannings, dailies, sprint reviews and perhaps even retrospectives. And fail. By hypocritical I mean Scrum that never touches the core of Scrum: its principles. The ceremonies of Scrum become mere empty rituals without some understanding of the principles beneath them.

The principles of Scrum come from the Agile Manifesto and the Toyota Product Development System. On a higher level, they are about embracing change and concentrating on producing value. On the method level, they are about using pull systems and JIT, removing waste and continuously improving the system.

Someone just tweeted that when you write code you should know at least one level of abstraction below the level at which you operate. For instance, when writing C it is good to understand the call stack, dynamic and static linking to libraries, and compile-time optimizations. The same goes for Scrum. It is not enough to know the secret handshake. You must understand the things that can make your process work. And the same goes for Kanban, XP and other agile process models.

The CSM training does not teach you all the principles of Scrum. But it does one thing that is very important. It demonstrates some of those principles at work with some very simple exercises, and thus usually gives some kind of incurable agile brain tumor to most of the participants.

I often find myself in the empty rituals mode. I envy colleagues like Kaira, who seems to be able to work on the principles level all the time in his daily work. I just have to train my nose to smell my own odors to keep up.

Thursday, September 2, 2010

Our Continuous Delivery Process

A colleague of mine asked if I and a teammate of mine from my previous project could organize a small openspaceish tech talk on continuous delivery after our company's monthly meeting. To advertise it I drew a process diagram of our continuous delivery process:


  1. Teamsters pick up a new feature to implement from the features on our sprint board.
  2. They create a local or remote branch for the feature.
  3. They edit the text files in the source repository. This is also known as "programming".
  4. When the feature is ready the branch is pushed to the master branch in our git master repository.
  5. Hudson builds and tests a new version on every commit to master, including all integration tests and WebDriver-based browser tests. If the build is successful, a new version of our application is deployed to a repository.
  6. Hudson is also used to deploy the latest successfully built version to production very early in the morning.
In haste, and looking at the process through tool-glasses (mainly git-glasses), I forgot our step #3.5, which contains all the human process parts of our Definition of Done occurring between stages three and four: GUI reviews, exploratory testing, a mini-demo to the customer and customer acceptance of the feature.

So, compared to the well-described and indeed very good git flow branching model, we have a much simpler model with just master plus feature and fix branches, and no develop or release branches, as they have no meaning to us. This is mainly because we only have one installation and one release of the application running at a time, and we try to keep the number of features under development to one or two at any given time.
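The branching workflow described above can be sketched with plain git commands. This is a minimal illustration, not from the original post: the repository contents, branch name and commit messages are all made up.

```shell
# Sketch of the simple branching model: master plus one short-lived
# feature branch, no develop or release branches.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # ensure the default branch is named master
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add app.txt
git commit -qm "initial version"

# Step 2: create a branch for the feature.
git checkout -qb feature/search
# Step 3: edit the text files in the source repository ("programming").
echo "search" >> app.txt
git commit -qam "implement search"

# Step 4: the finished feature goes back to master; locally this is a merge.
git checkout -q master
git merge -q feature/search
git branch -qd feature/search

git log --oneline   # two commits, both on master
```

With only one installation running at a time, every merge to master is a candidate for the nightly deployment, which keeps the model this small.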

Thursday, March 4, 2010

My First ATDD Project?

I remember when, around the year 2000 or so, we were looking for a better way to make software in our small dotcom bubble startup. Somehow the traditional project model did not work. I do not think we knew the word "waterfall" then. First we found the Unified Software Development Process and its commercial friend RUP. The idea in RUP that design, development and testing are not sequential phases but tasks you do repeatedly in a cycle during a project was just super. Of course the bureaucracy of zillions of artifacts and roles and the model-driven madness soon turned us away from it to look for something else. What we found was XP.

And did XP sound just like a great idea! In RUP the actual software development was considered a triviality that just happened between the pile of UML diagrams and test plans. In XP the programming was the centerpiece with all its great practices like unit test first, pair programming, refactoring, collective code ownership and so on. And we tried to follow them all quite strictly. Unit test first rewired my brain forever. So did pair programming and the thoughts about simplicity.

There were not many tools around. Refactoring came in some version of Eclipse and I stopped using Emacs. We made builds with make. Continuous integration was done manually by committing a task to a branch in CVS and merging that to the trunk on the integration machine. Our radiator was a printout of the number of unit tests in the code base, in a font so large that it filled the whole sheet. But we did all those things. And we were good at it. At least a lot better than before.

The thing we did not manage to do well was measuring velocity, and the thing we did not manage to do at all was acceptance testing. It just did not happen.

Our bubble burst and I went on to a company an order of magnitude larger, with its own waterfall method for software development. I was in the trenches, and as the management could not see me, I secretly did TDD and CI (manually) until I got bored of it and started my work at Reaktor, which already in 2005 was full of very agile colleagues doing Scrum and all that TDD and whatever. Still, none of the projects I was in really had any process for doing acceptance tests. I guess other projects here possibly had, but the ones I was in certainly did not.

There were some system integration projects with no GUI, where the integration tests were more or less the acceptance tests. And in the GUI projects, tools like Fit did not look so suitable, as our problem was not so much in testing rules as in testing GUI interaction. We might have had occasional ad hoc or even some more careful testing by the customer at the end of each sprint, but there was no process, automation or prevention of regression built into our way of working on this level. Surely there were a lot of unit tests, but as you know, they are not for acceptance or even for testing purposes; they are design tools.

During the five years in my current job the role of functional design has increased in our projects thanks to our world class specialists in that domain and the simulation based GUIDe method. With GUIDe we get better product backlogs and realistic usage scenarios to test the software, cases that have been used for simulation in the GUI design process.

The number of test specialists has also increased in Reaktor during my stay, and they can support more projects, if not by joining them all then at least by doing a lot of valuable knowledge transfer. Their ATDD 3rd degree interrogation method, combined with well-designed GUIs and the realistic usage scenarios, has for the first time given me confidence that we can really do acceptance tests, and even more than that, we can do ATDD, acceptance tests in the test-first way (secret: it is easier to do test first than to add the tests later).

We also found a tool set that we feel quite comfortable with. Selenium 2 lets us write the tests in a way that does not make them break if we change the implementation from JavaScript to Java or to practically anything producing a web GUI. JDave lets us use BDD vocabulary in the tests. Hudson runs the tests nicely with the different browsers on our browser CI machine. The project is young and the implementation has changed drastically, but the acceptance tests have required very little or no maintenance after such modifications. Au contraire, the tests have saved us several times by finding that we had broken something in the system while doing our heavy lifting.

So now, ten years later, I find myself in my first ATDD project. And I am happy. And no way, I am not moving away.

Monday, February 22, 2010

Writing Selenium 2 Acceptance Tests Using Behaviour Driven Vocabulary

Our project is using Selenium 2 to automate acceptance tests on our web app. We try to write the tests browser- and implementation-agnostic. This is because we want to use the same test code to run the tests with Internet Explorer, Firefox and Chrome, and because parts of our application are written in jQuery and other parts using Wicket, and we want the tests to keep working even if we change the implementation of some part for any reason.

We first tried to write the actual tests with JUnit, but somehow the TDD vocabulary of JUnit was not satisfactory after we had been using the JDave BDD framework for our BDD specifications.

In JDave the specifications for one type are put to one class using the naming convention TypeSpec:
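The original snippet is not shown here; a minimal sketch of such a class, written from memory of JDave's API, might look like this (ShoppingCart is an illustrative type, not from the original post):

```java
import jdave.Specification;
import jdave.junit4.JDaveRunner;
import org.junit.runner.RunWith;

// A JDave specification for a hypothetical ShoppingCart class,
// named ShoppingCartSpec by the TypeSpec convention.
@RunWith(JDaveRunner.class)
public class ShoppingCartSpec extends Specification<ShoppingCart> {
}
```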



Then you create a context representing some state of an object of the type being specified as an inner class and write specifications to that:
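This snippet is also missing; a hedged sketch, assuming JDave's context convention of an inner class with a create() method, and the same hypothetical ShoppingCart:

```java
import jdave.Specification;
import jdave.junit4.JDaveRunner;
import org.junit.runner.RunWith;

@RunWith(JDaveRunner.class)
public class ShoppingCartSpec extends Specification<ShoppingCart> {

    // A context: a named state of the object under specification.
    public class EmptyShoppingCart {
        private ShoppingCart cart;

        // create() builds the object in the state this context represents.
        public ShoppingCart create() {
            cart = new ShoppingCart();
            return cart;
        }

        // A specification method; reads as "an empty shopping cart is empty".
        public void isEmpty() {
            specify(cart.isEmpty());
        }
    }
}
```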



And in natural language you read the specifications like this:

Shopping cart specification:
Empty shopping cart:
- is empty
...

We decided that in behaviorish acceptance test language the context is the state of the browser, and we wanted to write something like the above example. So we created a class called WebSpecification. It also has an inner class WebSpecificationContext that is extended to create the contexts:
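The original snippet is missing; the following is only a rough sketch of what such a base class might look like, with all details assumed rather than taken from the post:

```java
import jdave.Specification;
import org.openqa.selenium.WebDriver;

// Base class for web acceptance specifications. Each concrete spec
// extends WebSpecificationContext to describe one state of the browser.
public abstract class WebSpecification extends Specification<WebDriver> {

    public abstract class WebSpecificationContext {
        // Navigates the browser into the state this context represents
        // and returns the active driver.
        public abstract WebDriver create();
    }
}
```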



The "specification class" is the web browser, which is represented in Selenium 2 by the interface WebDriver. The getWebDriver() method is used to get an instance of the currently active driver:
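This snippet is missing as well; a sketch of how getWebDriver() might hand out a driver injected by the runner (the field and setter names are hypothetical, not from the post):

```java
import jdave.Specification;
import org.openqa.selenium.WebDriver;

public abstract class WebSpecification extends Specification<WebDriver> {
    private WebDriver webDriver;

    // Called by the custom runner before the specification is run.
    public void setWebDriver(WebDriver webDriver) {
        this.webDriver = webDriver;
    }

    // Hands the currently active driver to the specification code.
    public WebDriver getWebDriver() {
        return webDriver;
    }
}
```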



The driver is set by our version of the JDaveRunner, Selenium2JDaveRunner, which chooses the driver class based on a system property and sets it on the specification before running it: