In iterative software development methods like Scrum, people limit work in progress by selecting the amount of work they take on when an iteration starts. This works well when the selected work fits in the iteration, so the work gets completed. But what should you do when people consistently select too much work for the iteration, so that at the end of it there is a huge amount of work in progress and almost nothing in a state that could be called completed?
In this situation you can try to limit work in progress during the iteration itself: only a limited number of requirements may be worked on simultaneously. When the work-in-progress limit has been reached, a requirement already under work must be completed before a new one can be started. For example, we can set a limit of two requirements in progress at any point during the iteration.
I have noticed that explicit work-in-progress limits that everyone agrees on work much better than good intentions to finish one requirement before moving on to the next.
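The WIP-limit rule described above can be sketched in code. This is a minimal illustration, not part of any tool mentioned in the post; the class name and the exception choice are my own assumptions:

```python
# Minimal sketch of a WIP-limited board (hypothetical example).

class WipLimitedBoard:
    """Tracks requirements in progress and enforces an agreed WIP limit."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_progress = set()
        self.done = set()

    def start(self, requirement):
        # A new requirement may only be started if the limit is not yet reached.
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError(
                f"WIP limit {self.wip_limit} reached; finish something first"
            )
        self.in_progress.add(requirement)

    def finish(self, requirement):
        # Completing a requirement frees a slot for the next one.
        self.in_progress.remove(requirement)
        self.done.add(requirement)
```

With a limit of two, a third `start()` call fails until `finish()` frees a slot, which is exactly the discipline the explicit limit makes visible.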
In manufacturing it is well understood that the product development process aims to create a profitable manufacturing value stream. In software development, however, there is only the product development value stream that creates the value; the manufacturing part is missing. I have seen many times that software development is done in projects whose only goal is to get the product to market, with no focus on creating an effective software development value stream. This has led me to think that in software development we should focus on creating an effective product development value stream at the same time as we create products.
In my opinion the software development value stream should contain at minimum the following parts: a development environment, a reliable automated build environment, an automated testing environment (unit, functional, acceptance, performance and stability testing), automated source code analysis, automated source code formatting, and automated deployments of the software for exploratory testing (in some cases automated deployments to customer environments). And this is just the engineering part of the value stream; we should not forget the customer request analysis and delivery parts.
Have you seen value stream maps that expose big queues and huge inefficiency in software development organisations? I have, but I seldom see anything done that would improve the situation. There are many reasons why so little gets done, but I feel the most important one is the lack of a value stream owner. The value stream owner's main responsibility is to create an effective and profitable value stream, yet many product development organisations have not recognised the need for such a person.
At Toyota, where value stream mapping is said to originate, there is an owner for the product development value stream called the chief engineer, and that is why value stream mapping at Toyota actually improves the situation. So do not copy value stream mapping without a value stream owner, because in my experience it does not work.
I will be presenting an experience report at XP2010. The experience report will be available for download at my site rannicon.com after the conference.
Automated Acceptance Testing of High Capacity Network Gateway
In this paper we explore how agile acceptance testing is applied in testing a high capacity network gateway. We demonstrate how the organisation managed to grow agile acceptance testing from two co-located teams to a 20+ multi-site team setup, and how acceptance test driven development is applied to complex network protocol testing. We also cover how our initial ideas about agile acceptance testing evolved during product development. At the end of the paper we give recommendations for future projects using agile acceptance testing, based on feedback collected from our first customer trials.
Please come to XP2010 to see the live presentation.
Many organisations struggle with incentives when they start applying agile methods. A couple of days ago I watched a team that had modified planning poker into incentive poker to distribute the incentives across the team effectively.
Incentive poker works like this: the manager, whose job it is to define the achieved incentive level for the team, reads out the objective. The team asks clarifying questions, and when everyone understands the objective the manager asks everyone to write down their own estimate of the level of achievement on paper. Then the manager asks everyone to show their estimates at the same time. The team I watched simply calculated the average of the first-round votes and did not spend time discussing the outliers. The average was then recorded as the objective's achievement level for the whole team.
Amazingly, this simple approach worked, and the team was able to agree on the incentives in less than 15 minutes.
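The scoring step of the round described above can be sketched as a one-liner. The function name is hypothetical, and I am assuming numeric achievement levels; the team in the post may well have used a different scale:

```python
# Sketch of the incentive poker scoring step (function name is hypothetical).

def incentive_poker_round(votes):
    """Average the simultaneously revealed first-round votes.

    The team described above skipped the outlier discussion of
    classic planning poker and simply used the first-round average
    as the whole team's achievement level.
    """
    if not votes:
        raise ValueError("at least one vote is required")
    return sum(votes) / len(votes)
```

The design choice worth noting is what is left out: unlike planning poker, there is no second round to converge the outliers, which is what kept the whole session under 15 minutes.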