Our Software is Full of Bugs

Scope of this Report

  • What are the potential causes of defects?
  • Process, Culture or Estimation Issue?
  • The Value Tetrahedron
  • Tracking and monitoring defects
  • Conclusion

What are the potential causes of defects?

There are various types of defects discovered in live software and we need to consider the root cause of each. While most people assume that all defects are the developer’s responsibility, this isn’t always true. Typical causes include:

  • Poor coding which could be caused by:
    • Unclear or incomplete requirements – Process
    • Poor or insufficient unit testing – Culture or Poor estimation
    • Lack of peer review – Culture or Poor estimation
    • Poor estimation leading to rushed development – Poor estimation
    • Tight development window – Culture or Poor estimation
  • Insufficient Testing which could be caused by:
    • Insufficient test coverage – Process, Culture or Poor estimation
    • Lack of regression testing – Process, Culture or Poor estimation
    • Unclear or incomplete requirements – Process
    • Squeezed testing window – Culture or Poor estimation
  • Unexpected Data in the “Live” system which could be caused by:
    • Corrupted live data
    • Unexpected scenarios
  • Change which could be caused by:
    • Volatile requirements – Process, Culture
    • Late change – Process, Culture

Of the four, only “Unexpected data in the Live System” defects can really be considered outside the development project’s capacity to capture or prevent. Even in that case, there are often disagreements between the developers and the business about whether or not certain scenarios should have been unexpected. The remaining causes share three recurring themes: process, culture and estimation.

Process, Culture or Estimation Issue?

As we can see above, any of the recurring themes – Process, Culture or Estimation – can have a significant impact on the project’s performance and so impact the quality of the release. The tendency is always to blame the developer or the tester for the quality but it’s often a combination of all three issues.


We need to look at the failure from two perspectives – business and development. From the business viewpoint, we should ask if the following were controlled:

  • Business Change Description - Did the business describe what the new business state will be?
  • The Business Case - Was it clear, quantified, current and did it match the business change?
  • Communication - Did the Business help the Development Team to understand the Business Change so they could both articulate which aspects of the backlog provided highest business value?
  • Backlog Prioritisation and Change Control - Was there a product backlog or its equivalent effectively managed and prioritised using techniques such as Value Visualisation?
  • The Risk Management Process - Were risks documented, prioritized and tracked to resolution?

Constant change always has a negative impact on quality in waterfall projects. Even in Agile, which is designed to accommodate change, excessive re-work of features can inject defects. Agile is a lean process and aims to eliminate defects at source or, at least, at the earliest opportunity (e.g. before the end of a sprint). While this may appear to constrain productivity at the front end, the end-to-end benefits of early defect detection and removal are huge.
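The scale of those end-to-end benefits can be illustrated with a toy model. The cost multipliers below are assumed, illustrative round numbers in the spirit of widely cited industry studies, not measured data from any particular project:

```python
# Illustrative only: assumed relative cost multipliers for fixing a defect,
# by the phase in which it is discovered.
PHASE_COST_MULTIPLIER = {
    "requirements": 1,
    "design": 2,
    "coding": 5,
    "testing": 15,
    "live": 50,
}

def fix_cost(base_cost_hours: float, phase_found: str) -> float:
    """Estimated effort (hours) to fix one defect, by discovery phase."""
    return base_cost_hours * PHASE_COST_MULTIPLIER[phase_found]

# A defect that would cost 2 hours to fix during requirements review...
print(fix_cost(2, "requirements"))  # 2 hours
# ...costs an order of magnitude more once it escapes to live.
print(fix_cost(2, "live"))          # 100 hours
```

Whatever the exact multipliers for a given organization, the shape of the model is what matters: cost grows steeply with every phase a defect survives.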

In the technical processes, we need to look at how the code was delivered:

  • Were agile principles adhered to, with constant communication through the business owner to end users?
  • Was adequate time given for an integration sprint, if such an activity was planned or needed?
  • Were peer code reviews performed?
  • Were the team’s unit tests robust?
  • Was there any systems integration testing?
  • Were agile code releases managed within a robust architecture, with continuous integration managed effectively?
  • Was adequate time given for integration testing?


Now, we need to consider the development project culture. With constant demands for everything delivered yesterday at a cheaper cost, ultimately something suffers and, unfortunately, it tends to be quality.

Unless Service Level Agreements or Key Performance Indicators are set with development teams/suppliers around defect rates, the developer cares (quite rightly) more about meeting promised delivery dates than rigorously testing the code. In Agile, where this problem should not manifest itself, we sometimes see the symptom of continually developing the same thing in multiple sprints as the product owners try to get closer to what is actually needed.


Unless strong and robust estimating procedures are followed, projects are likely to face schedule pressure as a result of development teams cutting corners as they are driven to meet (unrealistic) cost and/or schedule targets.

Even an Agile team has only finite resources and, unless the estimate is strong, the chances of delivering the minimum viable scope will be small unless corners are cut. Put another way, if the product owner’s and the business’s expectations are not carefully managed in an Agile project, then their expectations of what they will receive after x iterations may be unrealistic enough to cause the team to cut validation and verification code rather than functionality.

The Value Tetrahedron

In our careers, we have all come across managers and clients who are obsessed by the “Time, Cost, Quality” triangle when they consider software development and, rightly, insist that it should be possible to strike a balance between the three. There has to be intelligence applied during planning and estimating to get that balance correct.

A great deal of time is spent looking at how one can maintain quality and at the same time reduce cost and shorten duration. For example, costs are often driven down by project teams being made to work unpaid overtime or else to cut corners in order to deliver by unrealistic dates. The software goes live and stays up (more or less). The PM happily moves on to his next project. The commissioning (client) manager moves on to the next step on her career ladder. The maintenance team is paid to try to keep the defect backlog down to a manageable size (usually defined as a reasonable level of complaints from the customers) for less cost and in less time! In short, in many organizations, the people who cause the bugs aren’t held accountable as the defect backlog and problems in the code mount up. One manifestation of this is Technical Debt.

Technical debt is the dimension missed so frequently when we look at software development. Somehow the downstream consequences of business driven decisions are often overlooked in the heat of “getting it done”, and often this is tinged with the knowledge that “It won’t be my problem!” when the decision is made to go for it and to hell with the consequences.

If the volume of the tetrahedron represents the TCO (total cost of ownership), then controlling its growth has to be the focus of system support. Effective control only happens where programme teams and senior commissioning managers make sure that any decisions they make about enhancements will not inflate the TCO beyond controllable limits.

The price of defects hitting the live system is a high TCO and unhappy clients.

Tracking and monitoring defects

The final area to consider is effective tracking and control of the project and that includes defects. Continuous review of the project’s number of defects discovered or defects outstanding should help determine the testing efficacy and the quality of the deliverables before the software goes “live.”

Consider the Defect Discovery Curve.

Figure 1: Defect Discovery Curve

Figure 1 shows a typical (albeit fictional data) s-curve shape we would expect for defects in a delivery project. It is a key metric to monitor and is often a strong indicator of when the code is fit to release. The earlier you release the more defects you would expect in live of course. Estimation toolsets will generate predictive models based on historical data and can predict when a project is likely to have discovered 95% or 99% of all defects introduced.
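As a rough sketch of how such a prediction works (using a simple exponential-saturation approximation to the tail of the curve, with hypothetical data, rather than any commercial toolset’s actual algorithm):

```python
import math

def estimate_k(defects_found: int, total_defects: int, weeks: float) -> float:
    """Back out the discovery rate k from one observation on the curve,
    assuming cumulative discovery D(t) = N * (1 - exp(-k * t))."""
    return -math.log(1.0 - defects_found / total_defects) / weeks

def weeks_to_discover(fraction: float, k: float) -> float:
    """Weeks until `fraction` of all injected defects have been found."""
    return -math.log(1.0 - fraction) / k

# Hypothetical data: 120 of an estimated 200 injected defects found by week 6.
k = estimate_k(120, 200, 6)
print(round(weeks_to_discover(0.95, k), 1))  # 19.6 weeks until ~95% discovered
print(round(weeks_to_discover(0.99, k), 1))  # 30.2 weeks until ~99% discovered
```

Note how much longer the last few percent take: the gap between the 95% and 99% points is the trade-off a release decision is really about.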

Finally, it is important to investigate where a defect was introduced and where it was discovered, a design defect not found until acceptance testing is much more costly than if it is found in design review (see comments on Agile above). Defect root cause analysis can highlight process issues in specific areas which can improve future performance.


Conclusion

Improving software project quality requires an organization to commit resources to the development and execution of a well-defined software practice.

Strong causal analysis of defects and their origin helps prevent future problems.

Governance of change is important: if you know what the business will look like at the implementation of the project, then the project can control change and is much more likely to succeed.

The collection of key metrics and review of the data quality will help reduce project failure.
Strong governance, realistic expectations and close communication between the client and the vendor will help ensure success.

Written by Default at 05:00

Story Points, Function Points or Both?

Scope of this Report

Story points and function points are both methods for ‘sizing’ software. This Trusted Advisor report will establish why sizing is important and present an overview of the two sizing methods, followed by a discussion on the merits of both story points and function points by answering some very common questions:

  • Can I use function points on an Agile project?
  • Story points are much easier and faster than function points, aren't they?
  • Is there a relationship between story points and function points?

Importance of Sizing

When managing the delivery of your software product you need to know how big or small it is so you can properly plan, estimate and manage the delivery of that software. Sizing software requires a size measure that is ideally meaningful to both the development team as well as to the end user. And it should be a measure that can be applied consistently across all projects and all applications.

The development team will want to have an accurate measure of size so they can properly estimate the level of effort and duration. They will also use the size measure to monitor changing requirements (scope creep) as features or functions or stories are added to the original requirements document or product backlog. An end user or Product Owner may want a measure of size so they can understand the relative business value of what is being delivered; what features and functions the user is actually getting.

Story Points

Story points are a relative measure based on the team’s perception of the size of the work. The determination of size is based on level of understanding, how complex and how much work is required compared to other units of work. Story points are expressed according to a numerical range, which is usually constrained to a limited set of numbers such as an adaptation of a Fibonacci sequence (e.g. 1, 2, 3, 5, 8 etc.).

Story points are a relative measure used by Agile teams typically during a sprint session. Each story is assigned a story point value based on everyone’s best understanding as to the “level of difficulty” of that particular story. Of course, “level of difficulty” can include different things such as complexity,
size, duration, effort and so on. Regardless of the scale being used, in a process called planning
poker, the values assigned are assessed independently by each individual, compared by the team
and then discussed to reach a consensus. There is no consistent definition of what the values
represent other than to use it as a comparative value of one story being larger/harder or
smaller/easier than another within the one team. Over a number of iterations (sprints) an Agile team
can develop a consistent velocity (number of story points delivered per sprint) which can serve to
estimate future amounts of work/effort in future sprints. Of course, even if that one team is
achieving exactly the same volume/complexity of work as another team, their story points will not
necessarily be the same.
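A minimal sketch of that velocity-based forecasting, with an entirely hypothetical sprint history:

```python
import math

def velocity(points_per_sprint: list) -> float:
    """Average story points completed per sprint over recent iterations."""
    return sum(points_per_sprint) / len(points_per_sprint)

def sprints_remaining(backlog_points: int, points_per_sprint: list) -> int:
    """Forecast sprints needed to burn down the remaining backlog,
    rounding up because a partial sprint still costs a full cadence."""
    return math.ceil(backlog_points / velocity(points_per_sprint))

# Hypothetical team history: the last four sprints delivered 42, 50, 47 and 53 points.
history = [42, 50, 47, 53]
print(velocity(history))                # 48.0 points per sprint
print(sprints_remaining(300, history))  # 7 sprints for a 300-point backlog
```

Because the points are relative to this team’s own scale, the forecast is valid for this team only, which is exactly the point made above.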

Function Points

“Function points measures software by quantifying the functionality requested by and provided to
the customer, based primarily on logical design”; as defined by the International Function Point
Users Group. Function points measure “software size” or, more precisely, the size
of the requirements/design specified to which the resulting software provides a “no more, no less”
solution. The size of a defined business requirement is a necessary piece of information if you want
to estimate how long it will take and how much effort it will take to develop that piece of software.
Unlike story points, function points are a defined, reproducible unit of measure. They can be
measured consistently regardless of who is measuring them. Function points can be used on both
Agile and non-Agile projects. For example, Agile user stories, for the most part, describe the features
and functions requested by the product owner.

The function point methodology calls for the identification of five key elements: inputs,
outputs, inquiries, interfaces and internal stores of data. Naturally there needs to be some
description of these elements; e.g., requirements documentation or stories, in order for a function
point sizing to be accomplished. Once a function point size is determined it can be used to estimate
level of effort or on the backend, the size information can be used to calculate productivity
(fp/effort hours) and quality (defects/fp) levels of performance.
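A deliberately simplified sketch of such a count and the metrics it feeds. The average-complexity weights below follow the commonly published IFPUG values; a real count classifies each element as low/average/high and applies detailed counting rules, so treat this as an illustration only:

```python
# Assumed IFPUG average-complexity weights for the five key elements.
AVERAGE_WEIGHTS = {
    "inputs": 4,         # external inputs (EI)
    "outputs": 5,        # external outputs (EO)
    "inquiries": 4,      # external inquiries (EQ)
    "internal_data": 10, # internal logical files (ILF)
    "interfaces": 7,     # external interface files (EIF)
}

def unadjusted_fp(counts: dict) -> int:
    """Weighted sum of the five element counts: a simplified FP size."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

def productivity(fp: int, effort_hours: float) -> float:
    """Delivery rate: function points per effort hour."""
    return fp / effort_hours

def defect_density(defects: int, fp: int) -> float:
    """Quality: defects per function point."""
    return defects / fp

size = unadjusted_fp({"inputs": 12, "outputs": 8, "inquiries": 5,
                      "internal_data": 4, "interfaces": 2})
print(size)  # 12*4 + 8*5 + 5*4 + 4*10 + 2*7 = 162 function points
```

The same `size` figure then serves both the front end (estimation input) and the back end (the denominator for productivity and quality).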

Some Answers

Can I use Function Points on an Agile project?

Yes, function points can be used on an Agile project. In fact, both story points and function points
can be used on Agile projects, and both serve to manage the project effectively and measure its outcomes.

We already know that story points are used to size the user stories for a given sprint/iteration.
Stories can also be sized using function points. However, you don’t need to use function point size to
estimate how long a collection of stories in a sprint is going to take because you have already set
up a 2, 3, or 4 week cadence for your sprints.

Function Points are most useful and frequently used at the beginning of an Agile project and upon
delivery of a release or some significant delivery of functionality. In the beginning of an Agile project, you may use function points to size the entire backlog and use that size information along with additional historical data points to estimate a total project cost and a predicted delivery date. At the backend of the project you may capture total function points delivered to look at performance levels and compare Agile project performance levels to performance levels of other methodologies currently in use.

Story Points are much easier and faster than Function Points, right?

This is a true statement; story points are quicker and easier than function points. The question really becomes: which method is more appropriate for the task at hand? Sitting down with the Agile team and assigning story points to selected stories for a sprint backlog is an excellent exercise in approximating the complexity and required effort of selected stories. This is a collaborative approach that involves the team and provides a group understanding of each work element (story) and what may be involved. Even if story points were not assigned, the discussion alone would be of significant value in driving team efficiency.

Function points require a more detailed examination of the information (stories) available and achieving reproducible counts requires expertise and practice. There are specific guidelines to be applied and calculations to be made. It may be unrealistic to expect every team member of an Agile team to have this skill set. As a result, the use of function points throughout an organization is usually performed by a central specialist team thus allowing for comparisons among the various Agile teams and portfolios.

Function points are also a size measure that serves both the developer and the end user. For the developer, they are used to manage the project outcomes. For the end user (product owner), function points can be a useful vehicle for setting expectations with regard to identifying (and agreeing) what features and functions are being developed and deployed. However, the direct involvement of the Agile team members in sizing the tasks they are going to work on has motivational benefits over the seemingly imposed sizing of the central FP counting team.

Easier and faster are nice, but that is not the issue. The issue should be about which metric or set of metrics will provide you with the information you need to best manage the software deliverable, to make decisions and to manage expectations.

Of course, the real issue with the speed and ease of story points is that they are hard to scale across many Agile teams. For the Agile teams themselves this is not an issue but for the organization which needs to build product road maps, annual budgets, resource plans and so on, the loss of coherence is a significant one.

Is there a relationship between story points and function points?

The narrative below references the following example:

Iteration 1 – the team completed 10 stories (in a two week sprint) that were assigned a total of 50 story points. The function point size for those 10 stories was 100. The stories were focused on simple transaction I/O processing.

Iteration 2 – the team completed 5 stories in their second two week sprint. The stories were assigned 55 story points in total. The function point size for those 5 stories was 25.

Question: Assuming the team has achieved a fairly consistent velocity (50) why isn’t there a correlation between SPs and FPs?


Story points are often assigned with some consideration of the required level of effort. In the first iteration the stories involved fairly simple processing and were therefore assigned an average of 5 story points each. In the second iteration, the stories represented more complex processing and were assigned an average of 11 story points each.

Function Point Analysis does not consider level of effort. It is accounting for the features and functions being delivered. The stories in iteration 1 were about processing inputs and outputs and accounted for a high number of function points. In the second iteration the stories required a greater degree of processing logic, but the features and functions being delivered were fewer.
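Working the example’s numbers through makes the decoupling concrete:

```python
# (stories, story_points, function_points) from the example above.
iterations = [
    (10, 50, 100),  # iteration 1: simple transaction I/O processing
    (5, 55, 25),    # iteration 2: complex logic, little new functionality
]

for n, (stories, sp, fp) in enumerate(iterations, start=1):
    print(f"Iteration {n}: {sp / stories:.0f} SP/story, "
          f"{fp / stories:.0f} FP/story, FP/SP ratio = {fp / sp:.2f}")
# Iteration 1: 5 SP/story, 10 FP/story, FP/SP ratio = 2.00
# Iteration 2: 11 SP/story, 5 FP/story, FP/SP ratio = 0.45
```

Velocity is roughly stable (50 vs 55 story points), yet functional throughput drops fourfold: effort-based and functionality-based sizes answer different questions.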

Story Points or Function Points

Story Points are a relative measure whereas function points are a well-defined consistent method of sizing.

Does this mean that function points cannot be used to estimate at a sprint level? Sprints are time boxed, usually as two week iterations. The desired state is to achieve a steady flow of work from sprint to sprint (velocity). For Agile teams, this is adequately measured using story points. Function points are more appropriately applied to measure the overall project outcome. This can be done upon delivery of a release and/or function points can be applied when the product backlog is first developed as a means to estimate the total level of effort that may be required across all sprints.


Story points versus function points; so do we settle on one or the other or both? The answer is, yes.
Both these measures are useful and serve the intended purpose to more effectively manage a software deliverable.

Function points are good for measuring the overall product deliverable at the beginning and at the end. The FP size information at the beginning of a project can be used to estimate overall schedules and costs. Also, the size information upon delivery can be used to measure performance.

Story points are an effective method for managing the flow of work in an Agile project. They too serve a purpose of estimating the amount of work that can be accomplished by the team in a defined period of time (sprint/iteration).

Clearly, sometimes the best use of these two methods overlaps and so it is important to make strategic decisions about when and how they will be used rather than local, tactical decisions.


Top Blog Posts of 2015

Every December we like to share our top blog posts from the past year. This year we thought, "Why wait until December?" Of course, we encourage you to follow the pack and see why these posts are so popular! They cover the range of our areas of focus (Agile, function points, TMMi, estimation and more!), so there's a little something for everyone! Without further ado, here are the top 5 blog posts (the ones that have the most views this year) from January through June of 2015:

1. Estimating Software Maintenance - Learn more about a unique and proven approach for estimating maintenance and support activities using a new type of "sizing" model.

2. Agile Transformation of the Organization - The key to successfully implementing enterprise Agile is to implement strategic change. Learn how!

3. How to Manage Vendor Performance - Learn how you can use Function Point Analysis to measure your vendor's performance.

4. Scaling Agile Testing Using the TMMi - The Test Maturity Model integration (TMMi) is a framework for effective testing in an Agile environment. Learn how to put it to use.

5. Exploratory Testing and Technical Debt - Exploratory testing (ET) is a type of manual testing. Learn more about the type of technical debt it creates.

Be sure to check back in December to see how that list compares to this one!


How Do We Know If We Are Getting Value for Our Software Vendors?


Scope of this Report

This report discusses what is meant by value, the process of sizing and estimating the software deliverable, and the benefits of those results:

  • What is “Value”?
  • Functional Value
  • More on the estimation process
  • Case study example
  • Conclusion

What is “Value”?

We can look at value for software development from vendors in terms of how much user functionality is being delivered by the software vendor. In other words, how many user features and functions are impacted as a result of a project. We can also consider whether the software deliverables were completed on time and on budget to capture “value for money” and the monetary implications of timeliness. Finally, we can see if the software project delivered what was expected from a user requirements’ perspective and if it meets the users’ needs.

This last, more subjective, assessment of value gets into issues of clarity of requirements and the difficulties of responding to emergent requirements if the initial requirements are set in stone. It is outside the scope of this report but we believe the best way to address this issue is through Agile software development which is covered in several other DCG Trusted Advisor reports.

Functional Value

To quantify the software, we must first size the project. Of course, there are several ways to do this with varying degrees of rigor and cross-project comparability. Function Points and Story Points are both sizing methods that take a user’s perspective of the delivered software. Since Function Point Analysis is an industry standard best practice sizing technique, we find that it is used more often for sizing at this Client-Vendor interface.

Function point analysis considers the functionality that has been requested by and provided to an end
user. The functionality is categorized as pertaining to one of five key components: inputs, outputs,
inquiries, interfaces and internal data. Each of the components is evaluated and given a prescribed
weighting, resulting in a specific function point value. When complete, all functional values are added
together for a total functional size of the software deliverable. After you have established the size of the software project, the result can be used as a key input to an estimating model to help derive several other metrics that could include but are not limited to cost, delivery rate, schedule and defects. A good estimating model will include industry data that can be used to compare the resulting output metrics to benchmarks to allow the client to judge the value of the current software deliverable under
consideration. Of course, there are always mitigating circumstances but at least this approach allows for an informed value conversation (which may result in refinement of the input data to the estimating model).

5 Key Components of Function Point Analysis

Of course, if you can base your vendor contract even partially on a cost per function point metric, this provides an excellent focus on the delivery of functional value although it is wise to have an agreed independent third party available to conduct a function point count in the event of disputes.

More on the Estimation Process

We have mentioned the importance of the estimation model and the input data in achieving a fair assessment of the functional value of the delivered software. We have also hinted that these will be issues to be discussed if there is disagreement between client and vendor about the delivered value. Hence, it is worth digging a little deeper into the estimation process.

The process for completing an estimate involves gathering key data that is related to the practices, processes and technologies used during the development lifecycle of the software deliverable. DCG analyzes the various project attributes using a commercial software tool (e.g. SEER-SEM from Galorath), assessing the expected level of effort that would be required to build the features and functions that had to be coded and tested for the software deliverable. The major areas for those technical or non-functional aspects are:

  • Platform involved (Client-server, Web based development, etc.)
  • Application Type (Financial transactions, Graphical user interface, etc.)
  • Development Method (Agile, Waterfall, etc.)
  • Current Phase (Design, Development, etc.)
  • Language (Java, C++, etc.)

Sophisticated estimating models, such as those built into the commercial tools, also include numerous other potential inputs, among them parameters related to personnel capabilities, the development environment and the target environment.

Given the size of the software deliverable and the complexity of the software deliverable represented by some or all of the available input parameters, we also need to know the productivity of the software development team that is developing the software. This can be a sensitive topic between Client and Vendor. We have often seen that the actual productivity of a team might differ from the reported productivity as the Vendor throws people onto the team to make a delivery date (bad) or adds trainees to the team to learn (good) – mostly, for value purposes, the Client only cares about the productivity that they will be billed for!

Once we have established the development team’s rate of delivery, or function points per effort month, we can then use that information along with all the previous information (size, complexity) to deliver the completed estimate.

The end result of the sizing and estimating process shows how long the project will take to complete (Schedule), how many resources will be needed to complete it (Effort) and the overall cost of the software deliverable (Cost).
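As a sketch of that final step (the linear schedule model here is a deliberate simplification; commercial tools such as SEER-SEM use far richer models, and all input figures below are hypothetical):

```python
def estimate(size_fp: int, fp_per_effort_month: float,
             cost_per_effort_month: float, team_size: int) -> dict:
    """Derive Effort, Cost and Schedule from size and delivery rate.
    Dividing effort evenly by team size ignores ramp-up and communication
    overhead, so this is an illustration of the arithmetic only."""
    effort_months = size_fp / fp_per_effort_month
    return {
        "effort_months": effort_months,
        "cost": effort_months * cost_per_effort_month,
        "schedule_months": effort_months / team_size,
    }

# Hypothetical inputs: a 400 FP deliverable, a team delivering 10 FP per
# effort month, $15,000 per effort month, 5 people.
print(estimate(400, 10, 15_000, 5))
# 40 effort months, $600,000, 8 calendar months
```

The value conversation with the vendor then becomes a comparison of these derived figures against what was actually billed and delivered.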

Sizing and Estimating Process

Case Study Example

DCG recently completed an engagement with a large global banking corporation that had an ongoing engagement with a particular vendor for various IT projects. One such project involved a migration effort to port functionality from one application platform to another, new platform. The company and the vendor developed and agreed on a project timeline and associated budget. However, at the end of the allocated timeline, the vendor reported that the migration could not be completed without additional time and money.

The company was reasonably concerned about the success of the project and wanted more information as to why the vendor was unable to complete the project within the agreed-upon parameters. As a result, the company brought David Consulting Group on board to size and evaluate the work that had been completed to date, resulting in an estimate of how long that piece of work should have taken.

The objectives of the engagement were to:

  • Provide a detailed accounting of all features and functions that were included in the software being evaluated
  • Calculate the expected labor hours by activity, along with a probability report (risk analysis) for the selected releases

DCG’s initial estimate was significantly lower than what the vendor was billing for that same set of development work. With such a significant difference in the totals, it was clear that something was off. DCG investigated the issue with the vendor to explore what data could be missing from the estimate, including a review of the assumptions made in the estimate regarding:

  • Size of the job
  • Degree of complexity
  • Team’s ability to perform

In the end, the company and the vendor accepted the analysis and used the information internally to resolve the issues relevant to the project. As a result, the company also decided to use another software vendor for future software projects, resulting in a significant cost saving.


This case study highlights a typical business problem wherein projects are not meeting agreed-upon parameters. In cases such as these, Function Point Analysis proves to be a useful tool in measuring and evaluating the software deliverables, providing a quantitative measure of the project being developed. The resulting function point count can also be used to track other metrics such as defects per function point, cost per function point and effort hours per function point. These metrics along with several others can be used to negotiate price points with current and future software development vendors to ensure that the company is receiving the best value for their IT investment.

The estimation process helps in keeping vendors accountable for the work they are producing by providing solid data on the realistic length of a project as well as the relative cost of the project. Quantitative estimates on project length allow companies to better manage their vendor relationships with increased oversight and an enhanced understanding of the expected outcome for their software deliverables.


Top 5 Reasons Projects Fail

Reasons IT Projects Fail

IT projects fail all the time - too often, we'd say here at DCG, since most of the reasons projects fail are preventable. Unfortunately, many of our clients are plagued by the same issues, all of which lead to bottlenecks, delays, overspending, unhappy employees and unhappy customers.

However, successful software projects are possible! We can avoid these issues, and the best place to start is by examining exactly what those issues are. Download "The 5 Reasons Projects Fail" to find out the 5 most common issues our clients experience. Then read on to discover what you can do to prevent them.

Download: 5 Reasons Projects Fail


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner
