CIOs Need to Lead the Digital Transformation

Gartner recently released the results of its “2017 CEO Survey: CIOs Must Scale Up Digital Business.”  As I read through it, I saw many connections to the messages I have been communicating about driving the business value of software, which will also be discussed in my new book, “The Business Value of Software,” being published by CRC Press in August 2017. 

The Gartner research found that the top priorities for CEOs in 2017 are 1) growth, 2) technology-related business change and 3) product improvement and innovation.  These three priorities are interconnected and are driven by the digital transformations occurring at many organizations.  Therefore, it is essential that the CIO and their team be intimately involved in the strategic discussions related to these three areas. 

Part of these strategic discussions needs to be measuring the success of the initiatives.  This is a topic that I have discussed in depth when talking about visualizing the value of software, and Gartner emphasizes it in their report.  To drive the value of a software development initiative, for example, it is essential to clearly understand the goals and objectives of the initiative and to collaborate with the business unit to discuss, define, measure, and prioritize projects based on their ability to deliver on the value expectations.  Gartner found that 53 percent of the respondents could not provide a clear metric for success.  It is critical not only that the C-suite have distinct metrics for a software development initiative, such as revenue, sales and profit goals, but also that they communicate these goals to the entire technology team making the day-to-day tactical decisions that will impact the strategic direction of the project and, ultimately, the business value. 
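To make the idea of distinct, communicable value metrics concrete, here is a minimal sketch of prioritizing initiatives by a simple value-for-cost score.  All project names and numbers are hypothetical, and the ratio is a deliberately crude proxy for business value:

```python
# Toy illustration (all names and figures are invented): rank software
# initiatives by expected value per dollar of cost, so that day-to-day
# tactical decisions can be traced back to an explicit business metric.

projects = [
    {"name": "Checkout redesign", "expected_revenue": 1_200_000, "cost": 400_000},
    {"name": "Reporting module",  "expected_revenue": 300_000,   "cost": 250_000},
    {"name": "Mobile app",        "expected_revenue": 900_000,   "cost": 600_000},
]

for p in projects:
    # Crude ROI proxy; a real metric would also weigh risk and time horizon.
    p["value_score"] = p["expected_revenue"] / p["cost"]

# Highest expected value per dollar first.
for p in sorted(projects, key=lambda p: p["value_score"], reverse=True):
    print(f'{p["name"]}: {p["value_score"]:.2f}')
```

The point is not the arithmetic but the transparency: once the score is explicit, the whole technology team can see why one project outranks another.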

The report also highlights that 57 percent of organizations will be building up their in-house information technology and digital capabilities in 2017 versus 29 percent that will be outsourcing this function.  Either way, the IT/digital team needs to be considered a partner in developing solutions that drive business value and not just a tactical arm that develops and implements the solutions.

CIOs need to step up.  They should establish and lead the digital strategy for their organization, collaborating tightly with the appropriate business unit managers and then communicating the goals to the IT team in order to deliver on the expected business value.  By defining metrics based on business value, the success of a project can be measured throughout the development lifecycle, stakeholders can be held accountable, and projects can be modified throughout the process to realign them with their goals and objectives. 

If you are interested in help with your value delivery metrics, feel free to contact me.

Michael D. Harris



Written by Michael D. Harris at 12:20

Software Value: Impact on Software Process Improvement

Business value has not always been the primary driver of software process improvement, but that is changing.  This is the main point of an excellent article by Richard Turner in the March/April edition of CrossTalk, “The impact of Agile and Lean on Process Improvement.”

Turner’s article is a concise and refreshingly frank walk through the history of software process improvement from the perspective of an expert who has been intimately involved.  With a hint of frustration that I certainly share, Turner captures perfectly the thinking that has led to a move away from process improvement initiatives like CMMi in commercial software development organizations:

“One of the drawbacks of earlier process improvement approaches was the concept and distribution of value. The overall value of the process improvement was often situational at best and nebulous at worst.  Where it was seen as a necessity for competitive credibility [as was the case for my development group at Sanchez Computer Associates back in 2001], the value was in passing the audit rather than in any value to the organization and the customer.  In other cases, the value was essentially associated with the success of one or two champions and disappeared if they failed, changed positions or left the company [as I did].  On those occasions where PI was primarily instituted for the actual improvement of the organization, the internal focus on practices was often valued as a way of cutting costs, standardizing work [We certainly needed to make our processes repeatable] or deploying better predictive management capabilities rather than improving the product or raising customer satisfaction.”

While I agree with 95% of Turner’s analysis here, in my experience both passing the audit and standardizing our processes raised customer satisfaction.  We went from having one customer ready to give us a reference to most of our customers being referenceable, on the basis of solid evidence that we had fixed the reliability of our software development process.

Turner contrasts historic process improvement initiatives, mostly targeted at waterfall operations, where business value was not a prime driver to today’s initiatives where, “With the emergence of Agile and Lean, the concept of value became more aligned with outcomes.  The focus on value stream and value-based decision making and scheduling brought additional considerations to what were considered best practices.”

Turner recognizes that in today’s Agile and Lean software development teams, the teams themselves are responsible for their own processes.  Mostly, this is a strength because creative people are likely to optimize processes under their control out of simple self-interest (which benefits the organization).  Where this falls down, in my experience, is where, “These organizations rely on cross-fertilization of personnel across multiple projects to improve the organization as a whole.”  To put it bluntly, this rarely happens.  Teams can be self-organizing, but groups of teams don’t typically self-organize.  Hence, there is still a place for organizational process improvement – with a lean, software-value-driven emphasis – in even the most modern software development organization.  By way of evidence, scrum teams that are working together on the same program struggle to develop ways to coordinate and synchronize their efforts unless a framework such as SAFe is introduced through a process improvement initiative. 

That said, I will leave the last word to Turner, “Process improvement that does not improve the ability to adapt has little value.”


Michael D. Harris, CEO

Written by Michael D. Harris at 13:36

Can Function Points Be Used to Estimate Code Complexity?

Software code can get really complex, and we all agree that complex code is harder to maintain and enhance.  This leads to lower business value for the application, as the inevitable changes required for the application to keep pace with a changing environment cost more and more to implement for a given function size.

Consequently, I was a little surprised to see the title, “Cozying up to Complexity,” at the head of a book review by Robert W. Lucky in the January 2017 edition of the IEEE Spectrum.  Lucky reviewed the new book by Samuel Arbesman, “Overcomplicated.”  Lucky identifies Arbesman’s three key drivers of increasing technological complexity: “accretion”, “interconnection”, and “edge cases”.  Accretion is the result of the continual addition of functionality on top of, and connecting to, existing systems.  Connecting ever more systems leads to interconnection complexity.  Edge cases are the rare but possible use cases or failure modes that have to be accounted for in the initial design or incorporated when they are discovered.  Over time, these edge cases add a lot of complexity that is not apparent from majority uses of the system.  Increased software complexity can be a problem for outsourced software development because more complex code is more difficult to maintain and more difficult to enhance.  This becomes a problem for software vendor management as costs go up due to reduced vendor productivity.

There are measurements and metrics for software complexity, but Lucky reports that Arbesman’s suggested solutions for complexity include the novel idea that we should not take a physicist’s mathematical view and try to build a model.  Instead, we should take a biologist’s view: record the complexity we find (e.g., in nature) and look for patterns that might repeat elsewhere.  Arbesman does not necessarily see increased complexity as a bad thing.

If we accept that some level of complexity is a good and necessary thing to achieve the “magic” of current and future software capabilities, I wonder if there is a way to identify the point of maximum useful complexity?  Perhaps “useful complexity” could be measured in function points per line of code?  Too much complexity would be indicated by a low “useful complexity” value – trying to shoehorn too much functionality into too few lines of code.  At the other end of the spectrum – what Arbesman might refer to as his edge cases – we might see too little functionality being delivered by too many lines of code.

My train of thought was as follows:

  • A program with zero functionality (and zero function points) may have complexity but I’m going to exclude it.
  • A program with 1 function point must have some lines of code and some small complexity.
  • For a program with a reasonable number of function points, I (as a former ‘C’ programmer) could make the program more complex by reducing the number of lines of code.
  • Adding lines of code could make the program less complex and easier to maintain or enhance by spreading out the functionality (and adding explanatory comments, although these don’t usually count as lines of code) up to a certain point, after which diminishing returns would apply.  The question is where that point is.
  • It must also be true that a certain complexity is inherent in coding a certain number of function points.  This implies a lower limit for the complexity given a fixed number of function points.
  • This suggests that, for a given number of function points, there might be a non-linear inverse relationship between complexity and lines of code.
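The shape of the hypothesized curve can be sketched numerically.  This toy model is entirely my own invention, purely to illustrate the idea: for a fixed function point count, complexity falls as lines of code grow, but never below an inherent floor, and “useful complexity” is measured as function points per line of code:

```python
# Hypothetical model (invented formula and constants): for a fixed number
# of function points (FP), complexity falls as lines of code (LOC) grow,
# but never drops below a floor inherent in the functionality itself.

def complexity(function_points: int, loc: int, floor_per_fp: float = 0.5) -> float:
    """Toy formula: an inherent floor plus a penalty for densely packed code."""
    inherent = floor_per_fp * function_points        # lower limit for a given FP count
    packing = (function_points ** 2) * 100 / loc     # penalty for shoehorning FP into few LOC
    return inherent + packing

fp = 50
for loc in (500, 2_000, 10_000, 50_000):
    useful = fp / loc  # "useful complexity" in FP per LOC
    print(f"LOC={loc:>6}  complexity={complexity(fp, loc):7.1f}  FP/LOC={useful:.4f}")
```

Under these assumptions the relationship is indeed non-linear and inverse: the first extra lines of code buy a large drop in complexity, while later ones buy almost nothing, which is the diminishing-returns point the bullets describe.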

I’d welcome people’s ideas on this topic.  Thoughts?

Written by Michael D. Harris at 10:03

Using Software Value to Drive Organizational Transformation

I was delighted to read a thought leadership article from McKinsey recently, “How to start building your next-generation operating model,” that emphasizes some key themes that I have been pushing for years (the quotes below are from the article):

  • The importance of orienting the organization around value streams to maximize the flow of business value – “One credit-card company, for example, shifted its operating model in IT from alignment around systems to alignment with value streams within the business.”
  • Perfection is the enemy of good enough – “Successful companies prioritize speed and execution over perfection.”
  • Continuous improvement relies on metrics to identify which incremental, experimental improvements work and which don’t.  Benchmarking and trend analysis help to prioritize areas where process improvement can offer the most business value – “Performance management is becoming much more real time, with metrics and goals used daily and weekly to guide decision making.”
  • Senior leaders “hold themselves accountable for delivering on value quickly, and establish transparency and rigor in their operations.”
  • “Leading technology teams collaborate with business leaders to assess which systems need to move faster.”


There is one “building block” for transformation in the article to which I am a recent convert, and so kudos to the McKinsey team for including it in this context.  Their “Building Block #2” is “Flexible and modular architecture, infrastructure and software delivery.”  We are all familiar with the flexible infrastructure that cloud provides, but I have been learning a lot recently about the flexible, modular architecture and software delivery for application development and application integration that is provided by microservices frameworks such as the Anypoint Platform™ from MuleSoft.

While they promote organizing IT around business value streams, the McKinsey authors identify a risk to be mitigated: each value stream should build up software, tools and skills specific to that value stream, which runs contrary to the tendency in many organizations to make life easier for IT by picking a standard set of software, tools and skills across the whole organization.  I agree that it would be a shame indeed if the agile and lean principles that started life in IT software development were constrained by legacy IT attitudes as those principles roll out into the broader organization.

There are a lot more positive ideas for organizational transformation in the article, so I recommend that you take a few minutes to read it.  My only small gripe is that while the authors emphasize organizing around value throughout, they do not mention prioritizing by business value.  Maybe at the high level that McKinsey operates in organizations that concept is taken for granted.  My experience is that as soon as you move away from the top level, if business value priorities are not explicit, then managers and teams will use various other criteria for prioritization and the overall results may be compromised. 

Written by Michael D. Harris at 14:16

Algorithms: What are They Worth and What Might They Cost You?

Every so often, I read an article that gets me thinking in a different way about software value and software risk.  Danilo Doneda of Rio de Janeiro State University and Virgilio Almeida of Harvard University recently published an article entitled, “What is Algorithm Governance?”[1]

Doneda and Almeida suggest that the time may have come to apply governance to algorithms because of the growing risks of intentional or unintentional, “… manipulation, biases, censorship, social discrimination, violations of privacy and property rights and more,” through the dynamic application of a relatively static algorithm to a relatively dynamic data set.  

By way of example, we have probably all experienced the unintended consequences of the application of a reasonably well understood algorithm to new data.  We all have a basic grasp of what the Google search algorithm will do for us but some of you might have experienced embarrassment like mine when I typed in a perfectly innocent search term without thinking through the possible alternative meanings of that set of words (No, I’m not going to share).  At the other end of the spectrum from the risk of relatively harmless misunderstandings, there is a risk that algorithms can be intentionally manipulative – the VW emission control algorithm that directed different behavior when it detected a test environment is a good example. 

For those of us who deal with outsourced software development, it is impossible to test every delivered algorithm against every possible set of data and then validate the outcomes.

If we consider software value from a governance perspective, it should be desirable to understand how many algorithms we own and what they are worth.  Clearly, the Google search algorithm is worth more than my company.  But are there any algorithms in your company’s software that represent trade secrets or even simple competitive differentiators?  Which are the most valuable?  How could their value be improved?  Are they software assets that should be inventoried and managed?  Are they software assets that could be sold or licensed?  If companies can gather and sell data, then why not algorithms?

From a software metrics perspective, it should be easy to identify and count the algorithms in a piece of software.  Indeed, function point analysis might be a starting point, using its rules for counting unique transactions, each of which presumably involves one or more algorithms, though it would be necessary to identify those algorithms that are used by many unique transactions (perhaps as a measure of the value of the algorithm?).  Another possible perspective on the value of an algorithm might be based on the nature of the data it processes.  Again, function points might offer a starting point here, but Doneda and Almeida offer a slightly different perspective.  They mention three characteristics of the data that feeds “Big Data” algorithms, “… the 3 V’s: volume (more data are available), variety (from a wider number of sources), and velocity (at an increasing pace, even in real time).”  It seems to me that these characteristics could be used to form a parametric estimate of the risk and value associated with each algorithm. 
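As a thought experiment, such a parametric estimate might combine the three V’s into a single score per algorithm.  Everything below – the weights, the 1–5 rating scale, and the example algorithms themselves – is invented purely to illustrate the shape such an estimate could take:

```python
# Hypothetical parametric score built from Doneda and Almeida's "3 V's".
# Weights, scales and example algorithms are all invented for illustration.

def three_v_score(volume: int, variety: int, velocity: int,
                  weights=(0.5, 0.3, 0.2)) -> float:
    """Each V is rated 1-5; returns a weighted score on the same 1-5 scale."""
    w_vol, w_var, w_vel = weights
    return w_vol * volume + w_var * variety + w_vel * velocity

algorithms = {
    "recommendation engine": (5, 4, 5),  # high volume, many sources, real time
    "monthly billing run":   (3, 2, 1),  # moderate volume, few sources, batch
}

for name, vs in algorithms.items():
    # One score serves for both value and risk, reflecting the observation
    # that more heavily used, more exposed algorithms carry more of each.
    print(f"{name}: value/risk score = {three_v_score(*vs):.2f}")
```

Because the same exposure characteristics drive both sides, a single inventory scored this way could feed the valuation and the governance discussions at once.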

It is interesting to me that these potential software metrics appear to scale similarly for software value and software risk.  That is, algorithms that are used more often are more valuable yet carry with them more risk.  The same applies to algorithms that are potentially exposed to more data. 

[1] Doneda, Danilo & Almeida, Virgilio A.F. “What is Algorithm Governance.” IEEE Computer Edge. December 2016.


Mike Harris, CEO

Written by Michael D. Harris at 15:07
