Vaughan Merlyn describes an approach to assessing the ‘goodness’ of IT capabilities, both in terms of their current state and desired state. Over the past 20 years Vaughan has designed and facilitated hundreds of assessments, both as part of multi-company research projects – that generated a substantial base of assessment data – and through individual assessments as part of consulting engagements. During this time he has developed a number of assessment principles that are described in this article.
The problem with assessments is that people don’t like being assessed, but they love being part of an assessment process! By and large, people like to know how they are doing, especially from an organisational perspective. But they are mistrustful (rightly so!) of consultants or other ‘agencies’ that come in and assess them or their organisations. Self-assessments, supported by a facilitator – who can bring experience and act as an impartial ‘judge’ to resolve differences of perspective, opinion or interpretation – work best as they allow those being assessed to retain ownership, both of the results and resultant actions.
Furthermore, the assessment must be transparent and repeatable. Like any meaningful scientific experiment, the process should lend itself to repetition with consistent results. In fact, repetition over time may well be important to sustained investment in capability improvement activities. Too many assessments are conducted, discussed and then swept under the table. Not only is this wasted effort, but it will be that much harder, or even impossible, to get people to participate in future assessments.
Why assess IT capabilities?
I think there are some parallels in the question “Why have a medical check-up?” Sometimes, we have a medical check-up because we suspect something might be wrong with our health – perhaps we are more tired than we think we should be, or we get tightness in the chest if we exert ourselves. This falls into the category “I think I might have a medical problem – I need to find out if I do, what it is and what I need to do about it!” At other times we have a medical check-up because we like to be proactive about our health – assure ourselves that all is as well as we think it is, and find out about unrecognised problems before they become critical. And at other times, an external force leads to the medical check-up – applying for new medical insurance, for example.
So, the corollary is that we should assess IT capabilities when:
- We think we might have a problem, eg costs too high, performance too low, etc.
- We think everything’s just fine, but would like to prove it!
- Someone of importance wants us to be assessed, eg the CEO, an audit committee, a major client, etc.
The results of an assessment must be multi-dimensional
This actually gets to the question of ‘goodness’. I believe there are three important aspects of ‘goodness’ as it relates to IT capability:
Performance: This relates to efficiency – what resources it takes to achieve a given result.
Value: This relates to the effectiveness of an IT capability – what benefits are being derived from the capability.
Health: This relates to the ability to perform and deliver value over time. We’ve all seen heroics, where, for example, a project team moves mountains in the final weeks of a project by working 20-hour days, 7 days a week. It’s a wonderful thing to behold, and sometimes is necessary and may even promote ‘good health’ for the organisation as people pull together and participate in a miracle. But it’s not sustainable.
What is meant by ‘capability’?
Capability is simply the ability to perform actions. As it applies to human capital, capability is the sum of expertise and capacity.
When it comes to IT capabilities it’s important to note:
- Capability maturity models such as CMMI put processes at the centre of a capability – and make them the key to capability maturity assessment. In practice, not all IT capabilities are inherently process-centric. Some depend more on people’s skills and competencies (think Business Relationship Manager, for example), while others depend more on deliverables than they do on specific processes. For a more detailed treatment of this distinction see the discussion of standardisation approaches below.
- You don’t need to ‘own’ any given IT capability: you can ‘rent’ it as in outsourcing or contracting, for example.
- Not all IT capabilities exist in an IT (or IS) organisation. Some are embedded in business units or other organisations. For example, the capability to choose, procure and maintain personal computing devices may belong to the business – think BYOD (Bring Your Own Device) as this rapidly growing movement is often referred to.
Process-based assessments only go so far!
Frameworks like the SEI CMMI maturity assessment are appropriate for capabilities that are heavily process-dependent. These include IT operational processes that are highly predictable and repeatable. But, drawing from Henry Mintzberg’s work on Designing Effective Organizations, not everything demands standardisation of work processes. If the goal is to make work consistent, repeatable, predictable and of high quality, there are three approaches:
- Standardising work processes – defining the tasks to be performed and the sequence in which to perform them.
- Standardising outputs – defining the deliverables the work must produce.
- Standardising skills – training and certifying the people who perform the work.
Typically, all three types of standardisation apply to varying degrees – the mix being a function of the nature and complexity of the work you are doing.
For highly complex work the emphasis is on the people, which is why surgeons go through years of training, board certification and residencies. It’s no use handing them a detailed process map to follow and expecting an untrained person to achieve a quality result.
For work such as bridge building, the emphasis will be on the deliverables – various types of blueprint, work breakdown structures and so on.
For routine, sequential work, the emphasis will be on defining the tasks to be followed and the sequence in which to follow them. Ideally, the work can be so ‘routinised’ that it can be automated. Think data centre operations and the shift over the years to ‘lights out’ data centres.
Detailed processes are great at helping manage work that is routine and sequential in nature – which is one of the reasons why ITIL has gained so much traction in the last few years. For work that is inherently collaborative, and may require more visual enablement, standardising on deliverables may be more appropriate – think discovery and solution delivery. For work that is more complex and exploratory, training and performance support systems are more appropriate.
Not all IT capabilities are born equal
It is helpful to classify IT capabilities into one of three different types, as illustrated in the graphic below.
Value chain capabilities: At the core are those capabilities that take inputs, add value and deliver outputs to a customer or end consumer. In the world of IT, these tend to be services and products. Think of these value chain capabilities as those that the end customer appreciates and is willing to pay money for.
For example, as a business user, I may have a business problem I’d like IT help to solve. That problem (or opportunity) is the input to a value chain. The first capability in the chain adds value by analysing the problem and identifying and proposing a solution. As the business user, I appreciate that value has been added by drilling into my stated problem and offering one or more proposed solutions.
The next capability in the value chain might take the chosen solution, and develop and deploy that solution. Again, as the business user, value has been clearly added – taking a proposed solution and delivering it. The final capability where value can be added is supporting and maintaining that solution – again, a recognisable way of adding value for me, the customer.
Ultimately, as the business user or consumer, these are the only capabilities I care about and am willing to pay for (directly or indirectly) because of the value they add for me. Unfortunately, while these value chain capabilities are necessary, they are not sufficient.
Enabling capabilities: Value chain capabilities typically draw upon other capabilities that enable them. Think of these as shared services that are common to other capabilities, or to other instances of problems/solutions working their way through the value chain. Examples of IT services that might enable the value chain capabilities include project management, IT operations and IT supply.
Alignment and governance capabilities: Value chain capabilities also typically depend upon capabilities that align and govern their work, ensuring it is carried out effectively and in the interests of the enterprise. For example, determining which business problems will be addressed, which solutions will be selected, and how staff and resources will be allocated are all important controls to which value chain capabilities are subject.
The diagram above gives a normative IT capabilities model. One can debate the specific labels for each of these capabilities, but essentially, any enterprise that depends upon information technology to any degree needs each of these IT capabilities. Of course, the devil, as they say, is in the detail, and the detail exists in the drill-down decompositions for each of these high-level IT capabilities.
How do you know what IT capabilities you need?
There must be a clear and explicit linkage from business strategy to required IT capabilities. There are many methods for achieving and expressing this linkage, and this is in the realm of strategy formulation. At its simplest, a given business strategy will require a set of business capabilities. In turn, most, if not all, business capabilities will depend upon one or more IT capabilities. Common techniques for achieving this linkage include:
- Strategy mapping
- Business capability mapping
- Capability road mapping.
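The linkage described above can be thought of as a simple traceability map from strategy to business capabilities to IT capabilities. The sketch below illustrates this idea; all of the capability names and the function are invented examples, not part of any of the mapping techniques listed, and a real mapping exercise would of course be richer than two dictionaries.

```python
# Hypothetical sketch: the strategy -> business capability -> IT capability
# linkage expressed as a traceability map. All names are invented examples.

strategy_to_business = {
    "enter new markets": ["multi-channel sales", "localised fulfilment"],
}

business_to_it = {
    "multi-channel sales": ["e-commerce platform delivery"],
    "localised fulfilment": ["supply chain integration"],
}

def required_it_capabilities(strategy):
    """Walk the two mappings to list the IT capabilities a strategy implies."""
    needed = []
    for biz_cap in strategy_to_business.get(strategy, []):
        for it_cap in business_to_it.get(biz_cap, []):
            if it_cap not in needed:
                needed.append(it_cap)
    return needed

print(required_it_capabilities("enter new markets"))
# → ['e-commerce platform delivery', 'supply chain integration']
```

Even a toy model like this makes the reactive nature of the method visible: the map only ever flows one way, from strategy to IT capability, which is exactly the danger discussed above.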
The big danger with most strategic alignment methods is that they are inherently reactive; ie to enable ‘x’ business capability or strategy, we need ‘y’ IT capability. But how do you know that the business strategy is properly informed by IT possibilities? This is where the first in the value chain capabilities (see graphic above) comes into play – Discovering Business-IT Potential – and where the role of the business relationship manager is so key. So, you don’t just need the IT capabilities the business thinks it needs – you also need IT capabilities that create IT ‘savvy’ and equip the business to understand and fully exploit IT potential.
The figure below shows a capability assessment approach that uses four top-level dimensions – Purpose, Commitment, Ability and Accountability. Each dimension is decomposed into lower (leaf) level dimensions. Purpose, for example, is a function of how clearly and effectively the service(s) produced by a capability are defined, and how clearly the goals for that capability and principles by which the capability operates are defined.
Note there is a hierarchy implied among the top-level dimensions. It is unreasonable to expect management commitment to a given capability if the purpose and goals for that capability are unclear or inappropriate. It is unlikely that appropriate ability is in place without the necessary management commitment. It is unreasonable to expect clarity of accountability for a given capability if ability is lacking.
In practice, it’s best not to disclose this hierarchical relationship until the assessment is underway, when it can be used as a validation mechanism. For example, if the assessment team is scoring Accountability as fully in place, when they’ve scored Purpose or Commitment or Ability as not in place or only partially in place, then it’s appropriate to challenge the team’s conclusions and probe the inconsistency.
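The validation rule above can be sketched as a small consistency check: flag any dimension that is rated higher than a dimension it depends upon. This is a minimal illustration, assuming a 0–3 numeric mapping of the four ratings; the mapping, function name and example scores are my assumptions, not part of the assessment method itself.

```python
# Hypothetical sketch: flag scoring inconsistencies implied by the
# hierarchy among the four top-level dimensions.

RATINGS = {"not in place": 0, "partially in place": 1,
           "mostly in place": 2, "fully in place": 3}

# Order reflects the implied hierarchy: each dimension presupposes
# the ones before it.
HIERARCHY = ["Purpose", "Commitment", "Ability", "Accountability"]

def find_inconsistencies(scores):
    """Return (earlier, later) pairs where a later dimension is rated
    higher than one it depends upon - a prompt to probe the team."""
    flags = []
    for i, later in enumerate(HIERARCHY[1:], start=1):
        for earlier in HIERARCHY[:i]:
            if RATINGS[scores[later]] > RATINGS[scores[earlier]]:
                flags.append((earlier, later))
    return flags

team_scores = {"Purpose": "partially in place",
               "Commitment": "mostly in place",
               "Ability": "mostly in place",
               "Accountability": "fully in place"}

for earlier, later in find_inconsistencies(team_scores):
    print(f"Challenge: {later} is rated above {earlier}")
```

In the example, Accountability scored as fully in place while Purpose is only partially in place would be flagged – exactly the kind of inconsistency the facilitator should probe.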
The multi-level assessment dimensions provide several options for the assessment method:
- Assess a capability at the top-level, but use the leaf levels to clarify what is meant by the top level. For example, I can assess the degree to which the purpose of a given capability is in place by thinking about the effectiveness and clarity of the service definition for that capability, and the quality of the goals and guiding principles for that capability.
- Assess a capability at the leaf-level dimensions.
- Mix and match between top-level and leaf-level dimensions, based upon the needs (purpose and goals of the assessment) and feasibility (available time, available knowledge) of the assessment situation.
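When assessing at the leaf level, the leaf ratings need to be rolled up into a top-level rating. One conservative way to do this is sketched below; the leaf names under Purpose come from the description above, but the rounding-down rule and the 0–3 mapping are my assumptions for illustration – the article does not prescribe a rollup formula.

```python
# Hypothetical sketch: rolling leaf-level ratings up to a top-level
# dimension. The averaging rule is an assumption, not part of the method.

RATINGS = {"not in place": 0, "partially in place": 1,
           "mostly in place": 2, "fully in place": 3}
LABELS = {v: k for k, v in RATINGS.items()}

def roll_up(leaf_scores):
    """Average the leaf ratings and round down, so the top-level rating
    is never more generous than its weakest evidence suggests."""
    values = [RATINGS[s] for s in leaf_scores.values()]
    return LABELS[sum(values) // len(values)]

purpose_leaves = {"service definition": "mostly in place",
                  "goals": "partially in place",
                  "guiding principles": "partially in place"}

print("Purpose:", roll_up(purpose_leaves))
# → Purpose: partially in place
```

Rounding down rather than up is a deliberate choice here: it keeps the headline rating honest when the underlying evidence is mixed.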
Each capability across the four dimensions can be rated on the following scale:
- Fully in place – this is our universal practice and will be found to be used consistently, with few, if any, exceptions.
- Mostly in place – this is common practice, though is not universal or consistent, or there are frequent exceptions. We know how to do this well – but we need to get better in practice.
- Partially in place – this is not yet common practice; we have many of the necessary characteristics, but not all of them. We have some work to do to strengthen the capability as common practice.
- Not in place – we have few, if any, of the necessary characteristics. We have a great deal of work to do to develop this capability.
Note that there is clearly room for interpretation in these ratings. This is more art than science, and for most IT capabilities, we are not dealing with highly mature processes and statistical process control! From my experience, that is ok, and is why I previously said that my preference is to use a facilitated self-assessment approach. It is usually the dialogue this generates that has the most value, and leads to the insight and commitment from the team to initiate and sustain improvement efforts.
Here’s an outline of the approach:
- Identify the scope and depth of the assessment. This will help determine the number and boundaries of capabilities to be assessed. I think the ideal is between five and nine, assuming you are going for full coverage of the IT landscape.
- Identify ‘leaders’ for each capability. These will be people who will identify the SMEs (subject matter experts) and stakeholders (key customers and suppliers) for the assessment focus sessions. An ideal group size is between seven and nine people.
- Provide training for the leaders on the method – I find that a one-hour session is more than adequate. The leaders will then know who to invite and what to expect.
- Schedule the assessment focus sessions. Allow two hours per session and time box the sessions.
- Distribute the assessment results for commentary and feedback.
- Present the findings to the IT leadership team.
- Create a high-level plan of actions resulting from the assessment.
- Communicate the high-level plan to all assessment participants.
But underneath all this, I have found the real power of capability assessment to be the dialogue and insight it leads to – a way for suppliers, capability groups and customers to talk in a disciplined way about what they do, what works well, what needs improving and, to a degree, how best to improve it. That is the magic – to view capability assessment as a social activity.
Capability assessment as a continuous process
The insight I have gained from facilitating hundreds of capability assessments has led me to conclude that it’s both a social activity and one that should be a continuous (as opposed to periodic) process. As a result, my colleagues and I now use wiki platforms, with their semantic capabilities, as a single, integrated solution to drive greater engagement in the continuous improvement process. More on this innovative approach is given in my recent Formicio article Empowering IT Organisational Performance using a Semantic Wiki.
I welcome your thoughts.