
A Simple Way to Choose a Vendor Tool

By Tom Karasmanis, IIBA Chief Architect

In HOW TO CHOOSE A VENDOR TOOL (IIBA Newsletter, February 2012) we looked at a classic approach to selecting a vendor tool. While it is not an onerous process, I find I sometimes need something quicker and lighter. In this article I describe a more streamlined tool selection technique that has worked well for me in the past.

Its use should be limited to situations where we want:
  • Quick results over a thorough process
  • Speed of choice over long analysis
  • A good enough choice over the best possible one
Such situations may arise when the desired tool:
  • Is inexpensive 
  • Has a low learning curve 
  • Will be used within a single team or business unit 
  • Is not strategic 
  • Is not complicated 
  • Does not have many dependency relationships with other tools 
In other words, the consequences of not selecting the best tool are not critical; we are willing to choose a good enough tool and get it faster and at lower cost. Also, if a better tool is discovered later, we are highly unlikely to switch as long as the existing tool continues to meet our requirements. An example tool category might be a screen capture tool.

The steps I follow in this process are:

  1. Establish objectives and scope
  2. Understand what problems the tool must address
  3. Define the required tool features
  4. Select the winner
  5. Try it
Compared to the classic approach, several steps are missing entirely:
  • There is no shortlisting of tools or vendors: this step is eliminated to keep the process quicker and simpler.
  • There is no contract negotiation step: the assumption is that the acquisition cost is small enough to pay list price, or simply to have procurement secure the best possible price based on volume and other factors.
Step 1: Establish Objectives and Scope

This part of the process still needs to be done properly or we run the risk of completely missing the mark. We need to understand from the sponsor (the one paying for the tool) what their objectives are and what their limitations are in terms of budget, timelines, and functionality (i.e. scope). The objectives should be few, focused, and clear.

During this time, we begin capturing assumptions, constraints, dependencies, issues, and risks and continue to update these throughout the entire process.

Compared to the classic approach, I left out understanding the success criteria. This is not to say we do not need this understanding. However, given the situations described above for which this process is intended, the success criteria are simple: deliver the required functionality while meeting the sponsor’s criteria (budget, timelines, etc.). If the success criteria are more complex, that may be a sign the undertaking itself is more complex and that the classic approach is needed.

Since the tool is not strategic, we also do not need to worry too much about corporate strategy and enterprise architecture. However, for situations where there is a corporate body responsible for this type of oversight, it would be prudent to seek some kind of approval to proceed with this approach.

I understand this last point could be contentious for some: one could make the argument that when we look at total cost of ownership, every tool is strategic and should be part of a corporate strategy. I agree that when we consider the total cost of ownership (including the cost of selecting, procuring, learning, and using the tool, as well as the investment of users’ time), the cost of choosing the wrong tool and having to switch tools at some point is higher than one might think. However, my arguments for using this process anyway, for the situations described above, are:

  • Often the cost of a deeper, more thorough analysis exceeds even the total cost of ownership.
  • As long as the tool is good enough, we will not switch to a better tool if and when one comes along.
  • The “best tool” is a relative term; what is best today is no longer the best tomorrow, as newer tools become available, existing tools get better or worse, and our requirements change – in today’s dynamic environment, Pareto rules!
Another way of putting the last point is that in spite of best efforts with the classic approach, sometimes it is not much more effective than a simpler approach such as this. So for “less important” tools, let’s keep it simple.

Step 2: Understand What Problems the Tool Must Address

In this simplified process, this is a critical step. We are selecting a tool that is “good enough”. Good enough means addressing the existing challenges, so it is imperative that we understand these challenges well.

To accomplish this efficiently, we identify the tool users and have them identify and describe the current problems they need resolved. We should apply good business analysis techniques here (such as “5 Whys”) to ensure we have understood and found the real problems.

Step 3: Define the Required Tool Features

Having identified and described the users’ challenges (problems), we can now determine what capabilities these users require from the tool. The tool capabilities are just product features or solution requirements. These capabilities should align with the sponsor’s objectives and fall within the established scope.

In this step, we have a significant difference compared to the classic approach. Once all the tool features have been identified, the typical technique is to determine a weighting or importance for each feature, often with the requirements grouped by category. I find this can be a very time-consuming step that often ends up inaccurate anyway; and the more features there are, the less effective the weightings tend to be.

Instead of determining weightings, we simply list all the requirements and sort them into a prioritized list from highest priority (most important) at the top, to lowest priority (least important) at the bottom. I find this to be a far simpler exercise. When users look at two requirements, it is easier to determine their relative importance than to determine their absolute importance (i.e. weighting) in the overall list. For the techies reading this, think “bubble sort”.

Finally, there will be a “virtual line” separating the “must have” requirements from all the rest.
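
To make the “bubble sort” remark concrete, here is a minimal sketch. Python, the more_important comparator, and the sample requirements are my own illustrative choices; in practice the comparator is simply the question put to the users:

    def prioritize(requirements, more_important):
        # Bubble-sort requirements into priority order using only pairwise
        # judgments. more_important(a, b) answers the question users find
        # easy: "is requirement a more important than requirement b?"
        # No absolute weights are ever assigned.
        items = list(requirements)
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                # Swap neighbours whenever the lower one outranks the upper.
                if more_important(items[j + 1], items[j]):
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    # Hypothetical example: ranking requirements for a screen capture tool.
    ranked = prioritize(
        ["annotate captures", "capture full screen", "capture a window"],
        more_important=lambda a, b: input(f"Is '{a}' more important than '{b}'? (y/n) ") == "y",
    )
    # The users then draw the "virtual line" somewhere in the ranked list.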

Step 4: Select the Winner

Identify candidate products via some kind of product survey. Sources include Internet searches, referrals (e.g. team members or colleagues who know of a product in the target tool category), and media articles.

Next, for each product we simply check off the features that are met. If it is ambiguous whether a feature is met, refine the feature description to make it more specific, so that it is easier to rate as met or not met. Sometimes this may require splitting a feature into more detailed features.

The winning product is the one with the longest contiguous run of met features starting from the top of the prioritized list. A variation is to use “feature groups”: if a group of features is deemed equally important, then the number of features met in that group determines the score for that feature group. This might be easier to understand with a simplified example.

[Table: features #1 to #8 listed in priority order, with must-have features #1 to #3 marked, and checkmarks in columns T1 to T5 showing which features each tool meets.]

In the table above, we:

  1. Identified and ranked the features (“Features” column)
  2. Identified the “must have” (mandatory) features (features #1 to #3, marked in the table)
  3. Scored the tools against these features (checkmarks in columns “T1” to “T5” – each feature is either met or not)
  4. Eliminated the tools that do not meet the “must have” features:
    1. T1 has all the features except #3, but since #3 is mandatory, T1 is out of consideration
    2. T2 to T4 meet all the “must have” requirements, so they are candidates
    3. T5 is also out, since it does not meet the “must have” requirements
  5. Compared the candidate tools (T2 to T4) to determine the winner:
    1. T3 has only the bare minimum features
    2. Since T2 and T4 both have all the features T3 has, plus more, T3 is eliminated
    3. Between T2 and T4, as we go down the list:
      1. Both have feature #4, but T2 has #5 whereas T4 has #6 to #8, but not #5
      2. T2 is the winner, because it satisfies a higher priority feature (#5), even though T4 satisfies a greater number of lower priority features (#6 to #8)
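
For the technically inclined, the selection logic above fits in a few lines. This is only an illustrative sketch (Python is my choice, and T5’s exact scores below are assumed, since the walkthrough only tells us it fails the must-haves):

    def pick_winner(tools, num_features, must_haves):
        # tools maps a tool name to the set of feature numbers it meets.
        # Features are numbered 1..num_features in priority order (1 = highest).
        # First, eliminate any tool that misses a must-have feature.
        candidates = {name: met for name, met in tools.items()
                      if must_haves <= met}
        # Then compare candidates feature by feature from the top of the
        # prioritized list; met beats not met, so the tool with the
        # highest-priority distinguishing feature wins. (A tie would be
        # broken by adding more features, as described below.)
        def score(met):
            return tuple(f in met for f in range(1, num_features + 1))
        return max(candidates, key=lambda name: score(candidates[name]))

    tools = {
        "T1": {1, 2, 4, 5, 6, 7, 8},   # misses must-have #3
        "T2": {1, 2, 3, 4, 5},
        "T3": {1, 2, 3},
        "T4": {1, 2, 3, 4, 6, 7, 8},
        "T5": {1, 4},                  # assumed scores; fails must-haves
    }
    print(pick_winner(tools, 8, must_haves={1, 2, 3}))  # prints T2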

One could argue that maybe it is better to have many more lower priority features than fewer higher priority features. If this argument is a concern, then I recommend the classic approach. However, for simple tool selection, I feel the higher priority feature represents the next most important feature desired by users, so it should trump the lower priority features. Common sense should prevail: if it were a choice between T2 with only feature #5 vs. T4 with features #6 to #12, then I would put the decision to the users and look for a quick consensus. If there were significant discussion and no agreement, then I would stick with T2. Remember, we are talking about tools with a simple feature set.

In the event of a tie between two or more tools, find additional features, rank them and score them until a winner emerges. Another approach to resolve a tie is to have one or two users try the winning tools for a short period of time and see which feels better (e.g. more intuitive interface).

I described a variation in selecting a winner using “feature groups”. Using the example above, let’s assume the users have decided that features #5 and #6 are of equal importance. Using this variation, features #5 and #6 form a “feature group” and are treated as one feature. The number of features a tool meets within the feature group determines the tool score for that feature group. In the example above, T2 meets #5 while T4 meets #6, so they each score a 1 and are tied for that feature group. This means we keep going down the list to determine a winner. Since T2 does not have #7 while T4 does, then T4 is the winner. Note that in the analysis above, T2 won because feature #5 was deemed higher priority than #6. One more example: if there was a T6 with features #1 to #6, but not #7 to #8, then it would win over both T2 and T4 because it would score a 2 for the feature group #5 to #6 and we would not need to look at feature #7.
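
Again as an illustrative sketch under the same assumptions, the feature-group variation only changes the scoring: each slot in the prioritized list becomes a set of features, and a tool’s score for a slot is the count of features it meets within it:

    def pick_winner_grouped(candidates, slots):
        # slots is the prioritized list (after must-have filtering); each
        # slot is a set of feature numbers. A singleton set is an ordinary
        # feature; a larger set is a "feature group" scored by how many of
        # its features a tool meets.
        def score(met):
            return tuple(len(slot & met) for slot in slots)
        return max(candidates, key=lambda name: score(candidates[name]))

    # Features #5 and #6 now form one group, so T2 and T4 tie on it (1 each)
    # and the comparison moves on to #7, which only T4 meets.
    slots = [{1}, {2}, {3}, {4}, {5, 6}, {7}, {8}]
    candidates = {"T2": {1, 2, 3, 4, 5},
                  "T3": {1, 2, 3},
                  "T4": {1, 2, 3, 4, 6, 7, 8}}
    print(pick_winner_grouped(candidates, slots))  # prints T4

    # A hypothetical T6 meeting #1 to #6 would score 2 on the {5, 6} group
    # and win without the comparison ever reaching feature #7.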

Since the product is not strategic, product longevity is not paramount, so vendor analysis should be kept very high-level and quick. A quick Internet search to see how long the vendor has been in business, a look at product reviews, and a scan of forum discussions should be sufficient to determine whether the vendor is viable.

Step 5: Try It

If you are comfortable with the selection and the team is small, you can just roll out the product. However, if you have any doubts, you can simply choose a small subset of users and let them try it first. The rollout for them is normal; there are no negotiations with the vendor. If there is a trial period, by all means take advantage of it; otherwise acquire a few licences, deploy them, and have the team try the tool for a period of time. If there are no showstoppers, deploy to the remainder of the team. If any problems are discovered, select the next best tool and repeat this step.

Closing Remarks

I have applied this approach when I had to choose a simple tool for myself or my team and needed to make the choice relatively quickly. I found it worked well for the intended situations. Keep its intended use in mind. Should you decide to use this approach yourself, I would be interested in any feedback:

  • Are there any other situations where it should or should not be applied?
  • What worked and what did not?
  • How can this approach be improved?