Seven Steps in Choosing a Vendor Tool

By Tom Karasmanis, IIBA Chief Architect

In this article we explore a basic but proven approach to choosing a vendor tool. The technique applies to any tool, not just business analysis tools. The approach presented should be familiar to anyone who has used basic problem-solving techniques or business analysis methodology. It can be summarized in the following basic steps:
  1. Establish objectives, scope and success criteria
  2. Understand the problem
  3. Define solution requirements
  4. Produce product short list
    • Product survey
    • Vendor validation
    • Product scoring
  5. Evaluate short list
    • Vendor presentations
    • Evaluate trial version
  6. Contract negotiation
  7. Run a pilot
Step 1: Establish Objectives, Scope, and Success Criteria

Before we can formulate requirements, it is important to understand the nature of the problem and the challenges currently being faced; in other words, why do we need an automated solution?

Prior to beginning any work, we meet with the sponsor to understand the objectives and scope of the initiative, as well as what success looks like, i.e. the success criteria. The sponsor has a specific problem and will set the high-level objectives independent of specific stakeholder needs; for example, provide an automated solution for requirements gathering on projects in order to reduce the time and cost of producing requirements and to improve their quality. The success criteria will reflect the benefits and other conditions important to the sponsor.

One important consideration is that tools fall into several categories and many tools are in more than one category. For example, within the business analysis discipline, we have the following key tool categories:
  • Storyboarding / prototyping
  • Requirements management
  • Use cases
  • Modeling
  • GUI specification (often combined with storyboarding / prototyping)
If we limit the tool selection project to one category, how do we evaluate tools that cover more than one category? They will often be more expensive and may not win in a single-category competition, but one of them may be the best overall tool for the organization to adopt across all categories.

Step 2: Understand the Problem

To understand and validate the problem, we first identify the stakeholders, their roles and responsibilities within the organization, and their role with respect to the tool. Who will use the tool to author or create artefacts (primary actors; responsible); who will use the tool in a supporting role, such as approving or reviewing the artefacts (secondary actors; accountable, consulted); and who requires view access (secondary actors; informed)?

The typical stakeholders in a classical project environment are Project Managers, Business Analysts, Quality Assurance personnel, Architects, Developers, Subject Matter Experts, Sponsors, as well as managers of these groups. Depending on the situation, third parties (such as a website host or software vendor) might be included if they are tightly integrated into the project development lifecycle.
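As a concrete illustration, this stakeholder analysis can be captured as a simple mapping from role to RACI classification and tool access. The roles, classifications, and access levels in the minimal Python sketch below are hypothetical examples only; adapt them to your own organization.

```python
# A minimal sketch of a RACI-style stakeholder analysis for the tool;
# the roles, classifications, and access levels below are hypothetical examples.

STAKEHOLDERS = [
    # (role, RACI classification, tool access)
    ("Business Analyst",      "Responsible", "author"),   # primary actor: creates artefacts
    ("Project Manager",       "Accountable", "approve"),  # secondary actor: approves artefacts
    ("Subject Matter Expert", "Consulted",   "review"),   # secondary actor: reviews artefacts
    ("Sponsor",               "Informed",    "view"),     # secondary actor: view-only access
]

def access_for(classification):
    """Return the tool access levels required by stakeholders with the given RACI classification."""
    return {access for _, raci, access in STAKEHOLDERS if raci == classification}

for raci in ("Responsible", "Accountable", "Consulted", "Informed"):
    print(f"{raci}: {sorted(access_for(raci))}")
```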

Having identified and described the stakeholders, we can now do a needs analysis to first understand their current challenges and then determine what capabilities these stakeholders require from an automated solution. These needs should align with the sponsor’s objectives and fall within the established scope. At the same time, we capture assumptions, constraints, dependencies, and issues which continue to be updated throughout the evaluation.

Common challenges stakeholders face without an automated business analysis tool include poor quality requirements, the inability to trace requirements effectively, ineffective communication of requirements to stakeholders, and poor reuse of requirements.

Step 3: Define Solution Requirements

The solution requirements are the business capabilities of the solution: what business functions must it be able to perform, and how important is each of these in ranking candidate solutions?

The usual approach is to define a set of detailed capabilities or features that the solution must have and assign a weighting to each. Often these requirements are grouped by category. The requirements, of course, must satisfy the stakeholder needs established in the previous step; a requirement that does not is scope creep. When determining the weighting of the requirements, take into account the project development lifecycle used in your organization, the team’s knowledge and experience, stakeholder expectations, and the sponsor’s objectives.

Some selection criteria to consider are:
  • Cloud vs. desktop vs. centralized (LAN)
  • Offline / disconnected support
  • Fit-to-process vs. fit-to-tool vs. custom
  • One do-it-all vs. multiple specialized
  • Small steps vs. big steps
  • Staff experience with similar tools
  • Support for BPMN, UML, and/or swim lanes
  • Traceability support (hierarchical, stakeholder, ad hoc)
  • Import / export features
  • Version control
  • Structuring requirements (packages, models)
  • Reporting (important, but very specific to each situation, so not covered here)
  • Customization
  • Sharing
  • Workflow
  • Approving
Some requirements will be mandatory, meaning that if a product fails to meet any one of them, it is disqualified. These should be identified prior to selecting products to avoid bias. At the same time, there should be a solid reason for making a requirement mandatory, as any product not meeting it will be disqualified regardless of how many other benefits it provides.
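As an illustration of how weights and mandatory flags might be captured, here is a minimal Python sketch; the requirement IDs, categories, descriptions, and weights shown are hypothetical examples, not a recommended set.

```python
# A minimal sketch of a weighted requirements catalogue; all entries are
# hypothetical. Mandatory requirements carry a flag so that products failing
# any one of them can be disqualified outright.

REQUIREMENTS = [
    # (id, category, description, weight, mandatory)
    ("R1", "Traceability", "Trace requirements to stakeholders and other artefacts", 10, True),
    ("R2", "Modeling",     "Support BPMN and UML diagrams",                           8, False),
    ("R3", "Reporting",    "Export requirements packages to common formats",          5, False),
    ("R4", "Versioning",   "Version control of requirements",                         7, True),
]

total_weight = sum(weight for _, _, _, weight, _ in REQUIREMENTS)
mandatory_ids = [req_id for req_id, _, _, _, mandatory in REQUIREMENTS if mandatory]

print(f"Total weight available: {total_weight}")   # 30 in this example
print(f"Mandatory requirements: {mandatory_ids}")  # products failing any of these are disqualified
```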

Step 4: Produce Product Short List

Once we have all the requirements identified and prioritized, we do a product survey to identify candidate products that may satisfy our requirements. Try not to disqualify anything that potentially meets most of the requirements; wait for the scoring instead. On the other hand, in some product categories there may be so many products that some judgement should be used to keep the list manageable.

Once a list of products is collected, due diligence should be performed on the vendors. Sometimes a product may be excellent, but the vendor does not meet the criteria necessary for an organization to do business with them. If the vendor demonstrates financial instability, poor customer service, or is a takeover target, it may not be wise to invest in their product regardless of how it scores. Eliminate products from vendors that do not meet your criteria for those with whom you would do business.

Next, we score the products against the selection criteria established in the previous step and calculate the weighted scores. The top three to five products form the short list.
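To illustrate the weighted-score calculation, here is a minimal Python sketch that disqualifies any product failing a mandatory criterion and ranks the rest. The product names, criteria, weights, and raw scores are all invented for the example.

```python
# A minimal sketch of weighted product scoring; the criteria, weights, mandatory
# flags, and raw scores (0-5) are hypothetical values invented for this example.

CRITERIA = {
    # name: (weight, mandatory)
    "traceability":    (10, True),
    "modeling":        (8,  False),
    "reporting":       (5,  False),
    "version_control": (7,  True),
}

PRODUCTS = {
    # product: raw score per criterion on a 0-5 scale, gathered during the survey
    "Product A": {"traceability": 4, "modeling": 3, "reporting": 5, "version_control": 4},
    "Product B": {"traceability": 0, "modeling": 5, "reporting": 4, "version_control": 5},
    "Product C": {"traceability": 5, "modeling": 4, "reporting": 2, "version_control": 3},
}

def weighted_score(scores):
    """Return the weighted score, or None if any mandatory criterion scores zero (disqualified)."""
    for name, (_, mandatory) in CRITERIA.items():
        if mandatory and scores[name] == 0:
            return None
    return sum(scores[name] * weight for name, (weight, _) in CRITERIA.items())

ranked = sorted(
    ((name, score) for name, scores in PRODUCTS.items()
     if (score := weighted_score(scores)) is not None),
    key=lambda item: item[1],
    reverse=True,
)
print("Ranked candidates:", ranked)  # the top three to five form the short list
```

The same calculation can simply be re-run later, in Step 5, once the raw scores are updated with what is learned from the vendor presentations and trial software.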

Step 5: Evaluate Short List

Once the products are ranked, vendor presentations are the usual next step. If the tools are small or not strategic (e.g. a screen capture utility), a vendor presentation may not be necessary. However, for the typical business analysis tool (e.g. a requirements management tool), vendor presentations are recommended. They give the evaluation team and other stakeholders the opportunity to ask ad hoc questions as well as see advanced features that cannot be learned easily during a trial period. The evaluation team should insist that vendors present using a live version of the software, not just slides or a scripted demo.

An optional activity here is to install and run free trial software to explore each product in more detail. One complication is that not every product offers a free trial period, although most do, or one can be arranged through the vendor. This step will also depend on the amount of time and budget available. By running the trial software after the vendor presentations, the team can take advantage of the knowledge gained during those presentations.

At this point, the scoring is adjusted based on any new information from the vendor presentations (and trial software, if used), the rankings are updated, and the winner is identified.

Step 6: Contract Negotiation

This step is included to show where in the process contract negotiation typically occurs. A contract is negotiated to cover pricing, installation, delivery, support, a trial period to conduct a pilot, and other such issues. However, the details of contract negotiation are out of scope of this article.

Step 7: Run a Pilot

A best practice is to conduct a pilot: use the tool on a project that sufficiently exercises its key features and engages the key stakeholders, but is still small and manageable. This gives evaluators the chance to collect real feedback on the use of the tool and validate the critical requirements.

This step should be considered optional; it depends on the time and budget available for the tool selection initiative and on how close the top tools are in score. Choosing a tool based entirely on a numerical scoring, without actually using and experiencing the tool, has its risks. If the vendor presentations and trial software were sufficient for evaluation purposes, then this step can be skipped; otherwise, the pilot should be conducted.

The pilot should be run as a regular project with real deadlines and deliverables. Real feedback should be gathered and evaluated to determine the suitability of the tool for its intended purpose and whether it meets the requirements in practice. If expectations are met, the tool can be rolled out to its intended organizational group; if not, the scoring should be adjusted and the runner-up evaluated from Step 6.

Roll Out

This step is also included only as a reference, as the details of rolling out the tool are out of scope of this article.