Enterprise software RFPs with open questions requiring short essay answers are difficult to evaluate. See how RFP scoring takes the gamble out of selecting best-fit software.
Most enterprise software RFPs (or RFIs or RFQs) contain hundreds or thousands of requirements. When vendors respond to these RFPs, how do you deal with so many requirements? How do you take the gamble out of selecting software? The CIO of one large organization told us that he sometimes reads RFP responses at bedtime. There has to be a better way. There is, and that is to use an RFP scoring system.
The purpose of the scoring system is to identify the best-fit software product for the particular organizational needs. This means evaluating products against the requirements, and distilling their ratings down into one number. We call this number the fit score and use it to rank products, and ultimately make the selection. Although this may seem like a gross simplification, ultimately only one product will be purchased. More complex scoring systems only get in the way of that final decision.
To score an RFP, write requirements as closed questions that can be answered by selecting a rating from a drop-down list. For better usability, the drop-down list should always display the rating name, and not just the weight. To ensure rating consistency and accuracy, rating descriptions must be available. When vendors respond to a requirement on the RFP, they select a rating from the drop-down list and can add supporting comments. See the example Product Rating Table below.
| Weight | Rating | Description |
| --- | --- | --- |
| 8 | Exceeds | Does substantially more than is required. There is room to grow into this requirement, and there is a reasonable possibility the extra functionality will be used at some point in the future. |
| 7 | Fully meets | The software adequately meets this requirement "out of the box", and no compromises are required. In addition, no setup or configuration is needed. |
| 6.9 | Fully meets (config) | The software adequately meets this requirement when properly configured. No external software is required. Examples: workflow routing, form or screen layouts, report design, custom fields, setup of lists & categories, user accounts. |
| 5.1 | Fully meets (code) | The product can fully meet this requirement with a "reasonable" amount of code. Note: the product is designed to support these code fragments, e.g. a macro triggered on a certain event. This does not refer to customizing the source code of the product. |
| 5 | Fully meets (option) | When optional products supplied by the vendor are added to the base configuration, the software adequately meets the requirement. No compromises are needed. |
| 4 | Fully meets (3rd party) | The software fully meets this requirement with the addition of a 3rd-party product. |
| 3.1 | Fully meets (customize) | The software can fully meet this requirement with reasonable custom code developed by the purchaser, e.g. by modifying existing tables in the database or by editing product source code. Note: this only applies where the purchaser has access to the source code, and almost never applies to cloud products. |
| 3 | Mostly meets | The software meets this requirement to a large extent. Deficiencies can be overcome with minimal effort. |
| 2 | Partly meets | The software has significant deficiencies in meeting this requirement, but they can be overcome with considerable effort. |
| 1 | Slightly meets | The software has the required feature, but serious deficiencies exist in the implementation that can't easily be worked around. |
| 0.5 | Future release | This feature is not currently in the product, but is planned for a future release. |
| 0 | Does not meet | The product does not meet the requirement at all, or the feature is completely missing. |
A previous blog post described how to weight requirements for importance. To calculate a product’s score for one requirement, multiply the requirement importance weight by the product rating weight. Sum all scores for each group of requirements, and multiply by the group weight. Finally, sum all group scores to get the overall product fit score.
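The calculation above can be sketched in a few lines of Python. The data structure and all numbers here are hypothetical, just to show the weighted-sum mechanics:

```python
# Hypothetical data: each group has a group weight and a list of
# (requirement importance weight, product rating weight) pairs,
# where rating weights come from the Product Rating Table.
groups = {
    "Reporting": {"weight": 3, "ratings": [(9, 7), (5, 6.9), (3, 0)]},
    "Security":  {"weight": 5, "ratings": [(9, 8), (7, 5)]},
}

def fit_score(groups):
    total = 0.0
    for group in groups.values():
        # Score each requirement: importance weight x rating weight,
        # then multiply the group subtotal by the group weight.
        subtotal = sum(req_w * rating_w for req_w, rating_w in group["ratings"])
        total += group["weight"] * subtotal
    return total

print(fit_score(groups))  # raw (unnormalized) fit score
```

The raw number on its own means little; it becomes useful once normalized, as described next.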
Normalizing the fit score to a percentage is a way of measuring and comparing products against your specific requirements. To normalize scores, create a hypothetical reference product that fully meets every requirement and calculate its score. Put the real score over this reference score and multiply by 100 percent to get the normalized fit score. Then rank software products by the fit score.
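A sketch of that normalization, assuming the reference product is rated "Fully meets" (weight 7) on every requirement; the function name and sample weights are hypothetical:

```python
def normalized_fit_score(actual_score, requirement_weights, group_weights,
                         reference_rating=7.0):
    """Normalize a raw fit score against a hypothetical reference product
    that fully meets (rating weight 7) every requirement."""
    # Reference score: every requirement rated at the reference weight.
    reference = sum(
        group_weights[g] * sum(w * reference_rating for w in reqs)
        for g, reqs in requirement_weights.items()
    )
    return 100.0 * actual_score / reference

# Hypothetical example: requirement importance weights per group.
req_weights = {"Reporting": [9, 5, 3], "Security": [9, 7]}
grp_weights = {"Reporting": 3, "Security": 5}
print(round(normalized_fit_score(827.5, req_weights, grp_weights), 1))
```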
When vendors don’t respond to all requirements in the RFP, you might want to calculate two versions of the fit score for a better perspective:
- The estimated fit score includes only rated requirements. Unrated requirements are ignored.
- The absolute fit score includes all requirements. Unrated requirements score zero.
As the percentage of requirements rated increases, the estimated score approaches the absolute score. If more than 90 percent of all requirements are rated, the estimated score is usually accurate. However, verify that 100 percent of the showstopper and critical requirements have been rated.
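One way to sketch the two scores, marking unrated requirements with None and again assuming a reference rating weight of 7 (structure and numbers hypothetical):

```python
def fit_scores(responses):
    """responses: list of (importance_weight, rating_weight_or_None).
    Returns (estimated, absolute) as percentages of the reference score."""
    REFERENCE = 7.0  # "Fully meets" rating weight
    rated = [(w, r) for w, r in responses if r is not None]
    # Estimated: unrated requirements are ignored entirely.
    estimated = (sum(w * r for w, r in rated)
                 / sum(w * REFERENCE for w, _ in rated))
    # Absolute: unrated requirements score zero but still count
    # in the reference denominator.
    absolute = (sum(w * (r if r is not None else 0) for w, r in responses)
                / sum(w * REFERENCE for w, _ in responses))
    return 100 * estimated, 100 * absolute

est, absolute = fit_scores([(9, 7), (5, 3), (3, None)])
print(f"estimated {est:.1f}%, absolute {absolute:.1f}%")
```

Because both scores share the same numerator over the rated requirements, the estimated score is always at least as high as the absolute score, and the two converge as the rated fraction approaches 100 percent.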
While fit scores measure how well products meet all requirements, a heat map gives a deeper look at the relative strengths and weaknesses of individual products. You can examine and compare the weak areas of competing products against your requirements. See the example below.
Each group (column 1) contains one or more related requirements. Req Count (column 2) is the number of requirements in each group. On the right (columns 4 – 9), the numbers in the blocks show the fit score for each product for that group of requirements. The color also indicates the match: White is a 100 percent match for your requirements. The redder the color, the weaker the product for that particular group.
Column 3, the Group Average column, shows the average score for each group of requirements across all products. If there are groups with relatively low scores, e.g. Security / Access Control at 59 percent below, this indicates that no product comes close enough to your needs for that group. If there are too many groups like this, consider:
- Adding other products to the evaluation, usually more high-end.
- Adding third party products to resolve the deficiencies.
- Changing the scope of the evaluation by revising those groups of requirements.
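The Group Average column is a simple mean of the per-product group scores. A sketch with hypothetical products, scores, and flagging threshold:

```python
# Per-group fit scores (percent) for each product, as in the heat map columns.
group_scores = {
    "Security / Access Control": {"Product A": 55, "Product B": 63, "Product C": 59},
    "Reporting":                 {"Product A": 88, "Product B": 92, "Product C": 95},
}

LOW_SCORE = 70  # hypothetical threshold for flagging weak groups

for group, scores in group_scores.items():
    avg = sum(scores.values()) / len(scores)
    flag = "  <-- no product close to needs" if avg < LOW_SCORE else ""
    print(f"{group}: {avg:.0f}%{flag}")
```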
Example of heat map used to compare products
Even the best-fit software is not perfect, and one of the major benefits of this kind of analysis is that it sets realistic expectations. You know how well the software will work in your environment before making the purchase. There are no implementation surprises caused by missing or weak functionality.
Scoring RFPs is a practical way to deal with hundreds or thousands of requirements. Software products are ranked by how well they meet your specific needs while the heat map compares products in detail. If selecting software is in any way likened to gambling, you have stacked the odds in your favor.
The next post will describe a way to solve the problem of aggressive vendors and “over-optimistic” RFP responses.
This article was originally published on CIO.com on April 13, 2015