Tips For Optimizing Your Recommendation Engine for Customer Outcomes


In today’s B4B world, meeting business outcomes for the customer is increasingly important. If suppliers can present valid solutions to the pressing issues facing customers at the right time, then win rates will improve, suppliers’ business will grow, and customers will gain success. Can these offers, or at least the presentation of the offers, be automated?

Many service organizations say no, both in surveys and in their lagging purchases of offer management software. The most common explanation is that service offerings and processes are too complex; the next is that the required data is lacking. In this blog post, we address how service organizations can leverage recommendation engines to optimize the timing of offerings, connect those offerings to existing, sparse data streams, and build the foundation for the richer data that future data handshakes will bring.

Approaches to Prepare a Recommendation Engine

With a host of recommendation applications in the market, many TSIA members start with a knowledge management (KM) solution. In order to successfully leverage a KM solution to make recommendations, an organization must gather, maintain, and improve an extensive knowledge database and then connect it to offers. For example, in the context of turning service staff into revenue generators, TSIA recommends three approaches:

  1. Map the top 10 to 15 problems to offers and, subsequently, train your staff.

  2. Map and include offers in the most popular KM articles.

  3. Invest in offer management software.

By completing the data mapping in your KM solution for items 1 and 2, you lay the foundation for the offer management software to build upon. That data can be used to start the outcome-offer discussion with automated recommendations.
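As a trivial illustration, that mapping can start as a simple lookup from known problems and popular KM articles to relevant offers. This is only a sketch; all problem, article, and offer names below are invented:

```python
# Sketch of the items 1 and 2 mapping: tie the most common customer
# problems and the most popular KM articles to relevant offers.
# All problem, article, and offer names are hypothetical.

problem_to_offers = {
    "slow database queries": ["performance-tuning-service"],
    "failed upgrade":        ["upgrade-assistance", "premium-support"],
}

article_to_offers = {
    "KB-101: Tuning query performance": ["performance-tuning-service"],
}

def offers_for_case(problem, articles_viewed):
    """Collect offers mapped to the reported problem and viewed articles."""
    offers = list(problem_to_offers.get(problem, []))
    for article in articles_viewed:
        offers += article_to_offers.get(article, [])
    return sorted(set(offers))

print(offers_for_case("slow database queries",
                      ["KB-101: Tuning query performance"]))
```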

Why prepare for, and invest in, a recommendation engine? In the context of improving case resolution, recommendation engines can utilize the data to help:

  • Decrease time to resolve

  • Increase customer success rates

  • Increase deflection rates 

Warning: Accurate Recommendations Are Critical to Success

While recommendations can help optimize offerings, they can also go horribly wrong and actually dissuade a customer from wanting to do business with your organization. When recommendations are not personalized, there is a strong correlation with a customer being dissatisfied, as displayed in Figure 1.

Figure 1: Correlation between non-personalized recommendations and customer dissatisfaction. (Source: Gigya)

How to Avoid Poor Recommendations

Three main approaches to recommendation engines, used in combination, yield accurate, relevant, and optimized recommendations:

  1. Item Clustering

  2. Similarity Logic

  3. Peer Influence

Item clustering is the best approach to start with if your organization has little to no consumption data, because it is driven by the content itself and the internal data you have tied to it. For example, many news organizations use this approach on their websites to recommend other articles to read. Internally, an organization like the Washington Post will tie data to each article, including, but not limited to:

  • Author

  • Region (e.g., USA, Americas, Europe, Asia, etc.)

  • Keywords

  • News Type (e.g., Politics, Business, Sports, etc.)

Item clustering recommends other articles that share these common data points without requiring consumption data on that user’s reading habits.
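To make this concrete, here is a minimal sketch of how item clustering might score candidates. The articles and metadata values are invented for illustration:

```python
# Minimal item-clustering sketch: score each candidate article by how
# many metadata points it shares with the article being viewed.
# All articles and field values below are hypothetical examples.

ARTICLES = {
    "a1": {"author": "Smith", "region": "USA", "news_type": "Politics",
           "keywords": {"election", "senate"}},
    "a2": {"author": "Jones", "region": "USA", "news_type": "Politics",
           "keywords": {"senate", "budget"}},
    "a3": {"author": "Smith", "region": "Europe", "news_type": "Sports",
           "keywords": {"soccer"}},
}

def cluster_score(a, b):
    """Count the internal data points two articles have in common."""
    score = int(a["author"] == b["author"])
    score += int(a["region"] == b["region"])
    score += int(a["news_type"] == b["news_type"])
    score += len(a["keywords"] & b["keywords"])  # shared keywords
    return score

def recommend(current_id, top_n=2):
    current = ARTICLES[current_id]
    scored = [(other_id, cluster_score(current, ARTICLES[other_id]))
              for other_id in ARTICLES if other_id != current_id]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

print(recommend("a1"))  # "a2" ranks first: same region, type, one shared keyword
```

Note that nothing here depends on the reader’s history; the content metadata alone drives the ranking.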

Similarity logic is the second approach. It is probably the most common and certainly the most recognizable: the familiar “people who bought this also bought that” recommendation. This approach is harder to implement when your organization does not have much usage or consumption data.
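Under the hood, a simple version of similarity logic can be built from co-occurrence counts. Here is a rough sketch; the purchase baskets are invented:

```python
# Rough sketch of "people who bought this also bought that":
# count how often two offerings appear in the same purchase history.
# The purchase baskets below are invented for illustration.

from collections import Counter
from itertools import combinations

baskets = [
    {"support-plan", "training"},
    {"support-plan", "training", "health-check"},
    {"support-plan", "health-check"},
]

co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1  # keep the pair symmetric

def also_bought(item, top_n=2):
    """Rank other items by how often they co-occur with `item`."""
    related = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return related.most_common(top_n)

print(also_bought("support-plan"))  # training and health-check, twice each
```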

Peer influence is the final approach. Simply put, it is based on the rating systems you see on websites like Amazon.com, which let users rate an item, usually on a 5-star scale. In the services industry, that would mean allowing users to rate offerings based on their satisfaction with achieving a specified outcome. To use this approach, however, you must collect that rating data.
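A bare-bones version of peer influence simply ranks items by average rating, with a minimum-ratings threshold so one enthusiastic review cannot dominate. The offerings and ratings below are hypothetical:

```python
# Bare-bones peer influence: rank offerings by average star rating,
# requiring a minimum number of ratings before an item is eligible.
# Offerings and ratings are hypothetical examples.

ratings = {
    "onboarding-service": [5, 4, 5, 4],
    "premium-support":    [3, 4, 2, 3],
    "new-offering":       [5],  # too few ratings to trust yet
}

MIN_RATINGS = 3

def top_rated(top_n=2):
    scored = [(offer, sum(r) / len(r))
              for offer, r in ratings.items() if len(r) >= MIN_RATINGS]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

print(top_rated())  # onboarding-service leads with a 4.5 average
```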

Tying Recommendation Approaches Together: The Hybrid Approach

It’s important to note that you are not pigeon-holed into picking one approach. In fact, most recommendation engines take a hybrid approach that draws on data from all three to truly optimize the recommendation. Each of the approaches detailed above has an algorithm that scores other content in the database against the original offer or article the user was viewing. From there, you can combine the scores with simple averages or more complex weighting systems. Some platforms even let users experiment with different weights or turn off individual recommendation methods, as in the sketch below.
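As a sketch, a weighted average over the three methods’ scores might look like the following. The weights and per-method scores are illustrative, and setting a weight to zero effectively turns that method off:

```python
# Hybrid sketch: each method scores the candidates, and a weighted
# average blends the scores. Weights and scores are illustrative;
# setting a weight to 0 effectively disables that method.

WEIGHTS = {"item_clustering": 0.5, "similarity": 0.3, "peer_influence": 0.2}

# Per-method scores (normalized to 0..1) for each candidate offer.
candidates = {
    "offer-A": {"item_clustering": 0.9, "similarity": 0.2, "peer_influence": 0.8},
    "offer-B": {"item_clustering": 0.4, "similarity": 0.9, "peer_influence": 0.5},
}

def hybrid_score(scores):
    return sum(WEIGHTS[method] * scores[method] for method in WEIGHTS)

ranked = sorted(candidates, key=lambda c: hybrid_score(candidates[c]), reverse=True)
print(ranked)  # offer-A (0.67) edges out offer-B (0.57) under these weights
```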

How TSIA Uses This Approach

As TSIA works to collect more consumption data, we have decided to initially invest in item clustering based solely on internal knowledge management data. The first step was to create our Service Operating Framework (SOF), an example of which is shown in Figure 2. Next, we tagged our content to the SOF, which gives us internal data we can use with item clustering to help optimize our recommendations.

Figure 2: An example of TSIA’s Service Operating Framework (SOF).

In Figure 2, TSIA has tagged specific content (orange documents) to identified service business challenges (blue rectangles). We can now use this data to make further recommendations to a user. For example, if someone reads document DM8, related to increasing PS profits, we can recommend document DM7.
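In data terms, that is just a tag lookup. Aside from DM7 and DM8, which come from the example above, the documents and challenge tags below are invented:

```python
# Sketch of SOF tag-based recommendation: documents tagged to the same
# business challenge recommend one another. DM7 and DM8 come from the
# example above; DM9 and the challenge names are hypothetical.

doc_tags = {
    "DM7": {"Increase PS Profits"},
    "DM8": {"Increase PS Profits", "Grow PS Revenues"},
    "DM9": {"Grow PS Revenues"},
}

def related_docs(doc_id):
    """Return documents sharing at least one challenge tag with doc_id."""
    tags = doc_tags[doc_id]
    return [d for d, t in doc_tags.items() if d != doc_id and tags & t]

print(related_docs("DM8"))  # ['DM7', 'DM9'] share a challenge tag with DM8
```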

Our next planned step is to connect our SOF to benchmark data, which serves as a proxy for certain member business outcomes. For example, in Figure 3, the outcome of Growing PS Revenues at the top is captured in our PS benchmark as revenue growth in the past year. These connections help us identify which offer to recommend to which member. For example, if we have no background on a user but we know that their company is not doing well in “Growing PS Revenues,” we can offer up content on that subject.

Figure 3: Connecting the SOF to PS benchmark data for the outcome of “Growing PS Revenues.”

We’ve also connected other metrics (such as attach rate) and practices to this ultimate outcome of “Growing PS Revenues,” as shown in Figure 4. By building a sequence of practices and metrics related to an outcome, we can provide a very structured solution to the outcome’s challenge, and by tagging articles to these practices, we can provide a small set of recommended readings for each step in the sequence (see the sketch below).
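One way to represent such a sequence is an ordered playbook mapping the outcome to practices and their tagged readings. This is only a sketch; apart from DM7, the practice names and document IDs are invented placeholders:

```python
# Sketch of a sequential solution: an outcome maps to an ordered list
# of practices, each with its tagged readings. Except for DM7, the
# practice names and document IDs are hypothetical placeholders.

outcome_playbook = {
    "Growing PS Revenues": [
        {"practice": "Improve PS attach rate",   "readings": ["DM7", "DM12"]},
        {"practice": "Standardize offers",       "readings": ["DM3"]},
        {"practice": "Expand delivery capacity", "readings": ["DM9", "DM15"]},
    ],
}

for step, entry in enumerate(outcome_playbook["Growing PS Revenues"], start=1):
    print(f"Step {step}: {entry['practice']} -> read {', '.join(entry['readings'])}")
```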

 

Figure 4: Metrics and practices connected to the outcome of “Growing PS Revenues.”

Finally, by connecting these internal pieces of knowledge back to our member outcome data in the benchmark, we can personalize recommendations based on a user’s company performance. For example, if attach rate is low, we can recommend the nine articles related to improving PS attach rate instead of all 300+ documents tagged to growing PS revenues.
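In code, that personalization step can be as simple as comparing a member’s metric to the benchmark and narrowing the recommendation pool. The threshold values, tags, and article counts below are illustrative:

```python
# Sketch of benchmark-driven personalization: if a member's attach rate
# lags the benchmark, recommend the small, focused set of articles
# tagged to that practice instead of everything tagged to the outcome.
# Benchmark values and article IDs are invented for illustration.

BENCHMARK = {"attach_rate": 0.40}

articles_by_tag = {
    "improve_ps_attach_rate": ["AR1", "AR2", "AR3"],             # focused set
    "growing_ps_revenues":    [f"GR{i}" for i in range(1, 301)],  # broad pool
}

def personalized_readings(member_metrics):
    if member_metrics["attach_rate"] < BENCHMARK["attach_rate"]:
        return articles_by_tag["improve_ps_attach_rate"]
    # Otherwise fall back to a capped slice of the broad outcome pool.
    return articles_by_tag["growing_ps_revenues"][:10]

print(personalized_readings({"attach_rate": 0.25}))  # ['AR1', 'AR2', 'AR3']
```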

Key Takeaways

It’s a lot to take in, so here are the key takeaways to help you get started on the right track.

  • Automated recommendation engines can leverage your KM data to create next-best offers, but they can be dangerous if done poorly.

  • Successful recommendations leverage a combination of methods (item clustering, similarity logic, and peer influence) but can require a large amount of consumption data to be optimized.

  • If you are short on consumption data, you can initially focus on item clustering that aligns with sequential solutions to common customer challenges.

  • Regardless of the recommendation method, tying recommendations to customer outcomes and metrics is your first step to creating automated outcome-based offerings.

About the Author

Bryan Girkins is a research analyst for TSIA and brings experience from his time at Deltek researching technology services within the public sector. Specifically, he analyzed industry trends affecting contractors as the Government started adopting cloud-based solutions and moved away from overly complex solutions. Bryan also brings experience as a Software/Business Analyst for Avaya Government Solutions. In that role, he worked on the product development team for a next-generation case management system for the Technology Division of the Administrative Office of the United States Courts, Office of Court Administration (AO/OCA/TD). He can be reached at bryan.girkins@tsia.com.

 

Topics: knowledge management, best practices, customer outcomes, optimization
