Why You Need To Measure Seller Behaviour Change Right Now

[Image: a mentor reviewing the performance of a sales rep]

Charles Darwin said it’s not the strongest or smartest species that survives, but rather the species that’s the most adaptable to change.

Chuck might as well have been talking about sales enablement. Sales reps’ tactics often need to change to meet the revenue goals set by leadership. And no matter how clever you think your programs are, leadership won’t care unless those programs get you closer to those goals. The only way to do that is to change seller behaviour.

So the question is: are you even changing how reps do their jobs? And, more importantly, are you measuring that seller behaviour change? Too often, sales enablement managers make the mistake of judging enablement effectiveness by how much professional sales training happened, and MAYBE revenue outcomes.

But that misses the step in the middle.

To get a complete picture of your sales enablement funnel, from program consumption to revenue impact, you need to measure every stage, not just the start and the end. This means looking at:

  • What training your reps consumed
  • How (and how much) that training changed behaviour
  • The revenue results of that new (hopefully better) seller behaviour

In this post, I’m going to take you through the three pieces of the new sales enablement funnel, what metrics need to be measured for each piece, and how an outcome-based enablement tool is going to be a massive help.

The New Sales Enablement Funnel

In short, it goes like this:

Program happens -> Seller behaviour changes -> Revenue outcome is achieved

All three pieces are important and interconnected, so you need to measure all of them.

How to make it work

For this whole thing to work together, you can’t start with the program. Instead, turn the whole thing upside down and start with the outcome:

Revenue outcome -> Seller Behaviour -> Program

Figure out where you want to go first, then work out the sales behaviours and skills you need to change to get there. Finally, design a program targeting those precise behaviours.

This approach has a couple of perks:

  • Sellers and managers are a lot more likely to buy into your program if you can prove how it’s going to make them money.

  • You can align your initiative to the overall revenue strategy of the organization more easily.

  • You will be in a much better position to finesse your programs over time (more on that in a minute).

Next, to make this whole thing work, you’re going to need some tools and technologies in your sales tech stack to:

  • Deliver and track program completion. There are tons of options out there, from the humble spreadsheet to LMS systems to sales enablement and readiness tools. You need to be able to assign enablement programs out and track their completion rates step by step.
  • Evaluate seller behaviour. Call intelligence platforms like Gong and Chorus are the market leaders here, but if those aren’t an option, you can look at activity metrics in your CRM to see if seller behaviour is changing.

  • Measure revenue outcomes. Your CRM is where your revenue data will live, but you need some way to tie it back to the original program. Outcome-based enablement tools like ours will do this best, but if you’re working with only a handful of sellers and are an Excel whiz, a bit of elbow grease and a lot of VLOOKUPs will get you there.
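If you do go the spreadsheet route, the join itself is simple. Here’s a minimal sketch in Python of the same idea as those VLOOKUPs: matching CRM revenue to program completion by rep, then comparing the two groups. All names and numbers are hypothetical.

```python
# Join CRM revenue back to program completion by rep -- the
# programmatic version of the VLOOKUP approach. Hypothetical data.
completed = {"Ana": True, "Ben": True, "Cara": False, "Dev": True}
closed_won = {"Ana": 120_000, "Ben": 95_000, "Cara": 60_000, "Dev": 110_000}

# Split closed-won revenue by whether the rep finished the program.
trained = [closed_won[r] for r, done in completed.items() if done]
untrained = [closed_won[r] for r, done in completed.items() if not done]

print(f"Avg closed-won (trained):   ${sum(trained) / len(trained):,.0f}")
print(f"Avg closed-won (untrained): ${sum(untrained) / len(untrained):,.0f}")
```

With real data you’d control for tenure, territory, and timing before drawing conclusions, but this is the basic shape of tying revenue back to a program.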

Now let’s dive further into each of these pieces.

Revenue Outcomes

Remember, start with the revenue outcome.

Key metrics: Use the sales velocity equation to break down revenue into four key components: number of opportunities, average deal size, win rate, and sales cycle length. (There are others, but these four are the big ones.) First, pick your metric: talk to your sales team and find out which of these they’re struggling with. Which metric is going to stop them hitting their objective?
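The sales velocity equation multiplies the first three components and divides by the fourth, giving revenue per day. A quick sketch with hypothetical numbers shows how moving any one lever moves the whole result:

```python
# Sales velocity: expected revenue generated per day by a pipeline.
#   velocity = (opportunities * win_rate * avg_deal_size) / sales_cycle_days

def sales_velocity(opportunities, win_rate, avg_deal_size, sales_cycle_days):
    return opportunities * win_rate * avg_deal_size / sales_cycle_days

# Hypothetical pipeline: 50 open opps, 25% win rate,
# $10,000 average deal, 90-day cycle.
baseline = sales_velocity(50, 0.25, 10_000, 90)
print(f"${baseline:,.0f} per day")  # ≈ $1,389 per day

# Cutting the cycle to 75 days (behaviour change!) lifts velocity:
improved = sales_velocity(50, 0.25, 10_000, 75)
print(f"${improved:,.0f} per day")  # ≈ $1,667 per day
```

Whichever of the four components your sales team is struggling with becomes the revenue outcome your program targets.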

Pro tip: As you go through this exercise, it’s going to be tempting to get pulled into a discussion of how. Keep the conversation focused on the what (the metric) and leave the how you’re going to get there for another conversation.

The Seller Behaviour Piece

Key metrics: This is most likely to show up in the number of activities or the conversion rates between sales stages. But it’s not just about doing the activities; it’s about the quality of the activities. To show a change in seller behaviour, you’ll need to benchmark the associated KPIs before and after a program. Some things, like the quantitative aspect of behaviours, can be measured immediately. Take the following two examples:
  • Suppose it was determined that deals were not moving forward because reps were engaging low-level staff, so reps were told to book more meetings with VP level staff. Tracking behaviour change in this case is simply seeing if the number of such meetings increases.
  • Suppose it was found that sales reps were taking too long to get back to prospects, which was leading to a lot of leads going cold. The task would then be to reduce lead response time. This can be measured directly in your sales software.
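For a quantitative KPI like lead response time, the before/after benchmark is a straightforward calculation. Here’s a minimal sketch, assuming hypothetical response times (in hours) pulled from a CRM for the periods before and after the program:

```python
from statistics import mean

# Hypothetical lead response times in hours, exported from the CRM
# for the 30 days before and after the enablement program.
before = [26.0, 14.5, 31.0, 22.0, 48.0, 19.5]
after = [6.0, 9.5, 4.0, 12.0, 7.5, 10.0]

baseline, post = mean(before), mean(after)
change = (baseline - post) / baseline
print(f"Avg response: {baseline:.1f}h -> {post:.1f}h ({change:.0%} faster)")
```

The same pattern works for meeting counts, stage conversion rates, or any other KPI you benchmarked before the program ran.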

Qualitative aspects require performance to be reviewed over a longer period. It’s not so much about whether the activity happened as about going into the activity and seeing how well it was performed. That lets you truly answer the question: did the training change seller behaviour? Take the following examples.

  • Between the demo stage and trial stage in your sales funnel, the average conversion rate is lower than that of your top performers. Using a call intelligence tool to track calls, you see a pattern emerge: underperforming reps were talking too much and missing churn indicators, which the top performers picked up on. You run a program to train reps to listen for prospects’ pain points rather than trying to hard sell.

  • A disease management partner was facing enrolment and retention challenges. It was found that customer engagement was low in the initial calls leading to many lost prospects. Using call tracking, it was found that many emotional objections came up in these calls (given the nature of the industry). Reps were ill-equipped to handle such objections. A program was run on how to tackle the top emotional objections, sales coaches developed a new introduction to calls, and the assessment process was changed from an interrogation style to a more natural and conversational approach.

  • In a factory equipment company, the core metric was changed from new transactions to account development. This required reps to change the way they approached conversations: from a transactional approach of helping customers buy single parts or prototypes, to a more far-sighted approach focused on earning their ongoing business.

In each of these examples, a call intelligence tool can track calls post-training to measure uptake and whether your programs did indeed change seller behaviour.

Here’s a real-life example of this:

TouchBistro, an iPad restaurant POS system, leveraged the LevelJump + Gong partnership to run programs, track seller behaviour change, and tie enablement to the ultimate outcomes. 

TouchBistro wasn’t hitting revenue goals because of small deal sizes, which was a result of heavy discounting. So to increase deal size, they changed their sales approach from discount-based to value-based.

They refined their messaging to focus on restaurant pain points, and, using LevelJump, rolled out micro-enablement programs for each message.

Then, with Gong, they monitored reps’ calls to measure if reps were using the new messaging post-training, and if so, what revenue impact it had.

The tools showed a 175% increase in value-based selling and a 34% increase in deal size. The program was a monumental success.

The Program Piece

Key metrics: Program completion rates, enablement content consumed, quiz scores, certification rates, etc.

At this stage, you are designing programs to change seller behaviour. The key metrics to track are consumption metrics (did people actually do the training?) and assessment metrics (did they absorb the training they just did?).

This is pretty straightforward. If sellers aren’t taking your training or aren’t remembering much, then it’s unlikely to have any effect down the line.

For example, let’s suppose in this quarter the desired outcome is to decrease how long it takes a deal to close. Ideally, you would meet with sales leaders to first agree that decreasing the sales cycle is a valid objective, and second, figure out what behaviours need to change to meet this outcome. Suppose you agree on the following:

  • Reps need to focus their energy on fewer accounts at a time
  • They need to map the key influencers as well as decision-makers
  • Reps need to uncover (or establish) a compelling event early in the sales cycle to build urgency

From this list, you will understand what programs need to be run.

“But Spencer!” you cry. “I have no idea what behavior leads to faster sales cycles!”

Never fear, my friend. That’s where a call or revenue intelligence tool like Gong can help. Go back and find your 10 shortest and 10 longest sales cycles, and listen to the calls. What are the fast closers doing differently? How are they building urgency and driving the cycle? Once you know those behaviours, you can start cloning them with enablement programs.
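Finding those outlier deals is just a sort on cycle length. Here’s a minimal sketch, assuming a hypothetical CRM export of deal IDs with created and closed-won dates (with only a few deals shown, you’d take the top and bottom 10 from a real export):

```python
from datetime import date

# Hypothetical CRM export: (deal_id, created, closed_won).
deals = [
    ("D-101", date(2023, 1, 5), date(2023, 2, 1)),
    ("D-102", date(2023, 1, 9), date(2023, 4, 20)),
    ("D-103", date(2023, 2, 2), date(2023, 2, 28)),
    ("D-104", date(2023, 2, 14), date(2023, 6, 1)),
]

# Sort by sales cycle length in days.
by_cycle = sorted(deals, key=lambda d: (d[2] - d[1]).days)

fastest = by_cycle[:10]   # pull the call recordings for these deals...
slowest = by_cycle[-10:]  # ...and compare them against these
print("Fastest cycles:", [d[0] for d in fastest])
```

Listening to the calls behind each group is what surfaces the behaviours worth cloning.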

Final thoughts and wrap up

Behaviour change is hard. Changing seller behaviour takes time, patience, and a lot of diligence. Sales managers often end up stuck because they know it’s a good idea but are focused on the short-term objectives in front of them.

And who can blame them? Sales is a time-sensitive function. Given infinite time, even a room full of monkeys can hit revenue goals (or is it “produce the works of William Shakespeare”?).

In any case, modern sales enablement is far more evolved than a room full of monkeys.

But since sales managers are under constant pressure, it is perhaps doubly important to constantly measure metrics across all three pieces of the new sales enablement funnel.

More pertinently, enablement managers need to benchmark seller behaviour and outcomes before and after programs, and measure the same at regular intervals to show sales managers the trend of change.

To wrap it up, here’s what you need to do and measure to effectively ride the new sales enablement workflow:

  • Talk to the sales team and work backward from the desired revenue outcome to pick the metric that’s blocking their objective.

  • Benchmark the KPIs associated with the seller behaviours that your training will seek to change.

  • Once the training programs are designed and given to reps, measure completion and assessment metrics as an early indicator of behaviour change and revenue outcomes.

  • Use call tracking to see if the training worked. Compare the new measurements of the pre-decided KPIs to the benchmarks.

  • Measure actual revenue outcomes and tie them back to the change in seller behaviour.

In the end, you’ll have a solid set of data showing the extent to which your programs were taken and understood, the change in seller behaviour they caused, and the revenue outcomes they drove.

Ultimately, enablement needs to help the sales team do their job: make more money for the business. If sellers are not adapting to the need of the hour, then no amount of training is going to do any good. Enablement needs to change seller behaviour to reach revenue goals. Natural selection won’t be kind to organizations that fail to do this.

Image credit: Scott Graham via Unsplash