How do I improve my GTM execution with experimentation?

OpenView’s 2023 SaaS Benchmarks report clearly shows go-to-market execution as the number one concern for SaaS founders.

There is no point having a great product, a great team, and cash in the bank if you can’t figure out how to sell.

I start this post with that finding because of what comes second - product execution.

The process for developing a great product through experimentation has been well defined over the last 20 years.

We also use many of those techniques when growing revenue in a Product Led Growth motion, because the product is the source of the revenue.

But when it comes to Sales Led Growth, we rarely follow a process of experimentation.

Instead we follow the HiPPO.

The Highest Paid Person’s Opinion.

“We should do this, we should do that”

“Why?”

“Because I said we should.”

Here are five questions I ask to help clients start thinking about their go-to-market motion through the lens of experimentation.

What go-to-market experiments are you running right now?

This is a good opening question because founders and revenue leaders can almost answer it, but not quite.

They are trying things. They are rolling out new ideas.

But they aren’t really experiments.

An experiment is:

“a scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact.”

In an experiment you are testing something specific, to establish whether or not it is true.

In a simple product experiment you might want to prove that a green button on a landing page will convert at a higher rate than a blue one.

When you consider go-to-market initiatives that you are implementing - have you really defined the thing you are trying to prove?

What hypotheses do you have?

Assuming we agree that you aren’t running any true go-to-market experiments today, let’s now consider how your upcoming initiatives could be reframed as experiments.

Here are some things you might be working on:

  • Changing your commission plan

  • Rolling out a new call recording tool

  • Creating new email templates

  • Adjusting sellers’ territories

  • Changing pricing

  • Reducing account holdover periods

These are all typical initiatives a sales led function might be conducting.

How would you frame one of these as a hypothesis?

A hypothesis frames a situation and then describes what you believe the result will be.

“I believe that if I water the plants then they will grow more quickly”

A hypothesis has two variables:

  • The independent variable (the thing you will change for your experiment): how much you water the plants

  • The dependent variable (the thing you will monitor for the outcome): how much the plants grow

In the product context,

“We believe that if we change the button colour to green, then more web visitors will click it.”

In your GTM context,

“We believe that if we increase the accelerators for our top performers, we’ll reduce voluntary attrition.”

“We believe that if we reduce the number of accounts in a territory, the SDRs will increase their opportunity creation.”

In my experience, HiPPOs are good at enforcing the independent variable (“we will change this”), but not good at defining what the intended outcome is or, as we will now see, how to test it.

How will you gather hypotheses from your team?

A hypothesis is an idea, a suggestion, and therefore you want to capture as many of these as possible from across your go-to-market team.

It is your individual contributors who are closest to your customers - they will have a long list of hypotheses if you ask them for it.

Depending on the tech you use, create a capture process (one possible record format is sketched after this list):

  • Very simple: Google Form, which populates a spreadsheet

  • Medium complexity: Asana, Airtable, Trello, Monday.com

  • Higher complexity: custom object in Salesforce or HubSpot
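
Whichever option you pick, the shape of the record matters more than the tool. Here is a minimal sketch in Python of what a hypothesis log entry might capture; the field names and example values are my own illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Hypothesis:
    """One row in the hypothesis log, whichever tool holds it."""
    submitted_by: str          # the individual contributor closest to the customer
    independent_variable: str  # the thing you will change
    dependent_variable: str    # the metric you will monitor
    statement: str             # "We believe that if we X, then Y"
    submitted_on: date = field(default_factory=date.today)

idea = Hypothesis(
    submitted_by="SDR, EMEA team",
    independent_variable="accounts per SDR territory",
    dependent_variable="opportunities created per SDR per month",
    statement="We believe that if we reduce the number of accounts "
              "in a territory, the SDRs will increase their opportunity creation.",
)
```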

On every team call, every monthly kick-off, every 121, encourage hypothesis submission from the team.

“I believe if we did this, then this would be the impact to our revenues…”

How will you prioritise your experiments?

Hacking Growth by Sean Ellis and Morgan Brown defines a methodology for prioritising your experiments, and I’m yet to read anything that beats it.

The ICE framework stands for:

  • Impact: on a scale of 1 to 10, if this hypothesis were true, what would the impact on our revenue be?

  • Confidence: on a scale of 1 to 10, how confident are we that this hypothesis will prove true?

  • Ease of implementation: on a scale of 1 to 10, how easy would it be to implement this change?

There is no point running experiments where the impact is low, where you don’t believe in the hypothesis, or where the change would be so complex to implement that even if the hypothesis were true, you wouldn’t move forward with it.

Each hypothesis receives a score out of 30, and this allows you to pick off the top two or three to test each cycle.
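
To make the arithmetic concrete, here is a minimal sketch of the scoring and ranking, following the sum-out-of-30 approach above. The hypotheses and scores are invented for illustration:

```python
# Rank hypotheses by ICE: Impact + Confidence + Ease, each scored 1 to 10.
backlog = [
    {"hypothesis": "Increase accelerators for top performers", "impact": 8, "confidence": 6, "ease": 4},
    {"hypothesis": "Reduce accounts per SDR territory", "impact": 7, "confidence": 7, "ease": 8},
    {"hypothesis": "New outbound email templates", "impact": 5, "confidence": 6, "ease": 9},
]

for h in backlog:
    h["ice"] = h["impact"] + h["confidence"] + h["ease"]  # score out of 30

# Pick off the top two or three to test this cycle.
for h in sorted(backlog, key=lambda h: h["ice"], reverse=True)[:3]:
    print(f'{h["ice"]:>2}/30  {h["hypothesis"]}')
```

Summing the three scores rather than averaging them produces the same ranking; what matters is scoring every hypothesis on a consistent scale.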

How will you test your hypotheses?

Having prioritised your hypotheses, we again need to take a leaf out of product development’s book.

We test our hypothesis at a small scale, then use the results of the experiment to decide whether to roll the change out to the entire customer base.

In our example of the green and blue buttons, we can show a green button to a subset of web visitors - maybe 15%.

If the 15% who see the green button do indeed click it at a higher rate than those who see the blue button, then we can update the button for the entire website and move on to a new test. This is called A/B testing.

In product development these tests can run in weeks, days, and in some cases just hours, because traffic volumes generate enough evidence to prove or disprove the hypothesis quickly.
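
If you want to check that a difference like this is real rather than noise, a standard two-proportion z-test is enough. A minimal sketch, with invented traffic and click numbers:

```python
import math

def two_proportion_z(clicks_a, visitors_a, clicks_b, visitors_b):
    """Z-statistic for the difference between two click-through rates."""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# 85% of visitors saw the blue button, 15% saw the green one.
z = two_proportion_z(clicks_a=340, visitors_a=8500,   # blue: 4.0% click rate
                     clicks_b=78, visitors_b=1500)    # green: 5.2% click rate
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level
```

With these numbers z ≈ 2.1, which clears the conventional 1.96 threshold, so the green button’s higher click rate is unlikely to be chance.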

This process of testing is normal in product development - but in go-to-market we see initiatives being rolled out without any testing - “here is the new pricing/model/territory - get on with it.”

Let’s take the hypothesis that increasing accelerators will reduce voluntary attrition among our top performers.

On the face of it, that is very hard to test - attrition happens over a relatively long period of time, and the impact of accelerators only comes into play as new deals get closed.

But think through with your team:

  • Could you survey the team about the accelerators?

  • Could you have 121s and get a pulse rating?

  • How else could you track the flight risk of top performers?

  • Could you deploy the additional accelerators as a SPIFF to a single region or team?

  • Could you track the historical attrition of your top performers and measure it against the current trend (sketched below)?

Figure out how to test your hypothesis. If you can’t, it’s not an experiment.
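
To make the last idea on that list concrete, here is a minimal sketch comparing the trailing voluntary attrition of top performers against a historical baseline. The headcount and leaver figures are invented:

```python
# Each tuple: (top performers at start of quarter, voluntary leavers that quarter)
historical_quarters = [(40, 3), (42, 2), (41, 4), (43, 3)]  # the year before the change
current_quarters = [(44, 1), (45, 1), (46, 2), (45, 1)]     # the year of the change

def annual_attrition(quarters):
    avg_headcount = sum(h for h, _ in quarters) / len(quarters)
    leavers = sum(l for _, l in quarters)
    return leavers / avg_headcount

baseline = annual_attrition(historical_quarters)
current = annual_attrition(current_quarters)
print(f"baseline: {baseline:.0%}, since the accelerator change: {current:.0%}")
```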

Go faster by shortening the experimentation cycle

Go-to-market teams have a horrible habit of annual planning.

In months 11 and 12 of the year there is frantic work to adjust commission plans, territories, pricing, and packaging.

That gets launched to the sales teams at an annual kick-off, and then very little changes for the remainder of the year until “planning season” starts again.

In effect these are one-year ‘experiments’ - way too long.

Our product colleagues would laugh at us if we said we were going to wait a year to see the results of these changes.

Three ways to shorten the cycle:

Increase the volume of hypotheses into the top of the funnel.

Mention go-to-market experiments on your sales all-hands calls. Recognise and reward those who log ideas. Reinforce that ideas from the field are often the ones with the biggest impact on revenue. Provide submitters with access to senior executives or additional career development.

Reduce the testing time to weeks or months

Where a test relates to something high volume, such as an email template, a website change, or an event landing page, it can be tested in days or weeks.

For something that is lower volume, consider how you can break the test down into a shorter timeframe. Results in three quarters’ time are too slow.

Implement the changes to demonstrate the impact

Experimentation for the sake of it doesn’t impact your revenue. Having proved a hypothesis, you need to demonstrate to the wider team that you can move fast and make the required changes.

Updating pricing, changing the sales process, launching new events.

You want your sellers to see a high rate of activity: an idea comes in, gets prioritised, tested, and deployed in weeks or months, not years.

Conclusion

Adding experimentation into your GTM plan helps accelerate your learning as you uncover your path to repeatable, consistent revenue growth.

Encourage hypotheses from your teams, and develop a system for prioritising, testing and implementing your results.

Good luck, and if you need help - I love helping founders and revenue leaders implement this process.


Get started

Whenever you are ready, there are three ways that I can help you accelerate your revenue.

  1. RevOps Maturity Assessment - Take my free 22 question assessment and receive specific suggestions on how to improve your revenue growth.

  2. Business Model Design Workshops - I’ll work with you and your team to design or refine a business model and value propositions for a new or existing product.

  3. Pipeline Emergency Rescue - I’ll fix your pipeline problem in 12 weeks, working across your revenue teams to create and launch refined value propositions, buyer enablement tools, and new campaigns.
