
"Our conversions are down. Quick, give me a best practice to improve sales ASAP."

It's very easy to assume what works and what doesn't work on a website.

But consider this:

Even the most obvious ideas can have the exact opposite effect.

For example, offering as many payment methods as possible won't work if customers already have card details stored in browser auto-fill functions.

That's why no idea is too "best practice" to skip testing.

And some of those beliefs are actually myths that are hindering your experimentation program.

Here are 5 important ones that you need to avoid to truly scale your experimentation results. 


The 5 myths affecting your culture of experimentation

  1. Testing is just about optimization

    Myth: Experimentation is an add-on to the development process 
    Truth: Experimentation and development are part of the same process. 

    Whether it's a new feature, product, or campaign, the pressure is to ship everything fast.

    Experimentation gets reserved for the stage when everything is already built.

    It happens AFTER investing weeks, months, or quarters on the building part.

    Imagine walking through a dark forest for hours without checking a compass or map. Makes no sense, right?

    Without that check, whether you're building the right product or feature for your audience becomes guesswork.

    And what if you aren't?

    This is why you need to introduce experimentation early in the development process.

    The earlier you test something, the sooner you identify potential opportunities for improvement, or errors in judgment that need correcting.


    Testing your hypotheses early, at the ideation stage, makes room for bigger and better ideas.

    You avoid assumptions about what matters, like:

    • Who the target users are 
    • Their needs and pain points 
    • What value they will get 


    With experimentation, you can ship ideas your customers actually want to use. 

    See it in action with this product experimentation guide

  2. More tests mean more impact 

    Myth: Velocity is about the number of tests
    Truth: Conducting more tests isn’t always the answer 

    The world of experimentation is a numbers-obsessed one. Everywhere you look, you're surrounded by programs, case studies, and advice on conducting "more" tests.

    But it seems this experimentation advice doesn't align with better business outcomes.

    Here's what we found: When programs move beyond 30 tests per developer per year, the expected impact drops by a whopping 87%. 

    Volume at the cost of quality can harm performance and the expected impact of your experiments.

    That is why you need to carefully plan how to use your developer resources. Maybe you’re not eyeing 30 tests per developer just yet, but consider this: The highest expected impact occurs at 1-10 annual tests per engineer. 

    Running 1-10 tests per developer per year leaves enough room for every test to be planned and executed with high quality.

    It requires:

    • Setting clear hypotheses and sharing them with the experimentation team 
    • Carefully planning resources 
    • Providing autonomy for decision-making 

    Quality -> Velocity -> Impact (and conversions) 

  3. Analytics is simply data 

    Myth: Having loads of data will give you the insights you need
    Truth: Having data is easy. Making sense of it is hard.

    The real meaning of the word 'analytics' has been lost through its conflation with data.

    When people talk about analytics, they often mean interrogating data or simply reporting numbers. That's number crunching.

    Analytics is neither. It is breaking results down into their component parts to understand the whole.

    That requires critical thinking, assisted by data.

    What you need: 

    • Identifying the problem 
    • Designing the analyses and experiments 
    • Recognizing and avoiding cognitive bias when interpreting results 

    This is real analytics. And experimentation helps you put it in place.

  4. Experimentation means A/B testing 

    Myth: A simple test will increase your website conversions. 
    Truth: Delivering impact takes more than a variant.
    Most tests: Button color & copy. 

    Focusing on quick wins when starting your program is a great way to build momentum. It helps establish a culture of experimentation, lets you report results across initiatives, and ideally lets you celebrate that big win. Perfect.

    However, if you're ready to scale, here's why we recommend moving beyond simple A/B tests:

    • No long-term learning 
    • No hypothesis apart from "what if..." 
    • No major uplift or impact on business goals 

    When scaling, the focus needs to shift from velocity to impact.

    The key to that is running complex experiments.

    And delivering impact takes more than a tiny change. 

    For example, only a third of experiments make more than one change, yet those that do show much better returns.

    Bar chart: expected returns by number of changes per experiment. Source: Lessons learned from 127k experiments 

    Complex experiments:

    • Use more developer resources across a diverse portfolio of iterative changes (pricing, discounts, checkout flow, data collection, etc.) 
    • Document every change, and directly improve how users interact with and behave on your website/app. 
    • Choose experimentation metrics that measure operational efficiency, quality, and program adoption, depending on your company's needs. 

    With complex experiments, you can deliver value, build trust, and get resources to scale your program. 
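
    To make "more than a variant" concrete, here's a minimal sketch of how users could be deterministically assigned to an A/B/C/D test. This is an illustrative Python snippet, not any specific testing tool's API; the variant names, the even split, and the experiment name are assumptions for the example.

    import hashlib

    # Illustrative sketch: stable assignment of users to an A/B/C/D test.
    # Hashing the user ID together with the experiment name keeps each
    # user in the same variant across sessions and devices.
    VARIANTS = ["A", "B", "C", "D"]  # A = control; B-D bundle several changes

    def assign_variant(user_id: str, experiment: str) -> str:
        """Map a user to one of four variants with a stable hash."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % len(VARIANTS)  # uniform 4-way split
        return VARIANTS[bucket]

    # Each variant can bundle coherent changes (say, pricing plus checkout
    # flow), so the test measures an experience rather than a button color.
    print(assign_variant("user-123", "checkout-redesign-q3"))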

  5. There is more failure than success in experimentation

    Myth: Win rate is the only thing that matters
    Truth: Win rate alone is a vanity metric

    Roughly, only 12% of experiments win on the primary metric. Does this mean experimentation is about facing more failure than success? 

    It's not that simple. Win rate alone is a vanity metric. Sure, it's important, especially for winning leadership buy-in when your program is new. But when scaling, expected impact is a better indicator of success. 

    It focuses on the uplift delivered, and we all know businesses care about ROI the most. At the same time, you need to flip things around and start learning from your tests 100% of the time. 

    By focusing on the losing and inconclusive results, you get to eliminate anything that has a negative effect, and you stop investing time and resources in areas that won't generate returns for the business.
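
    As a rough illustration of why expected impact beats raw win rate, here's a hypothetical sketch. It simplifies expected impact to win rate times the average uplift of winning tests (real programs also weigh traffic, costs, and confidence), and the numbers are made up for the example.

    # Hypothetical comparison of two programs on simplified expected impact.
    def expected_impact(win_rate: float, avg_winner_uplift: float) -> float:
        """Simplified expected uplift per experiment run."""
        return win_rate * avg_winner_uplift

    # Program A: many shallow tests with a high win rate but tiny wins.
    program_a = expected_impact(win_rate=0.25, avg_winner_uplift=0.01)

    # Program B: fewer complex tests with a lower win rate but bigger wins.
    program_b = expected_impact(win_rate=0.12, avg_winner_uplift=0.05)

    print(f"A: {program_a:.4f} uplift per test")  # 0.0025
    print(f"B: {program_b:.4f} uplift per test")  # 0.0060

    Program B "loses" more often, yet delivers more than twice the expected uplift per experiment, which is why win rate alone can mislead.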

    So, failure happens if you don’t learn from all your experiments. 

    And the magic happens when you embrace experimentation across the digital product lifecycle. Here’s what that looks like in practice.


How to revive your experimentation program

Here are 5 things that can help you bring your program back on track: 

  1. Test the entire digital experience 
  2. Optimize on every device 
  3. Research + Data-driven marketing > Opinions
  4. Have integrations for key systems 
  5. Focus on key metrics 

Here's an example of what that looks like in practice: 

  1. Here's what we recommend, especially if you're running an ecommerce business: 

    • Make the path from your landing page to having the product in hand seem fast and easy. 
    • A clear delivery window with upfront pricing.
    • An easy-to-understand returns policy to reduce commitment pressure.
    • Unboxing videos to help visualize the product experience.
    • How-to-use section to show what to do when the product arrives.
  2. Improve your messaging by communicating your value proposition and product details.  

    • Where am I and what does this site sell?
    • How does it solve my problem and what do other previous customers think about it?
    • What makes this the best product/service on the market?
    • What strong reasons do I have to purchase now?
    • How do I get it and what do I do next?
  3. Remove anything that distracts visitors from the main call to action.  

    • Aggressively cut content and remove things like countdown timers.
    • Use a sticky add-to-cart instead of sticky headers.
    • Direct users to relevant products on collection pages.
  4. Align your content with visitor intent.

    • Keep the messaging consistent with the pre-click experience.
    • Consider which stage your traffic is at in the consideration cycle and show content accordingly.
    • For example, show longer form explanatory content if a visitor is at the awareness stage.

To learn more about achieving incremental improvements over the long term, check out this conversion rate optimization article.

Final takeaways

Here are three takeaways to help you avoid myths and build a successful program: 

  1. Go with ABCD instead of just AB tests. 
  2. Run complex experiments and add critical thinking to your data.
  3. Choose the right metrics, ones that improve your overall site rather than just your win rate. 

Keep learning

Experimentation is a fundamental way for the whole business to explore and develop ideas. It validates, without investing a lot of resources, whether an idea is worth building and what the most efficient way to build it is.

To learn more about the process, here are a few resources we recommend: