Biz Tips: Organizing Growth Experimentation

The Right Way to Conduct Growth Experimentation

Oooooh, a new shiny object! 💠 Marketing technology continues to evolve far faster than marketing teams can internalize, interconnect, and leverage it. Compounding the problem: while in theory more data could mean better signals, in practice it often means more noise that drowns out the signals.

We love to say we’re conducting “experiments” because it sounds scientific. We love to say we’re only trying something as a “pilot” because it sounds non-committal and thus something that can hopefully avoid bureaucratic processes.

Understandable. Note, however, that while you may want to run a pilot to test a new martech tool, we should not confuse that with an experiment. An experiment tests a falsifiable hypothesis. As marketers, our hypotheses should be about who is using our product, why, in what way, and how much value (and what kind) our product is delivering.

Any learning process based on the scientific method is experimentation-centric. The handy graphic below is a good refresher on what exactly the scientific method is:

Image borrowed from newmr.org

We can and should take a more disciplined approach to marketing experimentation. An ideal approach: 1) is truly governed by the scientific method, 2) increases institutional knowledge, and by extension, competitive advantage, and 3) results in a better experience for our customers.

I’m going to walk through how we organize our marketing experiments, using our own product, GLIDR. The basic steps are:

  1. List and prioritize your assumptions.
  2. Structure and conduct a scientifically valid experiment.
  3. Organize and analyze the evidence.

If you’re familiar with the Build/Measure/Learn loop, this is essentially the same concept.
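To make the first step concrete, the steps above can be sketched as a small assumption-backlog structure. Note this is a hypothetical illustration, not GLIDR's actual data model: the `Experiment` record, its field names, and the prioritization rule (riskiest first, here approximated by untested-assumption count) are all my own assumptions for the sake of example.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """One falsifiable hypothesis plus the assumptions underneath it."""
    hypothesis: str                                   # the falsifiable statement
    assumptions: list = field(default_factory=list)   # untested assumptions, riskiest first
    metric: str = ""                                  # the quantitative signal to measure
    evidence: str = ""                                # filled in after the experiment runs

# Step 1: list and prioritize assumptions behind each idea.
backlog = [
    Experiment(
        hypothesis="Segment B converts to free trial at a higher rate than segment A",
        assumptions=[
            "Average reader comprehension of our messaging is healthy",
            "We have a reliable conversion baseline for the control segment",
        ],
        metric="website visitor -> free trial conversion rate",
    ),
    Experiment(
        hypothesis="Our onboarding email lifts trial activation",
        assumptions=["Trial users actually read onboarding email"],
        metric="trial -> activated-user rate",
    ),
]

# A crude prioritization: test ideas resting on the most untested assumptions first.
backlog.sort(key=lambda e: len(e.assumptions), reverse=True)
```

The point of the structure is that the hypothesis, its assumptions, and the metric travel together, so the evidence you record later is always read in the context of what you were assuming at the time.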

Assumptions:

Every idea has some assumptions behind it. For the sake of example, let’s say I’m interested in testing my hypothesis that our product will resonate better with a particular audience segment. “Resonance” will be measured by the website visitor to free trial conversion rate. That’s a fairly simple hypothesis, and at first glance one might not notice any related assumptions I’ve made. However, there are several:

  • I’m assuming that the average reader comprehension for my product messaging is healthy. If it’s not, and nobody understands my product messaging, it doesn’t make a lot of sense to test segment vs. segment performance.
  • If average reader comprehension is healthy, I’m still assuming that we have a good baseline of data for the control group segment.
  • I’m assuming that my value proposition resonates with the test group.
  • I’m assuming that the website visitor to free trial conversion rate is a good quantitative indicator of how well my product resonates.

Listing our assumptions like this can help us evaluate the experiment results in richer context. It can also help us decide which crucial assumptions are underpinning our business and therefore, which should be tested through experiments. In fact, given the listed assumptions above, we should probably move on to a messaging comprehension test soon after we’ve completed testing our audience segmentation hypothesis.
