You know what’s worth celebrating? Running an A/B test across three versions of copy and having the winning variant deliver a 36% increase in email opt-ins at a 97% significance level. A friend’s optimization team estimated that, rolled out across every web property, this could conservatively yield $750,000 based on the average value of an email subscriber. This was the actual outcome of one of the first tests the team ran to kick off its testing program. To them, it felt great. $750,000 was significant for the small company. In their heads, confetti would rain from the sky, they’d have a hotel room booked that night – with a bathtub full of $1 bills… no, $100 bills – and their position would be impervious. Maybe that’s just how I envision it, but nevertheless the team couldn’t wait to share the news.
In an alternate universe, the boss would have given the team the afternoon off, shared the results with his own boss, and used the test as a case study to help propel the team’s A/B testing program. Instead, the news was met with cautious optimism.
Boss: “That’s good to hear, but before we show anyone the data, can you split the traffic into mobile vs. non-mobile? Also, is there a way to separate people who might have registered before from those who haven’t? I’m just not sure we’re ready to make this change.”
Why did this throw the team off? After all, the boss obviously cares about the details! A few things are wrong here. For one, the boss is creeping into analysis-paralysis territory: the extra information likely won’t contribute to the story, and will instead become a distraction that diminishes the outcome of the test. By slowing down the testing process, the boss is killing its momentum (well, duh). The team had (hopefully) done its due diligence up-front and was ready to celebrate a major win. Sapping that momentum changes how the team perceives its work. When the team assumes that the results of every basic test will be heavily scrutinized and generate hours of additional work, morale suffers.
Don’t get me wrong, I’m totally on board with digging deeper, but there’s a time and a place. Maybe the boss didn’t completely trust the revenue figure. Sure, there might have been a little saturation in the mix – but the estimate was conservative. And the team was 97% confident that email sign-ups increased by 36%! That’s an opportunity to broadcast the result as a success and help legitimize the team’s efforts to foster a more data-driven culture.
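(For the curious: “97% confident” in a test like this typically comes from a significance test on the two opt-in rates. Here’s a minimal sketch of a pooled two-proportion z-test; the visitor and sign-up counts below are hypothetical, chosen only to reproduce a roughly 36% relative lift, since the actual numbers weren’t shared.)

```python
import math

# Hypothetical counts -- the real test's raw numbers weren't shared,
# so these are illustrative only.
control_visitors, control_signups = 10_000, 500   # 5.0% baseline opt-in rate
variant_visitors, variant_signups = 10_000, 680   # 6.8%, i.e. ~36% relative lift

p1 = control_signups / control_visitors
p2 = variant_signups / variant_visitors

# Pooled two-proportion z-test: is the lift real, or plausibly just noise?
p_pool = (control_signups + variant_signups) / (control_visitors + variant_visitors)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))
z = (p2 - p1) / se

# Two-sided p-value from the normal CDF (Phi(x) = 0.5 * (1 + erf(x / sqrt(2))))
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"relative lift: {(p2 - p1) / p1:.1%}")   # ~36%
print(f"z = {z:.2f}, p = {p_value:.4f}")        # "97% significant" means p < 0.03
```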
It’s important to ask questions and generate alternative hypotheses, but the time to do that is up-front – before the test is run. Asking for segmentation and a deep-dive after a very basic A/B test is an impediment. It kills the enthusiasm and momentum of the team executing the test and breeds a culture of unnecessary skepticism – not to be confused with a culture of curiosity. Beyond that, it’s important to celebrate any outcome with the team, whether a variant wins or loses. By celebrating outcomes you’re fostering a culture that doesn’t view the “control” winning as a failure, but instead as a learning opportunity. Celebrating outcomes adds critical momentum to help your team move on to the next experiment.
The flip side: I’ve put a lot of thought into this because it’s an interesting case study. This wasn’t a one-off, so my question was: if this is happening frequently, why not change the process? Bring the boss in early and talk through the test’s execution and measurement so there aren’t any surprises. Making stakeholders part of the process makes it easier to celebrate the outcome because there’s a shared sense of ownership. It’s also an opportunity to educate folks about the KISS principle (Keep It Simple, Stupid). Educating your stakeholders, and ensuring they’re comfortable with any test variant winning, can be as important as the test itself.