Building a Testing Culture with the 3 Second Scan

Yet another article trying to diagnose and solve a problem that every organization could likely solve differently. I’m solving this problem for companies that are in very immature stages of testing/optimization. That means you may have run one or two tests but want to run tests continuously and simultaneously. Go to any other site to find best practices. Yes, hit something like 90% statistical significance, pick 1 KPI, QA your shit, watch traffic volume, pay attention to local maximums, etc…
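If you want to sanity-check that significance bar yourself, here's a minimal sketch of a pooled two-proportion z-test in plain Python (the function names and the 90% default are my own choices for illustration, not a standard library API):

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))             # standard normal CDF at |z|
    return 2 * (1 - phi)

def is_significant(conv_a, n_a, conv_b, n_b, alpha=0.10):
    """True when the observed lift clears the chosen significance bar."""
    return two_proportion_p_value(conv_a, n_a, conv_b, n_b) < alpha

# Control: 500 of 10,000 sessions convert. Variant: 600 of 10,000.
print(is_significant(500, 10_000, 600, 10_000))
```

Any real testing tool will do this math for you; the sketch is just to show there's no magic behind the significance number.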

You’re probably doing all that. If you aren’t, click on one of those links above and read about it. It’s good stuff, but it has been said before in a better format than I have patience/talent to provide. Okay, so what is this thing that fewer people are talking about? To grow your program into a well-oiled machine you need to make tests appear less expensive. That doesn’t mean cheapen it… but let’s break down why this is critical.

Keep it simple

You’re smart. You don’t have to tell everyone else you’re smart. At the end of the day, people only want to know a few things. They want to know why we tested the thing, how we measured it, what won, and how we’ll operate as an outcome. That’s why I (alongside an incredibly talented designer) created the “3 Second Scan” template.

The “3 Second Scan” A/B Testing Presentation Template

What’s a “3 Second Scan” template? It’s designed to be glanced at and understood by ANYONE in under 3 seconds. Okay, maybe it takes a little more than 3 seconds, but we felt this was as close as we were going to get. It’s simple. It’s printable on an 8.5×11 sheet of paper. It’s lower fidelity. I can hang this on a wall. I can’t hang a PowerPoint presentation.

Okay, so is our work done here? No. We’ll still get smartypants people asking for the data…  and rightfully so! That’s what an appendix is for. Send out the raw data. Send out pivot tables in an Excel doc. You can make this as ugly as you want. The person who wants to dive into the data likely wants to manipulate it on their own and create follow-up hypotheses. Fantastic. Have at it! The more you use the 3 Second Scan template, the less they’ll care about drilling down to the nitty gritty details.

So what does this have to do with appearing “less expensive”? For one, it’s a lot faster. There’s less format jockeying. You’ve already defined EXACTLY what you’ll show up-front. You have a next step on the sheet – so your stakeholders are more inclined to say “Let’s move forward with your recommendation”. This minimizes overanalyzing.

The more pages in the slide deck, the more data you show, the more formatting you perform, the more ambiguous you leave the results, the more formal the ceremonies, the more you use language like a “failed test”… the more work people perceive you put into the test and the more time they think you wasted if a variant didn’t beat the control. This is BAD. It makes your stakeholders more risk-averse. People who haven’t adopted a testing mindset think we should only run tests that are “winners”. I would think the same thing if I thought my analysts were sinking hours or days into running a single test.

Next Steps

Change doesn’t happen overnight, but leveraging some kind of 3 Second Scan template is a good place to start. Feel free to copy this exact template. I built this in Word (I’m a masochist). I’ve used this in tests that have had 4 or 5 variants. It takes a little formatting work but we’re talking 15 minutes – not 3 hours.

You’ll still need to slice and dice the data for the folks who want to do their own analysis… and that’s OK! You probably already have raw data available and you can now just send them a spreadsheet as something like an appendix. I’ve found this process has made testing more collaborative. We received alternative hypotheses and ended up running more tests because they felt like they had a degree of control (they did – and that was a good thing).

Google Analytics Data Quality Checklist

Google Analytics is interesting. It has the lowest barrier to entry of any analytics tool (because it’s free) but is deceptively complex. As analysts, we should understand that no implementation is perfect. You can hire a team of consultants to audit and correct the low-hanging fruit in your implementation, but your business isn’t stagnant. It’s fluid. Even if it’s as fluid as molasses you will still need to monitor your data quality. I can’t write a blog post that will solve that problem for you.

Instead, I want to help you solve problems that consultants will help you solve when they initiate the engagement. When I conduct site audits for clients, there is a set of consistent recommendations that are very “out-of-the-box”. The logic is consistent and brands also consistently overlook them. Instead of paying someone to go in and look at this fundamental stuff, here’s how you can do it yourself… and I’m assuming you know how to create goals, leverage eCommerce reports, enable demographic reports and internal search, and connect GA to developer tools & AdWords.

Create Filtered and Unfiltered views


Historical data in Google Analytics cannot be changed. Once it’s in the reports… that’s it. It’s in your best interest to ensure that you have a backup plan in case you screw something up. That’s why it’s a best practice to have a Filtered view and an Unfiltered view. The Filtered view will have all of your “clean” data. This will be your primary view that you use to conduct analysis. The Unfiltered view (in addition to being your insurance policy) will also help you build out your filtered view.

Let’s say you force your URLs to lowercase in your Pages report. Some filters you may want to implement are case sensitive. If you can no longer see the original case of those strings, those filters won’t work. You get the idea. That doesn’t mean you can’t enable eCommerce reports in the Unfiltered view (you should), that doesn’t mean you can’t create goals or link AdWords (you should). All this means is you keep the raw data as-is. No trimming of query strings, no forcing lowercase, no excluding internal traffic.

Force the Request URI to lowercase
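Here’s the problem in miniature, sketched in plain Python with made-up paths: the same homepage gets reported as a separate row for every casing.

```python
from collections import Counter

# Hypothetical pageview log: one homepage, three different casings.
pageviews = ["/Index.html", "/index.html", "/index.html", "/INDEX.html"]

raw = Counter(pageviews)                        # three separate rows in reports
merged = Counter(p.lower() for p in pageviews)  # one row once lowercased

print(raw)
print(merged)
```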


See the problem here? When you’re analyzing user behavior on the homepage, you might be missing a significant chunk of traffic. This could affect how you interpret content affinity and campaign performance/reconciliation. This doesn’t just impact your reporting. Google Analytics imposes some restrictions on data volume:

Daily processed tables store a maximum of 50k rows for standard Analytics and 75k rows for Google Analytics 360.

This means if you exceed 50,000 unique page names (or 50,000 of any dimension), the remaining results will be bucketed into this “(other)” label:
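Here’s a toy sketch of that rollup in plain Python. The cap is shrunk from 50,000 to 3 so the effect is visible; the function is illustrative, not GA’s actual processing logic.

```python
from collections import Counter

def rollup(rows, max_rows=3):
    """Keep the top `max_rows` dimension values; bucket the rest as '(other)'.
    (GA's real cap is 50k/75k rows per daily processed table.)"""
    counts = Counter(rows).most_common()
    kept = dict(counts[:max_rows])
    overflow = sum(n for _, n in counts[max_rows:])
    if overflow:
        kept["(other)"] = overflow
    return kept

pages = ["/a"] * 5 + ["/b"] * 4 + ["/c"] * 3 + ["/d"] * 2 + ["/e"]
print(rollup(pages))  # {'/a': 5, '/b': 4, '/c': 3, '(other)': 3}
```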


Gross. So how do you fix it?

Go to Filters and create a Custom Filter. From there, click on the “Lowercase” option from the radio buttons lower on the page:


Select “Request URI” from the list and save. That’s it. Ensure you’re saving this filter on the Filtered view and not applying it to the Unfiltered view.

Remove useless query strings


So you’re now forcing things lowercase. Your work here is done. NOT SO FAST! Looks like we found more duplicate line items because of those pesky query strings. Time to knock those out. Query strings are split into name/value pairs (/index?name=value&name=value&name=value). You’ll need to take an inventory of all of the NAMES of query strings and get rid of them in the View Settings page in your Admin section:


It’s CaSe SeNsItIvE, so make sure you’re searching for these strings in the Unfiltered view! Listing “qstring1” will NOT work if the query string in the URL is actually “qString1”, and a lowercase filter on your Filtered view would hide that original casing. I found the most effective way to take inventory of the query string names is to simply search for the following: \?

The slash that precedes the question mark escapes that character. Google Analytics’ search defaults to Regular Expressions so this will literally search for a question mark. From there, I just keep a text editor file open and add to the list. It’s tedious, but you really should only have to do a large-scale cleansing once.
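If you’d rather not build that list entirely by hand, the same inventory can be sketched in a few lines of Python against an export of your page paths (the paths below are made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Pages-report export from the Unfiltered view.
page_paths = [
    "/index?qstring1=a&utm_source=news",
    "/products?qString1=b&sessionid=123",
    "/about",
]

# Collect every query string NAME, preserving case exactly as GA expects it.
names = set()
for path in page_paths:
    query = urlparse(path).query
    names.update(parse_qs(query, keep_blank_values=True).keys())

print(sorted(names))
```

Note that “qstring1” and “qString1” show up as two separate names, which is exactly why you take inventory in the Unfiltered view.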

Eliminate Spam… all of it


Spam in your GA account is annoying as hell. Let’s go ahead and assume you have already gone into your View Settings and checked the box above. If you haven’t… do it. It’s not perfect, but Google works hard to keep their bot filter up-to-date. So how do we identify and exclude the rest of the spam? Spam bots are primarily vehicles to get you to click on bogus websites – like the one below:


How do we know this is spam? 241 sessions, 241 new users, a 100% bounce rate, and 100% new sessions are a dead giveaway. Even when a popular site links to your website, these numbers are practically impossible.
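That signature is easy to screen for programmatically. Here’s a rough sketch over invented referral rows; treat it as a starting filter for your own review, not a verdict:

```python
# Hypothetical rows from a Referrals report export.
referrers = [
    {"source": "free-seo-offers.example", "sessions": 241, "new_users": 241, "bounce_rate": 1.00},
    {"source": "news-site.example",       "sessions": 180, "new_users": 95,  "bounce_rate": 0.42},
]

def looks_like_spam(row):
    """Flag the classic signature: every session is new AND every session bounced."""
    return row["sessions"] == row["new_users"] and row["bounce_rate"] >= 1.00

suspects = [r["source"] for r in referrers if looks_like_spam(r)]
print(suspects)  # ['free-seo-offers.example']
```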

So from here you have a few options. You can create a filter to remove this domain; however, this spam traffic is often generated without anything physically visiting your site. That means it isn’t associated with a valid hostname (yours is normally your own domain). That said, it’s often easier to create a filter that includes only your hostname:


This will cover 99% of bot issues.

Exclude internal traffic


You do a lot of testing. Your developers do a lot of testing. Your boss and employees do a lot of testing. There’s a surprising amount of traffic that comes from internal resources. If you don’t have one already, create a filter that will filter out your office’s IP address (or range of addresses). Talk to your development or IT team to understand what that range is.
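GA’s filter fields accept regular expressions, so a contiguous IP range has to be spelled out as alternations. Here’s a sketch using the reserved documentation range 203.0.113.0–31 as a stand-in for a real office range (swap in whatever your IT team gives you):

```python
import re

# Matches 203.0.113.0 through 203.0.113.31 (a hypothetical office range).
office_range = re.compile(r"^203\.0\.113\.([0-9]|1[0-9]|2[0-9]|3[01])$")

print(bool(office_range.match("203.0.113.17")))  # inside the range
print(bool(office_range.match("203.0.113.42")))  # outside the range
```

Testing the pattern in code (or any regex tester) before saving the filter beats discovering a typo after a week of missing data.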


By NO means is this list comprehensive. There are other items you might want to consider, like cross-domain tracking, effective channel groupings and campaign tracking, and enhanced eCommerce, among others. What are some other out-of-the-box items that you look for when you conduct an audit?

My Secret Sauce: Building a Forgettable Donation Form

A big part of my job over the past 2 years has been managing a single donation form. During this time I’ve operationalized the A/B testing program and turned our focus from the “big and sexy” to… well… boring and iterative. It’s not all that bad, though! We’ve grown our revenue substantially each year – largely attributed to our mission to understand what makes donors tick. At the risk of getting sued for sharing trade secrets, here’s our secret sauce: our form is forgettable.

Working at a nonprofit often means you work around a lot of incredibly passionate people. Every change you implement carries an element of emotion with it… a lot is at stake! In our case any change we make could directly affect the lives of children. We want everything we build to reflect our commitment to the mission, which often means putting a patient on just about everything. It pulls our heartstrings. We’re more than just fundraisers, we’re experience builders! So what’s on my go-to checklist for building our donation form experience?

Well, we need to collect payment. Ugh, how sterile. Let’s include a quote about our mission… and add in a patient. The big question is whether the patient should look at me or the form. Oh and why not add in where the money goes? We really want them to remember WHY they’re making the donation.

Reality check time. If a donor is on the form we’ve already done something right. They’re on the form. They’re already emotionally invested. If your form triggers emotion it’s likely frustration. The last thing I want to hear from a donor is that they remember filling out the donation form. We’ve found that the most productive variables to test on the form are the donation amounts. Adding a patient did nothing noticeable. Adding a value proposition actually distracted our users and reduced conversion rate. Sure, maybe if we spent another few months testing different permutations we might find something that works; but we consistently win when we test variables that have zero weight on emotion.

So to the nonprofit world, here’s my gift to you: make your form forgettable and you will make money. You don’t want anyone to remember filling out your damn form – they should remember the mission instead. To the person who will inevitably comment “emotion worked for us!”… you might want to check on your acquisition strategy. A form isn’t a landing page. Just sayin’.

What I learned working analytics at a “fixer-upper”

I think any analytics professional who has worked in consulting has run into these clients. There’s one person in the company who wants to be incredibly data-driven. How do we do this? Let’s bring in the experts who can lead us to data-driven nirvana. I LOVED these clients. You can spit in any direction and hit a winning recommendation and you can sell every single one of your services to them… and to be honest, they sometimes NEED it! They’re paying you to be a subject matter expert. You drop in some obvious recommendations and next steps and wait for them to clear the bottlenecks.

So it might be tempting to think: “It would really be great to work client-side, piggyback on their enthusiasm to become more data-driven, and reap the benefits of leveraging my expertise to help the company make more money!” Well… it’s not that easy. Here’s what I learned from working several years with what I would consider a “fixer-upper”. The tl;dr: there’s a reason it’s a fixer-upper.

You’re only as fast as the slowest person

Within the first few months I had several analyses out with a large list of recommendations. My impression was that the product owner would look at it and say “Wow, these are great recommendations and they even come with revenue projections! Let’s use this analysis as ammunition and tackle these right away!” Instead, I was shown the backlog… all 350 items. “Oh, and did we mention we only have 1 front-end developer?” The developer was also working on several projects for the PO, was shared between multiple departments, and ultimately put in his 2 weeks notice. If it isn’t a developer it’s a designer or QA resource. You’re only as fast as the department that’s being suffocated the most.

How do you solve for this? Simple – use your analysis to build a case to hire folks and clear the bottleneck.

Your entire company is requesting additional resources

While your recommendations are fantastic, the email team (which pulls in over 20% of digital revenue) is short-staffed and desperately needs support so they can stop working nights and weekends. Paid search is also trying to scale their revenue… and oh man, one of the product specialists just left and needs to be replaced. These are all critical and urgent. Let’s wait to see what the revenue numbers are this quarter. They’re down? Shocking! Oh, and about those new hires… now just isn’t a good time.

You won’t be the design or UX messiah

The site looks bad because someone designed it that way. It hasn’t changed because someone wants to keep it that way. No amount of common sense or best practices will go farther than a minor tweak. After 6 months of persistence, the designer probably thinks you’re out to take his job and is twice as defensive. There’s some history someone wants to explain to you which clearly justifies why the site looks the way it does. Or maybe the VP of Merchandising complimented the site at some point. By this point you’re going through an existential crisis. Maybe I’m projecting.

Out of Touch
It reached a point where I wasn’t sure if this was me or the people I was working with.

You’ll have plenty of time… for reporting

Your analysis recommendations are slowly crawling through the backlog. You’re tired of trying to sell through A/B testing to a design team that doesn’t want to change the site. So what now? At this point you’re either working solely with the acquisition teams or (more than likely) you’re reporting. Don’t worry, that tag management system you implemented has you doing some fantastic implementation work so you can tell the team how many times people clicked on their new product sorting category.

The conclusion and the caveat

This is a personal anecdote and not always the case with fixer-uppers. However, it’s an incredibly common scenario that I’ve seen both in consulting and client-side. These companies are fantastic to work with when you can pace your work with the client… 4 hours one week, 14 another. Client-side, you’re working on this client for 40 hours per week (at least), and unlike consulting, every battle is YOURS to fight. It’s tempting to look at the transformation you might lead and think that working full-time on a site that desperately needs help is rewarding – and those who are patient enough (to fight and wait and fight and fight and wait and fight) will eventually be rewarded with that transformation. Unfortunately, passion and motivation tend to wane over time.

Export DTM Rules with Tagtician

It has been a wild few weeks. We publicly released our beta about 2 weeks ago and have gotten some fantastic feedback – some simple and some more challenging. I’ve personally thanked everyone who has given feedback so far, but want to extend a thanks to those who are still actively using the tool. We’ve taken many steps to resolve any issues you may have encountered when using the tool. I’m happy to say we’re at a pretty stable point. For those hold-outs who disabled Tagtician because it was behaving funky – give it another shot and let us know if something still looks off. This is an actual beta, not one of those “here’s a full release but we’re going to call it a beta” type of things. If you aren’t already using Tagtician as your tag management companion, you’re crazy… let us know what we’re missing. I respond to every suggestion, criticism, and love letter.

Let’s talk about exporting. I’ve been waiting for this feature for SO long… so I built it. As of today (v0.84), you can export your entire DTM library to CSV. We even have a video explaining exactly how it works. I want to start with the first question we ask ourselves before we release a feature. Why would someone want to export their DTM library?

As a consultant, I want to…

  • Get a quick snapshot of my new client’s implementation
  • Quickly determine the scale of an implementation
  • Make notes
  • Send tangible documentation to colleagues
  • Store the current state of an implementation

As an analyst, I want to…

  • Own a local copy of my entire implementation (like an SDR)
  • Have the flexibility to switch tag management systems
  • Reference rules on-the-fly
  • Make notes and do the stuff the consultant wants to do…

Think of this as an insurance plan for your implementation. Alright, so what does it look like? Clicking on the button below will trigger the CSV download of all of the rules (or Data Elements) you see on the screen:

Export Rules Button

When you open the file, this is what it looks like:

Rule Export Template

We still have some formatting work to do… and it could probably use information like site name. However, we think this is a sufficient starting point. We hope you love it! If you don’t already have it installed, grab Tagtician off of the Chrome store.