The Price Tag of Analytics Precision

Precision in digital analytics is unattainable. This isn’t an edgy statement; it’s a fact. You will never achieve precision in your Adobe Analytics account. Your A/B test data will always contain noise. If your company is larger than a mom-and-pop shop, your automated QA tools will never be totally comprehensive. As an analyst, you must find a way to be effective without exhausting yourself chasing data precision. Let’s start with a list of noise sources we get out of the box with any analytics tool.

Uncontrollable Noise

  • Clever spam bots inflate and distort data
  • Ad blockers stop data collection
  • People clear their cookies
  • Internal testing traffic hits the production site
  • Some visitors browse through proxies

You get the idea. We have no choice but to live with this noise. While it can be mitigated, it cannot be eliminated. Uncontrollable noise has always been a nuisance, and it has gotten worse over the past 5 or 6 years with the spread of ad blockers and sophisticated spam bots. Let’s assume you’ve taken the right steps to minimize it. Apart from some basic filters, there isn’t a heck of a lot more you can do to prevent this from affecting your data.
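To make “basic filters” concrete, here’s a minimal client-side sketch. The internal_traffic cookie name is made up for illustration, and in practice you’d pair something like this with whatever bot filtering your analytics tool already offers:

  // Minimal sketch of a basic noise filter, run before firing any beacon.
  // The 'internal_traffic' cookie is a hypothetical flag you'd set on tester machines.
  function shouldTrack() {
    var isAutomation = navigator.webdriver === true;                       // headless/automated browsers
    var isInternal = document.cookie.indexOf('internal_traffic=1') !== -1; // flagged internal testers
    return !isAutomation && !isInternal;
  }

  if (shouldTrack()) {
    // fire your analytics call here, e.g. an Adobe Analytics page view (s.t())
  }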

Controllable Noise

  • Routine site releases
  • Dynamic tagging with CSS selectors
  • Legacy tracking code
  • Broken site code
  • Inconsistent data layer

“Controllable” might not be the best way to put it, since you might not have much immediate control over this stuff either. Still, it’s where I spend most of my time when I QA, because these are the easiest levers to pull.
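To illustrate why something like dynamic tagging with CSS selectors lands in this bucket, here’s a quick sketch; the class names and event names are invented for the example:

  // A click tag keyed to a CSS selector: one routine release that renames
  // '.btn-primary' silently breaks it, and nobody gets an error.
  window.dataLayer = window.dataLayer || [];
  var cta = document.querySelector('.hero-banner .btn-primary');
  if (cta) {
    cta.addEventListener('click', function () {
      dataLayer.push({ 'event': 'heroCtaClick' });
    });
  }

  // Sturdier: have the site code push a semantic event into the data layer,
  // so the tag no longer depends on markup that can change underneath it.
  // dataLayer.push({ 'event': 'heroCtaClick', 'ctaName': 'Start Trial' });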

Thoughts

It’s up to you, but analytics precision can often stop at “good enough”. That means you aren’t burning cycles ensuring every single click tracking event is perfect before every single release, and it means you’re comfortable delivering data even though you know some testing traffic slipped through the cracks. I make decisions with imprecise data every day. You do, too… as long as you’re confident it’s directionally accurate. What’s important is that you’re making decisions with data. Not enough people are doing that; too many disappear down an endless rabbit hole of QA instead. I might be in the minority here, but I believe it’s better to occasionally make a wrong decision with data than to let precision paranoia scare me into driving fewer decisions.

That said, there are QA tools that are supposed to make your life easier by automating the monitoring of data quality; but as we shorten our sprint timeboxes and approach continuous integration, maintaining the integrity of an implementation only becomes more cumbersome.

As you can see from this very scientific model, there is a direct relationship between the cost of analytics precision and site release frequency. I’m just using common sense here – something’s gotta give, and I can guarantee that development teams won’t slow down to wait for your click tracking to be perfect. The reason I built the Tagtician web app was to provide QA without the overhead of micromanaging simulations and server calls every time the site changes. The philosophy behind the tool ties directly back to that relationship. Improving the speed and efficiency of your development cycles shouldn’t create that much more work for the analyst. At some point, the value of the analyst is completely undermined by the cost of maintaining the tracking and the complex simulations in other QA tools. With or without an automated QA tool, the push toward shorter and shorter release cycles keeps building.

Is your company moving on the path towards continuous integration? How do you think this trend will affect your job? Do you think it will increase the time you allocate to QA work?

Tagtician enters the automated analytics QA space

The Tagtician team and I are proud to announce our newest product: automated analytics QA. More specifically, automated QA for Adobe’s Dynamic Tag Manager (and Adobe Launch). Thousands of you use our DTM Cheat Sheet and the very first DTM Chrome extension debugger. We built those tools to solve problems we personally encounter on a daily basis, and we built them with user experience in the driver’s seat. Now we’ve changed the way you’ll use DTM (yet again) by building an automated analytics QA tool, designed specifically to actually be used by analytics practitioners.

Server calls aren’t what cause data loss

The majority of the time, it’s a site change or a rule error. Your marketing pixels likely have an SLA guaranteeing 99.9% uptime. The IT team has its own set of tools to monitor 404s, page load timing, and JavaScript errors. The SEO team has tools that give automated tips about changes to make on each page. As an analyst, you don’t learn much from staring at each individual server call on each page. Tagtician automatically imports your entire DTM library out of the gate and specifically validates your rules.

Automated QA should save time

One of the biggest challenges we wanted to tackle was the maintenance overhead other automated QA tools demand. It shouldn’t take a team just to keep your QA tool running. At some point you’re losing more time and money on a tool than you would without it. We get that accurate data is important, but the cost of precision is often too high. That’s why Tagtician automatically finds and tests your Event Based Rules. That’s right: no more micromanaging simulations each time there’s a site or rule change. When Tagtician detects an anomaly, it tells you exactly what the problem is and where it is, so you can fix it without searching through hundreds of rules.

Let me prove it

Okay… so you’ll save time, you’ll be QA’ing actual rules instead of server calls, and you won’t be paying for the salaries of an army of salespeople. To borrow a line from Optimizely – we’ll be the tool that you’ll actually use. Schedule a demo and I’ll personally show you how it works.

Early thoughts on Adobe Launch (new Adobe DTM)

The 2017 Adobe Summit week flew by and boy are my arms tired… okay, strike that awful joke from the record. Summit was exhausting, to be sure. One of the most relevant announcements was Adobe Launch, a complete rewrite of Adobe’s tag management system, DTM. I went to a few presentations that walked us through the entire workflow and wanted to share some early thoughts about it.

The name “Adobe Launch”

Who cares about a name, right? It makes sense. You launch stuff. It’s a launch pad for user-created extensions (we’ll talk about these later)… yadda yadda yadda. Adobe wants to depart from the Adobe DTM brand – because it’s a totally different tool. I’ve already heard:

“So how do we launch… Launch?”

This might get a little annoying after a week or two. “Launch” is a very frequently used term. Speaking of which, try Googling “Adobe Launch”. Yeah, SEO’s going to be a battle. I think people would have been okay with leaving it as “Adobe DTM” and “Classic DTM”.

Rule Creation

Rule creation makes a lot more sense now. Page Load Rules are now grouped with Event Based Rules (because a page load is technically just an event that occurs in the DOM). The if…then structure reads naturally. They also added rule Exceptions as part of the conditions, so you slackers don’t even have to use Regular Expressions. Google Tag Manager was a little ahead of its time on that one, I guess. Otherwise, structurally, it’s very similar to today’s DTM.

Publishing Workflow & Libraries

Currently DTM supports two environments: staging and production. Launch will default to three environments but will support any number of them. So if you have a localhost, dev, pre-staging, staging, post-staging, pre-prod, pre-pre-prod… you get the idea. Each environment will have its own tracking code, which can be individually toggled to be hosted in the cloud, via FTP, or as a file download. This is a fantastic addition that many teams were surely wishing for.

Adobe also introduced Libraries. These are the equivalent of GTM’s Workspaces feature. You can batch rules, data elements, etc. into these Libraries and deploy them to various environments independently. This has been a much-needed feature for teams with multiple DTM users.

So then there’s the approval workflow. It’s very pretty: a nice drag-and-drop interface that progresses libraries between workflow phases. It even lets you know when the library has actually been BUILT (so no more refresh/_satellite.buildDate spamming – more on that ritual below). This is an excellent addition sitting in the middle of a relatively abstract approval workflow. In the workflow you’re making a few decisions:

  1. What rules go into the library?
  2. What environment will this library correspond with once approved?

These are very straightforward questions, but if you have 5 environments, 4 libraries, and rule dependencies spanning multiple libraries, it can get pretty complex pretty fast. That said, I think they did the best they could given the constraints.
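For anyone who hasn’t lived the old ritual, it looked roughly like this in the browser console (assuming classic DTM’s _satellite object is on the page):

  // Classic DTM: refresh the page, then check the build timestamp in the
  // console to see whether your published changes actually went out.
  console.log(_satellite.buildDate);
  // Repeat every minute or two until the date changes. Launch's workflow
  // shows the build status instead, so this loop goes away.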

Extensions

Adobe Launch announced the inclusion of Extensions. These are basically plugins that add on to Launch’s interface. Not happy with the UI? Build an extension to change it. Have to write the same scripts over and over to trigger an event? Build an extension. Want to implement Coremetrics? Build an extension. Adobe has made the tool completely modular. In my opinion, this is brilliant. Not only does it increase the breadth of tool coverage, but it also outsources a lot of work and puts it in the hands of publishers (even if the publisher is the Adobe Analytics team). This makes a LOT of sense and is a very welcome addition.

The Open API

The speakers at Summit kept selling this as the “big gun” of Adobe Launch. From an analyst’s perspective, I’m cautiously optimistic. It turns Launch into a sandbox. While that’s nice, it has the potential to create a lot of noise. In one session an audience member asked about API security and governance. The response was “With great power…”. That got a laugh, and this could be a cornerstone feature, but I’m not yet convinced it won’t create more work for the Launch team by forcing them to curate, debug, and ultimately support an amalgamation of modules built on the API.

I consider myself relatively responsible with that kind of power, but the open API also hands other TMS platforms another talking point. It adds complexity and signals a turn toward a more developer-dependent implementation process. I only bring this up because so many pitches sold DTM as a tool that is “so easy a marketer could use it!”

One great part of the API is that users can export their entire rule library (very similar to GTM). Rule export is already one of the best features of the Tagtician extension, so I’m happy to see Adobe build something in-house that helps with documentation.

The Verdict

tl;dr: 9/10. It’s great and I’m looking forward to getting my hands on it. The new features make total sense for Adobe and its users. My only point of concern is API governance. If a big development community really does form around this open API, then something needs to keep it organized. Additionally, with more flexibility comes a greater degree of complexity. That isn’t necessarily bad, but it does seem that Adobe is pivoting away from the “so easy a marketer can do it” position.

Quick Tip to Test GTM’s dataLayer.push Calls

When I code, I Google things. A lot. Sometimes I get so caught up in writing the right code that I forget there is an easier way to test things. Google Tag Manager’s dataLayer is a little interesting in that it isn’t a plain JS object like the W3C data layer standard describes – it’s an array you push objects onto… but it kinda behaves like one. When I build out tracking specifications, I want to ensure I’m sending the right data. The dataLayer object you declare at page load is pretty easy to test: just paste your object into the console and BOOM – QA done.
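For reference, that page-load declaration looks something like this; the key names are made up for illustration, so use whatever your spec defines:

  // Hypothetical page-load data layer, declared above the GTM container snippet.
  window.dataLayer = window.dataLayer || [];
  dataLayer.push({
    'pageName': 'homepage',      // illustrative keys, not a standard
    'pageType': 'landing',
    'loginStatus': 'anonymous'
  });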

dataLayer.push() is a little trickier, and I don’t want to wait for developers to implement the code I send over only to see errors because I made a dumb mistake. I don’t think you need any more exposition, so here’s how you test your dataLayer.push() calls:

Your first step is to put together the dataLayer.push() call that you’d like to use. Let’s pretend it’s to track video views:

dataLayer.push({'event': 'videoView', 'videoName': 'Cool Video', 'videoProgress': '100%'});

Note: if you copy/paste directly from a blog post into your console, it may paste the wrong kind of quotes. If you get an unexpected syntax error, try retyping the quotation marks in the console.

Next you’ll go to GTM and click Preview so you can Preview and Debug (I’m assuming you know what this does).

Now go to your site and open up your console. If you have your dev tools docked like I do, it’ll look like an ugly stack of panels.

Now simply paste your code into the console, execute it, and ensure the event fires in the Preview pane. I’ve obviously done this a few times while playing around with it. Now you can be 100% certain that the data you want to send to GTM is correct.
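If you want a sanity check beyond the Preview pane, you can also inspect what actually landed on the data layer – just a plain array lookup, nothing GTM-specific:

  // dataLayer is an array, so the object you just pushed is its last element.
  console.log(dataLayer[dataLayer.length - 1]);
  // → {event: 'videoView', videoName: 'Cool Video', videoProgress: '100%'}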

Building a Testing Culture with the 3 Second Scan

This is yet another article trying to diagnose and solve a problem that every organization could likely solve differently. I’m solving it for companies in very immature stages of testing and optimization – meaning you may have run one or two tests but want to run tests continuously and simultaneously. Go to any other site for the best practices. Yes, hit something like 90% statistical significance, pick one KPI, QA your shit, watch traffic volume, pay attention to local maxima, etc…

You’re probably doing all that. If you aren’t, go read up on it elsewhere – it’s good stuff, but it has been said before in a better format than I have the patience or talent to provide. Okay, so what is the thing fewer people are talking about? To grow your program into a well-oiled machine, you need to make tests appear less expensive. That doesn’t mean cheapening them… but let’s break down why this is critical.

Keep it simple

You’re smart. You don’t have to tell everyone else you’re smart. At the end of the day, people only want to know a few things: why we tested the thing, how we measured it, what won, and what we’ll do as a result. That’s why I (alongside an incredibly talented designer) created the “3 Second Scan” template.

The “3 Second Scan” A/B Testing Presentation Template

What’s a “3 Second Scan” template? It’s designed so that ANYONE can glance at it and understand it in under 3 seconds. Okay, maybe it takes a little more than 3 seconds, but we felt this was as close as we were going to get. It’s simple. It’s printable on an 8.5×11 sheet of paper. It’s lower fidelity. I can hang it on a wall. I can’t hang a PowerPoint presentation.

Okay, so is our work done here? No. We’ll still get smartypants people asking for the data… and rightfully so! That’s what an appendix is for. Send out the raw data. Send out pivot tables in an Excel doc. You can make this as ugly as you want. The person who wants to dive into the data likely wants to manipulate it on their own and create follow-up hypotheses. Fantastic. Have at it! The more you use the 3 Second Scan template, the less they’ll care about drilling down into the nitty-gritty details.

So what does this have to do with appearing “less expensive”? For one, it’s a lot faster. There’s less format jockeying. You’ve already defined EXACTLY what you’ll show up-front. You have a next step on the sheet – so your stakeholders are more inclined to say “Let’s move forward with your recommendation”. This minimizes overanalyzing.

The more pages in the slide deck, the more data you show, the more formatting you perform, the more ambiguous you leave the results, the more formal the ceremonies, the more you use language like “failed test”… the more work people perceive you put into the test, and the more time they think you wasted if a variant didn’t beat the control. This is BAD. It makes your stakeholders more risk-averse. People who haven’t adopted a testing mindset think we should only run tests that are “winners”. I would think the same thing if I thought my analysts were sinking hours or days into a single test.

Next Steps

Change doesn’t happen overnight, but leveraging some kind of 3 Second Scan template is a good place to start. Feel free to copy this exact template. I built this in Word (I’m a masochist). I’ve used this in tests that have had 4 or 5 variants. It takes a little formatting work but we’re talking 15 minutes – not 3 hours.

You’ll still need to slice and dice the data for the folks who want to do their own analysis… and that’s OK! You probably already have the raw data available, so you can just send them a spreadsheet as an appendix of sorts. I’ve found this process makes testing more collaborative. We received alternative hypotheses and ended up running more tests because stakeholders felt they had a degree of control (they did – and that was a good thing).