The Analytics GPS

Create incremental value with analytics using a GPS

I use my GPS almost every day. The whole concept of a GPS is fascinating. The way it works is that 3 satellites each measure the distance between their position and yours, and those distances are used to calculate your 2D position (latitude and longitude). Google Maps only needs your 2D position to tell you where you are and where you should go.

How a GPS works

But why 3 satellites? Why not only 2? Since the satellites measure the distance between you and them, you could be at any point where those 2 distance ranges overlap. That’s not helpful. A third satellite allows devices to use a mathematical process called trilateration to precisely pinpoint your location. Who am I kidding? You probably already knew this. So how do we use this same trilateration process to create incremental value with analytics?

You might have already guessed that this has nothing to do with satellites or a literal GPS. To be effective analysts, we have to have multiple sources of input to provide recommendations. For most of us, that starts with the data… usually from Adobe Analytics or Google Analytics. This data answers the question: What did our users do?

What did our users do?

Quantitative Measurement

When you report in a vacuum, this is what you get. You can answer questions about revenue, engagement, content affinity, and acquisition; but then what? As the CEO, this doesn’t get me any closer to driving my business in the right direction. It’s already clear why this is a problem. There’s no clear indication of why any of this data matters, and driving valuable decisions based on data alone will likely yield unproductive results. Some people spend an entire career in this circle – and often folks stuck here aren’t making any recommendations at all.

A high bounce rate on the email unsubscribe page is a worthless data point. A spike in conversion rate looks great on paper, but the data point alone leads me nowhere. We need another satellite to point us in a better direction. Often we explain behavior by gathering feedback directly from users, asking them: what did you come here to do on the site?

What do our users WANT to do?


We’ve deployed surveys, conducted focus groups, and compiled data from user testing. Based on analytics and survey data, we see that the primary audience is a 45-year-old female. We understand where users are experiencing friction on the site through user testing. We know from a survey that they don’t want to see certain products on the homepage and that the price is too high. With this data, we can provide proactive recommendations to facilitate a better experience for this audience and for several other audiences that were teased out of our analysis. We did it! Right?

Obviously… wrong. We’re still missing that third satellite. While we’re tailoring the experience to the users who are currently visiting the site, no one stopped to ask whether these are the users the business wants to target. There’s no indication of whether the actions users are taking on the site are the ones the rest of the business wants them to take. Ironically, the last question is the most accessible one to ask but is often the most difficult to get a straight answer to. It implies that someone has to take a risk and pick a direction. The question is: What do WE want users to do?

What do WE want users to do?


Someone needs to decide on a strategy. Maybe it’s the CEO. Maybe it’s the VP or Director. Someone needs to step up and decide what will move the needle on the business’s stock price. In my experience, this last question is the most difficult for clients to answer. That’s because there’s an associated risk: it forces someone to choose a strategy and direction, which implies accountability. It also gives meaning and direction to the work everyone does. Without it, you’re likely making the wrong recommendations and your organization could be moving in the wrong direction.

How do we complete the analytics GPS?

The Analytics GPS

Oh, you thought it was easy? You thought there was a silver bullet at the end of this? There isn’t. The truth is that you don’t just depend on the GPS – everyone else does, too. While you might get all of the answers to these questions, there are still steps you might have to take to evangelize this information. Let’s talk about how you can answer these first.

How to answer: What did users do?

This is in our control. Let’s ensure we trust our data and that the definitions of our dimensions and metrics make sense to us and to the rest of the organization. Maybe that means building out a data dictionary. The only thing more important than you understanding the data is your stakeholders understanding it.
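As a rough illustration (the variable names and definitions here are hypothetical), a data dictionary can be as simple as a shared list that maps each variable to a plain-language definition and an owner:

```javascript
// A minimal, hypothetical data dictionary entry. The point is that anyone in
// the organization can look up what a variable means without asking the analyst.
var dataDictionary = [
  {
    variable: 'eVar5',                 // hypothetical Adobe Analytics variable
    name: 'Login Status',
    definition: 'Set to "logged in" or "guest" on every page view',
    owner: 'Digital Analytics team',
    lastReviewed: '2017-08-01'
  }
];

console.log(dataDictionary[0].name + ': ' + dataDictionary[0].definition);
```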

How to answer: What do users WANT to do?

This might not be in our control since it’s often a function of a UX team (or whatever they’re calling themselves these days). There are still steps you can take to collect this data – like leveraging usertesting.com or deploying surveys through your tag management system. The best route is to see what’s already available by building a connection between your department and the team that conducts this research.
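If you do end up deploying a survey yourself, here’s a minimal sketch of what that might look like as a custom code rule in your tag management system. The vendor script URL and sample rate are hypothetical placeholders:

```javascript
// Hypothetical survey deployment via a tag manager custom code rule:
// load the vendor's intercept script for a small sample of visitors only.
var SURVEY_SAMPLE_RATE = 0.05; // show the survey to roughly 5% of visitors

if (Math.random() < SURVEY_SAMPLE_RATE) {
  var surveyScript = document.createElement('script');
  surveyScript.src = 'https://surveys.example.com/intercept.js'; // hypothetical vendor URL
  surveyScript.async = true;
  document.head.appendChild(surveyScript);
}
```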

How to answer: What do WE want users to do?

Find the person who has “ownership” of what you’re trying to improve and ask them. If they don’t know, ask their boss. There’s no tool that will give you this answer, but there are tools available that will help you ask them the right questions. In short, schedule stakeholder interviews… and schedule them quarterly, if possible. Strategies change.

What if I can’t answer one?

Well… why not? The analytics piece is easy (what users did). I often find that the other two might get tricky. You can still do a semi-effective job if you can only answer what users did and what we WANT users to do. However, we’re still not answering whether we’re pulling in an audience that wants to do it. All of the measures we take to optimize the site might be for the wrong people! Always recommend conducting user research so you can at least say you tried.

The point of this model isn’t to put the responsibility of the entire company on the shoulders of the analyst. In fact, it’s the opposite: it protects us. It keeps us out of trouble when we need to push back on reporting for things that don’t matter to the business. It makes our work more strategic and less operational. Lean on this as a GPS to help guide your business and your work in the right direction. This is the foundation upon which data-driven companies are built.

3 great ways to use Tagtician… and what’s next

You already use Tagtician to view rules, check out libraries, and audit data elements. Let’s take a closer look at how Tagtician helps you become a better analyst and how it builds transferable skills (crazy, right?). I don’t believe it’s beneficial to use a tool unless it makes you a better practitioner. You may already know about these use cases. If so, great – skip to what’s next below.

Determining Load Order

The way Tagtician displays rules is very intentional. We look for each rule that fires and list them sequentially based on when they’re triggered. For instance, Page Top will always fire before Page Bottom, which will fire before On Load. How do you know what fires first? The bottom of the list shows the rules that fire first, while the top shows what fires last. Why is this useful? If you have dependencies between rules, you’ll want to understand when variables are being set or when you’re grabbing data from the DOM.
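Here’s a minimal sketch (the variable name is made up) of the kind of dependency where load order matters: a rule that fires at Page Top stashes a value, and a rule that fires at On Load reads it later.

```javascript
// Custom code in a rule triggered at Page Top: stash a value early.
window._pageStart = Date.now();

// Custom code in a rule triggered at On Load: read the value set earlier.
// This only works because Page Top is guaranteed to fire before On Load.
window.addEventListener('load', function () {
  var elapsed = Date.now() - window._pageStart;
  console.log('Milliseconds from page top to load: ' + elapsed);
});
```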

Simple Documentation

So people have had a tough time finding the Export button since we redesigned the interface. We moved it into the context of each rule to draw an association with the actual rules in the list, rather than looking like a global export. Anyway, documentation is often left behind with tag management implementations. Exporting your rules on a monthly basis is a recommended best practice for storing versions of your implementation. While DTM stores versions, they can’t be distributed or viewed in bulk. So how does storing this documentation make you a better analyst? Doing so shows you take governance seriously. It’s a head start on building your solution design reference (SDR). It could also mean you’re leveraging Tagtician to quickly acclimate to new implementations even before you get access.

Quick Search

When things go wrong, finding out what’s screwed up isn’t easy. We’ve had search capabilities for a while, but without highlighting. Highlighting the matched text shines a light on blind spots in your rules. As an analyst, you want to give your boss and stakeholders confidence that you can track down issues with their implementation. This accelerates that search without having to go into the DTM interface and dig around. It’s also probably the most obvious time saver.


What’s Next?

Data Layer Debugging

We’re trying to find ways to reduce the number of times you have to flip between Tagtician and other Chrome tools. One way to do this is by exposing the data layer in a user-friendly way within a rule shelf. It will automatically look for window.digitalData, the object name from the W3C data layer standard. We’re detecting it automatically because we don’t want to build in a bunch of interfaces and business rules. This could change in the future, and updates will include other popular permutations. If you have a request for an object name to add, please reach out to me and we’ll add it in.
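For context, here’s roughly what a W3C-style data layer object looks like. The values are placeholders, and your properties may differ if you’ve extended the standard:

```javascript
// A pared-down example of the window.digitalData object Tagtician will look for.
window.digitalData = {
  page: {
    pageInfo: {
      pageName: 'homepage',
      language: 'en-US'
    },
    category: {
      primaryCategory: 'home'
    }
  },
  user: [{
    profile: [{
      profileInfo: {
        profileID: 'anonymous'
      }
    }]
  }]
};
```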

Adobe Launch Support

We will (again) change the way you manage your rules. We’re working on hooking Tagtician into the Adobe Launch API so you can modify and push rules into staging straight from the rule shelf you’re used to. This means you won’t have to go to another website to make changes; you can work with everything in the context of your site. So let’s be clear about timing: Launch is going to roll out slowly to some folks and it’s still going through a lot of changes. We will support Adobe Launch.

Re-imagined, Integrated Automatic Scanning

We built an awesome scanner, shopped it out to people, and it worked wonders. There was one problem: it felt like going to a doctor’s office. It’s a fantastic tool, but I didn’t even use it, and not because it was bad. It automatically creates simulations and validates your rules with a simple click and minimal configuration! However, I don’t believe the future of QA lives on a separate web property. We’re going to great lengths to bring this technology to you instead of the other way around. Automatic scanning will be a remarkably affordable way for you to keep an eye on your rules. Look for more news about this later this year.

August 2017 Web Analytics Wednesday in Atlanta

So I haven’t made a blog post in a while – been a little busy with Tagtician and the move back to Atlanta. Posting on my site has taken somewhat of a back seat lately. In an effort to make my life more difficult, I’m also interested in building the culture of analytics in Atlanta. One of the best ways I met other people in the industry was through Web Analytics Wednesdays. It has been a while since it has been hosted in Atlanta and I’d like to get it started again. We hosted one a month or two ago and it was a (very) small but mighty group. For other folks in Atlanta – I’d love to have some more people come and hang out.

Here are the details for this month’s WAW:

Location: 191 Peachtree Tower, floor 40 in the VML office (the Deloitte building)
Date & Time: 8/23 at 6:30PM

If you have trouble finding it, please message me on Twitter or shoot me an email (jimalytics -at- gmail).

Agenda: None – this is just a meet-up. Maybe when it grows a bit we can look into speaking opportunities (kind of a chicken/egg dilemma with this). Hope I see you there!

Transition from Adobe DTM to Adobe Launch with Tagtician

The transition to Adobe Launch from Adobe DTM promises to be simple and painless. You don’t have to update your header/footer code, and it will preserve the fundamentals you grew to love in DTM in some way, shape, or form. There will undoubtedly be automated tools that help with the process and make it feel seamless. Adobe DTM is known for making implementation easy and accessible. That said, it isn’t responsible to blindly trust that everything was imported from DTM to Launch with 100% precision.

We all know it’s easy to mess up rules. I’ve done it countless times. I’ll target an element that’s on more pages than I expected or add a custom condition that only sometimes works. We’ve all been there. The migration to Adobe Launch reintroduces some of those risks. The tool will change and it’s not worth risking a total disaster when you can leverage Tagtician to ensure your migration proceeds as expected.

Tagtician Dashboard


The Tagtician Advantage

Our customers will boast a unique advantage when Adobe Launch is finally released because they have a tool that knows their rule library. Our crawler knows what rules to look for and where to find them. When they’re ready to transition to Adobe Launch, our customers get the confidence that the transition was successful without having to touch a thing. That’s being proactive.

How it works

  1. Provide a URL to scan (like tagtician.com)
  2. Press the “Scan” button
  3. Tagtician’s smart crawler scans the site to seek out rules (even Event Based Rules)

Done. That’s it. When the scan is finished you have a full report of what rules were found and on what pages. From there, analyze the results and discover rules that might appear where you don’t expect them to. If you’re a completionist, you can provide extra URLs our crawler might have missed to fill in gaps. Every subsequent scan will then compare the data against that first scan (unless you tell it otherwise). Tagtician now also validates data layers. 

This is a tool specifically built for the Adobe Launch transition and maintenance. Furthermore, we know how to keep prices competitive because our “salespeople” are practitioners. We built and priced Tagtician to allow immediate activation without all the legal red tape. Seeing is believing. Let’s talk about how Tagtician can help secure your Adobe Launch transition with a demo.

The Price Tag of Analytics Precision

Precision in digital analytics is unattainable. This isn’t an edgy statement; it’s a fact. You will never achieve precision in your Adobe Analytics account. Your A/B test data will always contain noise. If your company is larger than a mom-and-pop shop, your automated QA tools will never be totally comprehensive. As an analyst, you must figure out how to be effective without exhausting yourself chasing data precision. Let’s start with a list of noise sources we get out of the box with any analytics tool.

Uncontrollable Noise

  • Clever spam bots inflate/distort data
  • Ad blockers stop data collection
  • People clear their cookies
  • Testing site traffic
  • Some visitors use proxies

You get the idea. We have no choice but to live with this noise. While it can be mitigated, it cannot be eliminated. Uncontrollable noise has always been a nuisance and has gotten worse over the past 5 or 6 years with the spread of ad blockers and sophisticated spam bots. Let’s assume you’ve taken the right steps to minimize it. Apart from some basic filters, there isn’t a heck of a lot more you can do to prevent it from affecting your data.
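As one example of a basic filter (the cookie and function names here are hypothetical), you can keep internal testers out of production data by checking for an opt-out cookie before firing anything:

```javascript
// Hypothetical opt-out filter: testers set this cookie once and their
// traffic stays out of the production report suite.
function isInternalTester() {
  return document.cookie.indexOf('internal_tester=true') !== -1;
}

function trackPageView(pageName) {
  if (isInternalTester()) {
    return; // suppress the beacon for test traffic
  }
  // ...send the beacon to your analytics tool here...
  console.log('Tracking page view: ' + pageName);
}

trackPageView(document.title);
```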

Controllable Noise

  • Routine site releases
  • Dynamic tagging with CSS selectors
  • Legacy tracking code
  • Broken site code
  • Inconsistent data layer

“Controllable” might not be the best way to put it, since you might not really have much immediate control over this stuff. Still, this is often what I spend most of my time on when I QA, because these are the easiest levers to pull. Take dynamic tagging with CSS selectors, for example (sketched below).
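A rule keyed to a presentation class breaks or over-fires the moment the front end is restyled; keying it to a dedicated tracking attribute is sturdier. The selectors and event names below are hypothetical:

```javascript
// Fragile: keyed to a styling class that designers can rename or reuse at will.
document.querySelectorAll('.btn.btn-primary').forEach(function (el) {
  el.addEventListener('click', function () {
    console.log('Tracked click'); // fires on every primary button, intended or not
  });
});

// Sturdier: keyed to an attribute that exists only for tracking and survives redesigns.
document.querySelectorAll('[data-track="add-to-cart"]').forEach(function (el) {
  el.addEventListener('click', function () {
    console.log('Tracked add to cart');
  });
});
```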

Thoughts

It’s up to you, but analytics precision can often stop at “good enough”. This means you aren’t burning cycles ensuring every single click tracking event is perfect before every single release, and it means you are comfortable delivering data even though you know some testing traffic slipped through the cracks. I make decisions with imprecise data every day. You do, too… as long as you’re confident it’s directionally accurate. What’s important is that you’re making decisions with data. Not enough people are doing that; instead they go down an endless rabbit hole of QA. I might be in the minority here, but I believe it’s better to occasionally make a wrong decision with data than to let precision paranoia scare me into driving fewer decisions.

That said, there are some QA tools that are supposed to make your life easier. When it comes to data quality, there are tools out there that can automate the QA monitoring; but as we shorten our sprint timeboxes and quickly approach continuous integration, maintaining the integrity of our implementation only becomes more cumbersome.

As you can see from this very scientific model, there is a direct relationship between the cost of analytics precision and site release frequency. I’m just using common sense here – something’s gotta give, and I can guarantee that your release teams won’t slow down to wait for your click tracking to be perfect. The reason I built the Tagtician web app was to provide QA without the overhead of micromanaging simulations and server calls every time the site changed. The philosophy behind the tool ties directly back to the chart above. Improving the speed and efficiency of your development cycles shouldn’t create that much more work for the analyst. At some point, the value of the analyst is completely undermined by the cost of maintaining tracking and complex simulations in other QA tools. With or without an automated QA tool, the momentum behind shorter and shorter release cycles keeps building.

Is your company moving on the path towards continuous integration? How do you think this trend will affect your job? Do you think it will increase the time you allocate to QA work?