Early thoughts on Adobe Launch (new Adobe DTM)

The 2017 Adobe Summit week flew by and boy are my arms tired… okay, strike that awful joke from the record. Summit was exhausting, to be sure. One of the most relevant announcements was Adobe Launch, a complete rewrite of Adobe's tag management system (TMS), Adobe DTM. I went to a few presentations that walked through the entire workflow and wanted to share some early thoughts about it.

The name “Adobe Launch”

Who cares about a name, right? It makes sense. You launch stuff. It’s a launch pad for user-created extensions (we’ll talk about these later)… yadda yadda yadda. Adobe wants to depart from the Adobe DTM brand – because it’s a totally different tool. I’ve already heard:

“So how do we launch… Launch?”

This might get a little annoying after a week or two. “Launch” is a very frequently used term. Speaking of which, try Googling “Adobe Launch”. Yeah, SEO’s going to be a battle. I think people would have been okay with leaving it as “Adobe DTM” and “Classic DTM”.

Rule Creation

Rule creation makes a lot more sense now. Page Load Rules are now grouped with Event Based Rules (because a page load is technically an event that occurs in the DOM). The if…then syntax just makes sense. They also added rule Exceptions as part of the conditions, so you slackers don't even have to use Regular Expressions. Google Tag Manager was a little ahead of its time on that one, I guess. Otherwise, structurally, it's very similar to today's DTM.

Publishing Workflow & Libraries

Currently DTM supports two environments: staging and production. Launch will default to three environments but will support any number of them. So if you have a localhost, dev, pre-staging, staging, post-staging, pre-prod, pre-pre-prod… you get the idea. Each environment will have its own set of tracking code that can be individually toggled to be hosted in the cloud, via FTP, or as a file download. This is a fantastic addition that many teams were surely wishing for.

Adobe also introduced Libraries. These are the equivalent of GTM's Workspaces feature. You can batch rules, data elements, etc. into these Libraries and deploy them to various environments independently. This has been a much-needed feature for teams with multiple DTM users.

So then there's the approval workflow. It's very pretty, with a nice drag-and-drop interface that progresses libraries between workflow phases. It even lets you know when the library has actually been BUILT (so no more refresh/_satellite.buildDate spamming). This is an excellent addition sitting in the middle of a relatively abstract approval workflow. In the workflow you're making a few decisions:

  1. What rules go into the library?
  2. What environment will this library correspond with once approved?

These are very straightforward questions, but if you have 5 environments, 4 libraries, and rule dependencies affecting multiple libraries, it can get pretty complex pretty fast. That said, I think they did the best they could given the constraints.

Extensions

Adobe also announced that Launch will include Extensions. These are basically plugins that add on to Launch's interface. Not happy with the UI? Build an extension to change it. Have to write the same script over and over to trigger an event? Build an extension. Want to implement Coremetrics? Build an extension. Adobe has made the tool completely modular. In my opinion, this is brilliant. Not only does it increase the breadth of tool coverage, it also outsources a lot of work and puts it in the hands of publishers (even if the publisher is the Adobe Analytics team). This makes a LOT of sense and is a very welcome addition.

The Open API

The speakers at Summit kept selling this as the “big gun” of Adobe Launch. From an analyst's perspective, I'm more cautiously optimistic. It turns Launch into a sandbox. While this is nice, it has the potential to create a lot of noise. In one session an audience member asked about API security and governance. The response was “With great power…”. That got a laugh, and this could be a cornerstone feature, but I'm not yet convinced it won't create more work for the Launch team by forcing them to curate, debug, and ultimately support an amalgamation of modules built on the API.

I consider myself relatively responsible, but it adds another talking point for competing TMS platforms. It adds complexity and signals a turn toward a more developer-dependent implementation process. I only bring this up because so many pitches sold DTM as a tool that is “so easy a marketer could use it!”

One great part of the API is that users can basically export their rule library (very similar to GTM). Rule export is one of the best features of the Tagtician extension, so I'm happy to see Adobe build something in-house that helps with documentation.
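Details were thin at Summit, so here's a purely hypothetical sketch of what pulling your rule library might look like; the endpoint path, the x-api-key header, and the auth scheme are all my assumptions, not documented calls:

// HYPOTHETICAL sketch of a rule export call. The endpoint, headers, and
// auth scheme are assumptions, not a documented API.
fetch('https://reactor.adobe.io/properties/MY_PROPERTY_ID/rules', {
  headers: {
    'Authorization': 'Bearer ' + accessToken, // assumes OAuth-style tokens
    'x-api-key': apiKey
  }
})
  .then(function (response) { return response.json(); })
  .then(function (rules) { console.log(rules); }); // dump the library for documentation

However it ends up looking, anything that lets you diff and document your rules programmatically is a win.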

The Verdict

tl;dr 9/10 rating. It's great and I'm looking forward to getting my hands on it. The new features make total sense for Adobe and its users. My only point of concern is API governance. If there really is a big development community around this open API, then there needs to be something that helps keep it organized. Additionally, with more flexibility comes a greater degree of complexity. That isn't necessarily bad, but it does seem that Adobe is pivoting away from the “so easy a marketer can do it” position.

Quick Tip to Test GTM’s dataLayer.push Calls

When I code, I Google things. A lot. Sometimes I get so caught up in writing the right code that I forget there is an easier way to test things. Google Tag Manager's dataLayer is a little interesting in that it isn't the kind of nested object you'd see with the W3C standard for data layers (it's actually an array)… but it kinda behaves like one. When I build out tracking specifications I want to ensure that I'm sending the right data. The dataLayer header object is pretty easy to test. Just paste your object in the console and BOOM – QA done.
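If you haven't seen one, the “header object” is just the dataLayer you declare before the GTM container snippet loads. A minimal sketch (the keys here are made up; use whatever your tracking spec defines):

// Declared BEFORE the GTM container snippet so GTM picks these values up on load.
// The keys below are hypothetical examples.
var dataLayer = [{
  'pageType': 'article',
  'userStatus': 'logged-in'
}];

To sanity-check the syntax, paste just the object literal (the part inside the square brackets) into the console; if it echoes back without a SyntaxError, the object is well-formed.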

dataLayer.push() is a little trickier, and I don't want to wait for developers to implement code I send over only to see errors because I made a dumb mistake. I don't think you need any more exposition, so here's how you test your dataLayer.push() calls:

Your first step is to put together the dataLayer.push() call that you’d like to use. Let’s pretend it’s to track video views:

dataLayer.push({'event': 'videoView', 'videoName': 'Cool Video', 'videoProgress': '100%'});

Next you'll go to GTM and click Preview to enter Preview and Debug mode (I'm assuming you know what this does).

Now go to your site and open up your console. If you have your dev tools docked like I do, it should look like an ugly stack pictured below:

[Screenshot: the page with dev tools docked at the bottom and the GTM Preview & Debug pane stacked beneath it]

Now simply paste your code in the console, execute it, and ensure the event fires. I’ve obviously done this a few times while playing around with it. Now you can be 100% certain that the data you want to send to GTM is correct.
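If you want a belt-and-suspenders check, you can also wrap dataLayer.push so every call gets logged before GTM handles it. A quick sketch to paste into the console before you start testing (purely a debugging convenience, not part of GTM):

// Wrap dataLayer.push so each pushed object is echoed to the console.
var originalPush = dataLayer.push;
dataLayer.push = function () {
  console.log('dataLayer.push:', arguments[0]);
  return originalPush.apply(dataLayer, arguments); // hand off to GTM as usual
};

Refresh the page when you're done so the wrapper goes away.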

Building a Testing Culture with the 3 Second Scan

Yet another article trying to diagnose and solve a problem that every organization could likely solve differently. I'm solving this problem for companies in the very immature stages of testing/optimization. That means you may have run one or two tests but want to run tests continuously and simultaneously. Go to any other site to find best practices. Yes, hit something like 90% statistical significance, pick one KPI, QA your shit, watch traffic volume, pay attention to local maxima, etc…

You're probably doing all that. If you aren't, click on one of those links above and read about it. It's good stuff, but it has been said before in a better format than I have the patience/talent to provide. Okay, so what is this thing that fewer people are talking about? To grow your program into a well-oiled machine, you need to make tests appear less expensive. That doesn't mean cheapening them… but let's break down why this is critical.

Keep it simple

You're smart. You don't have to tell everyone else you're smart. At the end of the day, people only want to know a few things: why we tested the thing, how we measured it, what won, and what we'll do as a result. That's why I (alongside an incredibly talented designer) created the “3 Second Scan” template.

The “3 Second Scan” A/B Testing Presentation Template

What's a “3 Second Scan” template? It's designed to be glanced at and understood by ANYONE in under 3 seconds. Okay, maybe it takes a little more than 3 seconds, but we felt this was as close as we were going to get. It's simple. It's printable on an 8.5×11 sheet of paper. It's low fidelity. I can hang it on a wall. I can't hang a PowerPoint presentation.

Okay, so is our work done here? No. We'll still get smartypants people asking for the data… and rightfully so! That's what an appendix is for. Send out the raw data. Send out pivot tables in an Excel doc. You can make this as ugly as you want. The person who wants to dive into the data likely wants to manipulate it on their own and create follow-up hypotheses. Fantastic. Have at it! The more you use the 3 Second Scan template, the less they'll care about drilling down into the nitty-gritty details.

So what does this have to do with appearing “less expensive”? For one, it's a lot faster. There's less format jockeying. You've already defined EXACTLY what you'll show up front. You have a next step on the sheet, so your stakeholders are more inclined to say “Let's move forward with your recommendation”. This minimizes overanalyzing.

The more pages in the slide deck, the more data you show, the more formatting you perform, the more ambiguous you leave the results, the more formal the ceremonies, the more you use language like a “failed test”… the more work people perceive you put into the test and the more time they think you wasted if a variant didn’t beat the control. This is BAD. It makes your stakeholders more risk-averse. People who haven’t adopted a testing mindset think we should only run tests that are “winners”. I would think the same thing if I thought my analysts were sinking hours or days into running a single test.

Next Steps

Change doesn't happen overnight, but leveraging some kind of 3 Second Scan template is a good place to start. Feel free to copy this exact template. I built it in Word (I'm a masochist). I've used this in tests that had 4 or 5 variants. It takes a little formatting work, but we're talking 15 minutes, not 3 hours.

You'll still need to slice and dice the data for the folks who want to do their own analysis… and that's OK! You probably already have the raw data available, and you can now just send them a spreadsheet as an appendix. I've found this process has made testing more collaborative. We received alternative hypotheses and ended up running more tests because stakeholders felt like they had a degree of control (they did, and that was a good thing).

Google Analytics Data Quality Checklist

Google Analytics is interesting. It has the lowest barrier to entry of any analytics tool (because it's free) but is deceptively complex. As analysts, we should understand that no implementation is perfect. You can hire a team of consultants to audit and correct the low-hanging fruit in your implementation, but your business isn't stagnant. It's fluid. Even if it's as fluid as molasses, you will still need to monitor your data quality. I can't write a blog post that will solve that problem for you.

Instead, I want to help you solve the problems consultants tackle first when they initiate an engagement. When I conduct site audits for clients, there is a set of recommendations that are very “out-of-the-box”: the logic is simple, and brands consistently overlook them. Instead of paying someone to go in and look at this fundamental stuff, here's how you can do it yourself… I'm assuming you already know how to create goals, leverage eCommerce reports, enable demographic reports and internal site search, and connect GA to developer tools & AdWords.

Create Filtered and Unfiltered views

[Screenshot: a Filtered view and an Unfiltered view listed in the GA Admin panel]

Historical data in Google Analytics cannot be changed. Once it's in the reports… that's it. It's in your best interest to have a backup plan in case you screw something up. That's why it's a best practice to maintain both a Filtered view and an Unfiltered view. The Filtered view holds all of your “clean” data; this is the primary view you'll use to conduct analysis. The Unfiltered view (in addition to being your insurance policy) will also help you build out your Filtered view.

Let's say you force your URLs to lowercase in your Pages report. Some filters you may want to implement are case sensitive, and if you can't see the original case of these strings, those filters won't work. You get the idea. That doesn't mean you can't enable eCommerce reports in the Unfiltered view (you should), and it doesn't mean you can't create goals or link AdWords (you should). All this means is you keep the raw data as-is: no trimming of query strings, no forcing lowercase, no excluding internal traffic.

Force the Request URI to lowercase

[Screenshot: the Pages report counting the homepage URL as separate rows because of differences in case]

See the problem here? When you’re analyzing user behavior on the homepage, you might be missing a significant chunk of traffic. This could affect how you interpret content affinity and campaign performance/reconciliation. This doesn’t just impact your reporting. Google Analytics imposes some restrictions on data volume:

Daily processed tables store a maximum of 50k rows for standard Analytics and 75k rows for Google Analytics 360.

This means if you exceed 50,000 unique page names (or 50,000 unique values of any dimension), the remaining results get bucketed into the “(other)” label:

[Screenshot: the “(other)” row in a GA report]

Gross. So how do you fix it?

Go to Filters and create a Custom Filter. From there, select the “Lowercase” option from the radio buttons lower on the page:

[Screenshot: the Custom Filter screen with the “Lowercase” option selected]

Select “Request URI” from the list and save. That’s it. Ensure you’re saving this filter on the Filtered view and not applying it to the Unfiltered view.
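If you'd rather stop the duplication at the source, the lowercasing can also happen in the tracking code itself. A minimal sketch using the standard Universal Analytics snippet (swap in your real property ID):

// A hypothetical alternative: lowercase the Request URI in the tracking
// code itself so /Index and /index never show up as separate rows.
ga('create', 'UA-XXXXX-Y', 'auto');
ga('set', 'page', (location.pathname + location.search).toLowerCase());
ga('send', 'pageview');

If you go this route, keep the view filter anyway as a safety net; hits from old pages or other snippets won't pass through this code.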

Remove useless query strings

[Screenshot: the Pages report showing duplicate rows for the same page with different query strings]

So you're now forcing things lowercase. Your work here is done. NOT SO FAST! Looks like we found more duplicate line items because of those pesky query strings. Time to knock those out. Query strings are split into name/value pairs (/index?name=value&name=value&name=value). You'll need to take an inventory of all of the NAMES of the query strings and list them in the “Exclude URL Query Parameters” field on the View Settings page in your Admin section:

[Screenshot: the “Exclude URL Query Parameters” field in View Settings]

It's CaSe SeNsItIvE, so make sure you're searching for these strings in the Unfiltered view! That means listing “qstring1” will NOT work if the parameter appears in the URL as “qString1”. It DOESN'T matter if you have a lowercase filter applied to one of your views; the exclusion runs against the raw URL. I've found the most effective way to take inventory of the query string names is to simply search for the following: \?

The backslash that precedes the question mark escapes that character. Google Analytics' report search defaults to Regular Expressions, so this will literally search for a question mark. From there, I just keep a text editor file open and add to the list. It's tedious, but you really should only have to do a large-scale cleansing once.
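If your export is long, a few lines of JavaScript in the console can build the inventory for you. A rough sketch, assuming you've pasted the URLs from your Unfiltered view's Pages report into an array:

// Collect every unique query string NAME (case preserved) from a list of URLs.
var urls = ['/index?qstring1=a&utm_source=email', '/products?qString1=b'];
var names = {};
urls.forEach(function (url) {
  var query = url.split('?')[1];
  if (!query) return;
  query.split('&').forEach(function (pair) {
    names[pair.split('=')[0]] = true; // record the name, ignore the value
  });
});
console.log(Object.keys(names)); // e.g. ["qstring1", "utm_source", "qString1"]

Note that "qstring1" and "qString1" come back as separate entries, which is exactly what you want given the case sensitivity above.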

Eliminate Spam… all of it

[Screenshot: the “Exclude all hits from known bots and spiders” checkbox in View Settings]

Spam in your GA account is annoying as hell. Let's go ahead and assume you have already gone into your View Settings and checked the box above. If you haven't… do it. It's not perfect, but Google works hard to keep its bot filter up-to-date. So how do we identify and exclude the rest of the spam? Spam bots are primarily vehicles to get you to click through to bogus websites – like best-seo-stuff.com

[Screenshot: a spam referral in the Referrals report showing 241 sessions, 241 new users, and a 100% bounce rate]

How do we know this is spam? 241 sessions, 241 new users, a 100% bounce rate, and 100% new sessions are a dead giveaway. Even when a popular site links to your website, these metrics are basically impossible.

So from here you have a few options. You could create a filter to remove this referring domain; however, this spam traffic is often generated without anything ever physically visiting your site, which means it isn't associated with a hostname (for instance, my hostname is jimalytics.com). That's why it's often easier to create a filter that includes only your hostname:

[Screenshot: a custom include filter on hostname]

This will cover 99% of bot issues.
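For reference, the include pattern for this site would look something like the line below; loosen the anchors if you run subdomains like www:

^jimalytics\.com$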

Exclude internal traffic

[Screenshot: a filter excluding traffic from the office IP address]

You do a lot of testing. Your developers do a lot of testing. Your boss and employees do a lot of testing. A surprising amount of traffic comes from internal resources. If you don't have one already, create a filter that excludes your office's IP address (or range of addresses). Talk to your development or IT team to find out what that range is.
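If your office sits on a range rather than a single address, GA's predefined “Exclude traffic from the IP addresses” filter works for simple cases, or you can use a custom exclude filter with a regex pattern. For example (203.0.113.x is a documentation placeholder; substitute whatever range your IT team gives you):

^203\.0\.113\.[0-9]{1,3}$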

Takeaways

By NO means is this list comprehensive. There are other considerations you might want to look at, like cross-domain tracking, effective channel groupings and campaign tracking, and Enhanced Ecommerce, among others. What are some other out-of-the-box items that you look for when you conduct an audit?

My Secret Sauce: Building a Forgettable Donation Form

A big part of my job over the past 2 years has been managing a single donation form. During this time I've operationalized the A/B testing program and turned our focus from the “big and sexy” to… well… the boring and iterative. It's not all that bad, though! We've grown our revenue substantially each year, largely thanks to our mission to understand what makes donors tick. At the risk of getting sued for sharing trade secrets, here's our secret sauce: our form is forgettable.

Working at a nonprofit often means you work around a lot of incredibly passionate people. Every change you implement carries an element of emotion with it… a lot is at stake! In our case, any change we make could directly affect the lives of children. We want everything we build to reflect our commitment to the mission, which often means putting a patient on just about everything. It pulls at our heartstrings. We're more than just fundraisers, we're experience builders! So what's on my go-to checklist for building our donation form experience?

Well, we need to collect payment. Ugh, how sterile. Let's include a quote about our mission… and add in a patient. The big question is whether the patient should look at me or at the form. Oh, and why not add in where the money goes? We really want them to remember WHY they're making the donation.

Reality check time. If a donor is on the form, we've already done something right. They're already emotionally invested. If your form triggers any emotion, it's likely frustration. The last thing I want to hear from a donor is that they remember filling out the donation form. We've found that the most productive variables to test on the form are the donation amounts. Adding a patient did nothing noticeable. Adding a value proposition actually distracted our users and reduced conversion rate. Sure, maybe if we spent another few months testing different permutations we might find something that works; but we consistently win when we test variables that carry zero emotional weight.

So to the nonprofit world, here’s my gift to you: make your form forgettable and you will make money. You don’t want anyone to remember filling out your damn form – they should remember the mission instead. To the person who will inevitably comment “emotion worked for us!”… you might want to check on your acquisition strategy. A form isn’t a landing page. Just sayin’.