Failure to Launch – 3 Adobe Launch Gotchas

Adobe Launch hasn’t been out long, but I’ve managed to break it a few times. Okay, maybe Launch wasn’t broken. I was just doing something wrong and didn’t realize it. Before you get your hands dirty, store this in the back of your head so you don’t experience the same headache I suffered. Before I begin, I’d like to throw in a quick shout-out to Jan Exner who is (and will be) posting a lot of good stuff about the nuts and bolts of the platform. I’m looking forward to learning how to use the API via his blog *hint hint*.

Let’s cut to the chase and dive into my mistakes so far.

Gotcha 1: Your library isn’t building because of JavaScript errors.

Last Build Was Not Successful

In DTM, the code editor checked your code and wouldn’t let you save the script until errors were resolved. The Adobe Launch custom code editor will call out your error but still let you save it – and it only checks your code on change. That means you can paste or type code into the editor and save it without ever triggering the code QA. While it’s nice to be able to save code with errors (so we can fix it later), those errors often go unnoticed until it’s time to build your library. That’s when you get the red dot and the message that says “Last build was not successful”.

Here’s the crazy part – there’s no indication of why the library build was unsuccessful in the publishing workflow. I assumed Launch was down for maintenance. After checking in the next day, the library still wouldn’t build! I deleted it and recreated it. I removed some rules, added others. I tried reverting changes. I tried everything. After asking around, I was pointed to the Library screen where there’s an error message tucked off to the side of the screen. AHA!

Build Error

If you listen carefully, you can already hear client care screaming. Let’s be real, though. There’s plenty of time to apply more appropriate treatment for these messages. I simply didn’t know where to look.

Gotcha 2: Libraries don’t default to an environment

So I’m trying to push out my latest round of changes – my third iteration of the day. It happens, I’m working with 4 vendors trying to slap their pixels on my marketing page. I click the button to create a library, name it, add the changes, and click save. Then I realize…

Saving New Libraries

Forgot to specify the environment. Now I can’t build it. Every. Single. Time. Maybe this is less of a “gotcha” and more of a waste of time (that’s what a “gotcha” is, right?). I’d say 75% of the time I save the library before specifying an environment… usually because I’m in a rush. That said, NOT defaulting to an environment is a good feature – it makes the place you push your new tags more intentional. I recommend this little acronym in the hope that it helps you remember. Make sure you ACE it. Always Check the Environment.

Yes it’s lame, but if it works… let me know because I still manage to screw it up.

Update from Corey Spencer @ Adobe in the comments:

Last week we released a feature we call “Active Library”. If you log into Launch you’ll see that when you start to edit a rule, element or extension you can select the library you want to publish this change to, and build right from that page. It should save you a ton of time.

Gotcha 3: Google Analytics extension only supports 1 GA implementation

You’re using an Adobe tool – why do you still run 3 different instances of Google Analytics on your site? Unbelievable! Yes, this is still relatively common. If you’re one of the people who run multiple instances of Google Analytics on your site, keep in mind that Adobe Launch’s Google Analytics extension does not currently support multiple instances of Google Analytics (and no, you can’t add the same extension twice).

Adobe Launch Google Analytics

THAT SAID, you can still implement Google Analytics via the Custom Code module and deploy it outside of Adobe Launch’s convenient template. If I were a betting man, I’d wager support for multiple Google Analytics instances is on the roadmap.
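To illustrate the custom code route, here’s a sketch of what deploying two GA trackers from a custom code action could look like, using analytics.js named trackers. The property IDs are placeholders, and the snippet only shows the command queue; the real embed code also injects the analytics.js script tag.

```javascript
// Minimal command queue, mirroring the official analytics.js bootstrap,
// so commands buffer until the library loads:
var ga = ga || function () {
  (ga.q = ga.q || []).push(arguments);
};
// (The real embed snippet also injects the analytics.js <script> tag here.)

// Default tracker plus a named second tracker:
ga('create', 'UA-XXXXX-1', 'auto');
ga('create', 'UA-XXXXX-2', 'auto', 'secondTracker');

// Send a pageview from both:
ga('send', 'pageview');
ga('secondTracker.send', 'pageview');
```

Named trackers are how analytics.js itself handles multiple properties on one page, so nothing here depends on the Launch extension.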

Final Thoughts

These aren’t bad “gotchas” – they’re actually pretty benign. “Problems” like choosing the environment will become mechanical with practice, and once you figure out where build errors live, you’ll never forget it. I haven’t yet explored all of the nooks and crannies of the tool, and I’m sure some of you out there have discovered some really good ones. In the meantime, check out the new Adobe Launch Cheat Sheet so you don’t find yourself trying to use stuff that used to work in Adobe DTM.

The Adobe Launch Cheat Sheet

Download -> The Adobe Launch Cheat Sheet

The Adobe Launch Cheat Sheet is finally here. It’s time to rip your Adobe DTM Cheat Sheets off your wall and slap the new one on! If you’re scared of change, trust me that switching to Adobe Launch is worth it and it can be easy, too. The first thing you’ll notice is the Launch Cheat Sheet looks very similar to the old one so let’s dive into what’s changed.

_satellite.rules and _satellite.dataElements objects are gone

Ouch. This hurts. This also wasn’t in the old DTM cheat sheet, but I thought it was worth mentioning. The rule data is actually stored in an object called _satellite.container, which is quickly obscured once the page loads. This mitigates the risk of users or extensions tampering with the object and preventing rules from loading. While it sucks, it also makes sense. Fortunately, the newest (unreleased) version of Tagtician will let you explore that object without the risk of modifying critical variables.
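If you want to peek before the object is obscured, one hedged workaround (my own hack, not a supported approach) is to grab a reference from an inline script placed right after the Launch embed code:

```javascript
// Stash a reference to _satellite.container before Launch cleans it up.
// Run this immediately after the Launch embed <script>.
(function () {
  if (typeof _satellite === 'undefined' || !_satellite.container) {
    return; // Launch hasn't loaded (or the container is already gone)
  }
  // The container holds live rule functions, so treat it as read-only;
  // modifying it is exactly the tampering Launch is guarding against.
  window._containerRef = _satellite.container;
})();
```

Use this for debugging only; poking values into that reference defeats the protection described above.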

View Staging Library is gone

This one hurts the most. The DTM switch and the Tagtician dropdown are basically useless now. Each environment has its own unique library. Therefore, it isn’t as simple as appending -staging to the end of the DTM filename. There are new approaches to loading staging libraries, but nothing very user-friendly. We’re cooking something up in Tagtician and will hopefully have it ready soon.
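One stopgap until better tooling arrives (a sketch, not an official Adobe approach): since each environment has its own embed URL, you can pick which library to load based on the hostname. Both URLs below are placeholders.

```javascript
// Hypothetical embed URLs for each Launch environment:
var EMBEDS = {
  production: '//assets.adobedtm.com/launch-EXAMPLE.min.js',
  staging: '//assets.adobedtm.com/launch-EXAMPLE-staging.min.js'
};

function pickEmbed(hostname) {
  // Treat localhost and dev/stage subdomains as staging traffic.
  var isStaging = /^(localhost|dev\.|stage\.|staging\.)/.test(hostname);
  return isStaging ? EMBEDS.staging : EMBEDS.production;
}

// In the page template, write out the matching script tag:
// document.write('<script src="' + pickEmbed(location.hostname) + '"><\/script>');
```

This keeps staging code off production by host rather than by filename suffix, which is the part that changed from DTM.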

setCookie and readCookie have changed

These two functions are marked for deprecation. That means you need to stop using them. It’s a real bummer, too. I used these all. the. time. So what’s taking their place in the new Adobe Launch?

Setting a cookie

Just as in DTM, you can set a cookie that will persist for a session or longer. This is useful because, unlike a Data Element, it won’t re-evaluate every time you reference it:

_satellite.cookie.set(name, value, {expires: days})

This sets the cookie. The {expires: days} bit is optional. Leaving that snippet out defaults to storing it for a user’s session. The snippet might look like this:

_satellite.cookie.set("foo", "bar")

To set a user-level cookie for 2 years, your snippet might look like this:

_satellite.cookie.set("foo", "bar", {expires: 730})

Getting a cookie

This is pretty self-explanatory. This gets your cookie:

_satellite.cookie.get(name)

Here’s an example:

_satellite.cookie.get("foo")

Removing a cookie

New and exciting! You can now REMOVE your stored cookie (or any other stored cookie) with Adobe Launch using the following syntax:

_satellite.cookie.remove(name)

Here’s an example:

_satellite.cookie.remove("foo")

Simple, right? I don’t think I need to go into much more detail.


Notify changed

For the uninitiated, _satellite.notify() was the go-to for people who wanted to leverage cross-browser console logs. This is useful because you can use it to debug your tracking – like answering the question of “What’s the output of [this function]?” Well, fear not. It hasn’t gone away, it has just changed form. Adobe Launch has replaced the old notify with the _satellite.logger object and its log-level methods:

_satellite.logger.log(message)
_satellite.logger.info(message)
_satellite.logger.warn(message)
_satellite.logger.error(message)

As with notify, you’ll only see the output when debugging is enabled, which you can toggle with _satellite.setDebug(true).

Here it is in action:

Adobe Launch Cheat Sheet - Console Logger


Publish Date, Trim, and Text are out

_satellite.publishDate, _satellite.cleanText(), and _satellite.text() are all gone… and nothing of real value was lost. Publish Date was useful for measuring the lag between when you pressed the “Publish” button and when the library was ACTUALLY built. With Launch, that happens on the spot.
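If you did rely on cleanText, all it really did was collapse whitespace, so a tiny stand-alone helper covers the same ground. This is my own replacement, not a Launch API:

```javascript
// Collapse runs of whitespace to single spaces and trim the ends,
// mimicking what _satellite.cleanText() used to do.
function cleanText(str) {
  return String(str).replace(/\s+/g, ' ').trim();
}

cleanText('  Add   to\n cart  '); // "Add to cart"
```

Handy when you scrape link or button text from the DOM and don’t want stray newlines in your reports.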

In Summary

This is honestly the tip of the iceberg, touching only the core functions. There will be much more that can be unlocked from Adobe Launch that will be added to the Launch cheat sheet. For now, this will hopefully set you down the right path. Let me know if you have any feedback or spot any errors.

The Analytics GPS

Create incremental value with analytics using a GPS

I use my GPS almost every day. The whole concept of a GPS is fascinating. Three satellites each measure the distance between their position and yours, and together those distances pin down your 2D position (latitude and longitude). Google Maps only needs your 2D position to tell you where you are and where you should go.

How a GPS works

But why 3 satellites? Why not only 2? Since each satellite only measures the distance between you and it, with 2 satellites you could be at either of the points where their two distance circles overlap. That’s not helpful. A third satellite allows devices to use a mathematical process called trilateration to precisely pinpoint your location. Who am I kidding? You probably already knew this. So how do we use this same trilateration process to create incremental value with analytics?

You might have already guessed that this has nothing to do with satellites or a literal GPS. To be effective analysts, we have to have multiple sources of input to provide recommendations. For most of us, that starts with the data… usually from Adobe Analytics or Google Analytics. This data answers the question: What did our users do?

What did our users do?

Quantitative Measurement

When you report in a vacuum, this is what you get. You can answer questions about revenue, engagement, content affinity, and acquisition; but then what? If I’m the CEO, this gets me no closer to driving my business in the right direction. It’s already clear why this is a problem. There’s no clear indication of why any of this data matters, and driving decisions based on data alone will likely yield unproductive results. Some people spend an entire career in this circle – and often the folks stuck here aren’t making any recommendations at all.

A high bounce rate on the email unsubscribe page is a worthless data point. A spike in conversion rate looks great on paper, but the data point alone leads me nowhere. We need another satellite to point us in a better direction. Often we explain behavior through gathering feedback directly from the user by asking them: What did you come here to do on the site?

What do our users WANT to do?


We’ve deployed surveys, conducted focus groups, and compiled data from user testing. Based on analytics and survey data, we see that the primary audience is a 45-year-old female. We understand where users are experiencing friction on the site through user testing. We know from a survey that they don’t want to see certain products on the homepage and that the price is too high. With this data, we can provide proactive recommendations to facilitate a better general user experience for this audience, as well as several other audiences that were teased out of our analysis. We did it! Right?

Obviously… wrong. We’re still missing that third satellite. While we’re accommodating the experience for users who are currently visiting the site, no one bothered to stop and ask whether these are the users the business wants to target. There’s no indication of whether the actions users take on the site are the ones the rest of the business wants them to take. Ironically, the last question is the most accessible one to ask but often the most difficult to get a straight answer to. It implies that someone has to take a risk and pick a direction. The question is: What do WE want users to do?

What do WE want users to do?


Someone needs to decide on a strategy. Maybe it’s the CEO. Maybe it’s the VP or Director. Someone needs to step up and decide what will drive the needle on the business’s stock price. In my experience, this last question is the most difficult for clients to answer. That’s because there’s an associated risk. It forces someone to choose a strategy and direction, implying accountability. This also gives meaning and direction to the work everyone does. Without this, you’re likely making the wrong recommendations and your organization could be moving in the wrong direction.

How do we complete the analytics GPS?

The Analytics GPS

Oh, you thought it was easy? You thought there was a silver bullet at the end of this? There isn’t. The truth is that you don’t just depend on the GPS – everyone else does, too. While you might get all of the answers to these questions, there are still steps you might have to take to evangelize this information. Let’s talk about how you can answer these first.

How to answer: What did users do?

This is in our control. Let’s ensure we trust our data and that the definitions of our dimensions and metrics make sense to us and the rest of the organization. Maybe that means building out a data dictionary. The only thing more important than your understanding of the data is your stakeholders’ understanding of it.

How to answer: What do users WANT to do?

This might not be in our control since it’s often a function of a UX team (or whatever they’re calling themselves these days). There are still steps you can take to collect this data – like leveraging or deploying surveys through your tag management system. The best route is to see what’s already available by building a connection between your department and the team that conducts this research.

How to answer: What do WE want users to do?

Find the person who has “ownership” of what you’re trying to improve and ask them. If they don’t know, ask their boss. There’s no tool that will give you this answer, but there are tools available that will help you ask them the right questions. In short, schedule stakeholder interviews… and schedule them quarterly, if possible. Strategies change.

What if I can’t answer one?

Well… why not? The analytics piece (what users did) is easy. I often find the other two can get tricky. You can still do a semi-effective job if you can only answer what users did and what WE want users to do. However, we’re still not answering whether we’re pulling in an audience that wants to do it. All of the measures we take to optimize the site might be for the wrong people! Always recommend conducting user research so you can at least say you tried.

The point of this model isn’t to put the responsibility of the entire company on the shoulders of the analyst. In fact, it’s the opposite. It protects us. It keeps us out of trouble when we need to push back on reporting for things that don’t matter to the business. It makes our work more strategic and less operational. Lean on this as a GPS to help guide your business and your work in the right direction. This is the foundation upon which data-driven companies are built.

3 great ways to use Tagtician… and what’s next

You already use Tagtician to view rules, check out libraries, and audit data elements. Let’s take a closer look at how Tagtician helps you become a better analyst and how it builds transferable skills (crazy, right?). I don’t believe it’s beneficial to use a tool unless it makes you a better practitioner. You may already know about these use cases. If so, great – skip to what’s next below.

Determining Load Order

The way Tagtician displays rules is very intentional. It lists each rule that fires sequentially, based on when it was triggered. For instance, Page Top will always fire before Page Bottom, which will fire before On Load. How do you know what is firing first? The bottom of the list shows the rules that fire first, while the top shows what fires last. Why is this useful? If you have dependencies between rules, you’ll want to understand when variables are being set or when you’re grabbing data from the DOM.
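A contrived example of the kind of dependency load order creates (hypothetical rule code; the variable names are made up):

```javascript
// Custom code in a "Page Top" rule, which fires first:
var pageData = pageData || {};
pageData.loginState = 'logged-in'; // e.g. scraped from the DOM

// Custom code in an "On Load" rule, which fires later:
var loginState = pageData && pageData.loginState;
// If the firing order were reversed, loginState would be undefined here.
```

Seeing the two rules stacked in Tagtician’s firing order makes this kind of dependency easy to spot.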

Simple Documentation

People have had a tough time finding the Export button since we redesigned the interface. We moved it into the context of the rules with the intent to draw an association with the actual rules in the list, as opposed to looking like a global export. Anyway, documentation is often left behind with tag management implementations. Exporting your rules on a monthly basis is a recommended best practice for storing versions of your implementation. While DTM stores versions, they can’t be distributed or viewed in bulk. So how does storing this documentation make you a better analyst? Doing so shows you take governance seriously. It’s a head start on building your SDR. It could also mean leveraging Tagtician to quickly acclimate to new implementations before you even get access.

Quick Search

When things go wrong, finding out what’s broken isn’t easy. We’ve had search capabilities for a while, but without highlights. The highlighted text now shines light on blind spots in your rules. As an analyst, you want your boss and stakeholders to be confident that you can track down issues with their implementation. This accelerates that search without having to go to the DTM interface and dig around. It’s also probably the most obvious time saver.


What’s Next?

Data Layer Debugging

We’re trying to reduce the number of times you have to flip between Tagtician and other Chrome tools. One way to do this is by exposing the data layer in a user-friendly way within a rule shelf. It will automatically look for window.digitalData, the object name from the W3C data layer standard. We’re detecting it automatically because we don’t want to build a bunch of interfaces and business rules. This could change in the future, and updates will include other popular permutations. If you have a request for an object name to add, please reach out to me and we’ll add it in.
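For reference, a minimal window.digitalData object in the W3C Customer Experience Digital Data Layer shape looks like this (property names follow the spec; the values are made up for illustration):

```javascript
// A bare-bones W3C-style data layer; in a page this would be
// assigned to window.digitalData.
var digitalData = {
  pageInstanceID: 'example-homepage-prod',
  page: {
    pageInfo: { pageID: 'homepage', pageName: 'Home' },
    category: { primaryCategory: 'home' }
  },
  user: [{
    profile: [{ profileInfo: { profileID: 'anonymous' } }]
  }]
};
```

If your site populates an object in this shape, tools that auto-detect window.digitalData can surface it without any configuration.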

Adobe Launch Support

We will (again) change the way you manage your rules. We are working on hooking Tagtician into the Adobe Launch API so you can modify and push rules into staging straight from the rule shelf that you’re used to. This means you’re not going to have to go to another website to make changes. You can work with everything in the context of your site. So let’s be clear about timing. Launch is going to slowly roll out to some folks but it’s still going through a lot of changes. We will support Adobe Launch.

Re-imagined, Integrated Automatic Scanning

We built an awesome scanner, shopped it out to people, and it worked wonders. There was one problem: it felt like going to a doctor’s office. It’s a fantastic tool, but I didn’t even use it – and not because it was bad. It automatically creates simulations and validates your rules with a simple click and minimal configuration! However, I don’t believe the future of QA lives on a separate web property. We’re going to great lengths to bring this technology to you instead of the other way around. Automatic scanning will be a remarkably affordable way to keep an eye on your rules. Look for more news about this later this year.

August 2017 Web Analytics Wednesday in Atlanta

So I haven’t made a blog post in a while – been a little busy with Tagtician and the move back to Atlanta. Posting on my site has taken somewhat of a back seat lately. In an effort to make my life more difficult, I’m also interested in building the culture of analytics in Atlanta. One of the best ways I met other people in the industry was through Web Analytics Wednesdays. It has been a while since it has been hosted in Atlanta and I’d like to get it started again. We hosted one a month or two ago and it was a (very) small but mighty group. For other folks in Atlanta – I’d love to have some more people come and hang out.

Here are the details for this month’s WAW:

Location: 191 Peachtree Tower, floor 40 in the VML office (the Deloitte building)
Date & Time: 8/23 at 6:30PM

If you have trouble finding it, please message me on Twitter or shoot me an email (jimalytics -at- gmail)

Agenda: None – this is just a meet-up. Maybe when it grows a bit we can look into speaking opportunities (kind of a chicken/egg dilemma with this). Hope I see you there!