Transition from Adobe DTM to Adobe Launch with Tagtician

The transition from Adobe DTM to Adobe Launch promises to be simple and painless. You won’t have to update your header/footer code, and Launch will preserve the fundamentals you grew to love in DTM in some way, shape, or form. There will undoubtedly be automated tools that make the migration nearly seamless. Adobe DTM is known for making implementation easy and accessible. That said, it isn’t responsible to blindly trust that everything was imported from DTM to Launch with 100% precision.

We all know it’s easy to mess up rules. I’ve done it countless times. I’ll target an element that’s on more pages than I expected or add a custom condition that only sometimes works. We’ve all been there. The migration to Adobe Launch reintroduces some of those risks. The tool will change and it’s not worth risking a total disaster when you can leverage Tagtician to ensure your migration proceeds as expected.

Tagtician Dashboard


The Tagtician Advantage

Our customers will boast a unique advantage when Adobe Launch is finally released because they have a tool that knows their rule library. Our crawler knows what rules to look for and where to find them. When they’re ready to transition to Adobe Launch, our customers get the confidence that the transition was successful without having to touch a thing. That’s being proactive.

How it works

  1. Provide a URL to scan
  2. Press the “Scan” button
  3. Tagtician’s smart crawler scans the site to seek out rules (even Event Based Rules)

Done. That’s it. When the scan is finished you have a full report of what rules were found and on what pages. From there, analyze the results and discover rules that might appear where you don’t expect them to. If you’re a completionist, you can provide extra URLs our crawler might have missed to fill in gaps. Every subsequent scan will then compare the data against that first scan (unless you tell it otherwise). Tagtician now also validates data layers. 
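
The scan-to-scan comparison described above can be sketched in a few lines. This is not Tagtician’s actual code — just an illustration of diffing a baseline scan against a later one, where a scan result maps each crawled URL to the rule names detected there (all names below are made up):

```javascript
// A "scan result" maps each crawled URL to the rules detected on that page.
// diffScans reports, per URL, which rules vanished or newly appeared.
function diffScans(baseline, latest) {
  const report = {};
  for (const url of Object.keys(baseline)) {
    const before = new Set(baseline[url]);
    const after = new Set(latest[url] || []);
    const missing = [...before].filter((r) => !after.has(r));
    const added = [...after].filter((r) => !before.has(r));
    if (missing.length || added.length) report[url] = { missing, added };
  }
  return report;
}

// Example: a rule disappeared from the product page between scans.
const firstScan = {
  "/home": ["Global Page Load", "Nav Click"],
  "/product": ["Global Page Load", "Add to Cart"]
};
const secondScan = {
  "/home": ["Global Page Load", "Nav Click"],
  "/product": ["Global Page Load"]
};
console.log(diffScans(firstScan, secondScan)); // flags "Add to Cart" missing on /product
```

In practice the baseline would be that first scan the article mentions, and every later crawl gets diffed against it unless you reset the baseline.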

This is a tool specifically built for the Adobe Launch transition and maintenance. Furthermore, we know how to keep prices competitive because our “salespeople” are practitioners. We built and priced Tagtician to allow immediate activation without all the legal red tape. Seeing is believing. Let’s talk about how Tagtician can help secure your Adobe Launch transition with a demo.

The Price Tag of Analytics Precision

Precision in digital analytics is unobtainable. This isn’t an edgy statement, it’s a fact. You will never achieve precision in your Adobe Analytics account. Your A/B test data will always contain noise. If your company is larger than a mom-and-pop shop then your automated QA tools will never be totally comprehensive. As an analyst, you must figure out a way to be effective while not becoming exhausted trying to achieve analytics data precision. Let’s start with a list of noise sources we get out-of-the-box with any analytics tool.

Uncontrollable Noise

  • Clever spam bots inflate/distort data
  • Ad blockers stop data collection
  • People clear their cookies
  • Testing site traffic
  • Some visitors use proxies

You get the idea. We have no choice but to live with this noise. While it can be mitigated, it cannot be eliminated. Uncontrollable noise has always been a nuisance, and it has gotten worse over the past five or six years with the spread of ad blockers and sophisticated spam bots. Let’s assume you’ve taken the right steps to minimize it. Apart from some basic filters, there isn’t a heck of a lot more you can do to prevent it from affecting your data.
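
As a concrete, hypothetical example of a “basic filter”: a small guard that suppresses tracking for visitors you already know are noise. The visitor fields here are invented for illustration — in a real setup they might come from a QA cookie, a bot-detection flag, or the hostname:

```javascript
// Hypothetical guard: decide whether to fire analytics for this visitor.
function shouldTrack(visitor) {
  if (visitor.isAutomated) return false;              // bots that identify themselves
  if (visitor.hasTestCookie) return false;            // internal QA traffic flagged by a cookie
  if (visitor.hostname === "localhost") return false; // local development traffic
  return true;
}

console.log(shouldTrack({ isAutomated: false, hasTestCookie: true, hostname: "www.example.com" }));  // false
console.log(shouldTrack({ isAutomated: false, hasTestCookie: false, hostname: "www.example.com" })); // true
```

Filters like this catch the low-hanging fruit (your own testing traffic); the clever spam bots and ad blockers remain out of reach, which is the point of the section above.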

Controllable Noise

  • Routine site releases
  • Dynamic tagging with CSS selectors
  • Legacy tracking code
  • Broken site code
  • Inconsistent data layer

“Controllable” might not be the best way to put it, since you may not have much immediate control over these sources. Still, this is where I spend most of my time when I QA. They’re the easiest levers to pull.


It’s up to you, but analytics precision can often stop at “good enough”. This means you aren’t burning cycles ensuring every single click-tracking event is perfect before every single release, and it means you’re comfortable delivering data even though you know some testing traffic slipped through the cracks. I make decisions with imprecise data every day. You do, too… as long as you’re confident it’s directionally accurate. What’s important is that you’re making decisions with data. Not enough people do that; too many instead go down an endless rabbit hole of QA. I might be in the minority here, but I believe it’s better to occasionally make a wrong decision with data than to let precision paranoia scare me into driving fewer decisions.

That said, there are some QA tools that are supposed to make your life easier. When it comes to data quality, there are tools out there that can automate the QA monitoring; but as we shorten our sprint timeboxes and quickly approach continuous integration, maintaining the integrity of our implementation only becomes more cumbersome.

As you can see from this very scientific model, there is a direct relationship between the cost of analytics precision and site release frequency. I’m just using common sense here – something’s gotta give, and I can guarantee that your developers won’t slow down to wait for your click tracking to be perfect. The reason I built the Tagtician web app was to provide QA without the overhead of micromanaging simulations and server calls every time the site changed. The philosophy behind the tool ties directly back to the chart above. Improving the speed and efficiency of your development cycles shouldn’t create that much more work for the analyst. At some point, the value of the analyst is completely undermined by the cost of maintaining the tracking and complex simulations in other QA tools. With or without an automated QA tool, the momentum behind shorter and shorter release cycles keeps building.

Is your company moving on the path towards continuous integration? How do you think this trend will affect your job? Do you think it will increase the time you allocate to QA work?

Tagtician enters the automated analytics QA space

The Tagtician team and I are proud to announce our newest product – automated analytics QA. More specifically, automated QA for Adobe’s Dynamic Tag Manager (and Adobe Launch). Thousands of you use our DTM Cheat Sheet and the very first DTM Chrome extension debugger. We built those tools to solve problems we personally encounter on a daily basis. We built them with user experience in the front seat. We’ve just changed the way you will use DTM (yet again) by building an automated analytics QA tool. We specifically designed it to actually be used by analytics practitioners.

Server calls aren’t what cause data loss

The majority of the time, data loss comes from a site change or a rule error. Your marketing pixels likely have an SLA guaranteeing 99.9% uptime. The IT team has its own set of tools to monitor 404s, page load timing, and JavaScript errors. The SEO team has tools that give them automated tips about changes to make on each page. As an analyst, it doesn’t mean much to see each individual server call on each page. Tagtician automatically imports your entire DTM library out of the gate and specifically validates your rules.

Automated QA should save time

One of the biggest challenges we wanted to tackle was addressing the overhead that other automated QA tools require just to maintain them. It shouldn’t require a team to maintain your QA tool. At some point you’re losing more time and money on a tool than you would if you didn’t have it in the first place. We get that accurate data is important, but the cost of precision is often too high. That’s why Tagtician automatically seeks and tests your Event Based Rules. That’s right, no more micromanaging simulations each time there’s a site or rule change. When Tagtician detects an anomaly, it will alert you to exactly what and where the problem is so you can fix it without having to search through hundreds of rules.
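
To illustrate the idea (this is not Tagtician’s implementation, just a sketch): an Event Based Rule is effectively broken when the element it targets no longer exists on the page, so detection can be as simple as checking each rule’s selector against what a crawl actually saw. All rule names and selectors below are invented:

```javascript
// A rule is "broken" when its target selector no longer appears on the page.
// `pageSelectors` stands in for what a crawler extracted from the live DOM.
function findBrokenRules(rules, pageSelectors) {
  const present = new Set(pageSelectors);
  return rules
    .filter((rule) => !present.has(rule.selector))
    .map((rule) => `Rule "${rule.name}" targets "${rule.selector}" which was not found`);
}

const rules = [
  { name: "Add to Cart Click", selector: "#add-to-cart" },
  { name: "Hero Banner Click", selector: ".hero-cta" }
];
// Pretend the latest crawl saw these selectors on the page:
const crawled = ["#add-to-cart", ".nav-link"];
console.log(findBrokenRules(rules, crawled)); // flags "Hero Banner Click" as broken
```

The alert message pinpoints both the rule and the missing selector, which is exactly the “what and where” the paragraph above is describing.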

Let me prove it

Okay… so you’ll save time, you’ll be QA’ing actual rules instead of server calls, and you won’t be paying for the salaries of an army of salespeople. To borrow a line from Optimizely – we’ll be the tool that you’ll actually use. Schedule a demo and I’ll personally show you how it works.

Early thoughts on Adobe Launch (new Adobe DTM)

The 2017 Adobe Summit week flew by and boy are my arms tired… okay, strike that awful joke from the record. Summit was exhausting, to be sure. One of the most relevant announcements was Adobe Launch, a complete rewrite of Adobe’s tag management system, DTM. I went to a few presentations that walked through the entire workflow and wanted to share some early thoughts.

The name “Adobe Launch”

Who cares about a name, right? It makes sense. You launch stuff. It’s a launch pad for user-created extensions (we’ll talk about these later)… yadda yadda yadda. Adobe wants to depart from the Adobe DTM brand – because it’s a totally different tool. I’ve already heard:

“So how do we launch… Launch?”

This might get a little annoying after a week or two. “Launch” is a very frequently used term. Speaking of which, try Googling “Adobe Launch”. Yeah, SEO’s going to be a battle. I think people would have been okay with leaving it as “Adobe DTM” and “Classic DTM”.

Rule Creation

Rule creation makes a lot more sense now. Page Load Rules are now grouped with Event Based Rules (because a page load is technically an event that occurs in the DOM). The if…then syntax just makes sense. They also added rule Exceptions as part of the conditions, so you slackers don’t even have to use Regular Expressions. Google Tag Manager was a little ahead of its time on that one, I guess. Otherwise, structurally, it’s very similar to today’s DTM.
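
Sketched as a plain object, the if…then structure looks something like the following. To be clear, this is purely illustrative — it is not the actual Launch rule schema, and every key and value is made up:

```javascript
// Illustrative only — NOT the real Launch rule format. The point is the shape:
// an event trigger, plus conditions and exceptions, then a list of actions.
const rule = {
  name: "Track PDF Downloads",
  if: {
    event: { type: "click", selector: "a[href$='.pdf']" },              // the trigger
    conditions: [{ type: "path-contains", value: "/resources" }],       // must match
    exceptions: [{ type: "path-contains", value: "/resources/archive" }] // must NOT match
  },
  then: [
    { action: "send-beacon", tool: "Adobe Analytics" }
  ]
};

console.log(rule.if.exceptions.length); // 1
```

Note how the exception replaces what would otherwise be a negative-lookahead regular expression on the path — that is the convenience the paragraph above is pointing at.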

Publishing Workflow & Libraries

Currently, DTM supports two environments: staging and production. Launch will default to three environments but will support any number of them. So if you have a localhost, dev, pre-staging, staging, post-staging, pre-prod, pre-pre-prod… you get the idea. Each environment will have its own set of tracking code that can be individually toggled to be hosted in the cloud, via FTP, or as a file download. This is a fantastic addition that many teams were surely wishing for.

Adobe also introduced Libraries. These are the equivalent of GTM’s Workspaces feature. You can batch rules, data elements, and so on into Libraries and deploy them to various environments independently. This has been a much-needed feature for teams with multiple DTM users.

Then there’s the approval workflow, which is very pretty. It has a nice drag-and-drop interface that progresses libraries between workflow phases. It even lets you know when the library has actually been BUILT (so no more refresh/_satellite.buildDate spamming). This is an excellent addition sitting in the middle of a relatively abstract approval workflow. In the workflow you’re making a few decisions:

  1. What rules go into the library?
  2. What environment will this library correspond with once approved?

These are very straightforward questions; but if you have 5 environments, 4 libraries, and rule dependencies affecting multiple libraries it can get pretty complex pretty fast. That said, I think they did the best they could do given the constraints.
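
Speaking of the _satellite.buildDate spamming mentioned earlier, here is a small console helper for that build check. The property names assume classic DTM exposes `_satellite.buildDate` and Launch exposes `_satellite.buildInfo` — verify against your own library before relying on it:

```javascript
// Returns the library's build timestamp regardless of DTM vs. Launch.
// In a real browser console you'd call getBuildDate(window._satellite).
function getBuildDate(satellite) {
  if (!satellite) return "library not loaded";
  if (satellite.buildInfo) return satellite.buildInfo.buildDate; // Launch-style
  return satellite.buildDate;                                    // classic DTM-style
}

// Simulated _satellite objects so this sketch runs anywhere:
console.log(getBuildDate({ buildDate: "2017-03-24 16:02:11 EDT" }));
console.log(getBuildDate({ buildInfo: { buildDate: "2017-03-24T20:02:11Z" } }));
```

With the new workflow surfacing build status in the UI, you mostly won’t need this — but it’s still a quick sanity check that the environment you’re looking at actually picked up the latest build.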


Extensions

Adobe Launch announced the inclusion of Extensions. These are basically plugins that add on to Launch’s interface. Not happy with the UI? Build an extension to change it. Have to constantly write scripts over and over to trigger an event? Build an extension. Want to implement Coremetrics? Build an extension. Adobe has made the tool completely modular. In my opinion, this is brilliant. Not only does it increase the breadth of tool coverage, but it also outsources a lot of work and puts it in the hands of publishers (even if the publisher is the Adobe Analytics team). This makes a LOT of sense and is a very welcome addition.

The Open API

The speakers at Summit kept selling this as the “big gun” of Adobe Launch. From an analyst’s perspective, I’m cautiously optimistic. It turns Launch into a sandbox. While this is nice, it has the potential to create a lot of noise. In one session an audience member asked about API security and governance. The response was “With great power…”. That got a laugh, and this could be a cornerstone feature, but I’m not yet convinced it won’t create more work for the Launch team by forcing them to curate, debug, and ultimately support an amalgamation of modules built on the API.

I consider myself relatively responsible, but the open API gives competing TMS platforms another talking point. It adds complexity and signals a turn toward a more developer-dependent implementation process. I only bring this up because so many pitches sold DTM as a tool that’s “so easy a marketer could use it!”

One great part of the API is that users can basically export their rule library (very similar to GTM). While it’s one of the best features of the Tagtician extension, I’m happy to see them build something in-house that helps with documentation.

The Verdict

tl;dr 9/10 rating. It’s great and I’m looking forward to getting my hands on it. The new features make total sense for Adobe and the users. My only point of concern is API governance. If there really is a big development community around this open API then there needs to be something that helps keep things organized. Additionally, with more flexibility comes a greater degree of complexity. This isn’t necessarily bad but it does seem that Adobe is pivoting from the “so easy a marketer can do it” position.

Quick Tip to Test GTM’s dataLayer.push Calls

When I code, I Google things. A lot. Sometimes I get so caught up in writing the right code that I forget there is an easier way to test things. Google Tag Manager’s dataLayer object is a little interesting: it’s technically an array of objects rather than the single nested object you’d see in the W3C data layer standard… but it kinda acts like one. When I build out tracking specifications I want to ensure that I’m sending the right data. Setting the dataLayer header object (the one declared before the container snippet) is pretty easy to test. Just paste your object in the console and BOOM – QA done.
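
For reference, the standard pattern is to declare the dataLayer with page-level values before the GTM container snippet loads. The key names below are placeholders from an imaginary tracking spec, not anything GTM requires; in a browser you’d typically write `window.dataLayer = window.dataLayer || [];`:

```javascript
// Declare the dataLayer before the GTM snippet so page-load tags can read it.
// (The `|| []` keeps an existing array intact if one was already created.)
var dataLayer = dataLayer || [];
dataLayer.push({
  pageType: "article",      // example keys — use whatever your spec defines
  pageCategory: "analytics"
});
console.log(dataLayer.length); // 1
```

Pasting an object like this into the console is the quick QA the paragraph above describes: if it parses and pushes cleanly, your spec’s syntax is sound.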

dataLayer.push() is a little trickier and I don’t want to wait for developers to implement code I send over and then see errors because I made a dumb mistake. I don’t think you need any more exposition, so here’s how you test your dataLayer.push() calls:

Your first step is to put together the dataLayer.push() call that you’d like to use. Let’s pretend it’s to track video views:

dataLayer.push({'event': 'videoView', 'videoName': 'Cool Video', 'videoProgress': '100%'});


Note: If you do a direct copy/paste into your console it may paste in the wrong kind of quotes. If you get an unexpected syntax error try replacing the quotation marks in the console.

Next you’ll go to GTM and click Preview so you can Preview and Debug (I’m assuming you know what this does).

Now go to your site and open up your console. If you have your dev tools docked like I do, it should look like an ugly stack pictured below:


Now simply paste your code in the console, execute it, and ensure the event fires. I’ve obviously done this a few times while playing around with it. Now you can be 100% certain that the data you want to send to GTM is correct.
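
One extra trick I like (purely optional, and my own habit rather than a GTM feature): wrap dataLayer.push so every entry you test gets logged, giving you a running record in the console of exactly what was sent:

```javascript
// Wrap dataLayer.push to log each entry before it is handed off.
// Paste this in the console BEFORE your test pushes.
var dataLayer = dataLayer || [];
var originalPush = dataLayer.push.bind(dataLayer);
dataLayer.push = function (entry) {
  console.log("dataLayer.push:", JSON.stringify(entry));
  return originalPush(entry); // preserve normal behavior
};

dataLayer.push({ event: "videoView", videoName: "Cool Video", videoProgress: "100%" });
console.log(dataLayer.length); // 1
```

Note that after GTM loads it replaces push with its own handler, so if you wrap it post-load you’re wrapping GTM’s version — which still works for logging, since the call is passed straight through.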