By David Sprinkle
This is not a rant! At least, I hope it’s not. My team and I have just finished up an especially painful deployment, and I felt the client made some classic mistakes in their planning process that cost far more time and resources than they expected. That made me think I could point out a few blunders we see often, in the hope that the rest of you can avoid the same pitfalls when you’re deploying new analytics. Learn from others’ mistakes!
- Yes, You Are Going to Need Developer Resources.
I sat on a call once where a developer said, “Wait, you mean you want ME to do this?” I don’t care if you are using a tag manager, you are still going to need at least some time from one or more of your developers — someone who knows how your site works, is a competent coder, and has enough time for the size of your project. Before you invest money and resources in an analytics tool or consultants, please make sure you’re actually going to be able to get the development work done. And if you know you’re going to have limited development resources, let’s figure out what is realistic so we can have a successful project and plan for the next one.
- You’re Going to Need a Working Staging Site, Too.
Some Web development projects can be built out on local machines and then pushed straight to your live site. Web analytics is not one of them. To really know how the tracking will behave when real, live people access your site, we need to test it like real, live people. This means we need to be able to do test purchases. We’ll need logins for any members-only areas. And if your staging environment is available only three hours a day, only from within your VPN, and behaves totally differently from your live site, then things are going to get expensive fast.
Now, I don’t mean to sound inflexible. Sometimes, we can test everything on the live site, or otherwise work around issues with your systems. But if you know you’ve got problems with your staging environment, make sure you plan your deployment timelines (and budget) accordingly.
- Expect More Than One Round of Development.
Quality assurance (QA) is what we do once you tell us you’ve deployed the code. We check the code to make sure it’s behaving as expected, in every real-world scenario we can imagine. I am not sure I have ever seen a deployment that got everything completely right the first time — and that’s OK! That’s why we do QA. A modern analytics deployment usually involves some complexity (yes, even for Google Analytics) and we want to make sure your data is as close to perfect as we can get it.
Unfortunately, all too often I’ve seen projects where the developers finished their first round of coding right before the deadline, and hadn’t budgeted time for QA or any code fixes. So then we have to make the uncomfortable decision: go live with bad data, or delay the launch? To me, this is especially tragic because at that point you’ve done 90 percent of the work but you reap almost none of the benefit.
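Part of that QA round can be automated, which makes the "budget time for fixes" conversation easier. Here is a minimal sketch of the kind of smoke test we might run against a staging page before a human tester ever logs in. The function names, the Google Analytics measurement ID, and the sample HTML are hypothetical; real QA also verifies the network requests the tag actually fires, not just the page source.

```python
import re

def find_ga_measurement_ids(html: str) -> list[str]:
    """Return any Google Analytics measurement IDs (e.g. G-XXXXXXX)
    found in the page source. A missing ID is the most basic QA failure."""
    return re.findall(r"G-[A-Z0-9]{4,}", html)

def qa_page(html: str, expected_id: str) -> list[str]:
    """Collect human-readable QA findings for one page's HTML.

    An empty list means this page passed these basic checks."""
    findings = []
    ids = find_ga_measurement_ids(html)
    if expected_id not in ids:
        findings.append(f"expected measurement ID {expected_id} not found")
    if len(set(ids)) > 1:  # duplicate/conflicting tags inflate your numbers
        findings.append(f"multiple measurement IDs present: {sorted(set(ids))}")
    if "gtag(" not in html and "dataLayer" not in html:
        findings.append("no gtag()/dataLayer call detected; tag may not fire")
    return findings
```

Running `qa_page` over a crawl of your staging site turns "did the developers deploy the tag everywhere?" into a report you can hand back with the bug tickets, instead of a surprise on launch day.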
- Don’t Launch With Bad Data.
I often tell my clients that the only thing worse than having no data is having bad data. What I mean is that your Web analytics can’t help you improve your business if your stakeholders don’t trust the numbers. If you have to choose between 10 reports that are 80 percent accurate and eight reports that are 98 percent accurate, go with the latter. I’ve also found that having fewer data points to look at can actually make it easier to focus and zero in on what’s really important.
- Don’t Let the Perfect Be the Enemy of the Good.
This may sound like a contradiction to my previous point, but bear with me. More often than not, when I start a conversation with marketers by asking “what do you want to track,” they reply “everything.” But we live in a world of limited resources — limited development resources, limited attention, limited brainpower! Your deployment will be far less stressful, and far more successful, if you are willing to bite off only what you can really chew when you deploy your tracking.
This means tailoring the requirements to the amount of development resources you have, and also to the amount of time you’re going to have post-deployment to actually analyze the data. Plus, if you don’t overwhelm your developers in your first phase of deployment, they might actually answer your emails when you’re ready for phase two!
So hopefully some of you will be able to use my screed to plan a smoother analytics implementation in your future. I’m sure there are some other painful mistakes I left out, though — let me know in the comments!
An expert on analytics architecture and integration, David specializes in the innovative design and implementation of analytics solutions that deliver both global “big picture” insights and detailed performance metrics. David leads Acronym’s Analytics Practice as well as its Adobe Preferred Partnership, wherein Adobe subcontracts work to David’s team.
David also has extensive experience working with major analytics, bid management and reporting platforms, and is noted for his expertise in integrating such solutions into companies’ larger marketing and business infrastructures. David is a Certified Omniture Professional and a veteran industry speaker. His client portfolio includes such leading brands as Four Seasons Hotels and Resorts, SAP, The Tribune Company, HP, Scholastic and Humana, among others.