Measure for Measure Blog

Data Collection: Deployment and Testing

Written by Andrew Edwards. Posted in Digital Analytics

One of the most challenging parts of digital measurement is collecting the data properly. To accomplish this, your page tagging needs to be both accurate and relevant.

No matter what vendor application you’re using to collect clickstream data, you will first need to define your reporting priorities (which means you’ll need to define what you want to know about your visitors’ behavior). For the sake of this post, let us assume you’ve gone through that exercise already and know what you want to report on.

Most web analytics applications today require page tagging to collect data. Tags are snippets of JavaScript that generally go in the header of your HTML. Each tag combines vendor-supplied formatting (specific to the tracking tool) with places in the script for variables that are unique to your site and, sometimes, to the campaign, page, or activity you are tracking. For some sites that need only basic, generic reporting, a single line of code with your domain as the variable will be sufficient, but most serious marketers need to go beyond this.
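As a rough illustration, a basic page tag often looks something like the sketch below. The vendor script URL, account ID, and variable names are hypothetical placeholders, not any particular tool's syntax; your vendor's documentation defines the exact format.

<!-- Hypothetical page tag: the script URL, account ID, and variable names
     below are placeholders, not any real tool's syntax. -->
<script>
  // Variables unique to your site and, where needed, to the page or campaign
  var trackingVars = {
    account:  "ACCOUNT-ID",        // issued by the vendor
    domain:   "www.example.com",   // your site
    pageName: "Homepage",          // the page being tracked
    campaign: "spring-sale"        // optional campaign identifier
  };

  // Vendor-supplied formatting: load the collection script, which reads
  // trackingVars and delivers the data to the analysis engine.
  (function () {
    var s = document.createElement("script");
    s.async = true;
    s.src = "https://collect.example-vendor.com/tag.js";
    document.head.appendChild(s);
  })();
</script>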

It’s key to success that a tagging expert create these tags, especially when you need to go beyond the generic tracking built into the vendor’s application.

Deployment of the tags then moves either to a tag management tool or to your developers, who will put the tags into the HTML in the appropriate locations. For this process to go smoothly, some tagging expertise is necessary, as developers typically do not concentrate on this as part of their skill set. If you have a tag management solution, you'll go through a somewhat different process (there isn't space to detail it here), but the tags still need to be carefully constructed by an expert.
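With a tag management solution, a common pattern is to deploy a single container snippet on every page and have developers expose page-specific values in a data layer that the tags can read. The sketch below is only illustrative; the object and property names are assumptions, and your tool will define its own conventions.

<!-- Hypothetical tag-management deployment: one container snippet per page,
     plus a data layer of page-specific values. Names are illustrative only. -->
<script>
  // Developers populate this per page; the rules your tagging expert sets up
  // in the tag manager decide which tags fire and what they send.
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    pageName: "Product Detail",
    section:  "catalog",
    campaign: "spring-sale"
  });
</script>
<script async src="https://tags.example-vendor.com/container.js"></script>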

Testing for data collection throughput and accuracy takes place after tag deployment, usually in a test environment once some data is coming through (it's not a good idea to launch before tagging QA). This is accomplished by checking each and every tag to make sure that when the relevant action takes place, it is picked up by the tag and delivered to the analysis engine. A good way to set up a QA report is to use a spreadsheet showing the tag name, its function, the expected result, and the actual result (see the sketch below). Unexpected or null results then have to go back to the developers for adjustment; usually your tagging expert will know what was done incorrectly and can make sure it gets fixed.
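For illustration, such a QA spreadsheet might look something like the following; the tags and results shown are hypothetical.

Tag name             | Function                         | Expected result                  | Actual result
Homepage pageview    | Records a visit to the homepage  | Pageview appears in test reports | Pass
Signup form submit   | Fires when the form is submitted | One event per submission         | Null (back to developers)
Campaign landing tag | Captures the campaign identifier | "spring-sale" recorded           | Pass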

Once a rigorous QA plan is completed and the tags are collecting data as they should be, it's finally okay to launch and begin to collect actual user data. At this point you should see live data flowing into your reports as expected.

 
