Tracking tel: URLs, custom heatmap objects, and other tracking code updates

Note 1: DO NOT PANIC. You don't need to change anything with the code installed on your site. We've simply made some changes to the way the code works, adding a couple of features and fixing a few bugs.

Note 2: Other than the "tel:" tracking, most of this only applies to advanced users.

New features

  • Tracking tel: URLs - This has been requested here and there over the years, but as Skype et al become more ubiquitous, these URLs are showing up on more and more pages. So we just added support for automatically tracking them. Like mailto: links, these will show up in your outbound link report. You don't need to do jack diddly; it should just work.

  • Custom heatmap object tracking - Our heatmap code by default listens for clicks that bubble up to the document.body element. Clicks bubble up by default, but plenty of elements (or the scripts attached to them) stop that from happening, which means those clicks weren't being captured. We've been affected by this ourselves, mainly by our own Javascript menus.

    So now there is a new clicky_custom property, heatmap_objects. With this you can specify custom elements by tag name, ID, or class. It can be a string if you just need to specify one thing (most likely), or an array of strings if you need to specify more than one. Using this, we can track clicks on these elements (there's a quick example sketch after this list). Which reminds me, I forgot to update our own Javascript code to track our menus! Mental note.

    You should ONLY use this for events that don't bubble up, or you will experience oddness.

  • clicky.goal() changes - This likely does not affect you, but if you use the clicky.goal() javascript method at all, you may want to read on.

    When we released heatmaps, we added a new event queue system for logging some items in batches: heatmap data, javascript events, and javascript goals. The reasoning behind this change was to reduce bandwidth for heatmaps, and increase accuracy for events and goals. The accuracy part: if you sent a hit to clicky.log() or clicky.goal() when someone clicked a link that would result in a new page being loaded, chances were good that it would not be logged, because the page would be unloaded from the browser before the logging request went through.

    So the queue system was made to store events and goals in a cookie, which is then processed every 5 seconds. So if the person is just sitting on the same page still, the queue will be processed shortly and send that event/goal to us. But if instead a new page is loaded, the cookie is still there holding the event/goal that wasn't logged on the last page, and can be processed immediately on the new page view (which we do before processing the new page view itself, to ensure things are in the correct chronological order).

    ANYWAYS... there were some customers who were using clicky.goal to log goals when visitors were leaving their site. The queue would intercept these goals though, resulting in a snowball's chance in hell of the goal ever being logged.

    SO... we added a new parameter to clicky.goal() called "no_queue", which tells our code to skip the queue and log the goal immediately. Check the docs for more (and see the sketch after this list).

    This doesn't affect many of you, but if it does, the back story I've written above is probably worth a read.

  • New method to check if a site ID has been init()'d - for customers using multiple tracking codes on a single site/page. This was a specific request from one customer, but we realized our code itself wasn't even doing this sanity check, so if you had the SAME code on your site multiple times, there were some minor bugs that resulted from this.

    If for some reason you think this applies to you, the new method is clicky.site_id_exists(123), which returns true or false indicating whether this site ID has been passed through the clicky.init() function yet. Note: "123" is an example site ID. Use a real one.
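
To make the items above a bit more concrete, here's a rough sketch of how they might look together. The selectors, goal name, and site ID are made-up examples, and we're only assuming that no_queue is passed as an extra argument to clicky.goal() -- check the knowledge base for the exact syntax. As usual, clicky_custom needs to be defined before the tracking code loads.

  var clicky_custom = clicky_custom || {};

  // Heatmap tracking for elements whose clicks don't bubble up to document.body.
  // A single string or an array of strings; the selectors here are made-up examples.
  clicky_custom.heatmap_objects = [ '#main_menu', '.dropdown' ];

  // Later, once the tracking code has loaded...

  // Only init() a site ID if it hasn't been init()'d already.
  // (123 is an example site ID -- use a real one.)
  if ( !clicky.site_id_exists( 123 ) ) {
      clicky.init( 123 );
  }

  // Log a goal immediately, skipping the 5 second queue -- handy right before a
  // visitor leaves your site. "signup" is an example goal, and we're assuming the
  // new no_queue flag is the last argument; see the docs for the real syntax.
  clicky.goal( 'signup', null, true );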


Bug fixes for sites using multiple tracking codes

In addition to the last item above, about loading the same site ID multiple times resulting in oddities (now fixed), we've made another change to the way the init process works.

There are a number of things that happen when a site ID is init()'d, but it turns out most of those things only needed to happen once, even if you had multiple site IDs on a single page. However, our code was executing the entire init process for every site ID on a page, which resulted in bugs such as:

  • clicky_custom.goals and clicky_custom.split only working with the first site ID that was init()'d.
  • The automatic pause that we inject for tracking downloads and outbound links was being called once for every site ID, rather than once per click (which is all that's needed).
  • When loading heatmaps by clicking the heatmap link from clicky.com, the heatmap would sometimes load twice (making it extra "dark").


There were a few other much more minor bugs, but those were the ones that were really irritating. So now we've split the setup procedure into a separate method, and we wait 100 milliseconds before calling it (just once), giving all site IDs a chance to be passed into the init process first. The actual init() method now just puts each site ID into an array, which we loop through whenever a request to log data is made.
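
For the curious, the new flow is roughly the pattern below. This is a simplified illustration, not our actual code -- the function and variable names are made up.

  var site_ids = [];
  var setup_scheduled = false;

  function init( site_id ) {
      // init() now just remembers the site ID...
      if ( site_ids.indexOf( site_id ) == -1 ) site_ids.push( site_id );

      // ...and schedules the shared setup exactly once, 100ms later, so every
      // init() call on the page has a chance to register its site ID first.
      if ( !setup_scheduled ) {
          setup_scheduled = true;
          setTimeout( setup, 100 );
      }
  }

  function setup() {
      // attach click listeners, read clicky_custom, etc -- runs only once
  }

  function log( data ) {
      // every logging request loops through all registered site IDs
      for ( var i = 0; i < site_ids.length; i++ ) {
          // send data for site_ids[ i ]
      }
  }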


Coming soon

This has been requested a number of times, and it's something we will definitely add in the coming months: when you set custom visitor data with clicky_custom.session (or utm_custom), we will store that data in a cookie so it gets applied to all future visits by this person. So even if they're not logged in, they'll still be tagged the same way they were on their last logged in / tagged visit.

We'll probably only do this with a few specific keys though, since people use clicky_custom.session for all kinds of crazy purposes, many of which can be session specific. Most likely we'll limit it to keys like "username", "name", "email", and a few others.
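
For reference, this is the kind of tagging we're talking about -- custom visitor data set before the tracking code loads. The keys and values below are just examples, and the cookie persistence described above doesn't exist yet:

  var clicky_custom = clicky_custom || {};
  clicky_custom.session = {
      username: 'cookie_monster',    // example keys -- likely candidates for
      email: 'cm@example.com'        // being persisted across future visits
  };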

Just something to watch out for. We think this will be a nice addition when we add it.
3 comments |   Apr 17 2013 10:34pm

Local / internal search support!

Local search (searches performed with your site's own search engine) has been one of the biggest feature requests we've had over the years, so we're happy to finally support it!

First, you need to tell us what the search parameter is that your site uses. Common ones would be "q" or "search". Examples:

http://yoursite.com/search?q=lollipops
http://yoursite.com/search?search=care+bear+stare

You can set this in your site preferences:

You will then start seeing data in the new local searches report:

You can click on any of the searches to filter down to the visitors who performed said searches:

They will also show up in Spy:

As well as the actions log (both globally, and when viewing a session):

The action log can also be filtered down to just show local searches:

And that about covers everything!
28 comments |   Apr 16 2013 4pm

Tracking Youtube videos no longer requires a PhD

Recently we were inspired by this post detailing how to automatically track (with Google Analytics) all Youtube videos embedded on a page, with zero work required other than including a single Javascript file (or two if you don't have jQuery).

Our old method for tracking Youtube was really ugly, requiring a good bit of custom code for every single video you wanted to track. We wanted it to work more like what we read above.

So, now it does! The old method still works for those of you who already have it deployed, but the new method is great because it works with the default iframe embed code that Youtube gives you, and it requires pretty much no work on your end.

Head on over to the video analytics docs to see what you need to do to get it working (scroll down, click 'youtube').

Coming soon

We've got a couple of new features we hope to release this week. One is local search support, probably our biggest feature request of all time. Another is tracking clicks on tel: URLs. Another is the ability to click on any graph to view/segment visitors based on what you clicked. Last, Monitage (uptime monitoring) is being finalized, which also means we'll have monitoring intervals as fast as 1 minute, and the ability to set up more than 3 checks. Monitage won't launch this week, but soon thereafter.
2 comments |   Apr 09 2013 4:06pm

API throttling

The analytics API has been a complete free-for-all in its almost 6 years of existence. This has rarely been an issue, save for maybe once or twice a year when we'd have to ask someone to please relax themselves.

But recently it's become a serious ongoing problem. We've had at least 3 different people in the last few weeks all doing utterly massive exports of data, causing some of the database servers to lag quite badly (up to almost 2 hours in the most severe case).

When a server is lagging it affects thousands of customers. We can't have this anymore, so today we have implemented some API throttling functionality and it is live now.

Throttling will only apply for visitors-list, actions-list, and segmentation requests, as those are by far the biggest drain on resources. All other requests are unaffected.

Here is how it works:

  • Maximum of 1 simultaneous request per IP address per site ID at any point in time. Part of the issue recently has been people doing automated simultaneous requests for exporting data, in one case over 20 requests at the same time for the same site ID, from the same IP. This will no longer work. You will receive an API error.

  • Maximum of 500 results per request (down from 5,000), maximum date range of 3 days. This one is pretty strict and we will likely raise these limits, but we have to get API usage under control immediately. We will be monitoring things and plan to raise the limits as things calm down. UPDATE: things have been stable so we've raised the limits to 1000 results and 7 days.


To repeat, these changes only apply for visitors-list, actions-list, and segmentation requests. No other types of requests are affected by anything mentioned here.
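
For example, an export request that stays within the new limits would look something like this. SITE_ID and SITE_KEY are placeholders, and check the API docs for the exact parameter names and date format:

http://api.getclicky.com/api/stats/4?site_id=SITE_ID&sitekey=SITE_KEY&type=visitors-list&date=2013-02-06,2013-02-08&limit=500&output=json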

We know this is pretty lame, but it's in the interest of keeping the service as close to real time as possible for all customers and that's important. Hope you understand.

-------
Update, Monday Feb 11:

Since we made this change on a Friday, and Friday-Sunday is a complete trickle compared to the rest of the week, it wasn't until today (Monday) that we could really see the effect of this change.

Good news: All servers are keeping up with real time no problem now. A few are 1 minute behind right now, which sometimes happens when caches expire on the servers and have to regenerate, and usually they're back up to normal shortly thereafter.

We'll likely raise the single day restriction soon, first to 3 days, and if things keep up, then probably 7 days. I don't know if we'll ever let it go beyond 7 days again though. As well, the limit of 500 per request will probably be raised to 1000, but again I'm not sure if we'll ever let it go beyond that.

----
Update, Tuesday Feb 12:

Things have been stable so we've raised the date range limit to 3 days now. We'll see how things go from here.

----
Update, Friday Feb 15:

Things continue to be stable, so we've raised the date range limit to 7 days, and the result set limit to 1,000 items. We've also changed the "one request per IP address" limit, so that it's now "one request per IP address, per site ID".
14 comments |   Feb 08 2013 4:11pm

Monitage: Uptime monitoring beta

Many Clicky users have asked us about site monitoring. We are happy that this is top-of-mind for some of you, because its importance should not be underestimated. Often we will receive emails from Clicky users asking why there was a dramatic dip in visitors tracked, or a complete drop-off altogether. While there can be many explanations, it is not uncommon that the site itself went down unbeknownst to the site owner. Standard web analytics will not be able to tell you this, but site uptime monitoring and alerts will.

With this in mind, we are excited to announce a closed beta in partnership with Monitage, a newly-developed site uptime monitoring service by Roxr Software (that's us). We have integrated Monitage into Clicky to give you a bigger picture of the health and activity of your web sites.

Monitage monitors web sites from five locations around the world (three in the US, one in Paris, one in Japan) and only declares a downtime event if a majority of its servers agree on it. This prevents network hiccups on the monitoring end from sending false alarms.

Pro Plus users and above receive access to the Monitage closed beta. When we officially launch, you will have the ability to create up to 30 checks per site with intervals as fast as 1 minute, but during testing we want to keep resource usage within a reasonable range. So for the time being, we are limiting it to 3 checks per site, with intervals no faster than 5 minutes. We expect to officially launch within 4 weeks, at which point Monitage will also be available as a standalone service.

To access Monitage, go to your site dashboard and click the Uptime tab. You can create checks for HTTP, HTTPS, SSH, FTP, IMAP, IMAPS, and ICMP (ping). We've also created a dashboard module.

You can also access uptime stats from the API. Check the API docs and search for "uptime". type=uptime will give you the current status of all of your checks for a site. type=uptime-list will give you a chronological list of all downtime events for your site for the date range requested.
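
For example (SITE_ID and SITE_KEY are placeholders, and the exact parameter syntax is in the API docs):

http://api.getclicky.com/api/stats/4?site_id=SITE_ID&sitekey=SITE_KEY&type=uptime&output=json
http://api.getclicky.com/api/stats/4?site_id=SITE_ID&sitekey=SITE_KEY&type=uptime-list&date=last-7-days&output=json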

Last, we added uptime stats as an option in email reports as well.

When Monitage officially launches, we will determine what intervals and types of tests will be included with Clicky Pro Plus plans and above.

We are asking that you test Monitage, and let us know your thoughts, what you like, don't like, and want to see. As Monitage is in its infancy, we want your feedback to help mature it into a stalwart companion to Clicky.

Note to white label customers: Monitage will be added as an option to white label service when it officially launches, but for now it is only available to Clicky users.

----
Update, Feb 4: Just pushed an update that integrates uptime monitoring into the Big Screen report. Also added web hooks to the setup page. Enter a URL and we will POST a JSON object (documented in the setup page) to that URL for events.

Also, we've been getting reports of false positives since launch. We are pushing some updates later today that should fix them entirely, or at least come close.
21 comments |   Jan 22 2013 6:23pm
