Voice Analytics: Getting on the Bleeding Edge of Data

By | 2016-10-27

Siri is a household name despite not being a real person. It goes without saying that voice technology is here to stay, but what does that mean for data analysis? This blog post will help your organization understand and implement voice analytics.


Voice technology: a short history

Voice interaction with computers has been around for a while, popularized early on by Dragon Dictation. However, it didn’t go mainstream until Apple’s Siri hit the market. Nowadays, it’s standard for smartphones to respond to voice commands in the blink of an eye.

Siri and its alternatives certainly make capable intelligent personal assistants. That being said, they are still disconnected from a direct revenue stream. Enter Amazon’s Alexa Voice Service. With Alexa, you can purchase anything available on Amazon using just your voice and an Echo. “Alexa, add cookies to my shopping cart.” “Alexa, play the soundtrack to Stranger Things.” “Alexa, do this and that.” Your house can constantly be waiting for your next request!

Luckily, Alexa and other voice platforms are opening up to third-party developers. As a result, your business could start engaging with your users and customers without them ever needing a phone or computer. The future is great!

But what does the future mean for analytics?

For digital marketers and analysts, data is our sword and shield. When mobile apps first came out, we didn’t know what to do, leaving us rather exposed. We had to adapt the solutions we already knew (web analytics) to this new world.

Now, we are in the same position again with voice analytics. We are unsure how to adapt, and end up making best guesses on solutions to new problems. If users don’t use phones or computers to engage with your business, then traditional measurements become useless. Tracking site referrals, downloads, and video plays? All irrelevant. Perhaps the future is not so great.

The silver lining

This all may sound bleak, but voice analytics aren’t as complicated as they sound. It helps to compare them to an interactive voice response (IVR) system. We are all familiar with ringing up a call center. You end up trapped in the nightmare of an IVR system, mashing the “0” button in hopes of getting to a human. But as annoying as talking to a computer may be, the solutions implemented in an IVR are sometimes useful. At minimum, they can read a caller’s account history and interpret the tone and temperament of the caller.

The problem? These solutions just route you to the right person or area of the IVR. In other words, they can’t collect data in a meaningful way.

Now imagine that you could collect voice analytics on each individual user (or in aggregate, for you privacy-conscious folks). Think about the questions you would need answered. What was the user trying to get help with? How did they ask their question? What response was most effective in resolving their interaction?

Thanks to a maturing analytics market, there are now products that can answer these questions. With flexible, event-driven offerings like Google Universal Analytics and Mixpanel, you can analyze users’ voice queries within your analytics software.


Voice analytics example implementation

Your developers can include voice analytics code just as they would for any other event-driven analytics solution. Even better, they can do it in a language they already use, such as Node.js.

We’ll use Alexa Skills as an example.

In your development environment, install whichever library you plan to use. (The snippets below are prefixed with (g) for Google Universal Analytics and (m) for Mixpanel.)

Google Universal Analytics: (g)
npm install universal-analytics --save

Mixpanel: (m)
npm install mixpanel --save

Include the library in your application’s index.js file and adjust accordingly:
(g) var ua = require('universal-analytics');
(g) var gUA = ua('UA-XXXX-XX'); // your Tracking-ID

(m) var Mixpanel = require('mixpanel');
(m) var mixpanel = Mixpanel.init('aaa111bbb222); // your Token

Tracking events
If the user was not understood:
(g) gUA.event("user error","misunderstood statement").send();
(m) mixpanel.track("user error",{result: "misunderstood statement"});

A successful query:
var utteranceData = "intent: " + utteranceValue; // utteranceValue comes from your intent/slot handling
(g) gUA.event("user query","successful query", utteranceData).send();
(m) mixpanel.track("successful query", {query: utteranceData});

A failed query:
var utteranceData = "intent: " + utteranceValue;
(g) gUA.event("user query","failed query", utteranceData).send();
(m) mixpanel.track("failed query", {query: utteranceData});

Capture bugs in your Intent:
var veryBad = false;
(g) gUA.exception("out of memory", veryBad).send();
(g) gUA.event("intent error","out of memory", "fatal: " + veryBad).send()
(m) mixpanel.track("intent error", {error: "out of memory", fatal: veryBad});

User ends session/engagement:
(g) gUA.event("exist","session ended).send();
(m) mixpanel.track("session ended");

Of course, there are some pretty substantial differences among voice platforms. Programming languages and user identification in particular tend to vary. But it’s nothing some app-based modifications can’t fix.
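
As one hedged example of the user identification piece: if the platform exposes a stable identifier (for an Alexa skill, session.user.userId in the request), both libraries can accept it so events get stitched together per user. The variable names below are illustrative, and "event" refers to the Alexa request object from the handler sketch above.

// Hypothetical: use whatever stable identifier the platform exposes
var voiceUserId = event.session.user.userId;

// (g) Pass it as the client ID; strictCidFormat: false allows non-UUID identifiers
var gUA = ua('UA-XXXX-XX', voiceUserId, { strictCidFormat: false });

// (m) Send it as the distinct_id on each tracked event
mixpanel.track("successful query", { distinct_id: voiceUserId, query: utteranceData });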

Now that your development team is building applications with voice interaction, each of those interactions can be sent to your analytics solution. You’ll be well on your way to understanding the user experience in this new technology!


This post was originally published by me on the Search Discovery Blog here: http://www.searchdiscovery.com/blog/voice-analytics/

One thought on “Voice Analytics: Getting on the Bleeding Edge of Data”

  1. Zach Doty

    Lee- this is an awesome write up, thanks for sharing!

    So I developed a *very* basic Alexa skill in January, and am looking to start integrating analytics into future skills as I make them more interactive.

    I’m not a developer by original trade, so I’m not quite understanding the first bit about the development environment.

    Looking at the NPMJS install documentation, do I need to create a separate file in my zip upload to AWS that has the Universal Analytics async tracking code or SDK code? The parts about including the UA-XXX reference and event tracking make sense in the index.js file, but seems like I need the above to work first. 🙂

    Thanks so much! -Z
