Getting The Most Out Of User Testing

Posted by IAB Australia on June 29, 2014

You’re excited. It’s your user-testing day and your carefully recruited customers are coming in to validate your new product. It’s going to be an amazingly insightful day. What could possibly go wrong? You’ve worked closely with your user-experience (UX) guru to develop a script covering all the questions you want to ask (not too leading, of course) as well as all the tasks you want your test group to complete.

Your UX guru has painstakingly pulled together a prototype to use in the session and you’ve made that last-minute dash to the nearest shop to load up on the sweets your customers will eat to keep their energy levels up as they answer each question you put to them.

You go through the UX testing session with your customer (or someone who fits a customer persona), or better yet, four customers, each fitting a different target audience, segment or persona, and you compile enough notes to fill an entire wall. You’ve generated an amazing amount of insight, you’ve validated your hypotheses, and you can’t wait to take the prototype into production based on the feedback from the user testing sessions.

So you wait a couple of weeks or months as you watch your product come to life. You hit the big “Go Live” button and, lo and behold, your new idea gets almost no traction. You scratch your head in sheer wonder. How did this happen? I tested this idea. “We had four customers come into the office and validate our thinking,” you explain to your manager when they ask why all your key metrics are going backwards.

Sound familiar? I wouldn’t be surprised if it does. This is an all too common outcome in the digital industry and one that I have personally experienced several times over the past decade or so. I am still surprised by how often I hear of people validating ideas today with a very small subset of their user base, doing so with a prototype developed in a tool such as Axure, and then taking the idea, hypothesis or product into a production environment based solely on the in-house user testing of a few people.

I’ve come to the personal belief that the true value of user testing is derived when you validate a new idea or product in a production environment, ideally at some sort of scale. Importantly, you should use the data gleaned from actual usage and engagement to determine success. Ultimately, people often behave very differently from what they say, and I have come to trust the actions they take when using a live product in a real-world scenario over anything they might say or do in a user testing session.

“Yes, I’d pay $50 for that,” you hear from all four of your customers during your user testing. Yet your sales conversion when you go live is not what you expected.

So how do you overcome this? My current approach relies on an internal framework we have built that enables us to switch on new features for a select group of users: an individual, a group, new members, people in a certain location, people on a certain platform. You get the picture. The key point is that we are using real customers in a real situation and taking the learnings from that to validate our ideas. The trick is getting something into a production environment as quickly as possible, then using what you learn to improve the offering. Rinse and repeat. There is, in my experience, nothing more valuable. It’s something we have been doing for a while now, a method I have come to rely on and, consequently, one I place significant trust in.
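
To make that targeting idea concrete, here is a minimal sketch of what a feature-flag check along those lines could look like. It is an illustrative assumption, not the internal framework described above, and every name in it (User, FeatureFlag, isEnabled and the targeting fields) is hypothetical.

```typescript
// Minimal feature-flag sketch (illustrative only). A flag can target
// specific individuals, new members, locations or platforms; everyone
// else keeps seeing the existing experience.

interface User {
  id: string;
  signedUpAt: Date;
  location: string;   // e.g. "AU"
  platform: string;   // e.g. "ios", "android", "web"
}

interface FeatureFlag {
  name: string;
  userIds?: string[];        // switch on for specific individuals
  newMembersOnly?: boolean;  // switch on for recently joined users
  locations?: string[];      // switch on for users in certain locations
  platforms?: string[];      // switch on for users on certain platforms
}

function isEnabled(flag: FeatureFlag, user: User, now = new Date()): boolean {
  if (flag.userIds && !flag.userIds.includes(user.id)) return false;
  if (flag.newMembersOnly) {
    const thirtyDaysMs = 30 * 24 * 60 * 60 * 1000;
    if (now.getTime() - user.signedUpAt.getTime() > thirtyDaysMs) return false;
  }
  if (flag.locations && !flag.locations.includes(user.location)) return false;
  if (flag.platforms && !flag.platforms.includes(user.platform)) return false;
  return true;
}

// Usage: expose a new checkout only to new Australian members on iOS,
// then judge it by what they actually do with it in production.
const newCheckout: FeatureFlag = {
  name: "new-checkout",
  newMembersOnly: true,
  locations: ["AU"],
  platforms: ["ios"],
};

const user: User = {
  id: "u-123",
  signedUpAt: new Date("2014-06-01"),
  location: "AU",
  platform: "ios",
};

if (isEnabled(newCheckout, user, new Date("2014-06-29"))) {
  // render the new experience and record engagement events
}
```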

There are other options too, though, if you do not have the internal resources to build a framework like ours. Android has provided an incredibly powerful public beta testing platform that, when used, will undoubtedly provide invaluable insights into new product ideas. And with Apple’s very recent announcement of a similar offering through TestFlight in the latest version of iOS, there is now little excuse not to validate new ideas with a public audience. Importantly, these platforms let you control how wide a net to cast with your user testing audience.

Tools like Optimizely also provide a powerful platform for testing new products with a controlled segment of your audience. They also allow variations to be tested against one another to validate the best solution. And with the ability to integrate with analytics tools such as Google Analytics, SiteCatalyst and many others, there really is very little holding anybody back from generating live data to inform product and development decisions.
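
To illustrate the underlying mechanic, rather than any particular tool’s API, here is a hedged sketch of deterministic variant assignment with the result sent to an analytics event. The assignVariant and trackEvent helpers and the experiment name are made-up stand-ins for whatever your platform of choice provides.

```typescript
// Deterministic variant assignment for a simple A/B test (illustrative
// sketch only; tools like Optimizely handle this, plus targeting and
// reporting, for you).

const VARIANTS = ["control", "new-pricing-page"] as const;
type Variant = (typeof VARIANTS)[number];

// Hash the user id with the experiment name so the same user always
// lands in the same variant.
function assignVariant(userId: string, experiment: string): Variant {
  const key = `${experiment}:${userId}`;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0; // unsigned 32-bit hash
  }
  return VARIANTS[hash % VARIANTS.length];
}

// Stand-in for a real analytics call (Google Analytics, SiteCatalyst, etc.).
function trackEvent(name: string, payload: Record<string, string>): void {
  console.log(name, payload);
}

// Usage: record which variant the user saw, then later compare
// conversion between the two groups using live production data.
const variant = assignVariant("u-123", "pricing-page-2014-06");
trackEvent("experiment_viewed", {
  experiment: "pricing-page-2014-06",
  variant,
});
```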

If you do not currently make use of any of the above tools, I would strongly encourage you to do so, as soon as possible.

There are also other “lean” methods you can employ to validate a hypothesis at some scale. Sites such as usertesting.com provide a way to test your confidence in an idea; they are often useful and likely more accurate than the traditional in-house user-testing approach.

It goes without saying that there is value in interviews for deriving qualitative feedback, but it often ends there; they rarely move you further across the value spectrum into the quantitative realm. It is often through the interview and qualitative process, though, that you gain solid insight that helps you understand your key metrics further.

Shifting to data and analytics gleaned from your actual customer base in a production environment, rather than the feedback captured in a focus group or one-on-one user testing session, will drive far more successful engagement with your audience and, ultimately, greater success.

IAB Australia

IAB Australia is the peak trade association for online advertising in Australia. As one of over 43 IAB offices globally and with a rapidly growing membership, the role of the IAB is to support sustainable and diverse investment in digital advertising across all platforms in Australia.
