In 2017, the World Economic Forum reported that the world produces 2.5 quintillion bytes of data every day. If this data were stored on 1 terabyte hard disks, and those disks were stacked on their sides like books in a bookcase, the stack would measure 54 kilometres – twice the length of Manhattan Island in New York. That’s just the data for one day; measured over a year, our bookcase of hard disks would wrap around the equator one and a half times.
Much of the world’s data is generated online, which has the advantage of being easy to collect. However, as the same report states, before data can be used it needs to be turned into information by being organised and processed in the right context. How to organise large collections of data is an important and much-discussed topic, with few clearly generalisable answers.
So, how do you get data analytics right? Major universities aim to train analytics experts by offering higher degrees in the field, but even with a team of amazing analysts, there’s still challenging software engineering required to design and build a data processing pipeline. For these reasons, our clients often turn to us to help efficiently turn their data into valuable business insights.
Enter user experiments
One straightforward way to turn data into information is by running user experiments – for example using data from subsets of a website’s users to generate insights into the best choices for product design or features.
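To make this concrete, here’s a minimal sketch (in Python, with hypothetical names) of one common way to split users into experiment groups: hash a stable user ID, salted with the experiment name, so each user always sees the same variant and different experiments split users independently.

```python
# A sketch of deterministic group assignment, assuming each user has a
# stable ID. The experiment name salts the hash so different experiments
# bucket users independently. All names here are illustrative.
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-12345", "buy-button-colour"))  # same answer every call
```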
By far the most common user experiment type is the A/B test, where some of your product’s users get approach A (say, a red “buy now” button on catalogue pages) and a similarly sized group receive approach B (say, a green “buy now” button on the same pages). Once the experiment has run long enough to collect the right amount of data, the data from group A’s users is compared with the data from group B’s (for example, whether users purchase an item after viewing it, or what the total revenue in each group was).
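Once the experiment ends, the comparison itself can be as simple as a two-proportion z-test on each group’s conversion rate. Below is a minimal sketch using only the Python standard library – the visitor and purchase counts are made-up illustration values, not real results.

```python
# Two-sided z-test for a difference between two conversion rates.
# All counts below are invented for illustration.
from statistics import NormalDist

def two_proportion_z_test(conv_a, size_a, conv_b, size_b):
    p_a, p_b = conv_a / size_a, conv_b / size_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    pooled = (conv_a + conv_b) / (size_a + size_b)
    std_err = (pooled * (1 - pooled) * (1 / size_a + 1 / size_b)) ** 0.5
    z = (p_b - p_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, p_value

rate_a, rate_b, p = two_proportion_z_test(420, 10_000, 480, 10_000)
print(f"A: {rate_a:.1%}, B: {rate_b:.1%}, p-value: {p:.3f}")
```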
Much has been written about how to design and conduct effective experiments. Kaiser Fung, a well known analytics expert, recommends teams invest at least 50% of their time in the experiment design phase (you can watch his excellent primer on A/B testing here, or read a summary here).
Additionally, conducting well-designed user experiments can be a solid complement to hypothesis-driven development, which we blogged about back in February. A key advantage of a hypothesis-driven culture is that it allows you to make many small improvements with ease. These improvements can stack up to many millions of dollars of extra revenue.
How much science to do?
So, we know that well-designed experiments are important. But how do we design experiments well? An easy place to look for advice is the scientific world. However, the priorities of the scientific world are different to those of the business world. Let’s imagine a sliding scale between basing decisions on hard science and basing them on gut feel:
At the hard science end of the scale, we’d be looking for strong scientific rigour aiming to prove that approach B is better than approach A. This kind of rigour implies a value system that is probably not appropriate for business – we don’t need a comprehensive investigation proving how and why approach B is better. Instead, we prefer some general indication that approach B is better, and some assurance that it’s not worse than A (or whatever we are currently doing).
At the other end of the scale is gut feel, where no data is analysed (or data is only analysed in general terms). Here, approaches A and B are differentiated based on which one the people involved believe is best. Although this sounds bad, it is completely legitimate to make decisions based on gut feel – it’s very likely that a business’s employees were hired for their good instincts (at least in part). However, in today’s data-driven world, acting on gut feel alone is not considered principled enough. We can do better by running experiments.
In our experience, a common compromise is to sit in the middle of the scale by doing a lot of science poorly. Our view is that it’s better to sit in the middle by doing a little science well. When we compare the two approaches, we don’t need hard proof, but we do want to make sure we aren’t being misled by running a poorly designed experiment.
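What might “a little science well” look like in practice? One concrete piece is deciding before the experiment how many users each group needs to detect the effect you care about, rather than stopping whenever the numbers look good. Here’s a sketch of the standard sample-size approximation for comparing two proportions – the baseline and target conversion rates are assumptions chosen for illustration.

```python
# Approximate users needed per group to detect a given conversion lift.
# Baseline/target rates, alpha, and power are illustrative assumptions.
from statistics import NormalDist

def sample_size_per_group(p_baseline, p_target, alpha=0.05, power=0.8):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
    return (z_alpha + z_power) ** 2 * variance / (p_baseline - p_target) ** 2

# e.g. to detect a lift from a 4% to a 5% conversion rate:
print(f"{sample_size_per_group(0.04, 0.05):.0f} users per group")
```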
Wrapping up
Running experiments is one of many ways to turn your user data into information. When that information is used to drive business decisions, it can generate high value – and if the experiments are backed by a high-quality data processing and experimentation framework, this can be achieved with rapid turnaround.
While data-driven experiments are a great fit for incremental improvements, it’s important to remember that they’re a poor fit for substantial changes. As Facebook’s Julie Zhuo points out in an excellent blog post, A/B testing incremental design improvements would never have produced the iPhone.
So, how do you do experiments well? There is a wealth of advice out there – here’s Julie Zhuo’s advice post again, and a similar post from Microsoft.
Even though there’s no single best way to run experiments, even a little bit of research or expert advice can help you run well-designed experiments.
I’ll be writing more on this topic in the coming weeks, so check back for more.