I love the expression Cult of Analytics! Two reasons. First, it's a great name for a book. The book in question is by Steve Jackson and is part of the Elsevier E-marketing Essentials series, for which I'm the editor.
In a future post, I'll explain the techniques I've found useful from the book, in particular the REAN framework, which I have successfully applied in some recent consulting projects.
Second, it highlights the difference between companies that are successful in their digital marketing and those that lag behind. This post shows how Amazon has applied the principle of developing a "Cult of Analytics" to drive its success.
Amazon's Culture of Metrics
The expression "Cult of Analytics" points to one of the reasons behind Amazon's success: the culture that its CEO Jeff Bezos has instilled, almost from Day 1.
In Amazonia: Five Years at the Epicenter of the Dot.Com Juggernaut, an excellent book charting Amazon's early growth from an employee's perspective, Marcus (2004) describes an occasion at a corporate 'boot-camp' in January 1997 when Amazon CEO Jeff Bezos 'saw the light'. 'At Amazon, we will have a Culture of Metrics', he said while addressing his senior staff.
Bezos went on to explain how web-based business gave Amazon an 'amazing window into human behavior'. Marcus says:
Gone were the fuzzy approximations of focus groups, the anecdotal fudging and smoke blowing from the marketing department. A company like Amazon could (and did) record every move a visitor made, every last click and twitch of the mouse.
As the data piled up into virtual heaps, hummocks and mountain ranges, you could draw all sorts of conclusions about their chimerical nature, the consumer. In this sense, Amazon was not merely a store, but an immense repository of facts. All we needed were the right equations to plug into them.
James Marcus then goes on to give a fascinating insight into a breakout group discussion of how Amazon could better use measures to improve its performance. Marcus was in the Bezos group, brainstorming customer-centric metrics. Marcus (2004) summarises the dialogue, led by Bezos:
'First, we figure out which things we'd like to measure on the site', he said. 'For example, let's say we want a metric for customer enjoyment. How could we calculate that?'
There was silence. Then somebody ventured: 'How much time each customer spends on the site?'
'Not specific enough', Jeff said.
'How about the average number of minutes each customer spends on the site per session', someone else suggested. 'If that goes up, they're having a blast.'
'But how do we factor in purchase?' I [Marcus] said feeling proud of myself. 'Is that a measure of enjoyment?'
'I think we need to consider frequency of visits, too', said a dark-haired woman I didn't recognise. 'Lot of folks are still accessing the web with those creepy-crawly modems. Four short visits from them might be just as good as one visit from a guy with a T-1. Maybe better.'
'Good point', Jeff said. 'And anyway, enjoyment is just the start. In the end, we should be measuring customer ecstasy.'
It's interesting that Amazon was already debating the elements of RFM (recency, frequency, monetary value) analysis in 1997, having achieved $16 million of revenue in the previous year.
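For readers who haven't met it, RFM scores each customer on recency (how recently they bought), frequency (how often) and monetary value (how much they spent). As a minimal sketch, assuming a simple order log with hypothetical column names, the scores could be derived like this in pandas:

```python
import pandas as pd

# Hypothetical order log: one row per order (invented column names).
orders = pd.DataFrame({
    "customer_id": ["a", "a", "b", "c", "c", "c"],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-03-01", "2024-02-10",
        "2024-01-20", "2024-02-15", "2024-03-10",
    ]),
    "revenue": [20.0, 35.0, 15.0, 50.0, 10.0, 25.0],
})

# Reference date for recency: the day after the last order in the log.
snapshot = orders["order_date"].max() + pd.Timedelta(days=1)

# Recency = days since last order, Frequency = number of orders,
# Monetary = total spend, all per customer.
rfm = orders.groupby("customer_id").agg(
    recency=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("revenue", "sum"),
)

# Bucket each dimension into high/low halves (real analyses usually use
# quintiles; two bins keep the toy data readable). Low recency is good,
# so its labels run in reverse.
rfm["R"] = pd.qcut(rfm["recency"], 2, labels=[2, 1]).astype(int)
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 2, labels=[1, 2]).astype(int)
rfm["M"] = pd.qcut(rfm["monetary"], 2, labels=[1, 2]).astype(int)

print(rfm)
```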
Amazon's Creator Metrics
Later, Amazon developed internal tools to support this 'Culture of Metrics'. Marcus (2004) describes how the 'Creator Metrics' tool shows content creators how well their product listings and product copy are working. For each content editor such as Marcus, it retrieves all recently posted documents including articles, interviews, booklists and features.
For each one it then gives a conversion rate to sale plus the number of page views, adds (added to basket) and repels (content requested, but the back button then used).
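Marcus doesn't describe the tool's internals, but metrics like these fall straight out of a page-level event log. Here's a hedged sketch with invented event names (not anything Amazon actually used) of how the per-document figures could be computed:

```python
from collections import Counter

# Hypothetical event log for one content document: (session_id, event),
# where event is "view", "add_to_basket", "purchase" or "bounce_back".
events = [
    ("s1", "view"), ("s1", "add_to_basket"), ("s1", "purchase"),
    ("s2", "view"), ("s2", "bounce_back"),
    ("s3", "view"), ("s3", "add_to_basket"),
]

counts = Counter(event for _, event in events)
page_views = counts["view"]
adds = counts["add_to_basket"]      # 'adds': item put in the basket
repels = counts["bounce_back"]      # 'repels': viewed, then backed out
conversion_rate = counts["purchase"] / page_views if page_views else 0.0

print(f"views={page_views} adds={adds} repels={repels} "
      f"conversion={conversion_rate:.1%}")
```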
In time, the work of editorial reviewers such as Marcus was marginalised, since Amazon found that the majority of visitors used the search tools rather than reading editorial content, and that they responded to personalised recommendations as the matching technology improved (Marcus likens early recommendation techniques to "going shopping with the village idiot").
I wonder how companies today provide their product and content owners with this type of insight (or the training to access it from their web analytics system).
It struck me recently that the newish PostRank tool provides an equivalent based on social media engagement.
A/B and multivariate testing at Amazon
Matt Round, speaking at E-metrics 2004 when he was director of personalisation at Amazon, gave a different slant. He described the philosophy as "Data Trumps Intuition" and explained how Amazon used to have a lot of arguments about which content and promotion should go on the all-important home page or category pages. Every category VP wanted top-centre, and the Friday meetings about placements for the next week were getting 'too long, too loud, and lacked performance data'.
But today "automation replaces intuition" and real-time experimentation tests are always run to answer these questions, since actual consumer behaviour is the best way to decide upon tactics.
Round also noted that Amazon has a culture of experiments, of which A/B tests are key components. Examples of where A/B tests are used include new home page designs, moving features around the page, different algorithms for recommendations and changing search relevance rankings. These involve testing a new treatment against a previous control for a limited time of a few days or a week.
The system randomly shows one or more treatments to visitors and measures a range of parameters such as units sold and revenue, by category and in total, plus session time and session length. A new feature is usually launched if the desired metrics are statistically significantly better. Statistical tests are a challenge, though, as the distributions are not normal (they have a large mass at zero, for example, from sessions with no purchase). There are other challenges: since multiple A/B tests run every day, tests may overlap and so conflict. There are also longer-term effects, where some features are 'cool' for the first two weeks, and the opposite effect, where changing navigation may degrade performance temporarily. Amazon also finds that as its users gain experience online, the way they act changes, so Amazon has to constantly test and evolve its features.
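To make the zero-inflation point concrete: most sessions end without a purchase, so per-session revenue has a large spike at zero and a textbook t-test fits it poorly. One common workaround, shown here on simulated data with invented parameters (this is my illustration, not Amazon's actual method), is to bootstrap a confidence interval for the lift:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sessions(n, buy_rate, mean_spend):
    """Per-session revenue: mostly zeros, occasional purchases."""
    bought = rng.random(n) < buy_rate
    return np.where(bought, rng.exponential(mean_spend, n), 0.0)

# Invented traffic volumes and conversion rates, purely illustrative.
control = simulate_sessions(10_000, buy_rate=0.030, mean_spend=40.0)
treatment = simulate_sessions(10_000, buy_rate=0.033, mean_spend=40.0)

observed_lift = treatment.mean() - control.mean()

# Bootstrap the difference in mean revenue per session.
n_boot = 5_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    diffs[i] = (rng.choice(treatment, treatment.size).mean()
                - rng.choice(control, control.size).mean())

ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"lift = {observed_lift:.4f}, 95% CI = ({ci_low:.4f}, {ci_high:.4f})")
# Ship the treatment only if the interval sits clearly above zero.
```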
These notes on Amazon's approach are taken from my Amazon case study.