The History of Marketing

Depending on where you draw the line between sales and marketing, one could say marketing has been around forever. In the interests of brevity, we can start the story of marketing in the early 20th century. The first real ‘go’ at marketing was the discovery that repetition of a consistent brand was important. Companies like Coke and P&G figured it out early on and used whatever media existed to get their consistent brands in front of consumers to win share of mind.

People had theories on what worked, but mostly they were guessing. The famous quote (often attributed to John Wanamaker), “I know 50% of my advertising is working, I just don’t know which half,” could easily describe marketing from the very beginning. It was obvious that the consumer goods companies with extensive advertising campaigns were more successful than the ones without, but it was very unclear which specific advertising activities were driving the impact.

The basic problem is that brand advertising impact is spread so thinly. When you see a TV commercial for Coke you don’t jump up and buy a Coke right away. You likely don’t even jump up and drink a Coke right away. But somewhere in the back of your mind you strengthen the neurological connections to the Coke brand, and sometime in the future you may be slightly more influenced to choose Coke over another option. But because that “slight influence” happened “sometime” in the future, it’s practically impossible for the advertiser to figure out that a specific spot pushed you over the edge at a specific time. Even the person being influenced doesn’t know (so surveys are a waste of time).

The obvious impact, combined with the lack of data and the difficulty of any real measurement, created demand for people who could explain what was going on. The best of these ‘experts’ were master storytellers who could spin a yarn. They told stories based on psychological experiments or customer surveys, or sometimes just anecdotes spun into narratives. With nothing better to go on, people took their advice. Maybe marketing got better and maybe it didn’t; it was very hard to tell either way. But it didn’t matter too much, since overall it was working. People were buying what brands were selling. Who cares if it was fully optimized?

By the 1950s the US had Madison Avenue. The storytelling just kept getting better, even if the science did not. “Better” tools followed. Net Promoter Scores (NPS) were ‘scientific’ measures of customer satisfaction. Media mix models used multiple regression to tell you the relative value of different marketing channels. Conjoint Analysis and Max-Diff quantified customer surveys to figure out what customers’ ‘latent’ values really were. Products started being positioned. Blue Oceans were discovered.

The issue was that it was hard, if not impossible, to prove any of this stuff actually improved results. It definitely made people feel better about their decisions – it was SCIENCE – but it was science without proper A/B testing to measure real impact. These new quantitative marketers were just like the ad men of the 1950s – they just had better storytelling tools.

(And the best part is they were generally ‘selling’ to marketers with no quantitative intuition themselves. So it was easy to pull out a black box and ‘reveal’ the final decision.)

Enter Real Science

Within all this hocus-pocus there was one part of marketing that was using real science. It was the least sexy and maybe the least desirable career in marketing: Direct Mail Marketing.

DM Marketing involved sending direct mail to thousands or millions of homes, asking people to mail something back, or call a number, and buy something. The response rates, as you can imagine, were terrible: 3% would be a fantastic campaign. But the beauty was that the costs were relatively low (at least compared to television) and you could measure changes with (almost) absolute certainty.

You could send out ten different versions of a mailer – different colors, different copy, different envelopes – whatever changes you wanted to make – and send them all to the same group of people (randomly divided into ten sub-groups, one for each version). Then you could measure the response rate for each version. If Version A got a 3% response rate and Version B got a 2% response rate, you know the changes you made from B to A got you an extra percentage point. But you don’t need to stop there. You can keep iterating the changes to find out what works and what doesn’t. You can change offers. You can change prices. You can change the gender of the call center person who answers the phone. Anything you want.
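To make that concrete, here is a minimal sketch in Python of how such a test might be scored. Everything in it is hypothetical (the mailing size and response counts are invented); the significance check is a standard two-proportion z-test, one common way to ask whether the gap between two versions is bigger than chance:

```python
import math

# A minimal sketch of scoring a direct-mail test. All numbers are
# hypothetical: 100,000 mailers per version, with made-up response
# counts for versions A and B.
mailed_per_version = 100_000
responses = {"A": 3_010, "B": 2_050}

def two_proportion_z(r1, n1, r2, n2):
    """Two-proportion z-test (normal approximation) on response counts."""
    p1, p2 = r1 / n1, r2 / n2
    pooled = (r1 + r2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

for version, r in responses.items():
    print(f"Version {version}: {r / mailed_per_version:.2%} response rate")

z, p = two_proportion_z(responses["A"], mailed_per_version,
                        responses["B"], mailed_per_version)
print(f"z = {z:.2f}, p = {p:.6f}")  # tiny p: the one-point gap is real
```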

And each time you test a change you learn something. Not the learnings that came from the witch-doctor marketing, but real learnings you can replicate. And when you try to replicate one and it fails, you learn something then too, and you add it to your quiver.

Over time people became experts in DM Marketing. They had run so many tests that they had intuition for what worked and what didn’t, so they didn’t need to test everything anymore (but they could). They were the first real marketing scientists.

Taking DM to the Store

The next big advance in marketing was Loyalty Programs. DM Marketing was great, but it only existed in a dark corner of the marketing world. Even when DM Marketing made it to TV (with infomercials and direct-response advertising) it was still hidden away in weird time slots in the middle of the day or the middle of the night. But Loyalty Programs let marketers use the techniques of DM Marketing with mainstream businesses.

With Loyalty Programs companies could track individual customers over time. Then they could begin to run experiments on the impact of changing things. What happens when you send someone a coupon? Do they buy more? Do they shift brands? Do they just move spending forward and reduce it later?

Before Loyalty Programs, the answers to those questions were just guesswork. Now retailers could run real A/B tests and figure out the actual drivers of improved sales.
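As a sketch of what that analysis can look like (invented numbers, and assuming customers were randomly assigned to coupon and no-coupon groups), the core of it is just a difference in average spend with a confidence interval:

```python
import math
import random
import statistics

random.seed(42)

# A toy coupon experiment on loyalty-card data (all numbers invented).
# Members are randomly split: the treatment group gets a coupon, the
# control group does not; we compare average spend the following month.
control   = [max(0, random.gauss(50, 15)) for _ in range(5_000)]
treatment = [max(0, random.gauss(52, 15)) for _ in range(5_000)]

lift = statistics.mean(treatment) - statistics.mean(control)
se = math.sqrt(statistics.variance(treatment) / len(treatment)
               + statistics.variance(control) / len(control))
print(f"estimated lift: ${lift:.2f} per customer "
      f"(95% CI: ${lift - 1.96 * se:.2f} to ${lift + 1.96 * se:.2f})")
```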

The problem was that people forgot what the purpose of Loyalty Programs was. They got caught up in the name and started to think Loyalty Programs were designed to drive Loyalty. They ignored the data analytics and the tests they could run, and instead just looked at numbers showing Loyalty Members spend more than non-Loyalty Members (it’s called Selection Effect, which I will expand on in another post). If you believe that getting someone to join a loyalty program gets them to spend more, then just getting people to join, on its own, creates value. Turns out that was wrong. If you look at all retailers that have Loyalty Programs and compare them to those that don’t, the ones that don’t have had significantly better ROI and market cap improvement over time. Loyalty Programs on average destroy value.

Except when they don’t.

If you use them correctly – to turn your business into a smart DM-Marketing machine – Loyalty Programs can add a ton of value. It just turns out that most companies don’t use them that way.
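The Selection Effect mentioned above is worth seeing in miniature. In this toy simulation (my illustration, with invented numbers) the program has zero causal effect on spending, yet members still out-spend non-members, which is exactly the comparison that fools people:

```python
import random
import statistics

random.seed(0)

# A toy simulation of Selection Effect: the loyalty program has ZERO
# causal effect on spending, but heavier spenders are more likely to
# join, so members still look better in a naive comparison.
members, non_members = [], []
for _ in range(100_000):
    spend = max(0, random.gauss(50, 20))   # baseline monthly spend
    join_prob = min(1.0, spend / 100)      # big spenders join more often
    (members if random.random() < join_prob else non_members).append(spend)

print(f"member avg spend:     ${statistics.mean(members):.2f}")
print(f"non-member avg spend: ${statistics.mean(non_members):.2f}")
# Members out-spend non-members even though joining changed nothing.
```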

Along Comes the Internet

In the early days it was hard to get people to buy on the internet. But early leaders like the two Jeffs (Bezos at Amazon and Skoll at eBay) worked hard to change that. And then people started buying.

The best thing about the internet:

  • Now every company has DM-Marketing data

The second best thing about the internet:

  • Now DM Mailings are (basically) free.

Now you could run analysis on anything. Now instead of spending $1M to send a million pieces of mail, you could spend $1000 to send a billion emails. The ability to test went through the roof.

It took a while for the tools to do all this testing to be fully developed (Google Analytics didn’t launch until November 2005!), but now they are everywhere. All of a sudden everyone is a Direct Response Marketer whether they know it or not.

In general this is a good thing. A/B testing can teach you an awful lot that the Quantitative BS of the last century couldn’t. And we can measure those A/B tests on all sorts of sub-metrics: impressions, conversion rate, customer flow, multi-session tracking – just about any behavior you can imagine.

All of a sudden data is the easy part.

But with more data comes more responsibility.

We aren’t taking responsibility.

Instead, the data-marketers compare their quantitative methods to the qualitative methods that came before, and since they are convinced that what they are doing is ‘better’, they conclude that what they are doing must be ‘right’.

Obviously we don’t need to figure out what color our website should be, all we have to do is A/B test it.

Obviously we don’t need to think about how to segment our customers, we will just do Big Data Analysis and it will spit out the segmentations we could never see with our naked eye.

Obviously this counter-intuitive fact must be right – it was proven in the data.


There are two problems with all this.

First, there is a difference between proof and Proof. At the standard 95% confidence threshold, a test of a change that does nothing will still come back ‘significant’ about 5% of the time (and a ‘significant’ result is likely not important 50% of the time). When you are running Big-Data-sized batches of tests and trying to backward-infer results, that 5% gets really, really important: run enough tests and you will ‘discover’ effects that don’t exist.
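To see how fast that 5% bites, here is a toy simulation (hypothetical sizes and rates): every test in it is an A/A test comparing two identical groups, so every ‘significant’ result is a false discovery:

```python
import math
import random

random.seed(1)

# The multiple-comparisons problem in miniature: run many A/A tests
# (both groups have the same true conversion rate by construction) and
# count how many come back "significant" at p < 0.05 anyway.
def p_value(conv_a, conv_b, n):
    """Two-sided p-value from a two-proportion z-test."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    z = (conv_a - conv_b) / n / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n, rate, trials = 5_000, 0.03, 500   # hypothetical test sizes
false_positives = sum(
    p_value(sum(random.random() < rate for _ in range(n)),
            sum(random.random() < rate for _ in range(n)), n) < 0.05
    for _ in range(trials)
)
print(f"{false_positives}/{trials} A/A tests came back 'significant'")
# Expect roughly 5% of them: ~25 'discoveries' that do not exist.
```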

Second, just because we can do something doesn’t mean we should. I’m not talking about ethics, I’m talking about impact. Personalization is great, but sometimes (usually) it is better to create a great product than to personalize a crappy one. Data might help you make a better product, but only if you choose to spend your time trying to do that, instead of spending it on the sexy new personalization algorithm.


It’s great that marketing has moved away from qualitative BS. Now we need to resist the urge to deify anything that has math and algorithms behind it. Quantitative BS exists and it’s all around us.