Book Recommendations

I have written a few book reviews over the years.

Here is my piece on what I think is the most important marketing book to read: How Brands Grow.

Here is a piece on the three books I assigned to my Online Marketing & Analytics class at UW MBA.

Today’s post is for those who want more. A lot of the content for the book I am writing (“Good Enough: Why Good is Better than Excellent”) is coming straight out of my head (and my experience). But I also “stand on the shoulders of giants”. What follows are some of the books that influenced significant portions of Good Enough (or sometimes just a paragraph, but it’s still a very good book).

 

Everything is Obvious, Duncan Watts

He’s not as good a storyteller as Malcolm Gladwell, but he makes up for it by being “right”. Watts explores his own research to show that we know a lot less than we think we do.

 

Selfish Reasons to Have More Kids, Bryan Caplan

Parenting choices have much less impact than people generally think. Therefore you should put less effort into parenting than you were going to. If you put less effort in, you will have more fun and less work having kids. Therefore you should have more kids than you were planning to have (at least on the margin). (Also stuff on why higher population is good for the world and why kids today will have better lives than the previous generation)

 

Foolproof, Greg Ip

As you add more safety to a system, people will compensate and it often makes things worse.

 

Fooled by Randomness / Black Swan / Antifragile, Taleb

It pains me to include these given the rudeness of the man, but his books are excellent.

My take: When variability looks low it might just mean that the big swing is going to come in the future. You can’t predict the specifics of the future, but you can be sure that something big will happen which you don’t expect. And there are ways to be ready for it when it does. Too many people are picking up pennies in front of steamrollers.

 

The Upside of Down, Megan McArdle

Failure can be the first step on the way to success. How and why we fail is very important. Has a chapter that basically summarizes Duncan Watts’s book, but lots of new content too.

 

Rationality from AI to Zombies, Eliezer Yudkowsky

By the guy who wrote Harry Potter and the Methods of Rationality (also recommended) and the founder of LessWrong.com. It’s free to read over the web, or available on Kindle for a suggested donation. A LONG series of blog posts on how to be more rational. Everything from cognitive biases to the difference between Bayesian science and traditional science to the “proper” interpretation of quantum mechanics and the risk of generalized AI.

 

Expecting Better, Emily Oster

A data-driven look at pregnancies. For example, the data says one drink a day actually decreases fetal risk (though the effect is not statistically significant).

 

How Not to Be Wrong, Jordan Ellenberg

There are four types of math, arranged in a 2×2 matrix: hard vs. easy and interesting vs. dull. You learn the dull-easy kind in school, and the media throws the easy-interesting kind at you all the time. This book covers the hard-interesting part of math – but he makes it easy to understand.

 

Modern Romance, Aziz Ansari and Dataclysm, Christian Rudder

Dating is hard. But there is a lot more data these days on how it works and how it is changing.

 

Who Gets What And Why, Alvin Roth

By a Nobel Prize winner and real-world practitioner. A book of stories about the design of non-monetary markets. From kidney transplants to lawyer internships, it dives into the less-explored economic markets that don’t involve cash and explains what makes them fail and how to make them better.

 

Doing Good Better, William MacAskill

Why traditional charity fails and how to give away your money effectively.

 

Superforecasting, Philip Tetlock

Remember reading about that experiment which showed that experts were no better than chimps at predicting the future? Turns out that was a huge simplification by the media. The real story is far more nuanced and interesting. Philip Tetlock created and ran the experiment. Here he tells the story of what he learned.

 

Hive Mind, Garett Jones

Why the average IQ of a country is more important to the success of that country than the IQ of an individual is to that individual. An important book, even though some of its conclusions bothered me in a technical sense (issues explained very well by Scott Alexander).

 

SlateStarCodex.com, Scott Alexander

Definitely not a book (but maybe it should be). If Scott writes a new blog post it is the first thing I read in the morning. If he doesn’t, I will often find an old one to read anyway. His takedown of the scientific method will feature prominently in “Good Enough”.

 

How to Fail at Almost Everything and Still Win Big, Scott Adams

Kind of an autobiographical self-help book. Scott Adams succeeded by being “good” at a lot of different things rather than excellent at one thing. He credits that to following “systems” vs having “goals”. The chapter in “Good Enough” on how to apply some of the principles of the book to your personal life references Scott’s theories.

 

Stumbling on Happiness, Daniel Gilbert

SoH was incredibly insightful when it first came out in 2007. The ideas have now spread considerably into the general consciousness (ideas like how experiences make you happier than things, and happiness set-point levels). More recently some of the data has been questioned. I explore a lot of this in the “Good Enough” chapter on diminishing returns.

 

Eat Move Sleep, Tom Rath

Rather than trying to be healthy through exercise or nutrition or sleep alone, Tom argues you need to work on all three at the same time – the three “skills” are synergistic with each other. A good night’s sleep gives you the energy to exercise, which gives you the willpower to eat right. And if you fail to exercise you may have a bad night’s sleep and then eat an unhealthy breakfast. His solution is to build slowly, with a series of small changes across each of the three skills simultaneously, day by day.

 

Malcolm Gladwell and Tim Ferriss

I hesitated before including either of these two because I expect my readers will have either read them or made a conscious choice not to. Neither author’s books had a direct impact on Good Enough, but both had a significant impact on me over the years. I could only dream of matching the storytelling ability of Gladwell or Ferriss’s skill at tapping into the zeitgeist. But in many ways, if you wanted to be generous, the book could be described as Gladwell meets The 4-Hour Workweek. It follows Gladwell’s common structure of a series of chapters attacking the thesis in different ways using an intertwining of stories and data. But it tries to match that with the practicality Ferriss brings to his books.

 

Bonus:

These books have little if anything to do with Good Enough, but I still highly recommend them.

 

Guns, Germs, and Steel // The World Until Yesterday, Jared Diamond

Why Europe succeeded; how tribes are different from empires.

 

Average Is Over // An Economist Gets Lunch, Tyler Cowen

Why and how the world will become more polarized and why you should learn to work with computers; a thinking man’s guide to food.

 

Sapiens: A Brief History of Humankind, Yuval Noah Harari

Like the title says.

 

Economics without Illusions, Joseph Heath

Both the left and the right have it wrong.

 

The Red Queen // The Rational Optimist, Matt Ridley

Keep running to stay in the same place; the world is getting better.

 

The Moral Case for Fossil Fuels, Alex Epstein

Coal and Oil make the world a better place.

 

The Undercover Economist Strikes Back, Tim Harford

How MACROeconomics works.

 

If you have enjoyed any of these books, or think you might, then you might also like my book: Good Enough: Why Good is Better than Excellent. You can sign up to receive chapters as they are written here.

Investment is Easy

Last night a friend told me how his investment advisor called him up to tell him his investments were up 14% year-to-date. That’s pretty amazing given the market as a whole is down 11%. He was understandably very impressed with his advisor. “Even better,” he told me, “I don’t pay any fees to him at all. It’s free!”

Everything he understood about what was happening with his finances was wrong.

His advisor was NOT any better than the other advisors out there.

And his advisor certainly wasn’t free.

The financial advisement industry exists to do three things:

  • Confuse consumers so they feel they need the advisement industry
  • “Hold your customer’s hand” – Make people feel comfortable so they don’t pull their money out of the market when it drops
  • Sell sell sell – build up a book of business of customers who are willing to invest their money and pay their fees

Notice none of those things involve actually making money for their clients.

To understand why, we can look at it in two different ways: analytically and hypothetically.

 

Analytically, the data is very clear that when money managers beat the market it is random. Yes, you can find money managers that beat the market multiple years in a row, but you can’t predict who those advisors will be beforehand. About half of the advisors will beat the market in any given year, but who those advisors are is random. So 25% will beat the market two years in a row, 12.5% three years in a row, and 0.1% will beat the market 10 years in a row. Those one-in-one-thousand that beat the market 10 years in a row might look like financial geniuses, but they are just the lucky ones. If you invest with one of those guys there is still only a 50% chance he beats the market next year (one in two thousand beat the market 11 years in a row).
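
To get a feel for how unremarkable those streaks are, here is a minimal sketch in Python. The numbers are illustrative assumptions, not real market data: it simply assumes every advisor has an independent 50% chance of beating the market in any given year.

```python
import random

# Toy model (an assumption for illustration, not real market data):
# each of 10,000 advisors has an independent 50% chance of beating
# the market in any given year.
random.seed(42)
n_advisors = 10_000
n_years = 10

lucky_streaks = sum(
    all(random.random() < 0.5 for _ in range(n_years))
    for _ in range(n_advisors)
)

print(f"{lucky_streaks} of {n_advisors:,} advisors beat the market "
      f"{n_years} years in a row by pure luck")
# Expect roughly 10 (10,000 * 0.5**10 ~= 9.8), i.e. about 0.1% --
# track records that look like genius but are indistinguishable from chance.
```

Run it a few times with different seeds and the count bounces around, but someone always ends up with a ten-year streak.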

So even if there IS skill in an investment advisor it doesn’t matter. Because the only data you have on who is good and bad doesn’t predict who will be good or bad next year.

(There is actually a small exception to this. The guys at the very bottom of the performance pool – the guys in the bottom 10% in any given year – are more likely to stay in the bottom 10% than chance would predict. But even for those guys it’s not because they are worse at picking stocks. It’s just that they trade more frequently than everyone else, so their performance drops due to transaction fees.)

 

The other way to think about it is hypothetically. If there really was an investment advisor who was able to consistently beat the market well beyond the level of chance – why would he be working with you? The ability to beat the market consistently is an extremely rare skill. Most hedge funds don’t do it. Pension funds and trusts like Harvard’s are doing everything they can to beat the market. If there was someone who could do that, they could take a role with one of those funds in a second and be paid a small share of the billions of dollars those funds invest. Why would that genius be working with you to invest your $100K or $1MM or $10MM?

 

I tried to explain this to my friend but he was still pointing to the +14%. It doesn’t matter what the input is; it is very difficult for humans not to see value, skill and intention in good outputs (and the lack thereof when the output is bad). He also pointed out that he wasn’t paying the 2% in fees I kept throwing around.

The fees are insidious. “Obviously,” I explained to him, “your advisor is not working for free.”

He said he understood that. He knew his advisor was getting kickbacks from the mutual funds he was buying. “But,” he said, “The mutual fund fees would be the same with or without the advisor. So it’s just a cost for the mutual funds, not a cost for me.”

That’s even worse, I told him. It means the advisor’s real incentive isn’t to make you money. The advisor’s incentive is to sign you up for the mutual funds with the largest kickbacks. And those funds are likely also the funds with the largest fees. So he would be paying the same fees if he was investing on his own, but if he was investing smart on his own he would be avoiding those mutual funds like the plague.

 

“So what’s the answer?” He asked me, “What should I be doing with my money?”

 

Here’s my answer to him (and to you). It’s actually very easy, which is why the industry needs to spend so much time making it confusing.

 

  1. Invest in investment vehicles with very low fees. Vanguard index funds are one example. Your costs should be on the order of 0.1% of your investment, not 1% (see the sketch after this list for how much that difference compounds). Also included here: don’t buy and sell stuff if it causes you to pay transaction fees (which it usually does).
  2. Diversify. Spread your money out as widely as you can across as many asset classes and geographies as you can. Ideally you spread your money the same way the global economy is spread (if the US is 10% of the global economy, you should put 10% of your investments in the US). This gets you the best return for any given level of risk.
  3. Find tax-advantaged strategies. Max out your 401K or your IRA or any other vehicle the government gives you (education and health accounts in many states). When you hit the highest tax bracket for any reason, sell losing positions to harvest capital losses and reduce your tax obligation (but hold capital gains to delay required payments).
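
To show why the fee difference in step 1 matters so much, here is a rough sketch with hypothetical numbers (a $100,000 portfolio earning 7% a year before fees, held for 30 years). The exact figures are assumptions, but the compounding effect is the point:

```python
# Hypothetical illustration: how a 1% annual fee compares to a 0.1% fee
# when both compound over a long holding period.
def final_value(principal, gross_return, annual_fee, years):
    value = principal
    for _ in range(years):
        value *= 1 + gross_return - annual_fee  # net return each year
    return value

low_fee = final_value(100_000, 0.07, 0.001, 30)   # index-fund-style fee
high_fee = final_value(100_000, 0.07, 0.01, 30)   # typical managed-fund fee
print(f"0.1% fee: ${low_fee:,.0f}")
print(f"1.0% fee: ${high_fee:,.0f}")
print(f"Cost of the extra 0.9%: ${low_fee - high_fee:,.0f}")
# The seemingly small 0.9% difference ends up costing well over $100,000.
```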

That’s it. You can now guarantee yourself a better return on average than any advisor on the planet. If you are really concerned about your fortitude to do the right thing, pay someone $1000 per year to call you whenever the market drops to remind you not to sell.

 

If even the three steps above seem like too much work (and to be fair, diversifying properly and maximizing tax advantages can sometimes be a little difficult), you can use a service like Wealthfront or Betterment (or WealthSimple for Canadians). They charge very low fees to handle the diversification (and sometimes tax harvesting) for you.

It turns out personal finance is even easier than marketing.

 

The returns made by investment advisors are an example of how easy it is to identify “good enough” but how impossible it is to identify “excellence” in advance. This pattern repeats itself in many, many fields – not just investment advice. It is the thesis of the book I am writing. To read more about “Good Enough” check out this teaser, and then sign up to get weekly emails with new, unique, not-seen-on-the-blog content here.

Interviews vs Performance Reviews in Consulting

I have shifted focus on my book.

When I started this blog my plan was to write a book on analytical and data-driven marketing. I may still finish that book someday. But as I wrote and thought about content I realized I had something broader I wanted to share. My biggest disagreement with the accepted marketing world is that I think it is being made too complicated. People are chasing things like big data and personalization and they are missing the drivers of what really matters. As I explored the subject more I realized this is a broader issue that has its tentacles in more than just marketing.

So I’ve changed my focus.

Instead of a pure marketing book, my plan is to write a book about how people – both in their work life and personal life – are wasting their time chasing “excellence” when they would be much better off trying to achieve “good”. And how doing “good” at scale is hard enough on its own – and much more likely to lead to success than trying to be “awesome”.

As I write the book I am going to share the content first with my email subscribers. If you think you might like the content, and you haven’t already, please do subscribe. You can do that here.

As a teaser, here is the first part of current “Chapter one”:

 

Hanover is less a town than a college campus dropped into the middle of the New Hampshire forest. The isolation is part of the reason students choose Dartmouth over the other Ivy League schools in the Northeast. Social activities revolve around school life more than at most other campuses. Even dating takes on a surreal quality when there are only three off-campus restaurants to choose from.

Winters in New Hampshire hover around twenty degrees Fahrenheit. It is no wonder Dartmouth is known for its fraternity drinking culture. You might drink more too if you were in an isolated village surrounded by mountains in the dead of winter.

But Tariq Malik did not drink. He had other things on his mind. Tariq was studying for his MBA at Dartmouth’s Tuck School of Business and he wanted to be a consultant with McKinsey & Company. Every year business school students are surveyed and asked who their most desired employers would be. The top choice shifts based on recent company performance. Lately Google has been in the top spot. But for as long as the surveys have been running McKinsey has consistently been in the top two.

McKinsey & Company is the pinnacle of professional services firms. When it was founded in 1926 it was the first and only management consulting firm. In 1964, when the first women graduated from Harvard Business School, three of the eight joined McKinsey. In 1970, during a project for the Grocers Product Council, a McKinsey team invented the UPC code. Many companies put high value on their people. I once heard the CEO of Procter & Gamble say that if P&G lost all of its assets and all of its brands, it could rebuild with the people it employed, but if the company lost its entire workforce it would fail. It is unclear how much of that statement is truth vs hyperbole. But, in McKinsey’s case, while it is an exaggeration to say it does not have assets – McKinsey leases property; it operates a knowledge database – it is fair to say that its future success rides almost entirely on the quality of its people.

When I was at McKinsey from 2005-2009 we would charge clients about $500,000 (plus 20% expenses) a month for a team of three people (plus some partner support). Ignoring the partners and assuming a sixty-hour work week, that works out to about $645 per hour per consultant. We also had a philosophy of adding ten times our fees in value created. Those people had better be good.

So it should be no surprise McKinsey spends a great deal of time and effort making sure those people are the best they could possibly be. Part of that is training programs and on the job coaching. Part of it is employee selection: Making sure they hire the right people to begin with.

It is that hiring process that Tariq was preparing for.

There are two parts to the McKinsey interview. The first is the behavioral interview. In that section the candidate is asked about a time they had a specific experience. Each interview will dive deep on a different experience: “Tell me about a time when you had to change someone’s mind”; “Tell me about a time when you took on a leadership role outside your formal responsibilities”; “Tell me about a time when you had to make a difficult decision where neither choice seemed like the right one.” Each interviewer has a (different) standardized list of things they are listening for in the candidate’s story. The idea is to find people who have the right temperament to influence clients in a positive, win-win way.

Most students spend very little time preparing for the behavioral interview. They are too busy stressing about the case interview.

The case interview begins with the interviewer explaining a situation and then asking the student how he or she would go about solving it. The interviewer may provide tables or charts of numbers – sometimes proactively and sometimes only when the student asks for them. It is real-time, verbal problem solving.

Case interviews have often been misunderstood in popular media. Sometimes they are described like brain-teasers (“You have a fox, a chicken and some grain and you need to get it across the river on a boat…”). Other times they are described as surreal estimation problems (In the movie Abandon, Katie Holmes’ character is trying to get a job at McKinsey. The only question they show from her interview is, “Estimate how many paperclips would fit in this room.”). Both demonstrations of case interviews miss the mark.

A better example of a case interview would be something like this:

“You are working for a telecom company in Africa. They are trying to reduce the churn rate of their customers. Before we begin, we need to run a survey to ask people why they have stopped using their last mobile phone plan. We need it to be multiple-choice. How would you go about creating an extensive list of all the possible reasons someone could stop using their mobile phone service (and then create that list for me)?

Part 2: Let’s say the top reason is they are switching to a competitor for price-related reasons. What strategies could you use to prevent that churn?

Part 3: Let’s say we run an SMS campaign targeting users who we think are likely to churn in the near future. We get the following results (hands the candidate a printed spreadsheet). What happened? Do you think the program was successful? If so, how successful, and should we roll it out? If not, what do you think could be done differently?”

Good case interviews often come straight out of actual client work. The candidate is being asked to solve a problem an actual McKinsey team was paid millions of dollars to resolve (albeit the candidate will receive significant hand-holding through the process from someone who knows what the actual answer is).

One could imagine how preparing for case interviews could be stressful.

Even with months of preparation Tariq did not get an offer to be a McKinsey summer associate. Undeterred, he applied for a full time role the next fall and was accepted. But getting a job at McKinsey does not end the challenge. Some would say Tariq’s gauntlet was just beginning.

McKinsey tries to hire the best, but they don’t stop there. After every client “study” the consultants are given a formal evaluation. Twice a year they are put up against everyone of similar tenure in their office and evaluated again. Since consultants have different ‘managers’ on each study, “I just had a bad boss” is a weak complaint. And bosses are being evaluated as well. There are many things to complain about at McKinsey, but not knowing where you stand is not one of them. No one is ever ‘fired’ from McKinsey, but if you are not advancing at the expected rate, you will be “Counselled to Leave” or “CTL”. There are no eight-year consultants at McKinsey – there are only partners and ex-consultants.

Tariq is a partner at McKinsey today. One might say the McKinsey system worked for the “Tariq data-point”. The interviews suggested he would be a good fit and add value to clients, and a decade later he is a partner at the firm helping clients and mentoring new consultants. How do the other data points do?

There are lots of data points to look at. McKinsey hires thousands of business school students every autumn and has been doing so for decades. For every new hire McKinsey knows their quantified interview scores as well as their quantified performance on every study. They know each candidate’s relative strengths and weaknesses. And they know how long they lasted at the firm.

When McKinsey decides whether or not to make an offer, sometimes it is easy. After all the interviews they can turn the candidate’s evaluations into a score out of 100. When a candidate has a score of 90% or even 100% it is uncontroversial that they will get an offer. If they have a score of 10% it is clear they won’t. When a candidate has a score of 50% then there may be a vigorous discussion among the interviewers on whether McKinsey should take a chance on them. The result is that there is a spread of interview scores among new McKinsey hires. Some had stellar interviews and some got in by the skin of their teeth.

That variation is a good thing for those who want to evaluate how well the interviews do at finding strong employees. It’s a relatively simple exercise to run a regression between the interview scores and the performance evaluations those same people receive once they are working consultants. You can be sure an analytical company like McKinsey, which believes the quality of its people is the most important thing for its future success, has done that regression.

For those who haven’t worked at McKinsey (or done statistics), a simple regression just means putting all of your data points on an xy-chart and then drawing the best line you can through the data using some mathematical tools. If all the points line up perfectly on a diagonal line you have perfect correlation in your regression. The further off your line they fall, the less perfect your correlation. If you plot the heights of identical twins on the chart (one twin on the x-axis and one on the y-axis) you will get a near-perfect (but not completely perfect) lining up of points. If you plot the weight of the same twins on a similar chart, the points would still be very close to the line, but not as close as in the height regression. The points on a regression of non-twin same-sex siblings would also be close, but much further off than in the first two regressions.

How close the points are to the best line you can draw is called the “correlation”. Statisticians measure it with something called R2, or R-squared. The higher the R2, the more two data sets are correlated. The temperature over time in Boston and New York is correlated. The temperature over time in Washington DC and Baltimore is even more correlated. The temperature over time in Toronto and Los Angeles is almost not correlated at all (but not completely uncorrelated: since both cities are warmer in the summer and cooler in the winter, you would still see some correlation).
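
If you want to see what that looks like in practice, here is a small sketch with made-up numbers (the scores below are invented for illustration; they are not McKinsey data):

```python
import numpy as np

# Invented example data: interview scores (out of 100) and later job
# evaluations for ten hypothetical hires.
interview_scores = np.array([55, 62, 71, 80, 90, 95, 68, 73, 88, 60])
job_evaluations  = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 3.6, 2.7, 3.2, 3.4])

r = np.corrcoef(interview_scores, job_evaluations)[0, 1]
print(f"correlation r = {r:.2f}, R-squared = {r**2:.2f}")
# An R-squared near zero means the best-fit line explains almost none of
# the variation: knowing the interview score tells you essentially nothing
# about the later evaluation.
```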

One would hope that the extensive interview process McKinsey puts its candidates through would be highly correlated with how well those candidates do on the job. Otherwise, why bother with the time and effort of the challenging interview process? (Make no mistake: The McKinsey interview process is difficult and time consuming for the interviewers as well.)

So what is the correlation between McKinsey interview scores and McKinsey job evaluations?

Zero.

Nothing.

There is no correlation.

There is less correlation between candidates’ results on the McKinsey interview and their performance on the job than there is between the temperature in Toronto and Los Angeles.

It bears repeating: This is the most expensive consulting firm in the world. Their core focus is bringing analytical rigor to their clients. They believe there is nothing more important than hiring the right people and they are willing to dedicate as many resources as it takes to mastering that challenge. And yet if, instead of using their analytically-tested interview scores to predict who will make the best consultants, you just threw a dart at the accepted candidates, you would be just as accurate.

The candidates McKinsey thinks are “slam dunks” are no better than the candidates that barely get over the fence. McKinsey people are some of the smartest people in the world, and yet on this, they are no better than chance.

Why?

And if all this is really true, is McKinsey making a mistake by putting any effort into recruiting at all? If their best hires and their worst hires show no difference in performance, why not just hire people at random and save on all that effort?

The answers to these questions form the meat of this book.

 

Why can’t we tell good from great?

And what should we do about it?