I have shifted focus on my book.
When I started this blog my plan was to write a book on analytical and data-driven marketing. I may still finish that book someday. But as I wrote and thought about content I realized I had something broader I wanted to share. My biggest disagreement with the accepted marketing world is that I think it is being made too complicated. People are chasing things like big data and personalization and missing the drivers of what really matters. As I explored the subject more I realized this is a broader issue that has its tentacles in more than just marketing.
So I’ve changed my focus.
Instead of a pure marketing book, my plan is to write a book about how people – both in their work life and personal life – are wasting their time chasing “excellence” when they would be much better off trying to achieve “good”. And how doing “good” at scale is hard enough on its own – and much more likely to lead to success than trying to be “awesome”.
As I write the book I am going to share the content first with my email subscribers. If you think you might like the content, and you haven’t already, please do subscribe. You can do that here.
As a teaser, here is the first part of current “Chapter one”:
Hanover is less a town than a college campus dropped into the middle of the New Hampshire forest. The isolation is part of the reason students choose Dartmouth over the other Ivy League schools in the Northeast. Social activities revolve around school life more than at most other campuses. Even dating takes on a surreal quality when there are only three off-campus restaurants to choose from.
Winters in New Hampshire hover around twenty degrees Fahrenheit. It is no wonder Dartmouth is known for its fraternity drinking culture. You might drink more too if you were in an isolated village surrounded by mountains in the dead of winter.
But Tariq Malik did not drink. He had other things on his mind. Tariq was studying for his MBA at Dartmouth’s Tuck School of Business and he wanted to be a consultant with McKinsey & Company. Every year business school students are surveyed about their most desired employers. The top choice shifts based on recent company performance. Lately Google has been in the top spot. But for as long as the surveys have been running, McKinsey has consistently been in the top two.
McKinsey & Company is the pinnacle of professional services firms. When it was founded in 1926 it was the first and only management consulting firm. In 1964, when the first women graduated from Harvard Business School, three of the eight joined McKinsey. In 1970, during a project for the Grocers Product Council, a McKinsey team invented the UPC code. Many companies put high value on their people. I once heard the CEO of Procter & Gamble say that if P&G lost all of its assets and all of its brands, it could rebuild with the people it employed, but if the company lost its entire workforce it would fail. It is unclear how much of that statement is truth vs hyperbole. But, in McKinsey’s case, while it is an exaggeration to say it does not have assets – McKinsey leases property; it operates a knowledge database – it is fair to say that its future success rides almost entirely on the quality of its people.
When I was at McKinsey from 2005-2009 we would charge clients about $500,000 (plus 20% expenses) a month for a team of three people (plus some partner support). Ignoring the partners and assuming a sixty-hour work week, that works out to roughly $645 per hour per consultant. We also had a philosophy of adding ten times our fees in value created. Those people had better be good.
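For the curious, that hourly figure can be sanity-checked in a few lines of Python. The weeks-per-month convention below is my assumption, not the firm’s; depending on how you count weeks you land within a few dollars of the figure above.

```python
# Back-of-the-envelope check of the per-consultant hourly rate.
# Inputs from the text: $500,000/month for a team of three,
# sixty-hour work weeks, partners excluded.
monthly_fee = 500_000          # dollars, team of three
consultants = 3
hours_per_week = 60
weeks_per_month = 52 / 12      # my assumption: ~4.33 weeks/month

hours_per_month = hours_per_week * weeks_per_month   # ~260 hours
rate = monthly_fee / consultants / hours_per_month   # dollars/hour

print(f"~${rate:,.0f} per consultant-hour")
```

Varying the weeks-per-month assumption between 4.3 and 4.33 moves the answer only a few dollars either way.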
So it should be no surprise McKinsey spends a great deal of time and effort making sure those people are the best they can possibly be. Part of that is training programs and on-the-job coaching. Part of it is employee selection: making sure they hire the right people to begin with.
It is that hiring process that Tariq was preparing for.
There are two parts to the McKinsey interview. The first is the behavioral interview. In that section the candidate is asked about a time they had a specific experience. Each interview will dive deep on a different experience: “Tell me about a time when you had to change someone’s mind”; “Tell me about a time when you took on a leadership role outside your formal responsibilities”; “Tell me about a time when you had to make a difficult decision where neither choice seemed like the right one.” Each interviewer has a (different) standardized list of things they are listening for in the candidate’s story. The idea is to find people who have the right temperament to influence clients in a positive win-win way.
Most students spend very little time preparing for the behavioral interview. They are too busy stressing about the case interview.
The case interview begins with the interviewer explaining a situation and then asking the student how he or she would go about solving it. The interviewer may provide tables or charts of numbers – sometimes proactively and sometimes only when the student asks for them. It is real-time, verbal problem solving.
Case interviews have often been misunderstood in popular media. Sometimes they are described like brain-teasers (“You have a fox, a chicken and some grain and you need to get them across the river on a boat…”). Other times they are described as surreal estimation problems (in the movie Abandon, Katie Holmes’ character is trying to get a job at McKinsey. The only question they show from her interview is, “Estimate how many paperclips would fit in this room.”). Both depictions of case interviews miss the mark.
A better example of a case interview would be something like this:
“You are working for a telecom company in Africa. They are trying to reduce the churn rate of their customers. Before we begin, we need to run a survey to ask people why they have stopped using their last mobile phone plan. We need it to be multiple-choice. How would you go about creating an extensive list of all the possible reasons someone could stop using their mobile phone service (and then create that list for me)?
Part 2: Let’s say the top reason is they are switching to a competitor for price-related reasons. What strategies could you use to prevent that churn?
Part 3: Let’s say we run an SMS campaign targeting users who we think are likely to churn in the near future. We get the following results (hands the candidate a printed spreadsheet). What happened? Do you think the program was successful? If so, how successful and should we roll it out? If not, what do you think could be done differently?”
Good case interviews often come straight out of actual client work. The candidate is being asked to solve a problem an actual McKinsey team was paid millions of dollars to resolve (albeit the candidate will receive significant hand-holding through the process with someone who knows what the actual answer is).
One could imagine how preparing for case interviews could be stressful.
Even with months of preparation, Tariq did not get an offer to be a McKinsey summer associate. Undeterred, he applied for a full-time role the next fall and was accepted. But getting a job at McKinsey does not end the challenge. Some would say Tariq’s gauntlet was just beginning.
McKinsey tries to hire the best, but they don’t stop there. After every client “study” the consultants are given a formal evaluation. Twice a year they are put up against everyone of similar tenure in their office and evaluated again. Since consultants have different ‘managers’ on each study, “I just had a bad boss” is a weak complaint. And bosses are being evaluated as well. There are many things to complain about at McKinsey, but not knowing where you stand is not one of them. No one is ever ‘fired’ from McKinsey, but if you are not advancing at the expected rate, you will be “Counselled to Leave” or “CTL”. There are no eight-year consultants at McKinsey – there are only partners and ex-consultants.
Tariq is a partner at McKinsey today. One might say the McKinsey system worked for the “Tariq data-point”. The interviews suggested he would be a good fit and add value to clients, and a decade later he is a partner at the firm helping clients and mentoring new consultants. How do the other data points do?
There are lots of data points to look at. McKinsey hires thousands of business school students every autumn and has been doing so for decades. For every new hire McKinsey knows their quantified interview scores as well as their quantified performance on every study. They know each candidate’s relative strengths and weaknesses. And they know how long each one lasted at the firm.
When McKinsey decides whether or not to make an offer, sometimes it is easy. After all the interviews they can turn the candidate’s evaluations into a score out of 100. When a candidate has a score of 90% or even 100% it is uncontroversial that they will get an offer. If they have a score of 10% it is clear they won’t. When a candidate has a score of 50% then there may be a vigorous discussion among the interviewers on whether McKinsey should take a chance on them. The result is that there is a spread of interview scores among new McKinsey hires. Some had stellar interviews and some got in by the skin of their teeth.
That variation is a good thing for those who want to evaluate how well the interviews do at finding strong employees. It’s a relatively simple activity to run a regression of the interview scores against the performance evaluations those same people receive once they are working consultants. You can be sure an analytical company like McKinsey, one that believes the quality of its people is the most important thing for its future success, has done that regression.
For those who haven’t worked at McKinsey (or done statistics), a simple regression just means putting all of your data points on an xy-chart and then drawing the best line you can through the data using some mathematical tools. If all the points line up perfectly on a diagonal line you have perfect correlation in your regression. The further from the line they fall, the weaker your correlation. If you plot the heights of identical twins on the chart (one twin on the x-axis and one on the y-axis) you will get a near perfect (but not completely perfect) lining up of points. If you plot the weights of the same twins on a similar chart the points would still be very close to the line, but not as close as in the height regression. The points on a regression of non-twin same-sex siblings would also be close, but much further off than in the first two regressions.
How close the points fall to the best line you can draw is called the “correlation”. Statisticians measure the correlation with something called R2 or R-squared. The higher the R2 the more two data sets are correlated. The temperature over time between Boston and New York is correlated. The temperature over time between Washington DC and Baltimore is even more correlated. The temperature over time between Toronto and Los Angeles is almost not correlated at all (though not completely uncorrelated: since both cities are warmer in summer and cooler in winter, you would still see some correlation).
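For readers who want to see R-squared in action, here is a minimal sketch in Python in the spirit of the twin-height example. The data are simulated with made-up numbers, not real twin measurements:

```python
import math
import random

random.seed(0)  # make the simulated data reproducible

def pearson(a, b):
    """Pearson correlation between two equal-length samples."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

# "Twin heights": two measurements that track each other closely.
twin_a = [random.gauss(170, 8) for _ in range(500)]          # cm
twin_b = [h + random.gauss(0, 2) for h in twin_a]            # nearly identical

# Two unrelated series: no systematic relationship at all.
x = [random.gauss(0, 1) for _ in range(500)]
y = [random.gauss(0, 1) for _ in range(500)]

print(f"twins     R^2 = {pearson(twin_a, twin_b) ** 2:.2f}")  # close to 1
print(f"unrelated R^2 = {pearson(x, y) ** 2:.2f}")            # close to 0
```

The twin series produces an R-squared near one (points hugging the diagonal line); the unrelated series produces an R-squared near zero (a shapeless cloud).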
One would hope that the extensive interview process McKinsey puts its candidates through would be highly correlated with how well those candidates do on the job. Otherwise, why bother with the time and effort of the challenging interview process? (Make no mistake: The McKinsey interview process is difficult and time consuming for the interviewers as well.)
So what is the correlation between McKinsey interview scores and McKinsey job evaluations?
There is no correlation.
There is less correlation between candidates’ results on the McKinsey interview and their performance on the job than there is between the temperature in Toronto and Los Angeles.
It bears repeating: This is the most expensive consulting firm in the world. Their core focus is bringing analytical rigor to their clients. They believe there is nothing more important than hiring the right people and they are willing to dedicate as many resources as it takes to mastering that challenge. And yet if, instead of using their analytically-tested interview scores to predict who will make the best consultants, you just threw a dart at accepted candidates, you would be just as accurate.
The candidates McKinsey thinks are “slam dunks” are no better than the candidates that barely get over the fence. McKinsey people are some of the smartest people in the world, and yet on this, they are no better than chance.
And if all this is really true, is McKinsey making a mistake by putting any effort into recruiting at all? If their best hires and their worst hires show no difference in performance, why not just hire people at random and save all that effort?
The answers to these questions form the meat of this book.
Why can’t we tell good from great?
And what should we do about it?