Sunday, 6 September 2015

Using Behavioral Economics to Keep Resolutions

In 2010, David Cameron set up the “Behavioural Insights Team” (nicknamed the “Nudge Unit”) to use behavioural economics to “nudge” individuals into making better decisions for themselves and society, e.g. saving more, registering as organ donors, or giving to charity. It uses randomised controlled trials (RCTs) to find the method that works best – similar to clinical trials in medicine. The establishment of BIT was inspired by Cameron reading the book “Nudge” by Richard Thaler and Cass Sunstein, and Thaler (one of the pioneers of behavioural economics) was heavily involved. BIT held its annual “Behavioural Exchange” conference in Westminster in September 2015, and I was privileged to give a shortened version of my TEDx talk showing that social responsibility improves profit, in contrast to the conventional view that it comes at the expense of profit. But, the greater privilege was the chance to learn from the leading behavioural thinkers in the world, e.g. Thaler, Daniel Kahneman (Nobel Laureate), Dan Ariely (author of Predictably Irrational), and Hal Varian (Chief Economist of Google). Over the next few blog posts I will share the leading insights from the conference.

Today I’ll start with what I thought to be the single most powerful idea, which I immediately started using in my own life. It’s from Dan Ariely, one of my favorite behavioral thinkers and the giver of some of my very favorite TED talks (here is a blog post on my top 10 TED talks). It was in a session entitled “Nudging for International Development” – but the idea turned out to be one that you can apply just as much to individual habits as to world economic problems.

Here’s the experiment he ran. He used an RCT to study the best way to persuade adults in a developing country to save money. Here are the different treatments he used:

1) A text message saying “Please try to save 100 shillings by the end of the week”

2) A text message signed by their kids saying “Hey Mom/Dad, please try to save 100 shillings by the end of the week”. (The text still came from the experimenters – the parents knew that their kids didn’t have cellphones – but signing it with their kids’ names prompted them to think of their kids when making the saving decision).

3) A 10% or 20% post-match – i.e. the experimenter gave you an extra 10% or 20% (depending on the treatment) of what you saved by the end of the week.

4) A 10% or 20% pre-match. Similar to the above, but you’re given the bonus at the start of the week, and it’s taken away from you if you fail to save 100 shillings. This is intended to exploit the “endowment effect”. People value something more highly if they have it than if they don’t, so giving them a reward and threatening to take it away may be more powerful than giving it at the end.

5) A gold coin. At the end of each day, you used the coin to scratch either a “Yes” or a “No” box according to whether you saved that day.

Dan asked the audience which they thought worked best. The audience seemed split between 2), 4), and 5), with somewhat fewer suggesting 3). But, the coin turned out to be the most effective by a substantial margin. Here’s why. The coin scratch is much more salient and timely. The bonus pays off only at the end of the week, so it encourages procrastination – you can spend today and dupe yourself into thinking you’ll save tomorrow. The same goes for the text message – you think you’ll recover and still hit the weekly target. The coin scratch encourages you to save every day. And it “gamifies” saving. Adults feel they’ve achieved something when, at the end of the day, they scratch the “Yes” box. Note that, if they don’t save, they don’t simply do nothing – they’re asked to actively scratch the “No” box, i.e. admit to themselves that they’ve failed to save.

Although Dan didn’t explicitly draw the implications for our own lives (he only had 8 minutes), I thought this was something we could instantly apply to develop good habits. That same evening, I created a spreadsheet with a list of things that I would like to achieve each day (a rough sketch of such a tracker follows the list below). At the end of every day, I have to put a tick or a cross according to whether I’ve achieved each one. These goals will vary from person to person, but I’m happy to share a few of mine:

1) Did I end the day with zero emails in my inbox? Typically, I don’t, which leads to the inbox getting more and more cluttered over time. (Note, this doesn’t mean I reply to every email – that’s unrealistic – but I file them for future reply by a particular deadline. See an earlier blog post, “Time Management Tips to Improve Your Productivity”, for more on dealing with email).

2) Did I go to bed before midnight? If I don’t, I don’t just write a cross, I have to write the number of minutes I exceeded the target by. This is to avoid what Dan called in another session the “what the hell” effect – if I know I’ve missed the deadline, I might as well stay up until 3am.

3) Did I do athletics practice today? (plus a similar box for music practice). We often set resolutions to lose weight, or get better at the guitar. But, these are long-term goals; without daily accountability, they get deprioritized.

4) How many times did I check my iPhone that day? I have an app called Checky which records the number of times I check my phone. I try to get below 20 each day.

5) How much time did I spend on my iPhone that day? Tracked with an app called Moment.
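For anyone who would rather not keep a spreadsheet, here is a minimal sketch of the same idea as a script. It is purely illustrative – the goal names, the daily_log.csv file, and the log_day function are my own hypothetical choices, not anything Dan suggested – but it keeps the key feature of the coin scratch: a miss must be actively recorded, not quietly ignored.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical daily goals, mirroring the list above.
GOALS = ["inbox_zero", "bed_before_midnight", "athletics_practice",
         "music_practice", "phone_checks_under_20"]

LOG_FILE = Path("daily_log.csv")  # assumed file name


def log_day(results: dict) -> None:
    """Append today's ticks/crosses to the log.

    `results` maps each goal to True (tick), False (cross), or a note such
    as the number of minutes past midnight - the point is that a miss must
    be actively written down, like scratching the "No" box.
    """
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date"] + GOALS)
        writer.writerow([date.today().isoformat()] +
                        [results.get(g, "MISSED") for g in GOALS])


if __name__ == "__main__":
    # Example entry: everything ticked except bedtime (35 minutes late).
    log_day({"inbox_zero": True, "bed_before_midnight": "35 min late",
             "athletics_practice": True, "music_practice": False,
             "phone_checks_under_20": True})
```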

Can a silly little tick or cross – on a sheet of paper no-one else sees – really affect how grown adults behave? Can something intellectually equivalent to the gold stars we give to kids make a big difference? Actually, yes. As emphasized by one of my favorite authors, Stephen Covey (especially in his books “7 Habits of Highly Effective People” and “First Things First”), people act differently when they keep score. You run faster if you’re wearing a Garmin; you row faster if the display shows your split time. No-one else sees these numbers, but they make a big difference. If you play a racket sport, you’ll play differently in an actual game versus just hitting about, even though only you and your opponent see the score and the score has zero effect on your life. People (including me) invest time playing Fantasy Football even though they win absolutely nothing and the game doesn’t matter – because the score has meaning. So, why don’t we apply the idea of scorekeeping to things that do matter?

Sunday, 26 April 2015

Predicting Mutual Fund Performance Using (Legal) Inside Information

How does an investor choose which mutual fund to invest in? She’ll want a measure of the fund manager’s skill, and the most natural measure is his past performance. But, a ton of research has systematically found that past performance doesn’t predict future performance – it’s irrelevant in choosing a mutual fund.

How can this be? One interpretation is that fund managers aren’t skilled to begin with, and instead any good performance is due to luck. The thinking goes as follows. Skill is permanent. If good past performance were due to skill, performance should stay strong in the future. But, luck’s temporary.  If good past performance were due to luck, performance should revert to the average in the future. Since future performance appears unpredictable, this seems to support the luck explanation. This has huge implications for investors – if mutual fund managers indeed have no skill, there’s no point paying the high fees (around 1.5% per year) associated with actively-managed funds. Instead, put your money in passive index funds (where fees can be as low as 0.1%). Perhaps due to this thinking, passive index funds have grown substantially in recent years.
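To see why the fee gap matters, here is a quick back-of-the-envelope calculation with my own illustrative numbers (not from any of the studies): the same gross return compounded over 30 years, with the only difference being a 1.5% versus a 0.1% annual fee.

```python
# Illustrative fee-drag calculation (hypothetical 7% gross annual return).
initial = 10_000
gross_return = 0.07
years = 30

for fee in (0.015, 0.001):          # active vs. passive annual fee
    final = initial * (1 + gross_return - fee) ** years
    print(f"fee {fee:.1%}: ${final:,.0f} after {years} years")

# fee 1.5%: ~$49.8k   |   fee 0.1%: ~$74.0k
```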

But an influential 2004 paper by Jonathan Berk (Stanford) and Rick Green (Carnegie Mellon) reached a different conclusion. Fund managers are skilled, and good past performance is a signal of skill. But, because everyone else is trying to invest with a skilled manager, managers with good past performance enjoy a flood of new funds coming in. This increases the fund manager’s assets under management (AuM) and thus his fees (which are a percentage of AuM) and so he won’t discourage the new flows. But, it will worsen his performance next year, because of diminishing returns to scale in investing. The manager has to put the new funds to work. But, he’s already investing in his top stock picks. He can’t put all of the new money in the same stocks, because there’s not enough liquidity in the market to accommodate this extra demand. So, he’ll have to choose his next-best picks, which will do worse. Thus, even though past performance is an indicator of skill, it’s not an indicator of future performance.
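To see the logic in symbols, here is a stylized sketch of the Berk-Green mechanism. The linear functional form and the notation are my own simplification, not the paper's exact model:

```latex
% Stylized Berk-Green logic (linear decreasing returns is my simplification)
\[
  \text{gross alpha: } \alpha(q) = a - b\,q
  \qquad\Rightarrow\qquad
  \text{net alpha to investors: } a - b\,q - f,
\]
where $q$ is fund size, $a$ is the manager's skill, $b>0$ captures decreasing
returns to scale, and $f$ is the fee. Investors chase skill, so money flows in
until the net alpha is competed away:
\[
  a - b\,q^{*} - f = 0
  \qquad\Rightarrow\qquad
  q^{*} = \frac{a-f}{b}.
\]
In equilibrium every fund delivers investors the same (zero) net alpha, whatever
the manager's skill $a$: skill shows up in fund size $q^{*}$ and fee revenue
$f\,q^{*}$, not in future performance.
```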

What’s the problem with relying on past performance, then? The analogy is choosing an individual stock. Choosing a stock on the basis of an attractive characteristic that’s known to everyone (e.g. buying Facebook because it’s a leader in social media) won’t be fruitful. Since everyone else is aware of that characteristic, they will have bought into the stock and driven the stock price up – the “Efficient Markets Hypothesis”. Similarly, identifying fund manager skill using a dimension that’s known to everyone (e.g. past performance) is also not fruitful. Since everyone else is aware of past performance, they will have bought into the fund and driven its AuM up, worsening its future performance.

The key to picking a stock is thus to identify positive attributes that aren’t known to others. Similarly, the key to choosing a mutual fund is to find a measure of skill that isn’t known to others – a measure based on private (but legal) inside information. This is where an ingenious new paper by Jonathan, together with Jules van Binsbergen (Wharton) and Binying Liu (Kellogg), entitled “Matching Capital and Labor”, comes in.

A mutual fund is part of a fund family. For example, the Fidelity South East Asia Fund and the Fidelity Low Priced Stock Fund are both part of Fidelity. One of Fidelity’s jobs as a fund family is to evaluate the performance of each fund manager, to decide whether to promote her (i.e. give her an additional fund to manage, or move her to a larger fund) or demote her (take away one of her funds). They have access to a ton of information over and above past performance figures – just like scouting out a baseball player gives you much more information than you’d get from the statistics. For example, they can engage in subjective evaluations of her performance based on on-the-job observation, or assess whether poor performance might actually be due to good long-run investments that just haven’t paid off yet.  Thus, a promotion signals positive private information, and a demotion signals negative private information.  

As an example, take Morris Smith. He joined Fidelity in 1982 and, from 1984-6, ran Fidelity's Select Leisure Fund, which soared from $500k to $350m under his management. In 1986 he was promoted to the Fidelity Over-the-Counter Fund and managed an average of $1b. After further good performance he was promoted to Fidelity's flagship fund in 1990 with assets of $13b.

In short, by observing promotion and demotion decisions (which we can, using data sources such as Morningstar and CRSP), we can infer the fund family’s private information.

Jonathan, Jules, and Binying find that:
  1. Promotion and demotion decisions can’t be predicted using data on past performance. In other words, observing such decisions gives investors additional information over and above what we’d get from past performance figures. It allows us to (legally) infer the fund family’s private information.
  2. Promotion and demotion decisions both increase the fund manager’s value added. The authors measure value added using a metric introduced by an earlier paper by Jonathan and Jules. This equals the fund’s “gross alpha” (its actual return before fees and expenses, minus the return from passively holding the benchmark) multiplied by its assets under management (“AUM”). This gives a dollar measure of how much value is added (or subtracted) by active management (a stylized example follows this list). That both promotions and demotions increase future value added suggests that promotions give more capital to a skilled manager who can use it effectively, and demotions pull the plug on an unskilled manager who was using capital wastefully. Thus, the information that promotion/demotion decisions give is not only incremental (to past performance), but also useful.
  3. It’s inside information that drives the results. “External” promotions or demotions (a manager moving to a new fund family and running a fund with a higher or lower AUM than before) have no effect on future value added.
  4. These effects are large. The fund family’s decision to promote or demote a manager adds value of $715,000 per manager per month. Thus, 30% of the value that a mutual fund manager adds comes from the fund family giving her the right amount of capital.
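As a stylized example of the value-added metric (the numbers are my own illustrations, not the paper's data): a $5bn fund that beats its benchmark by just 0.3% a year before fees adds $15m of value, more than a $200m fund that beats it by a full 2%, which is what a dollar measure captures and a percentage return misses.

```python
def value_added(gross_return: float, benchmark_return: float, aum: float) -> float:
    """Dollar value added by active management:
    gross alpha (return before fees minus benchmark return) times AUM."""
    return (gross_return - benchmark_return) * aum

# Illustrative numbers only (not from the paper):
small_fund = value_added(0.090, 0.070, 2e8)   # $200m fund, +2.0% gross alpha -> $4m
big_fund   = value_added(0.073, 0.070, 5e9)   # $5bn fund,  +0.3% gross alpha -> $15m
print(small_fund, big_fund)
```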
Why doesn’t the decision to give a manager a second fund lead to the problem in Berk/Green – that the fund manager now has too much money under her control? Because the fund family – through its extensive monitoring – estimates the optimal amount of capital to give each manager. It chooses to promote managers who had previously been allocated too little capital, so that promotion does not run into the problem of diminishing returns to scale.

Decades of academic research have failed to find an answer to one of the most important practical questions for investors – how to predict mutual fund performance. Jonathan, Jules, and Binying may have just found a way.

Sunday, 15 February 2015

If Money Doesn't Buy You Happiness, You're Not Spending It Right

A good chunk of traditional finance research teaches us how to make money, such as optimal investment strategies. But, there's very little on how to spend it. Studies show surprisingly little relationship between money and happiness. One interpretation is that things that make you truly happy can't be bought - but money can allow people to afford healthier food, better medical care, more varied pastimes, better education, and leisure time with friends and family. So an alternative interpretation is that people don't know how to spend it.

That's where behavioral economists and psychologists come in. Elizabeth Dunn (UBC), Daniel Gilbert (Harvard) and Timothy Wilson (Virginia)'s excellent Journal of Consumer Psychology article, "If Money Doesn't Make You Happy, Then You Probably Aren't Spending It Right" surveys a ton of research and distills it to eight succinct guidelines. I summarize five of them here.

1) Buy More Experiences and Fewer Material Goods.

People who fritter their money away on holidays or expensive dinners are seen as wasteful, as there's nothing to show for it afterwards. Renovating your house or buying a better car seems more prudent. But, it's actually the former that has the greater effect on happiness. We adapt to things (such as a new conservatory or a flashier car) quickly. But, the memory of an experience (e.g. an African safari) remains with you long after the fact, and the anticipation of the experience also brings utility.

Moreover, "mindfulness" studies systematically find that unhappiness is correlated with mind-wandering. Experiences absorb you and keep you focused on the here and now, but you can be distracted by a dozen things while driving your car.

2) Spend Money on Others Rather Than Yourself.

Scientists believe that one major reason for humans' large brain size is that we are more social than nearly any other animal. Thus, our happiness depends markedly on the quality of our social relationships. The "prosocial behavior" literature consistently finds that subjects report greater happiness after spending money on others rather than themselves - even though they anticipated that they would be happier doing the latter.

3) Buy Many Small Pleasures Instead Of Few Large Ones.

A variety of frequent small pleasures (in the authors' words, "double lattes, uptown pedicures, and high thread-count socks") dominates one big-ticket purchase, such as a front-row concert ticket. This is the well-known economic principle of diminishing marginal utility - a two-week vacation is less enjoyable than two separate one-week vacations. Indeed, studies show that happiness is associated more with the frequency than the intensity of experiences.

The main reason is the surprise factor of a new experience. Two smaller vacations allow you to explore two different places. Moreover, variety exists even for "everyday" experiences - a beer after work is never the same as the last one, since it will feature different people and different conversations.

4) Buy Less Insurance

This principle doesn't just apply to literal insurance, e.g. over-priced extended warranties, but also to the "insurance" that comes with a generous return policy. Customers prefer Amazon to eBay and Craigslist, despite it being more expensive, because of the option to return a product they don't like. But, as Dan Gilbert discussed in his excellent TED talk The Surprising Science of Happiness (see here for my list of top ten TED talks), whether we like something or not doesn't just depend on the item's attributes - we can consciously choose to like it. Indeed, studies show that you like an item more if you don't have the option to return it.

5) Beware of Comparison Shopping

Websites allow you to compare products on tiny details, which leads consumers to fixate on very small differences and ignore the similarities on the major characteristics. They can thus miss the forest for the trees and choose the wrong product based on a minor attribute. In addition, doing so wastes substantial time on minutiae, particularly since we typically end up liking the product we buy anyway if its major characteristics are right (see point 4).

Sunday, 25 January 2015

Dangers of Using a Company-Wide Discount Rate

Any Finance 101 class will emphasize that the appropriate discount rate for a project depends on the project’s own characteristics, not the firm as a whole. If a utilities firm moves into media (e.g. Vivendi), it should use a media beta - not a utilities beta - to calculate the discount rate. However, a survey found that 58% of firms use a single company-wide discount rate for all projects, rather than a discount rate specific to the project’s characteristics. Indeed, when I was in investment banking, several clients would use their own cost of capital to discount a potential M&A target's cash flows.

But the important question is – does this really matter? Perhaps an ivory-tower academic will tell you the correct weighted average cost of capital (WACC) is 11.524%, but if you use 10%, is that good enough? Given the cash flows of a project are so difficult to estimate to begin with, it seems pointless to “fine-tune” the WACC calculation.

An interesting paper, entitled “The WACC Fallacy: The Real Effects of Using a Unique Discount Rate”, addresses the question. The paper is forthcoming in the Journal of Finance and co-authored by Philipp Krueger of Geneva, Augustin Landier of Toulouse and David Thesmar of HEC Paris. 

This paper shows that it matters. The authors first looked at organic investment (capital expenditure, or "capex"). If your core business is utilities and the non-core division is media, you should be using a media discount rate for non-core capex. But, if you incorrectly use a utilities discount rate, the discount rate is too low and you'll be taking too many projects. The authors indeed find that capex in a non-core division is greater if the non-core division has a higher beta than the core division. Moreover, they find the effect is smaller (a) in recent years, consistent with the increase in finance education (e.g. MBAs), (b) for larger divisions – if the non-core division is large, management puts the effort into getting it right – and (c) when management has high equity incentives, as these also give them incentives to get it right.
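Here is a stylized example of the fallacy, with hypothetical numbers of my own rather than anything from the paper: a utilities firm evaluating a media project. Under CAPM, the project's discount rate should reflect the media beta; using the firm-wide utilities rate understates it and can make a project that should be rejected look attractive.

```python
# Stylized illustration of the "WACC fallacy" (all numbers hypothetical).
risk_free, market_premium = 0.03, 0.05

def capm_rate(beta: float) -> float:
    """CAPM discount rate: risk-free rate plus beta times the market risk premium."""
    return risk_free + beta * market_premium

def npv(cash_flows: list[float], rate: float) -> float:
    """NPV of cash flows, where cash_flows[0] is the upfront investment at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-100.0] + [11.0] * 20       # media project: invest 100, receive 11/yr for 20 yrs

rate_utilities = capm_rate(beta=0.5)   # firm-wide rate: 5.5%
rate_media     = capm_rate(beta=1.4)   # project-specific rate: 10.0%

print(npv(project, rate_utilities))    # ~ +31: looks attractive at the company-wide rate
print(npv(project, rate_media))        # ~  -6: correctly rejected at the media rate
```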

The authors then turn to M&A. They find that conglomerates tend to buy high-WACC targets rather than low-WACC targets, again consistent with them erroneously using their own WACC to value a target, when they should be using the target’s own high WACC. Moreover, the attraction of studying M&A is that the authors can measure the stock market’s reaction to the deal, to quantify how much value is destroyed. They find that shareholder returns are 0.8% lower when the target’s WACC is higher than the acquirer’s WACC. They study 6,115 deals and the average acquirer size is $2bn. Thus, the value destruction is 0.8% * $2bn * 6,115 ≈ $98bn lost to acquirers in aggregate because they don’t apply a simple principle taught in Finance 101!
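For completeness, the aggregate figure is just the product of the three numbers in the paragraph above:

```python
loss_per_deal = 0.008 * 2e9        # 0.8% of the $2bn average acquirer
total_loss = loss_per_deal * 6_115  # across all 6,115 deals
print(f"${total_loss / 1e9:.0f}bn")  # ~ $98bn destroyed in aggregate
```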

We often wonder whether textbook finance theory is relevant in the real world – perhaps you don’t need the “academically” right answer and it's sufficient to be close enough. But this paper shows that “getting it right” does make a big difference.