I’ve just completed the second year of my subject Behavioural Finance at the University of Western Sydney. This is, of course, a non-traditional subject (meaning non-Efficient-Markets-Hypothesis), but even here I take a non-standard approach. While I have great respect for the work of Kahneman and Tversky on behavioural economics, I argue that much of the subsequent work is misdirected, because of a crucial misinterpretation of the original work on expected utility by von Neumann and Morgenstern.

Much of the standard behavioural finance literature shows that individuals, when faced with a choice between two hypothetical options, behave in ways that violate the precepts of expected utility theory, and then develops some modified utility function that fits the actual behaviour. The options are normally presented in this manner:

1. Choose between
   - A. $1000 with certainty; OR
   - B. 90% odds of $2000 & 10% odds of -$1000

2. Choose between
   - A. $0 with certainty; OR
   - B. 50% odds of $150 and 50% odds of -$100

3. Choose between
   - A. -$100 with certainty; OR
   - B. 50% odds of $50 and 50% odds of -$200

It is alleged that a rational person according to expected utility theory would choose B in all 3 cases, since the expected value of B exceeds that of A every time. The expected value is calculated simply by multiplying the value of each outcome by its probability and summing the results. Thus the values above are:

1. Choose between
   - A. $1000
   - B. 0.9 times $2000 + 0.1 times -$1000 = $1800 - $100 = $1700

2. Choose between
   - A. $0
   - B. 0.5 times $150 - 0.5 times $100 = $75 - $50 = $25

3. Choose between
   - A. -$100
   - B. 0.5 times $50 - 0.5 times $200 = $25 - $100 = -$75
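The same arithmetic can be checked in a few lines of code (a quick sketch using the numbers from the choices above):

```python
# Expected value of a gamble: the sum of (probability * payoff) over outcomes.
def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

choice_1_b = expected_value([(0.9, 2000), (0.1, -1000)])
choice_2_b = expected_value([(0.5, 150), (0.5, -100)])
choice_3_b = expected_value([(0.5, 50), (0.5, -200)])

print(choice_1_b, choice_2_b, choice_3_b)  # 1700.0 25.0 -75.0
```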

However, when experiments are conducted, the majority of people choose option A in choices 1 and 2, but B in number 3: they are “irrational” twice and rational once. This sets up all sorts of conundrums, leading to a range of interesting problems, and a voluminous literature on irrationality, bounded rationality, risk aversion, preference reversal, and so on. The Wikipedia entry (as at November 11 2010) encapsulates this perspective:

The expected utility hypothesis, as applied to economics, has limited predictive accuracy, simply because in practice, humans do not always behave VNM-rationally. This can be interpreted as evidence that

- humans are not always rational, or
- VNM-rationality is not an appropriate characterization of rationality, or
- some combination of both.

Had von Neumann lived to see the development of this theory, I expect he’d be questioning, not human rationality in general, but the rationality of his interpreters, because his concept of expected utility was very different from how it is now portrayed in the literature. The crucial difference is that the literature uses a subjective vision of probability, when this was explicitly rejected by von Neumann and Morgenstern:

“Probability has often been visualized as a subjective concept more or less in the nature of estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as *frequency in long runs*.” (von Neumann & Morgenstern 1944: 19)

What difference does that make? A lot! Try it with the above three examples: consider exactly the same choices, but with the proviso that whichever option you choose you must repeat 100 times:

1. Choose between
   - A. Receiving $1000 with certainty 100 times; OR
   - B. 100 gambles with 90% odds of $2000 & 10% odds of -$1000

2. Choose between
   - A. Receiving $0 with certainty 100 times; OR
   - B. 100 gambles with 50% odds of $150 and 50% odds of -$100

3. Choose between
   - A. Losing $100 with certainty 100 times; OR
   - B. 100 gambles with 50% odds of $50 and 50% odds of -$200

Now there’s no doubt that option B is the rational one. The total values of the options are now:

1. Choose between
   - A. **$100,000**
   - B. 100 times (0.9 times $2000 - 0.1 times $1000) = 100 times ($1,800 - $100) = **$170,000**

2. Choose between
   - A. **$0**
   - B. 100 times (0.5 times $150 - 0.5 times $100) = 100 times ($75 - $50) = **$2,500**

3. Choose between
   - A. **-$10,000**
   - B. 100 times (0.5 times $50 - 0.5 times $200) = 100 times ($25 - $100) = **-$7,500**

Do the sums and you’d have to be irrational (or very bad at arithmetic) to choose A over B.

The difference between the way the literature has interpreted von Neumann and Morgenstern and the way they intended their work to be used is that, in their model, the consumer actually gets the Expected Value of the gamble, because if you take a gamble 100 times, the outcome is very likely to be close to the predicted odds. If on the other hand you undertake a gamble only once, you don’t get the Expected Value: you get either one option or the other, and probability can tell you which is more likely, but it can’t tell you which one you’ll actually get.
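That distinction is easy to see in a simulation (a sketch, using gamble 3 above with a fixed random seed so the run is repeatable; the exact 100-play total will vary with the seed):

```python
import random

def play(p_win, win, lose, times, rng):
    """Total payoff from repeating a gamble `times` times."""
    return sum(win if rng.random() < p_win else lose for _ in range(times))

rng = random.Random(42)  # fixed seed for repeatability

# Gamble 3: 50% odds of $50, 50% odds of -$200; expected value -$75 per play.
one_shot = play(0.5, 50, -200, 1, rng)    # either $50 or -$200, never -$75
hundred = play(0.5, 50, -200, 100, rng)   # total tends toward -$7,500

print(one_shot, hundred)
```

The one-shot player never receives the expected value itself, while the 100-play total clusters around 100 times the expected value, which is the sense in which von Neumann and Morgenstern’s frequency interpretation makes the expected value operational.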

Consequently I see much of the behavioural economics & finance literature as interesting but wrong-headed, and, as is so often the case in economics, using a definition of “rational” that is seriously irrational. My subject therefore devotes a modicum of time to the standard literature before moving into what I see as a more realistic approach, of considering what the macro behaviour of the finance sector actually is. This leads ultimately to Minsky’s Financial Instability Hypothesis and the “Great Recession”.

All my lectures (in powerpoint format) are linked below, as well as MP3 recordings of the lectures and, in some cases, FLV recordings of my presentation. I had some hardware and software hassles while doing all this; hopefully I’ll be able to post a more complete set of these next year.

## Lecture 01: Economic Behaviour

Steve Keen's Debtwatch Podcast

## Lecture 02: Market Behaviour

Part 1 Demand: Powerpoint

Steve Keen's Debtwatch Podcast

Part 2 Supply: Powerpoint

Steve Keen's Debtwatch Podcast

## Lecture 03: Theoretical Financial Markets Behaviour

Part 1: Powerpoint

Steve Keen's Debtwatch Podcast

Part 2: Powerpoint

Steve Keen's Debtwatch Podcast

Steve Keen's Debtwatch Podcast

## Lecture 04: Actual Financial Markets Behaviour

Part 1: Powerpoint

Steve Keen's Debtwatch Podcast

Part 2: Powerpoint

Steve Keen's Debtwatch Podcast

## Lecture 05: Fractal Markets Hypothesis

Part 1: Powerpoint

Steve Keen's Debtwatch Podcast

Steve Keen's Debtwatch Podcast

## Lecture 06: Inefficient Markets Hypothesis

## Lecture 07: Statistics on money

Part 1: Powerpoint

Part 2: Powerpoint

## Lecture 08: Endogenous money

Part 1: Powerpoint

Part 2: Powerpoint

## Lecture 09: Modelling endogenous money

Part 1: Powerpoint

Part 2: Powerpoint

## Lecture 10: Extending endogenous money

Part 1: Powerpoint

Part 2: Powerpoint

## Lecture 11: The Financial Instability Hypothesis

Part 1: Powerpoint

Steve Keen's Debtwatch Podcast

Part 2: Powerpoint

Steve Keen's Debtwatch Podcast

## Lecture 12: The “Global Financial Crisis”

Part 1: Powerpoint

Part 2: Powerpoint

Steve Keen's Debtwatch Podcast

It’s more than that Tom. As we prove, MC=MR isn’t profit-maximizing behaviour. The threat of new non-profit-maximizing agents isn’t part of the theory.

Ultimately I want this stuff consigned to the garbage can of intellectual history, along with phlogiston, the aether and heavenly orbs. But for now pointing out that it is not internally consistent is part of the process of getting rid of it.

The other thing that has really annoyed me is that neoclassicals never concede that the Marshallian model is strictly false. If that were conceded, and it was no longer taught, then much of the neoclassical edifice would crumble with it.

Hi Tom,

I think part of the error of the EMH (as opposed to the IEH) is assuming Gaussian distributions. So while I accept aspects of your model, I would not have a Gaussian distribution of anything as part of it. Variables and firms in a market economy affect the outcome of other variables and firms; the independence required for a truly random distribution of anything is not met.

Thanks for the feedback. In fact white light itself doesn’t follow a Gaussian distribution of frequencies – it follows the black body radiation distribution. So I see no problem in allowing e.g. the power law distribution for pricing errors. The requirement is that the error of a stock at time A is independent of the error of that stock at long run time B.

Why do you say the new agents are not profit-maximising? I’ve showed you the profit motive: go from $0 to greater than $0. It would be irrational NOT to take up this opportunity!

Are you defining profit-maximising in terms of the market as a whole? The whole point of game theory is that rational, individually profit-maximising actors don’t necessarily reach the Pareto-optimum level (let alone their individual global maximum level). That doesn’t mean they’re not profit-maximising.

Regarding your goal of debunking the Marshallian model, one thing I learnt from studying Negotiation is that a large number of weak arguments does not persuade someone to change their mind. Every detail that they can rebut encourages them to entrench their position further. It’s better to repeat a few strong arguments that can’t be rebutted.

PS Here’s the essence of your circular logic. You assume that there are no new entrants, so you derive that MC=MR isn’t profit-maximising behaviour. You then use this result to prove that there are no new entrants.

Tom, the notion that MC=MR maximises profits was first asserted by Marshall in the context of partial equilibrium for a single market without the consideration of entry from other markets. It is categorically false. In partial equilibrium, the actual profit-maximising rule is, as I have derived:

MC-MR = (n-1)/n * (P-MC)
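A quick way to see what this rule implies (my own sketch, taking the formula at face value, with hypothetical values P = 10 and MC = 6 purely for illustration) is that the right-hand side vanishes when n = 1, so MC = MR holds only for a single firm, and the gap approaches P - MC as n grows:

```python
# Keen's partial-equilibrium rule: MC - MR = (n-1)/n * (P - MC).
# Evaluate the implied MC-MR gap for an illustrative price and marginal cost
# (P = 10, MC = 6 are hypothetical numbers, not from the post).
def mc_mr_gap(n, price=10.0, mc=6.0):
    return (n - 1) / n * (price - mc)

print(mc_mr_gap(1))    # 0.0 -> MC = MR only when n = 1
print(mc_mr_gap(100))  # close to P - MC = 4 for many firms
```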

I have not derived the rule for general equilibrium–nor am I interested in doing so. I imagine it would further generalise my result for partial equilibrium. Would you like to give it a try?

Hi Steve, one of the explicit assumptions of the model is that there are no barriers to entry. How can you say with a straight face that the model doesn’t include consideration of entry from other markets?

It’s not a general equilibrium model and it’s not trying to be. My point is that the perfect competition model, with its flawed assumptions and known limitations, is at least internally consistent. Your simulation is misleading because it quickly leads to unrealistic “meta-Nash” strategies that can be disrupted by the slightest change to the sim (including but not limited to new entrants – again I’m happy to send you the code if you’re interested).

I should clarify that yes, when the Marshallian model considers entry from other markets, it’s clearly not considering the impact on those other markets and feedbacks between markets – as you say, it’s a partial equilibrium model.

If your problem with the Marshallian model is that it’s a partial equilibrium model, then that’s a perfectly valid criticism and I’d support you on it. But that has nothing to do with its internal consistency or the flaws in your sim.

Tom, pardon my boredom with this topic, but I spent 5 years engaged in this debate with neoclassicals from all over the planet, and I have heard all this before. I can say that the model doesn’t include consideration of entry from other markets because the “proof” that MC=MR is profit maximising, in every instance I’ve ever seen it, has been a single-market proof. Find me somewhere that entry is invoked to prove that MC=MR is profit maximising and I might be interested in continuing the discussion.

I have done countless other simulations Tom–some of which have been published in other papers. All the effects you have generated I have likewise already seen, and then some. The bottom line is that it is a mathematically flawed model (in its Marshallian guise) and a waste of time at an empirical level. The sooner economics abandons it, the better. Having spent 5 years of my life engaged in this debate, I am not interested in continuing it any further. My apologies for the bluntness here, but I’d rather contribute to the development of a worthwhile alternative than continue debating this vacuous model.

Hi Steve, I’m not sure why you’re bored given I’m offering a different proof (theoretical and simulation) of MR=MC, under neoclassical perfect competition assumptions, than any you’ve seen before! I certainly wouldn’t call myself a neoclassical: Schumpeter + Minsky + Porter as you’ve outlined them make much more sense to me. But I am capable of making my own arguments from neoclassical assumptions.

Anyway, I appreciate that you’re under no obligation to publish your lectures let alone debate random strangers on the internet ad infinitum, so thank you for your time and I hope I’m still welcome to explore more interesting areas with you (such as my model of the inefficient market hypothesis).

Try five years of the same debate Tom! So yes, that’s filed under “been there, done that” for me. Minsky and financial instability are far more interesting and relevant to the real world. And by all means keep going on the IMH.

A friend of mine sent me a link to this and I had a comment. Great article. And it’s right, but I think it didn’t go deep enough. The “irrational” human theory also ignores Adam Smith: labor = value. When you look at it in that regard it is perfectly rational.

1. Choose between $1000 for no labor or the possibility of $2000 with no labor or GIVING $1000 worth of your already expended labor (i.e. wasting your time and effort).

Labor that a person works is worth more to them than someone else’s labor (ie. free money). Or to put it another way, you put more value on $1000 that you earned than $1000 that is given to you. And this is a rational position because our personal time and labor is highly valuable to us. So people will choose A because there’s no possibility of them wasting their own labor.

2. Choose between $0 for no labor or the possibility of $150 with no labor or wasting $150 of already expended labor.

Again, Option A is the only one with no chance of your own labor being wasted.

3. Choose -$100 wasted labor or the possibility of $50 with no labor or $200 wasted labor.

Here the only option that has any chance of not wasting labor is option B. So people pick it.

We always look at this stuff in regards to the money, but the money is only a marker for the value of labor. People naturally abhor waste, and they should; it’s perfectly logical. They especially don’t want to see their own personal effort wasted. Waste reduces wealth. Everyone knows this intrinsically. It’s the Broken Window Fallacy.

Hi Hothsolo,

Thanks for the observation–that’s quite a sensible take. Ordinarily I bristle at “labor = value” arguments because of my own critique of the labor theory of value (I have some papers on that on the Research tab). But this is a very sensible application of the concept.