Behavioral Finance Lecture 05: Fractal & Inefficient Markets


Lecture 5: Market Behavior – Stock Markets II. (Slides: CfESI Subscribers Part 1, Part 2; Debtwatch Subscribers Part 1, Part 2)

The Fractal Markets Hypothesis and the Inefficient Markets Hypothesis are two of several attempts to provide a realistic theory of how financial markets actually behave. In the first half of the lecture, I explain what fractals are, and discuss their basic characteristics.

In the second half of the lecture, I outline the Fractal Markets Hypothesis and the Inefficient Markets Hypothesis (IEH). The IEH suggests precisely the opposite investment strategy to the Efficient Markets Hypothesis (EMH) for maximizing returns on the stock market: invest in low-volatility, high Book to Market stocks.
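A minimal sketch, not from the lecture materials, of one standard way to quantify the fractal character of a return series: estimating the Hurst exponent by rescaled-range (R/S) analysis. The window sizes and the synthetic Gaussian data are arbitrary illustrative choices; real price returns would be substituted for the test series.

    import numpy as np

    def rescaled_range(series):
        # R/S statistic for one window: range of the cumulative,
        # mean-adjusted series divided by its standard deviation.
        deviations = series - series.mean()
        cumulative = np.cumsum(deviations)
        spread = cumulative.max() - cumulative.min()
        scale = series.std(ddof=1)
        return spread / scale if scale > 0 else np.nan

    def hurst_exponent(returns, window_sizes=(16, 32, 64, 128, 256)):
        # Estimate H as the slope of log(mean R/S) against log(window size).
        log_n, log_rs = [], []
        for n in window_sizes:
            chunks = [returns[i:i + n] for i in range(0, len(returns) - n + 1, n)]
            values = [rescaled_range(chunk) for chunk in chunks]
            values = [v for v in values if np.isfinite(v)]
            if values:
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(values)))
        slope, _ = np.polyfit(log_n, log_rs, 1)
        return slope

    rng = np.random.default_rng(0)
    iid_returns = rng.normal(0.0, 0.01, 4096)   # memoryless benchmark series
    print("Estimated H for i.i.d. Gaussian returns:", round(hurst_exponent(iid_returns), 2))

A value of H near 0.5 is roughly what a memoryless random walk produces; a persistent departure from that benchmark is one of the fractal characteristics this literature looks for.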

The videos can be watched by anyone; PowerPoint files can be downloaded by members of the Center for Economic Stability.

About Steve Keen

I am Professor of Economics and Head of Economics, History and Politics at Kingston University London, and a long-time critic of conventional economic thought. As well as attacking mainstream thought in Debunking Economics, I am developing an alternative dynamic approach to economic modelling. The key issue I am tackling here is the prospect of a debt-deflation on the back of the enormous private debts accumulated globally, and our very low rate of inflation.

12 Responses to Behavioral Finance Lecture 05: Fractal & Inefficient Markets

  1. sj says:

    Mr Keen
    I did watch the lecture. As a teacher you are too kind; very few of your students even knew what a fractal is. In fact it was a bit of a joke.
    Well Mr Keen, may I make a humble suggestion: next class you have, if a student does not bring a fractal object (examples are many: tree branches, leaves, sea shells, maps, stock charts), you should fail them.
    The lazy students can point to their DNA or blood vessels for a fractal.
    This will make them think in ecosystems and away from a narrow-minded, stable classroom environment.

  2. TruthIsThereIsNoTruth says:

    The failure of CAPM implies that stocks don’t follow the strictly defined CAPM model. To say that the failure of CAPM implies that stocks are not random is simply false. And saying that does not imply that stocks are random either…

    The common misconception being employed here is that randomness necessarily means a random walk, coin-toss-style process. This is one specific type of random process, and I would not go so far as to say that it is the only truly random process.

    There are several processes that describe the extreme tails in stock returns. FMH is intuitive and interesting, and there is nothing wrong with teaching it, but as you say it is hard to use; it’s very complex. There are more practical models that fit the data equally well if not better. I think if I were your student again you would find me to be a great pain.

  3. enorlin says:

    If students of economics don’t know about fractals (or even complex numbers!), I wouldn’t hold that against them — I guess no one thus far bothered to teach them.

  4. Steve Keen says:

    Precisely, Enorlin! It’s primarily their teachers who are at fault.

  5. Attitude_Check says:


    There is a specific technical definition of randomness and random behavior. Dr. Keen is using the technical definition and shows quite convincingly that the market does not behave that way. You seem to be using a colloquial definition (e.g. not predictable), which is also true of the market, but it is mathematically incorrect to call it random. It does appear to be complex, which is deterministic but unpredictable at most time scales (a small illustration follows this comment).

    The truth is there is truth, and we all argue over what it is. You do realize your handle is inherently contradictory and illogical? I assume that was an intentional jab at modern relativistic philosophy.
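    A minimal sketch of the “deterministic but unpredictable” point above, using the textbook logistic map rather than anything market-specific (the starting values and step count are arbitrary illustrative choices): the rule is fully deterministic, yet two trajectories that begin a billionth apart become unrelated within a few dozen iterations.

        # Deterministic but unpredictable: the logistic map x -> r*x*(1-x) at r = 4
        # is a fixed rule with no randomness, yet tiny differences in the starting
        # value grow until the two trajectories are effectively unrelated.
        def logistic_trajectory(x0, r=4.0, steps=50):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        a = logistic_trajectory(0.400000000)
        b = logistic_trajectory(0.400000001)   # initial values differ by 1e-9
        for step in (0, 10, 20, 30, 40, 50):
            print(step, f"{abs(a[step] - b[step]):.1e}")   # gap between the two paths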

  6. TruthIsThereIsNoTruth says:

    The specific ‘technical’ definition in the lecture is exactly that: it is specific. There is also a more general technical definition of randomness. Would you say a stochastic volatility model is less random than geometric Brownian motion? Even the simplest stochastic model can reproduce the empirical behaviour of stock returns given the appropriate probability distribution (see the sketch after this comment).

    I don’t assume this is something that is well understood outside quantitative finance circles. I expect that for you a random process is a geometric Brownian motion, and that is what you think of as a ‘technical’ definition of randomness.

    My post was aimed at bringing attention to the fact that there is a well-refined field of research which deals with the topic of the lecture in a different and definitely more practical way.

    Good pick up on the handle’s paradox, well done!
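    A minimal sketch of the point about simple stochastic models and appropriate distributions, under one plausible reading of it (the Student-t distribution with 3 degrees of freedom and the 4-standard-deviation threshold are illustrative choices, not anything specified in the comment): even an i.i.d. return model produces far more extreme days once the Gaussian is swapped for a fat-tailed distribution.

        import numpy as np

        # An i.i.d. return model is about as simple as stochastic models get, yet
        # replacing the Gaussian with a fat-tailed Student-t (scaled to the same
        # variance) already produces many more extreme daily moves.
        rng = np.random.default_rng(42)
        n_days, daily_vol = 250 * 40, 0.01          # roughly 40 years of daily returns

        gaussian = rng.normal(0.0, daily_vol, n_days)
        dof = 3
        student_t = rng.standard_t(dof, n_days) * daily_vol / np.sqrt(dof / (dof - 2))

        for name, returns in (("Gaussian", gaussian), ("Student-t", student_t)):
            big_moves = np.sum(np.abs(returns) > 4 * daily_vol)   # "4-sigma" days
            print(f"{name:10s}: {big_moves} moves beyond 4 standard deviations")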

  7. TruthIsThereIsNoTruth says:

    “but it is mathematically incorrect to call it random”

    All I have to say to that is that, to learn more, you first need to come to terms with the fact that you know very little.

  8. TruthIsThereIsNoTruth says:

    Don’t get me wrong though. I think FMH is a fantastic and insightful idea, well worth teaching. However, the approach of teaching it to naive students in this sort of THIS IS THE TRUTH way results in misguided opinions such as you demonstrate. Sorry for my reaction; I do appreciate the debate. Please realise there is another way of thinking.

  9. Attitude_Check says:

    The clear definition of terms and the math that follows is important. The important issue is that randomness – particularly when modeled as noise – is often treated as a process of zero information. In particular, the mutual information of the noise and the “signal” is taken to be zero. In the case of a chaotic system, that assumption is false, and many typical mathematical “proofs” regarding the noisy behavior of a system are no longer valid. I am dealing with this exact problem as an engineer: a random error process for a specific problem is not independent of the parameter being estimated, and at least 50 years of academic theory regarding this is flat wrong. I am exploiting this fact to achieve “impossible” results according to classic theory that treats this random error signal as noise.
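    A minimal sketch of the mutual-information point, assuming a simple histogram estimator and the logistic map as a stand-in chaotic system (both are illustrative choices, not the commenter’s engineering problem): consecutive values of i.i.d. noise share essentially no information, while consecutive values of a chaotic but deterministic series share a great deal.

        import numpy as np

        def mutual_information(x, y, bins=20):
            # Histogram-based estimate of I(X;Y) in bits.
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nonzero = pxy > 0
            return float(np.sum(pxy[nonzero] * np.log2(pxy[nonzero] / (px @ py)[nonzero])))

        rng = np.random.default_rng(1)

        # i.i.d. noise: the past carries essentially no information about the next value.
        noise = rng.normal(size=20000)
        print("i.i.d. noise :", round(mutual_information(noise[:-1], noise[1:]), 3), "bits")

        # Chaotic logistic map: fully deterministic, so the past is highly informative.
        x = np.empty(20000)
        x[0] = 0.3
        for t in range(1, len(x)):
            x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])
        print("logistic map :", round(mutual_information(x[:-1], x[1:]), 3), "bits")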

  10. Attitude_Check says:


    I actually understand quite a bit about randomness as used in the context Dr. Keen is using it.

    These excerpts from Wikipedia capture the salient points.

    The fields of mathematics, probability, and statistics use formal definitions of randomness. In mathematics, a random variable is a way to assign a value to each possible outcome of an event. In probability and statistics, a random process is a repeating process whose outcomes follow no describable deterministic pattern, but follow a probability distribution, such that the relative probability of the occurrence of each outcome can be approximated or calculated. For example, the rolling of a fair six-sided die in neutral conditions may be said to produce random results, because one cannot know, before a roll, what number will show up. However, the probability of rolling any one of the six rollable numbers can be calculated.

    In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics in order to explain phenomena in thermodynamics and the properties of gases.

    According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.[6] That is, in an experiment where all causally relevant parameters are controlled, there will still be some aspects of the outcome which vary randomly. An example of such an experiment is placing a single unstable atom in a controlled environment; it cannot be predicted how long it will take for the atom to decay; only the probability of decay within a given time can be calculated.[7] Thus, quantum mechanics does not specify the outcome of individual experiments but only the probabilities. Hidden variable theories are inconsistent with the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are somehow at work “behind the scenes” determining the outcome in each case.

    Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness)—this means that random strings are those that cannot be compressed. Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin.

    In mathematics, there must be an infinite expansion of information for randomness to exist. This can best be seen with an example. Given a random sequence of three-bit numbers, each number can have one of only eight possible values:

    In information science, irrelevant or meaningless data is considered to be noise. Noise consists of a large number of transient disturbances with a statistically randomized time distribution.

    In communication theory, randomness in a signal is called “noise” and is opposed to that component of its variation that is causally attributable to the source, the signal.

    In terms of the development of random networks, for communication randomness rests on the two simple assumptions of Paul Erdős and Alfréd Rényi who said that there were a fixed number of nodes and this number remained fixed for the life of the network, and that all nodes were equal and linked randomly to each other.

    The random walk hypothesis considers that asset prices in an organized market evolve at random.

    Other so-called random factors intervene in trends and patterns to do with supply-and-demand distributions. As well as this, the random factor of the environment itself results in fluctuations in stock and broker markets.

    Note the emphasis on information content throughout. The random walk hypothesis of finance is taken from the Brownian motion of molecules (the mathematics of which was worked out by Einstein, though his Nobel Prize was actually awarded for the photoelectric effect). It rests upon the assumption that the velocities of individual molecules in a thermodynamic equilibrium of a 3D gas are random and completely independent of each other and of the particle being driven. The previous history of the path of the particle provides NO INFORMATION about the future path. The underlying dynamics are assumed to be completely random and temporally and spatially independent. This is clearly a very bad model for the financial markets. Gaussian copula theory applied to risk management shows that clearly and in detail.

    The truth is you need to study a bit more on random process theory, information theory, dynamic systems, and complexity, and how those relate – or don’t – to financial markets and economics.
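    A crude illustration of the Kolmogorov-randomness excerpt quoted above: a patterned string compresses heavily, while bytes drawn from the operating system’s entropy source barely compress at all. A general-purpose compressor such as zlib is only a rough proxy for “shortest program”, so this is a sketch of the idea rather than a test of it.

        import os
        import zlib

        # Structure compresses; (apparent) randomness does not.
        structured = b"0101010101" * 1000            # 10,000 bytes of obvious pattern
        random_bytes = os.urandom(10_000)            # 10,000 bytes from the OS entropy pool

        for name, data in (("structured", structured), ("random", random_bytes)):
            ratio = len(zlib.compress(data)) / len(data)
            print(f"{name:10s}: compressed to {ratio:.0%} of original size")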

  11. Attitude_Check says:


    I should have read a little further. This describes a pseudo-random process.

    Some mathematically defined sequences, such as the decimals of pi mentioned above, exhibit some of the same characteristics as random sequences, but because they are generated by a describable mechanism, they are called pseudorandom. To an observer who does not know the mechanism, a pseudorandom sequence is unpredictable.

    A process may appear to be random, but as we learn more information about it (the algorithm used and the seed value, for example) it becomes fully predictable, and no longer random. Randomness is then only a quantification of our ignorance of the underlying dynamics rather than a description of the dynamics itself.
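    A minimal sketch of that pseudo-randomness point: Python’s built-in Mersenne Twister generator produces output that looks random, but once the algorithm and the seed are known, the whole sequence can be reproduced exactly. The seed value is arbitrary.

        import random

        # Pseudo-randomness as quantified ignorance: knowing the algorithm and the
        # seed makes the "random" sequence fully predictable.
        random.seed(2024)
        first_run = [round(random.random(), 6) for _ in range(3)]

        random.seed(2024)                       # same seed, same "randomness"
        second_run = [round(random.random(), 6) for _ in range(3)]

        print(first_run)
        print(second_run)
        print("identical:", first_run == second_run)   # True: fully reproducible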

  12. TruthIsThereIsNoTruth says:

    Thanks for the reply.

    In finance I take a slightly different view. I don’t actually believe there exists a model which truly captures market behaviour. Market behaviour is neither random nor deterministic but can be described by both, particularly with ideas such as fractality. That’s why I find a big flaw in the engineering approach to understanding markets; at the same time, however, the engineering approach is necessary, as long as you don’t start thinking it gives you a monopoly on the truth.

    The funny thing about this specific topic, coming back to my original point, is that by the same criteria which ‘prove’ that markets are fractal, you can prove that markets are random, if you apply the correct process (not GBM, obviously). The definition of randomness you provide has only a brief mention of financial mathematics; if you want insight and cutting-edge research, the most appropriate author on this topic is E. Platen.

    The most important practical application of models in modern finance is managing risk, something which has been done very poorly recently by formerly large investment banks. It is not, as implied by the lecture, asset allocation… When people find out what I do, they automatically think I should know where they should invest; I think the more you know about markets, the more you realise that there is no objective answer to that question (again, as suggested by lecture 5). For managing risk the most appropriate models are based on randomness. This doesn’t mean that markets are random, but neither are they deterministic or complex or chaotic. Markets are sometimes all of those things, none of those things, one or more of those things, or something else altogether, but unlike mountains, they are always changing (well, mountains do erode, whatever). Also unlike mountains, they are not governed by a force of gravity in overwhelmingly one direction.

    What is really interesting about fractals and markets is the question: why? Or even just fractals in general: why are things fractal? Why would markets follow this law? Get the students thinking about that, instead of bombarding them with derivations that leave them confused or, worse, in a common state of being convinced “by exposure to information they don’t understand”.
