The need for pluralism in economics


For decades, mainstream economists have reacted to criticism of their methodology mainly by dismissing it, rather than engaging with it. And the customary form that dismissal has taken is to argue that critics and purveyors of alternative approaches to economics simply aren’t capable of understanding the mathematics the mainstream uses. The latest instalment of this slant on non-mainstream economic theory appeared in Noah Smith’s column in Bloomberg View: “Economics Without Math Is Trendy, But It Doesn’t Add Up”.

Figure 1: Noah’s tweet announcing his blog post

While Noah’s column made some valid points (and there’s been some good off-line discussion between us too), its core message presented five conflicting fallacies as fact:

  • The first (proclaimed in the title supplied by the Bloomberg sub-editor rather than by Noah) is that non-mainstream (or “heterodox”) economics is not mathematical;
  • The second is that the heterodox mathematical models that do exist can’t be used to make forecasts;
  • The third, that there are some that can make forecasts, but these have so many parameters that they are easily “over-fitted” to existing data and therefore useless at prediction (and they lack sufficient attention to human behaviour);
  • The fourth, that though heterodox economists make a song and dance about developing “stock-flow consistent” models, mainstream models are stock-flow consistent too; and
  • The fifth, that agent-based non-mainstream approaches haven’t produced decent results as yet, and may never do so.

I’ll consider each of these assertions one by one, because they certainly can’t be addressed together.


Not mathematical?

There is indeed a wing of heterodox economics that is anti-mathematical. Known as “Critical Realism” and centred on the work of Tony Lawson at Cambridge UK, it attributes the failings of economics to the use of mathematics itself. Noah has been less than complimentary about this particular subset of heterodox economics in the past—see Figure 2.

Figure 2: Noah’s reaction to critical realism

What Noah might not know is that many heterodox economists are critical of this approach as well. In response to a paper by Lawson that effectively defined “Neoclassical” economics as any economics that made use of mathematics (which would define me as a Neoclassical!), Jamie Morgan edited a book of replies to Lawson entitled What is Neoclassical Economics? (including a chapter by me). While the authors agreed with Lawson’s primary point that economics has suffered from favouring apparent mathematical elegance above realism, several of us asserted that mathematical analysis is needed in economics, if only for the reason that Noah gave in his article:

At the end of the day, policymakers and investors need to make quantitative decisions — how much to raise or lower interest rates, how big of a deficit to run, or how much wealth to allocate to Treasury bonds. (Noah Smith, August 8 2016)

The difference between mainstream and heterodox economists therefore isn’t primarily that the former is mathematical while the latter is verbal. It’s that heterodox mathematical economists accept Tony Lawson’s key point that mathematical models must be grounded in realism; we just reject, to varying degrees, Tony’s argument that mathematics inherently makes models unrealistic.

In contrast, the development of mainstream modelling has largely followed Milton Friedman’s edict that the realism of a model isn’t important—all that matters is that it generates realistic predictions:

Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense)… the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic,” for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions. (Friedman 1966, The Methodology of Positive Economics; emphasis added)

Even on this criterion, mainstream macroeconomics is a failure, given the occurrence of a crisis that it believed could not happen. But this criterion alone isn’t sufficient: realism does matter.

If Friedman’s “only predictive accuracy matters, not realism” criterion had been applied in astronomy, we would still be using Ptolemy’s model that put the Earth at the centre of the Universe with the Sun, Moon, planets and stars orbiting it, because the model yielded quite accurate predictions of where celestial objects would appear to be in the sky centuries into the future. Its predictions were in fact more accurate than the initial predictions from Galileo’s heliocentric model, even though Galileo’s core concept—that the Sun was the centre of the solar system, not the Earth—was true, while Ptolemy’s Earth-centric paradigm was false.

Friedman’s argument was simply bad methodology, and it’s led to bad mainstream mathematical models that make screamingly unrealistic assumptions in order to reach desired results.

The pivotal unrealistic assumption of mainstream economics prior to the crisis was that “economic agents” have “rational expectations”. It sounds reasonable as a sound bite—who wants to be accused of having “irrational expectations”?—but it actually means assuming (a) that people have an accurate model of the economy in their heads that guides their behaviour today and (b) that this model happens to be the same as the one the Neoclassical author has dreamed up in his (it’s rarely her, on either side of economics) paper. And there are many, many other unrealistic assumptions.

Noah’s argument that heterodox economics is less mathematical than the mainstream was truer some decades ago; today, with so many physicists and mathematicians in the “heterodox” camp, it’s a very dated defence of the mainstream.

The standard riposte to critics of mainstream economics used to be that they were critical simply because they lacked the mathematical skills to understand Neoclassical models, and—the argument Noah repeats here—that their papers were just verbal hand-waving that couldn’t be given precise mathematical form, and therefore couldn’t be tested:

Also, vague ideas can’t easily be tested against data and rejected. The heart of science is throwing away models that don’t work. One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models. But ideas like Minsky’s, with no equations or quantitative predictions, are almost impossible to reject — if they seem not to fit with events, they can simply be reinterpreted. People will forever argue about what Minsky meant, or John Maynard Keynes, or Friedrich Hayek. (Noah Smith, 8th August 2016)

“Ideas like Minsky’s, with no equations”? If it’s equations and Minsky you want, try this macroeconomics paper: “Destabilizing a stable crisis: Employment persistence and government intervention in macroeconomics” (Costa Lima, Grasselli, Wang & Wu 2014). And I defy any Neoclassical to tell the authors (including mathematician Matheus Grasselli, whose PhD was entitled “Classical and Quantum Information Geometry”) that they lack the mathematical ability to understand Neoclassical models.

The mathematics used in heterodox papers like this one is in fact harder than that used by the mainstream, because it rejects a crucial “simplifying assumption” that mainstreamers routinely use to make their models easier to handle: imposing linearity on unstable nonlinear systems.

Imposing linearity on a nonlinear system is a valid procedure if, and only if, the equilibrium around which the model is linearized is stable. But the canonical model from which DSGE models were derived—Ramsey’s 1928 optimal savings model—has an unstable equilibrium shaped like a horse’s saddle. Imagine trying to drop a ball onto a saddle so that it doesn’t slide off—impossible, no?

Not if you’re a “representative agent” with “rational expectations”! Neoclassical modelers assume that the “representative agents” in their models are in effect clever enough to be able to drop a ball onto the economic saddle and have it remain on it, rather than sliding off (they call it imposing a “transversality condition”).

The mathematically more valid approach is to accept that, if your model’s equilibria are unstable, then your model will display far-from-equilibrium dynamics, rather than oscillating about and converging on an equilibrium. This requires you to understand and apply techniques from complex systems analysis, which is much more sophisticated than the mathematics Neoclassical modelers use (see the wonderful free ChaosBook for details). The upside of this effort though is that since the real world is nonlinear, you are much closer to capturing it with fundamentally nonlinear techniques than you are by pretending to model it as if it is linear.
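The saddle problem is easy to see in a two-equation toy system. The sketch below is not the Ramsey model itself, only the saddle-point geometry it shares: one stable direction, one unstable direction, and convergence only for starting points placed exactly on the stable manifold.

```python
# Toy saddle-point system (illustrative only, no economic content):
#   x' = x   (unstable direction)
#   y' = -y  (stable direction)
# The origin is an equilibrium whose stable manifold is the y-axis.
def simulate(x0, y0, dt=0.01, steps=2000):
    """Crude Euler integration of the saddle system."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = x + x * dt, y - y * dt
    return x, y

# Start exactly on the stable manifold: the state heads to equilibrium.
x_on, y_on = simulate(0.0, 1.0)

# Start a hair off the manifold: the unstable direction eventually
# dominates. "Jumping" exactly onto the saddle path is what the
# transversality condition quietly assumes agents can do.
x_off, y_off = simulate(1e-6, 1.0)

print(x_on, y_on)    # x stays at 0, y decays toward 0
print(x_off, y_off)  # x has grown by many orders of magnitude
```

Linearizing around such an equilibrium and treating it as stable discards exactly the divergent behaviour that matters.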

Mainstream economists are belatedly becoming aware of this mistake, as Olivier Blanchard stated recently:

These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms (and the econometricians who apply statistics to economics) could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity…

Thinking about macroeconomics was largely shaped by those assumptions. We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time. Instead of talking about fluctuations, we increasingly used the term “business cycle.” Even when we later developed techniques to deal with nonlinearities, this generally benign view of fluctuations remained dominant. (Blanchard September 2014, “Where Danger Lurks”)

This is not a problem for heterodox economists, since they don’t take as an article of faith that the economy is stable. But it does make it much more difficult to evaluate the properties of a model, fit it to data, and so on. For this, we need funding and time—not a casual dismissal of the current state of heterodox mathematical economics.

Can’t make forecasts?

Noah is largely correct that heterodox models aren’t set up to make numerical forecasts (though there are some that are). Instead the majority of heterodox models are set up to consider existing trends, and to assess how feasible it is that they can be maintained.

Far from being a weakness, this has been a strength of the heterodox approach: it enabled Wynne Godley to warn from as long ago as 1998 that the trends in the USA’s financial flows were unsustainable, and that therefore a crisis was inevitable unless these trends were reversed. At the same time that the mainstream was crowing about “The Great Moderation”, Wynne was warning that “Goldilocks is doomed”.

Wynne’s method was both essentially simple, and at the same time impossible for the mainstream to replicate, because it considered monetary flows between economic sectors—and the stocks of debt that these flows implied. The mainstream can’t do this, not because it’s impossible—it’s basically accounting—but because they have falsely persuaded themselves that money is neutral, and they therefore don’t consider the viability of monetary flows in their models.

Dividing the economy into the government, private and international sectors, Godley pointed out that the flows between them must sum to zero: an outflow from any one sector is an inflow to one of the others. Since the public sector under Clinton was running a surplus, and the trade balance was negative, the only way this could be sustained was for the private sector to “run a deficit”—to increase its debt to the banking sector. This implied unsustainable levels of private debt in the future, so that the trends that gave rise to “The Great Moderation” could not continue. As Wynne and Randall Wray put it:

It has been widely recognized that there are two black spots that blemish the appearance of our Goldilocks economy: low household saving (which has actually fallen below zero) and the burgeoning trade deficit. However, commentators have not noted so clearly that public sector surpluses and international current account deficits require domestic private sector deficits. Once this is understood, it will become clear that Goldilocks is doomed. (Godley & Wray, “Is Goldilocks Doomed?”, March 2000; emphasis added)
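The accounting behind this warning takes only a few lines. A minimal sketch of the three-balance identity, with illustrative figures in percent of GDP (not actual late-1990s data):

```python
# Godley's sectoral balances: government + private + foreign sum to zero,
# because every outflow from one sector is an inflow to another.
# The numbers below are made up for illustration.

government_balance = +2.5   # a public sector surplus, as under Clinton
current_account    = -3.5   # a trade/current-account deficit

# The foreign sector's balance is the mirror image of the current account:
foreign_balance = -current_account

# The identity then leaves the private sector no choice:
private_balance = -(government_balance + foreign_balance)

print(private_balance)  # negative: the private sector must run a deficit
assert abs(private_balance + government_balance + foreign_balance) < 1e-12
```

With a public surplus and an external deficit, the identity forces the private sector into deficit, which is exactly the debt build-up Godley flagged as unsustainable.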

Had Wynne’s warnings been heeded, the 2008 crisis might have been averted. But of course they weren’t: instead mainstream economists generated numerical forecasts from their DSGE models that extrapolated “The Great Moderation” into the indefinite future. And in 2008, the US (and most of the global economy) crashed into the turning point that Wynne warned was inevitably coming.

Wynne wasn’t the only heterodox economist predicting a future crisis for the US and global economy, at a time when mainstream Neoclassical modellers were wrongly predicting continued economic bliss. Others following Hyman Minsky’s “Financial Instability Hypothesis” made similar warnings.

Noah was roughly right, but precisely wrong, when he claimed that “Minsky, though trained in math, chose not to use equations to model the economy — instead, he sketched broad ideas in plain English.”

In fact, Minsky began with a mathematical model of financial instability based on Samuelson’s multiplier-accelerator model (Minsky, 1957, “Monetary Systems and Accelerator Models,” The American Economic Review 47, pp. 860–883), but abandoned that for a verbal argument later (wonkish hint to Noah: abandoning this was a good idea, because Samuelson’s model is economically invalid. Transform it into a vector difference equation and you’ll see that its matrix is invertible).

Minsky subsequently attempted to express his model using Kalecki’s mathematical approach, but never quite got there. However, that didn’t stop others—including me—trying to find a way to express his ideas mathematically. I succeeded in August 1992 (with the paper being published in 1995), and the most remarkable thing about it—apart from the fact that it did generate a “Minsky Crisis”—was that the crisis was preceded by a period of apparent stability. That is in fact what transpired in the real world: the “Great Recession” was preceded by the “Great Moderation”. This was not a prediction of Minsky’s verbal model itself: it was the product of putting that verbal model in mathematical form, and then seeing how it behaved. It in effect predicted that, if a Minsky crisis were to occur, then it would be preceded by a period of diminishing cycles in inflation (with the wages share of output as a proxy for inflation) and unemployment.

The behaviour was so striking that I finished my paper noting it:

From the perspective of economic theory and policy, this vision of a capitalist economy with finance requires us to go beyond that habit of mind which Keynes described so well, the excessive reliance on the (stable) recent past as a guide to the future. The chaotic dynamics explored in this paper should warn us against accepting a period of relative tranquility in a capitalist economy as anything other than a lull before the storm. (Keen, S. 1995, “Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis’,” Journal of Post Keynesian Economics 17, p. 634)

This was before the so-called “Great Moderation” was apparent in the data, let alone before Neoclassical economists like Ben Bernanke popularised the term. This was therefore an “out of sample” prediction of my model—and of Minsky’s hypothesis. Had the 2008 crisis not been preceded by such a period, my model—and, to the extent that it captured its essence, Minsky’s hypothesis as well—would have been disproved. But the phenomenon that my Minsky model predicted as a precursor to crisis actually occurred—along with the crisis itself.

Even without quantitative predictions, these heterodox models—Godley’s stock-flow consistent projections and my nonlinear simulations—fared far better than did Neoclassical models with all their econometric bells and whistles.
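For readers who want to see what “putting a verbal model in mathematical form” looks like, here is a deliberately stripped-down sketch of the Goodwin growth cycle that models in this tradition (including the 1995 paper) build on. The debt dynamic that turns these perpetual cycles into diminishing cycles followed by breakdown is omitted for brevity, and all parameter values are illustrative, not calibrated:

```python
# Goodwin growth-cycle core: wages share w and employment rate l chase
# each other in a predator-prey fashion. Parameters are illustrative.
alpha, beta = 0.025, 0.02      # productivity growth, labour-force growth
nu = 3.0                       # capital-to-output ratio
rho, l_star = 0.07, 0.96       # Phillips-curve slope and its neutral rate

def step(w, l, dt):
    dw = w * rho * (l - l_star)             # wages respond to employment
    dl = l * ((1 - w) / nu - alpha - beta)  # employment follows profit-driven growth
    return w + dw * dt, l + dl * dt

w, l = 0.85, 0.95
w_eq = 1 - nu * (alpha + beta)  # equilibrium wage share (0.865 here)
history = []
for _ in range(20000):          # simulate 200 "years" with dt = 0.01
    w, l = step(w, l, 0.01)
    history.append(w)

# The wage share cycles around its equilibrium rather than converging:
crossings = sum(1 for a, b in zip(history, history[1:])
                if (a - w_eq) * (b - w_eq) < 0)
print(crossings)  # repeated crossings: sustained cycles, no convergence
```

Even this toy version makes the methodological point: a nonlinear system can cycle endlessly without ever settling on its equilibrium, something a linearized model assumes away.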

Over-fitted to the data?

It is indeed true that many heterodox models have numerous parameters, and that a judicious choice of parameter values can enable a model to closely fit the existing data, but be useless for forecasting because it tracks the noise in the data, rather than the causal trends. Of course, this is equally true of mainstream models as well—compare for example the canonical Neoclassical DSGE paper “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach” (Smets and Wouters 2007) with the equally canonical Post Keynesian “Stock-Flow Consistent Modelling” (SFCM) paper “Fiscal Policy in a Stock-Flow Consistent (SFC) Model” from the same year (Godley and Lavoie 2007). Both are linear models, and the former has substantially more parameters than the latter.

The fact that this is a serious problem for DSGE models—and not a reason why they are superior to heterodox models—is clearly stated in a new note by Olivier Blanchard:

The models … come, however, with a very large number of parameters to estimate…, a number of parameters are set a priori, through “calibration.” This approach would be reasonable if these parameters were well established empirically or theoretically… But the list of parameters chosen through calibration is typically much larger, and the evidence often much fuzzier… In many cases, the choice to rely on a “standard set of parameters” is simply a way of shifting blame for the choice of parameters to previous researchers. (Olivier Blanchard, “Do DSGE Models Have a Future?”, August 2016; emphasis added)

What really matters, however, as a point of distinction between two approaches that share this same flaw, is not the flaw itself, but the different variables that the models regard as essential determinants of the economy’s behaviour. There we see chalk and cheese—with the heterodox choices being far more palatable, because they include financial sector variables, whereas the pre-crisis DSGE models did not.

The real problems with fitting a model to data arise not from over-fitting to what ends up being noise rather than signal, but from fitting a model to real-world data when the model omits crucial determinants of what actually happens in the real world, and from developing linear models of a fundamentally nonlinear real world. The former error guarantees that your carefully fitted model will match historical data superbly, but will be wildly wrong about the future because it omits key factors that determine it. The latter error guarantees that your model can extrapolate existing trends—if it includes the main determinants of the economy—but it cannot capture turning points: a linear model extrapolates in straight lines, and straight lines don’t bend.
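A hypothetical numerical illustration of the turning-point problem: fit a straight line to the early, rising stretch of a logistic curve (a curve that does have a turning point), then extrapolate the way a linear forecasting model must.

```python
import math

# Hypothetical data: the series follows a logistic curve centred at t = 10,
# but we only observe the early stretch and fit a straight line to it.
def logistic(t):
    return 1.0 / (1.0 + math.exp(-(t - 10.0)))

ts = list(range(4, 11))          # early sample, still accelerating
ys = [logistic(t) for t in ts]

# Ordinary least squares, done by hand to keep the sketch self-contained
n = len(ts)
t_bar = sum(ts) / n
y_bar = sum(ys) / n
slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
         / sum((t - t_bar) ** 2 for t in ts))
intercept = y_bar - slope * t_bar

# The straight line keeps climbing forever; the logistic saturates at 1.
pred_30 = intercept + slope * 30
true_30 = logistic(30)
print(pred_30, true_30)  # the linear extrapolation overshoots badly
```

In-sample the linear fit looks respectable; out-of-sample it sails straight past the turning point, which is the DSGE failure mode in miniature.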

On the “get the variables right” issue, most modern heterodox models are superior to mainstream DSGE ones, simply because most of them include the financial system and monetary stocks and flows in an intrinsic way.

On the linearity issue, most heterodox SFCM models are linear, and are therefore as flawed as their Neoclassical DSGE counterparts. But it then comes down to how these linear models are used. In the Neoclassical case, they are used to make numerical forecasts, and therefore extrapolate existing trends into the future. In the heterodox case, they are used to ask whether existing trends can be sustained.

The former proclivity led DSGE modellers—such as the team behind the OECD’s Economic Outlook Report—to extrapolate the relative tranquillity of 1993–2007 into the indefinite future in June of 2007:

Recent developments have broadly confirmed this prognosis. Indeed, the current economic situation is in many ways better than what we have experienced in years. Against that background, we have stuck to the rebalancing scenario. Our central forecast remains indeed quite benign: a soft landing in the United States, a strong and sustained recovery in Europe, a solid trajectory in Japan and buoyant activity in China and India. In line with recent trends, sustained growth in OECD economies would be underpinned by strong job creation and falling unemployment. (Cotis 2007, p. 7; emphases added)

In contrast, Godley and Wray used the SFCM approach (without an actual model) to conclude that the 1993–2007 trends were unsustainable, and that without a change in policy, a crisis was inevitable:

We hasten to add that we do not believe this projection. The economy will not continue to grow; the projected budget surpluses will not be achieved; private sector spending will not continue to outstrip income; and growth of private sector indebtedness will not accelerate… As soon as private sector spending stops growing faster than private sector income, GDP will stop growing. (Godley & Wray, “Is Goldilocks Doomed?”, March 2000, p. 204)

This leads to Noah’s next false point, that Neoclassical models do what heterodox ones do anyway.

We’re doing it anyway?

There are three main strands in heterodox macro modelling: what is known as “Stock-Flow Consistent Modelling” (SFCM), which was pioneered by Wynne Godley; nonlinear system dynamics modelling; and heterogeneous multi-agent modelling (there are other approaches too, including structurally estimated models and big data systems, but these are the main ones). Noah made a strong claim about the stock-flow consistent strand and Neoclassical modelling:

Some heterodox macroeconomists, it’s true, do have quantitative theories. One is “stock-flow consistent” models (a confusing name, since mainstream models also maintain consistency between stocks and flows). These models, developed mainly by researchers at the Levy Economics Institute of Bard College, are large systems of many equations, usually linear equations — for an example, see this paper by Levy economists Dimitri B. Papadimitriou, Gennaro Zezza and Michalis Nikiforos. (Noah Smith, 8th August 2016)

I agree the name is confusing—perhaps it would be better if the name were “Monetary Stock-Flow Consistent Models”. With that clarification, there is no way that Neoclassical DSGE models are stock-flow consistent in a monetary sense.

Even after the crisis, most Neoclassical DSGE models don’t include money or debt in any intrinsic way (the financial sector turns up as another source of “frictions” that slow down a convergence to equilibrium), and they certainly don’t treat the outstanding stock of private debt as a major factor in the economy.

Heterodox SFCM models do include these monetary and debt flows, and therefore the stocks as well. A trend—like that from the mid-1990s till 2000—that requires private debt to rise faster than GDP indefinitely will be identified as a problem for the economy by a heterodox SFCM model, but not by a Neoclassical DSGE one. A Neoclassical author who believes the fallacious Loanable Funds model of banking is also likely to wrongly conclude that the level of private debt is irrelevant (except perhaps during a crisis).

This makes heterodox SFCM models—such as the Kingston Financial Balances Model (KFBM) of the US economy produced by researchers at my own Department—very different to mainstream DSGE models.
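The bookkeeping discipline these models impose can be shown with a toy transactions-flow matrix in the Godley tradition. Sector names and figures below are made up for illustration, not taken from any actual model:

```python
# Toy transactions-flow matrix: columns are sectors, rows are transactions,
# and every row must sum to zero because one sector's outflow is another
# sector's inflow. All figures are invented.
sectors = ["households", "firms", "government", "banks"]
flows = {
    # transaction:        (hh,     firms,   govt,   banks)
    "wages":              (+100.0, -100.0,   0.0,    0.0),
    "consumption":        ( -80.0,  +80.0,   0.0,    0.0),
    "taxes":              ( -15.0,   -5.0,  +20.0,   0.0),
    "govt spending":      (  +5.0,  +20.0, -25.0,    0.0),
    "interest on loans":  (   0.0,  -10.0,   0.0,  +10.0),
    "new bank lending":   (   0.0,  +15.0,   0.0,  -15.0),
}

# Consistency check #1: every transaction row sums to zero (no leaks).
for name, row in flows.items():
    assert abs(sum(row)) < 1e-9, f"leak in transaction: {name}"

# Each sector's column sum is its net financial balance for the period;
# consistency check #2: the balances themselves also sum to zero.
balances = [sum(row[i] for row in flows.values()) for i in range(len(sectors))]
print(dict(zip(sectors, balances)))
assert abs(sum(balances)) < 1e-9
```

Because every flow must come from somewhere and go somewhere, an unsustainable trend (say, firms’ borrowing growing without limit) shows up in the matrix; a model without the monetary rows simply cannot see it.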

No decent results from Agent-Based Models?

Noah concludes with the statement that what is known as “Agent Based Modelling” (ABM), which is very popular in heterodox circles right now, hasn’t yet produced robust results:

A second class of quantitative heterodox models, called “agent-based models,” have gained some attention, but so far no robust, reliable results have emerged from the research program. (Noah Smith, 8th August 2016)

Largely speaking, this is true—if you want to use these models for macroeconomic forecasting. But they are useful for illustrating an issue that the mainstream avoids: “emergent properties”. A population, even of very similar entities, can generate results that can’t be extrapolated from the properties of any one entity taken in isolation. My favourite example here is what we commonly call water. There is no such thing as a “water molecule”, or a “steam molecule”, let alone a “snowflake molecule”. All these peculiar, and to life essential, features of H2O are “emergent properties” of the interaction of large numbers of H2O molecules under different environmental conditions. None of them are properties of a single molecule of H2O taken in isolation.
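Emergence is easy to demonstrate in a deliberately tiny agent-based sketch (this is an illustration of the concept, not a macro model): identical agents on a ring each adopt the majority opinion of their immediate neighbourhood, and although no individual agent aims at it, large single-opinion blocs emerge at the aggregate level.

```python
import random

# Minimal agent-based illustration of emergence: agents on a ring hold
# opinion 0 or 1 and repeatedly adopt the local majority (self plus two
# neighbours). Bloc formation is a property of the population, not of
# any individual rule.
random.seed(42)
N = 200
state = [random.choice([0, 1]) for _ in range(N)]

def boundaries(s):
    # number of adjacent disagreeing pairs on the ring
    return sum(1 for i in range(len(s)) if s[i] != s[(i + 1) % len(s)])

before = boundaries(state)
for _ in range(50):  # synchronous majority-of-three updates
    state = [1 if state[i - 1] + state[i] + state[(i + 1) % N] >= 2 else 0
             for i in range(N)]
after = boundaries(state)

print(before, after)  # disagreement boundaries shrink: blocs have emerged
```

The aggregate pattern (a few large blocs) cannot be read off from the rule a single agent follows, which is the point the water example makes about H2O.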

Neoclassical economists unintentionally proved this about isolated consumers as well, in what is known as the Sonnenschein-Mantel-Debreu theorem. But they have sidestepped its results ever since.

The theorem establishes that even if an economy consists entirely of rational utility maximizers who each, taken in isolation, can be shown to have a downward-sloping individual demand curve, the market demand curve for any given market can theoretically take any polynomial shape at all:

Can an arbitrary continuous function … be an excess demand function for some commodity in a general equilibrium economy?… we prove that every polynomial … is an excess demand function for a specified commodity in some n commodity economy… every continuous real-valued function is approximately an excess demand function. (Sonnenschein, 1972, “Market Excess Demand Functions,” Econometrica 40, pp. 549–563; quotes from pp. 549–550)

Alan Kirman suggested the proper reaction to this discovery almost 30 years ago: that the decision by the Neoclassical school, at the time of the second great controversy over economic theory, to abandon class-based analysis was unsound. Since even such a basic concept (to the Neoclassical school) as a downward-sloping demand curve could not be derived by extrapolating from the properties of an isolated individual, the only reasonable procedure was to work at the level of groups with “collectively coherent behaviour”—what the Classical School called “social classes”:

If we are to progress further we may well be forced to theorise in terms of groups who have collectively coherent behaviour. Thus demand and expenditure functions if they are to be set against reality must be defined at some reasonably high level of aggregation. The idea that we should start at the level of the isolated individual is one which we may well have to abandon. There is no more misleading description in modern economics than the so-called microfoundations of macroeconomics which in fact describe the behaviour of the consumption or production sector by the behaviour of one individual or firm. If we aggregate over several individuals, such a model is unjustified. (Kirman, A. 1989, “The Intrinsic Limits of Modern Economic Theory: The Emperor Has No Clothes,” Economic Journal 99(395), pp. 126–139)

Instead of taking this sensible route, Neoclassical economists—mainly without consciously realising it—took the approach of making the absurd assumption that the entire economy could be treated as a single individual, in the fiction of a “representative agent”.

Mendacious textbooks played a large role here—which is why I say that they did this without realising that they were doing so. Most of today’s influential Neoclassical economists would have learnt their advanced micro from Hal Varian’s textbook. Here’s how Varian “explained” the Sonnenschein-Mantel-Debreu results:

…it is sometimes convenient to think of the aggregate demand as the demand of some ‘representative consumer’… The conditions under which this can be done are rather stringent, but a discussion of this issue is beyond the scope of this book… (Varian 1984, p. 268)

The “convenience” of the “representative consumer” led directly to Real Business Cycle models of the macroeconomy, and thence to DSGE—which Neoclassicals are now beginning to realise was a monumental blunder.

Multi-agent modelling may not lead to a new policy-oriented theory of macroeconomics. But it acquaints those who do it with the phenomenon of emergent properties—that an aggregate does not function as a scaled-up version of the entities that comprise it. That’s a lesson that Neoclassical economists still haven’t absorbed.

Previous periods of crisis in economic theory

Since I began this post by calling the current debate the “5th great conflict over the nature of economics”, I’d better detail the first three (the 4th being Keynes’s battle in the 1930s). These were:

  • The “Methodenstreit” dispute between the Austrian and German Historical Schools—which was a dispute about a priori reasoning versus empirical data;
  • The Neoclassical revolt against the Classical school after Marx had turned the latter into the basis for a critique of capitalism, rather than a defence of it as it was with Smith and Ricardo; and
  • The event that I personally identify as the real point at which economics went wrong: Smith’s elevation of “the division of labour” as the source of rising productivity in capitalism over the Physiocratic argument that human productivity actually emanated from employing the energy of the Sun. Though the Physiocrats were wrong that agriculture was the only “productive” sector—manufacturing being “sterile” according to them, since all it did was transform the outputs of agriculture into different forms, when in fact it harnesses “free energy” (energy from the Sun, fossil fuels and nuclear processes) to do useful work even more effectively than agriculture—they were right that harnessing free energy was the basis of the productivity of capitalism.

I’ll address this very last wonkish issue in a future post.

About Steve Keen

I am Professor of Economics and Head of Economics, History and Politics at Kingston University London, and a long time critic of conventional economic thought. As well as attacking mainstream thought in Debunking Economics, I am also developing an alternative dynamic approach to economic modelling. The key issue I am tackling here is the prospect for a debt-deflation on the back of the enormous private debts accumulated globally, and our very low rate of inflation.