Elizabeth Kolbert

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Continue reading »


Consciousness of Sheep

When the English and French armies landed on the shores of the Crimean peninsula in September 1854 they were ill-prepared to fight a war. Logistics – the science of provisioning armies at a distance – was in its infancy, and armies were still expected to forage for at least some of their supply needs by pillaging the local agriculture. Supply shipping was required to cross the Bay of Biscay and round the Iberian Peninsula before sailing the entire length of the Mediterranean, passing through the straits and sailing halfway across the Black Sea. Ships regularly arrived in the wrong order, or had been loaded so chaotically that non-essential cargo had to be unloaded in order to access essential equipment, ammunition and rudimentary medical supplies. In order to build a base for the siege of Sevastopol, the support fleet had to be unloaded and despatched to the opposite shore of the Black Sea to harvest Turkish timber. The only saving grace was that the internal communications of feudal Russia were even worse.

One problem that the British and French engineers in the Crimea did resolve was how to get supplies, once unloaded at the port, up the steep surrounding hills and onto the plateau above, where the troops were fighting. The technology they deployed was to be central to the human activity of mass slaughter for the next century: they built a railway. Railways were in their infancy at this time. By a twisted chain of events, however, the Crimean War was to have a huge impact on the development of railways on the other side of the planet.

By 1853 Britain had become dependent upon imports to feed its growing urban population even in peacetime. With the outbreak of war, Britain was to engage in another aspect of warfare that would continue for another century – the import of food and materials across the Atlantic from the USA. American farmers saw a big boost in demand as a result of the war, coming on top of the early industrialisation and urbanisation of US towns and cities. In order to profit from the growth in demand, the farmers had, somehow, to move their produce from the countryside to the cities and the ports.

Railways provided a potential solution. In 1837, however, the USA had been blighted by the collapse of an earlier railway investment bubble; so there was little appetite among investors for a repeat performance. It was at this point that one of the most seductive – and deadly – ideas ever entertained by a human rose to the forefront of our collective consciousness. As Timothy J Riddiough records:

“In the early 1850s the owners of the La Crosse & Milwaukee railroad hit on an idea. Why not approach local farmers, particularly those farmers whose property lay near the path of the railroad line and its depots, and ask them to mortgage their farm to the railroad in return for shares of stock in the railroad? The dividends from the stock would be at least equal to the interest required on the mortgage, where the dividend-interest swap negated any need for the farmer to come out of pocket for interest payments on the debt. In fact, no cash changed hands at all between the farmer and the railroad in this debt-for-equity swap…

“Now, the second step of the transaction was for the railroad to monetise the RRFMs so that it could purchase the track and equipment necessary to expand the line. The solution to this problem resulted in what we believe to be the first case of mortgage securitisation executed in the United States. It was a railroad farm mortgage-backed security – effectively a covered bond offered by the railroad to potential investors located on the east coast and in Europe.”

Instead of investing directly in the building of the railways, bondholders were investing in a derivative instrument that paid them a share of the growing income from the farms. Provided that the growing demand for farm produce continued, and provided that the railways continued to profit from transporting the produce from the farms to the cities and ports, investors could get rich. What could possibly go wrong?

We know to our cost today what goes wrong when the banking and finance corporations inflate derivative bubbles. But in the early 1850s this was a new idea. When the Crimean War came to an end in February 1856, European demand for US agricultural produce slumped. Farmers began to default on their mortgages; and the derivatives based upon them began to fail. In 1857 the inevitable panic took hold as investors desperately sought to cut their losses.

Had anyone had the sense to drive a stake through the heart of the idea of securitised derivatives in 1857 the world might have been spared untold suffering. But securitisation is too seductive; and its short-term rewards too great for it to remain in its tomb for long. As Cathy M Kaplan notes:

“While writers point to the origins of securitisation in a number of precedents, including the farm railroad mortgage bonds of the 1860s, the mortgage-backed bonds of the 1880s and a form of securitisation of mortgages before the 1929 crash, the modern era of securitisation began in 1970. That was when the Department of Housing and Urban Development created the first modern residential mortgage-backed security when the Government National Mortgage Association (Ginnie Mae or GNMA) sold securities backed by a portfolio of mortgage loans.”

In its modern form, securitisation is supposed to make the issuing of loans safe for the banks by standardising risks. To understand this, imagine that you are the manager of a bank. You have issued 100 loans to businesses and households that you believe to be creditworthy. However, you also know from the statistics that, in the course of these loans being repaid, three percent will default… but you do not know which. One way around this is to take your hundred loans (stage 1), including the three which will default, and divide the income from them into 100 pieces (stage 2). These can then be repackaged into investment bonds – securitised derivatives – each made up of one percent of the income from each of your 100 loans (stage 3). You now have 100 derivatives that each carry the same risk, allowing you to sell them to third-party investors with a reasonable degree of certainty that they will deliver the promised return.
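The equal-slice arithmetic described above can be sketched in a few lines of Python. This is a hypothetical illustration only – the loan size is invented, and the three-percent default figure is taken from the example in the text:

```python
def securitize(loan_payments, n_slices=100):
    """Pool the income from all loans, then split the pool into equal
    slices: each slice carries a 1/n_slices share of every loan, so all
    slices bear the same blended default risk."""
    pool = sum(loan_payments)
    return [pool / n_slices for _ in range(n_slices)]

# Stage 1: 100 loans, each due to pay 1,000 -- but three default and pay 0.
payments = [1000] * 97 + [0] * 3

# Stages 2-3: divide the pooled income into 100 identical derivatives.
slices = securitize(payments)

print(len(slices), slices[0])  # 100 slices, each paying 970.0
```

Instead of 97 investors being repaid in full while three lose everything, all 100 investors receive 970 – the three-percent default rate is "built in", which is the standardisation of risk the text describes.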


Households and businesses apparently benefit from this arrangement because banks are able to be less conservative in their lending practices. Banks benefit because instead of waiting – sometimes for decades – for the return on their investment, they can be repaid more or less immediately. Investors also, apparently, benefit because of the standardisation of risk – the three percent default rate has already been built in.

Governments had, in fact, driven at least a partial stake through the heart of securitisation in the years following the 1929 crash. The impact of depression, and the rise of political extremism that plunged the world into a conflict that killed perhaps 85 million people, convinced politicians in the aftermath that it should never be allowed to happen again. The introduction of mortgage-backed securities in the USA in the 1970s had been subject to close regulation, while elsewhere in the world they were still illegal.

This was to change in the 1980s as governments sought an alternative to the economics and politics of a post-war consensus that was rapidly breaking down. In the USA, this involved a gradual deregulation of the banking and finance sector. In the UK the change was far more abrupt, and can be seen in data compiled by Steve Keen showing Britain's historical debt-to-GDP ratio.

In 1980 the Thatcher government began its experiment in selling off public assets in order to kick-start economic growth. To begin with, they focused on the sale of Britain’s then massive stock of public housing. For ideological reasons, Thatcher believed that if people owned their homes rather than renting them, they would have a greater stake in society and would be less likely to engage in radical politics or trade union activity. And, more cynically, it is much harder to go out on strike when you have a mortgage to repay every month. The problem for Thatcher was that the people who she wanted to purchase their homes were not the kind of people Britain’s highly conservative banks would ever consider lending money to. So Thatcher had to follow the Americans and begin dismantling the regulations.

The big change came in 1986 with the “Big Bang” financial deregulation which finally removed the stake from the corpse of securitisation; paving the way for the banks to bring fire and brimstone down upon the people of the earth once more. At the time, the conservative nature of banking was regarded as sufficient to prevent a re-run of the financial chaos that banks have always created throughout our history. This, however, overlooked the fact that banks had only been conservative in their practices because of the regulation that Thatcher decided to remove. As Kaplan points out:

“During the late 1980s and the 1990s the securitisation market exploded. This was aided in the United States by the REMIC legislation and changes in SEC rules, and fuelled by the growth of money market funds, investment funds and other institutional investors, such as pension funds and insurance companies looking for product. In the 1990s commercial mortgages began to be securitised. Outside the US, countries including the UK and Japan adopted laws that allowed for securitisation. The vastly expanding global consumer culture, where access to credit to purchase everything from houses and cars to mobile phones and TVs was taken as a given, continued to stoke growth in the volume of securitisations.”

Those who lived through the period will remember the banking revolution that took place. At the end of the 1970s most people did not have a bank account. Wages were paid in cash, and any savings people had left over at the end of the week would be put in a building society – an organisation little different to a modern credit union. In the early 1980s, the Thatcher government used their control of public services and nationalised industries first to bribe and later to force millions of workers to open bank accounts in order to have their wages paid by bank transfer. These first bank accounts were mostly cash accounts – the account holder could only withdraw cash; they couldn’t write cheques or agree a loan or an overdraft. As the regulations were dismantled, however, access to credit became easier. Overdraft facilities were added to most accounts, and cheque books and cheque guarantee cards were distributed. Credit cards, loans and mortgages quickly followed. By the late 1990s Britain’s householders struggled to open their front doors because of the mountains of junk mail offering pre-approved loans, credit cards and mortgages.

Continue reading »

One of the great misconceptions of our time is the belief that we can move away from fossil fuels if we make suitable choices on fuels. In one view, we can make the transition to a low-energy economy powered by wind, water, and solar. In other versions, we might include some other energy sources, such as biofuels or nuclear, but the story is not very different.

The problem is the same regardless of what lower bound a person chooses: our economy is way too dependent on consuming an amount of energy that grows with each added human participant in the economy. This added energy is necessary because each person needs food, transportation, housing, and clothing, all of which are dependent upon energy consumption. The economy operates under the laws of physics, and history shows disturbing outcomes if energy consumption per capita declines.

There are a number of issues:

  • The impact of alternative energy sources is smaller than commonly believed.
  • When countries have reduced their energy consumption per capita by significant amounts, the results have been very unsatisfactory.
  • Energy consumption plays a bigger role in our lives than most of us imagine.
  • It seems likely that fossil fuels will leave us before we can leave them.
  • The timing of when fossil fuels will leave us seems to depend on when central banks lose their ability to stimulate the economy through lower interest rates.
  • If fossil fuels leave us, the result could be the collapse of financial systems and governments.

[1] Wind, water and solar provide only a small share of energy consumption today; any transition to the use of renewables alone would have huge repercussions.


According to BP 2018 Statistical Review of World Energy data, wind, water and solar accounted for only 9.4% of total energy consumption in 2017.

Figure 1. Wind, Water and Solar as a percentage of total energy consumption, based on BP 2018 Statistical Review of World Energy.

Even if we make the assumption that these types of energy consumption will continue to achieve the same percentage increases as they have achieved in the last 10 years, it will still take 20 more years for wind, water, and solar to reach 20% of total energy consumption.

Thus, even in 20 years, the world would need to reduce energy consumption by 80% in order to operate the economy on wind, water and solar alone. To get down to today’s level of energy production provided by wind, water and solar, we would need to reduce energy consumption by 90%.
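The arithmetic behind these two paragraphs can be checked directly. The 9.4% and 20% figures come from the text; the annual growth rate is simply what those figures imply, not a number from the source:

```python
wws_share_2017 = 0.094   # wind, water and solar share of consumption, 2017
target_share = 0.20      # share reached after ~20 more years, per the text

# Compound growth rate of the WWS share implied by reaching 20% in 20 years
implied_annual_growth = (target_share / wws_share_2017) ** (1 / 20) - 1

# Running the economy on WWS alone would mean shrinking total consumption
# down to whatever WWS then supplies:
cut_to_run_on_wws = 1 - target_share     # 80% reduction
cut_to_todays_wws = 1 - wws_share_2017   # ~91% reduction ("about 90%")

print(f"{implied_annual_growth:.1%}")                        # ~3.8% per year
print(f"{cut_to_run_on_wws:.0%} / {cut_to_todays_wws:.0%}")  # 80% / 91%
```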

[2] Venezuela’s example (Figure 1, above) illustrates that even if a country has an above average contribution of renewables, plus significant oil reserves, it can still have major problems.

One point people miss is that having a large share of renewables doesn’t necessarily mean that the lights will stay on. A major issue is the need for long distance transmission lines to transport the renewable electricity from where it is generated to where it is to be used. These lines must constantly be maintained. Maintenance of electrical transmission lines has been an issue in both Venezuela’s electrical outages and in California’s recent fires attributed to the utility PG&E.

There is also the issue of variability of wind, water and solar energy. (Note the year-to-year variability indicated in the Venezuela line in Figure 1.) A country cannot really depend on its full amount of wind, water, and solar unless it has a truly huge amount of electrical storage: enough to last from season-to-season and year-to-year. Alternatively, an extraordinarily large quantity of long-distance transmission lines, plus the ability to maintain these lines for the long term, would seem to be required.

Continue reading »


Reading you, one senses a certain tendency toward melancholy, but your sense of humour always wins out. Are these two things that go together?

Those things are not something you choose. They are a matter of temperament, I suppose. Irony tends to be a drain through which those melancholy humours flow. The greatest compliment we can be paid, I think, is that someone has laughed heartily while reading a page of yours. Becoming sad is relatively easy: one need only lift one's eyes a little toward the future and see what awaits us all, and if that isn't enough, turn one's head back. Now, if someone can put joy into people's hearts, that is reason to be glad. To me the historical avant-garde, judging by the paintings and literature of that period, has generally seemed disappointing, something of very short range compared with classical art. That the Victory of Samothrace and Duchamp's urinal have come to be equally museum-worthy is, I suppose, a joke that sooner or later someone in authority will stop finding funny, and will send the urinal packing. It is only a matter of time. If most of Louis XIV's great wigs have left the museums, why not the Mona Lisa's moustache? But we must grant the avant-garde precisely that, its humour: they laughed at everything and everyone, even if it later turned out they found it rather less amusing when someone laughed at them. The joviality of the avant-gardists is all very well; that is, the avant-garde matters a great deal as a point of departure. The pity is that afterwards they found neither the time nor the talent to get serious and do something more than handicrafts. Only in typography have their achievements always struck me as emphatic, brilliant and amusing, but I don't think the avant-gardists would have resigned themselves to going down in history merely as artisans.

More here

Where now for energy?


What happens when energy prices are at once too high for consumers to afford, but too low for suppliers to earn a return on capital?

That’s the situation now with petroleum, but it’s likely to apply across the gamut of energy supply as economic trends unfold. On the one hand, prosperity has turned down, undermining what consumers can afford to spend on energy. On the other, the real cost of energy – the trend energy cost of energy (ECoE) – continues to rise.

In any other industry, these conditions would point to contraction – the amount sold would fall. But the supply of energy isn’t ‘any other industry’, any more than it’s ‘just another input’. Energy is the basis of all economic activity – if the supply of energy ceases, economic activity grinds to a halt. (If you take a moment to think through what would happen if all energy supply to an economy were cut off, you’ll see why this is).

Without continuity of energy, literally everything stops. But that’s exactly what would happen if the energy industries were left to the mercies of rising supply costs and dwindling customer resources.

This leads us to a finding which is as stark as it is (at first sight) surprising – we’re going to have to subsidise the supply of energy.

Critical pre-conditions

Apart from the complete inability of the economy to function without energy, two other critical considerations point emphatically in this direction.

The first is the vast leverage contained in the energy equation. The value of a unit of energy is hugely greater than the price which consumers pay (or ever could pay) to buy it. There is an overriding collective interest in continuing the supply of energy, even if this cannot be done at levels of purchaser prices which make commercial sense for suppliers.

The second is that we already live in an age of subsidy. Ever since we decided, in 2008, to save reckless borrowers and reckless lenders from the devastating consequences of their folly, we’ve turned subsidy from anomaly into normality.

The subsidy in question isn't a hand-out from taxpayers. Rather, supplying money at negative real cost subsidises borrowers, subsidises lenders and supports asset prices at levels which bear no resemblance to what 'reality' would be under normal, cost-positive monetary conditions.

In the future, the authorities are going to have to do for energy suppliers what they already do for borrowers and lenders – use ‘cheap money’ to sustain an activity which is vital, but which market forces alone cannot support.

How they’ll do this is something considered later in this discussion.

If, by the way, you think that the concept of subsidising energy supply threatens the viability of fiat currencies, you're right. The only defence for the idea of providing monetary policy support for the supply of energy is that the alternative of not doing so is even worse.

Starting from basics  

To understand what follows, you need to know that the economy is an energy system (and not a financial one), with money acting simply as a claim on output made possible only by the availability of energy. This observation isn't exactly rocket-science, because it is surely obvious that money has no intrinsic worth, but commands value only in terms of the things for which it can be exchanged.

To be slightly more specific, all economic output (other than the supply of energy itself) is the product of surplus energy – whenever energy is accessed, some energy is always consumed in the access process, and surplus energy is what remains after the energy cost of energy (ECoE) has been deducted from the total (or ‘gross’) amount that is accessed.
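The definition above reduces to a one-line calculation. A sketch with illustrative numbers – the 4% and 10% ECoE values are hypothetical, chosen only to show the effect:

```python
def surplus_energy(gross, ecoe):
    """Energy left over for all non-energy economic activity, after the
    energy cost of energy (ECoE, expressed as a fraction of gross supply)
    has been deducted."""
    return gross * (1 - ecoe)

# With gross supply held constant at 100 units, a rise in ECoE from 4% to
# 10% shrinks the surplus that underpins all other economic output:
print(surplus_energy(100.0, 0.04))  # 96.0
print(surplus_energy(100.0, 0.10))  # 90.0
```

This is why a rising ECoE erodes prosperity even when gross energy supply is not falling: the slice available to the rest of the economy shrinks.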

From this perspective, the distinguishing feature of the world economy over the last two decades has been the relentless rise in ECoE. This process necessarily undermines prosperity, because it erodes the available quantity of surplus energy. We’re already seeing this happen – Western prosperity growth has gone into reverse, and growth in emerging market (EM) economies is petering out. Global average prosperity has already turned down.

From this simple insight, much else follows – for instance, our recent, current and impending financial problems are caused by a collision between (a) a financial system wholly predicated on perpetual growth in prosperity, and (b) an energy dynamic that has already started putting prosperity growth into reverse. Likewise, political changes are likely to result from the failure of incumbent governments to understand the worsening circumstances of the governed.

Essential premises – leverage and subsidy

Before we start, there are two additional things that you need to appreciate.

The first is that the energy-economics equation is hugely leveraged. This means that the value of energy to the economy is vastly greater than the prices paid (or even conceivably paid) for it by immediate consumers. The fuel that a person puts in his or her car, say, accounts for only a tiny fraction of the value that he or she derives from energy – energy supplies literally all of the economic goods and services that he or she uses.

The second is that, ever since the 2008 global financial crisis (GFC I) we have been living in a post-market economy.

In practice, this means that subsidies have become a permanent feature of the economic landscape.

These issues are of fundamental importance, so much so that a brief explanation is necessary.

Continue reading »

Great Writer + Dog


Camilo Pessanha + Dog

Samuel Beckett + Dog (+ Cat)


Harry Crews + Dog


Marguerite Yourcenar + Dog


Stig Dagerman + Dog (+ Family)


Gore Vidal + Dogs


Anton Chekhov + Dog


Andrés Trapiello + Dog


George Orwell + Dog (+ Cat)


Émile Zola + Dog

Franz Kafka + Dog

3 x Éthiopiques


Financial markets have been behaving in a very turbulent manner in the last couple of months. The issue, as I see it, is that the world economy is gradually changing from a growth mode to a mode of shrinkage. This is something like a ship changing course from going in one direction to going in reverse. The system acts as if the brakes are being applied very forcefully, and the reaction of the economy is almost to shake.

What seems to be happening is that the world economy is reaching Limits to Growth, as predicted in the computer simulations modeled in the 1972 book, The Limits to Growth. In fact, the base model of that set of simulations indicated that peak industrial output per capita might be reached right about now. Peak food per capita might be reached about the same time. I have added a dotted line to the forecast from this model, indicating where the economy seems to be in 2019, relative to the base model.

The economy is a self-organizing structure that operates under the laws of physics. Many people have assumed that when the world economy reaches limits, those limits will take the form of high prices and “running out” of oil. This is an overly simple understanding of how the system works. What we should really expect, and in fact what we are now beginning to see, is production cuts in finished goods made by the industrial system, such as cell phones and automobiles, because of affordability issues. Indirectly, these affordability issues lead to low commodity prices and low profitability for commodity producers. For example:

  • Sales of Chinese private passenger vehicles for 2018 through November are down by 2.8%, with November sales alone off by 16.1%. Most analysts forecast this contraction in sales to continue into 2019. Lower sales seem to reflect affordability issues.
  • Saudi Arabia plans to cut oil production by 800,000 barrels per day from the November 2018 level, to try to raise oil prices. Profits are too low at current prices.
  • Coal is reported not to have an economic future in Australia, partly because of competition from subsidized renewables and partly because China and India want to prop up the prices of coal from their own coal mines.

The Significance of Trump’s Tariffs

If a person looks at history, it becomes clear that tariffs are a standard response to a problem of shrinking food or industrial output per capita. Tariffs were put in place in the 1920s, in the period leading up to the Great Depression, and were debated after the Panic of 1857, which seems to have indirectly led to the US Civil War.

Whenever an economy produces less industrial or food output per capita, there is an allocation problem: who gets cut off from buying as much output as they previously purchased? Tariffs are a standard way for a relatively strong economy to try to gain an advantage over weaker economies. Tariffs are intended to help the citizens of the strong economy maintain their previous quantity of goods and services, even as other economies are forced to get along with less.

I see Trump’s trade policies primarily as evidence of an underlying problem, namely, the falling affordability of goods and services for a major segment of the population. Thus, Trump’s tariffs are one of the pieces of evidence that lead me to believe that the world economy is reaching Limits to Growth.

The Nature of World Economic Growth

Economic growth seems to require growth in three dimensions: (a) complexity, (b) a debt bubble, and (c) use of resources. Today, the world economy seems to be reaching limits in all three of these dimensions (Figure 2).

Complexity involves adding more technology, more international trade, and more specialization. Its downside is that it indirectly tends to reduce the affordability of finished end products because of growing wage disparity; many non-elite workers have wages that are too low to afford very much of the economy's output. As more complexity is added, wage disparity tends to increase. International wage competition makes the situation worse.

A growing debt bubble can help keep commodity prices up because a rising amount of debt can indirectly provide more demand for goods and services. For example, if there is growing debt, it can be used to buy homes, cars, and vacation travel, all of which require oil and other energy consumption.

If debt levels become too high, or if regulators decide to raise short-term interest rates as a method of slowing the economy, the debt bubble is in danger of collapsing. A collapsing debt bubble tends to lead to recession and falling commodity prices. Commodity prices fell dramatically in the second half of 2008. Prices now seem to be headed downward again, starting in October 2018.

Even the relatively slow recent rise in short-term interest rates (Figure 4) seems to be producing a decrease in oil prices (Figure 3) in a way that a person might expect from a debt bubble collapse. The sale of US Quantitative Easing assets at the same time that interest rates have been rising no doubt adds to the problem of falling oil prices and volatile stock markets. The gray bars in Figure 4 indicate recessions.
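The mechanism by which even a slow rise in rates squeezes demand can be shown with back-of-the-envelope household arithmetic. All of the numbers below are assumed for illustration; none come from the discussion above.

```python
# Illustrative household arithmetic (all figures are assumed): a rise
# in short-term interest rates diverts income to debt service, which
# shrinks the discretionary spending that indirectly supports demand
# for oil and other commodities.

income = 50_000       # annual household income
debt = 200_000        # outstanding household debt
essentials = 30_000   # non-discretionary spending (food, utilities, taxes)

discretionary_by_rate = {}
for rate in (0.02, 0.03, 0.04):
    interest = debt * rate  # annual interest cost at this rate
    discretionary_by_rate[rate] = income - essentials - interest
    print(f"rate {rate:.0%}: discretionary spending {discretionary_by_rate[rate]:,.0f}")
```

In this hypothetical, a two-percentage-point rate rise cuts the household's discretionary spending by a quarter, even though its income is unchanged.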

Growing use of resources becomes increasingly problematic for two reasons. One is population growth. As population rises, the economy needs more food to feed the growing population. This leads to the need for more complexity (irrigation, better seed, fertilizer, world trade) to feed the growing world population.

The other problem with growing use of resources is diminishing returns, which lead to rising costs of extracting commodities over time. Diminishing returns occur because producers tend to extract the cheapest-to-extract commodities first, leaving in place the commodities that require deeper wells or more processing. Even water has this difficulty: at times, desalination, at very high cost, is needed to obtain sufficient fresh water for a growing population.
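The "cheapest first" logic can be sketched in a few lines. The deposit sizes and unit costs below are invented for illustration; the point is only that sorting extraction by cost makes the marginal cost of supply ratchet upward as cumulative extraction grows.

```python
# Sketch of "cheapest first" extraction with made-up deposits:
# (size in units, cost per unit). Producers tap deposits in order of
# unit cost, so each step up the supply curve is more expensive.

deposits = [(100, 10), (80, 18), (60, 30), (40, 55), (20, 95)]
deposits.sort(key=lambda d: d[1])  # cheapest-to-extract deposits first

cumulative = 0
supply_curve = []
for size, cost in deposits:
    cumulative += size
    supply_curve.append((cumulative, cost))
    print(f"after {cumulative} units extracted, marginal cost rises to {cost}")
```

This is why "running out" is the wrong mental model: the last deposits still exist, but each successive tranche costs more than consumers can comfortably afford.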
