
Archive for May, 2019

Elizabeth Kolbert

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

(more…)

Read Full Post »

Consciousness of Sheep

When the English and French armies landed on the shores of the Crimean peninsula in September 1854, they were ill-prepared to fight a war. Logistics – the science of provisioning armies at a distance – was in its infancy; and armies were still expected to forage for at least some of their supply needs by pillaging the local agriculture. Supply shipping was required to cross the Bay of Biscay and round the Iberian Peninsula before sailing the entire length of the Mediterranean, passing through the straits and then halfway across the Black Sea. Ships regularly arrived in the wrong order, or had been loaded so chaotically that non-essential cargo had to be unloaded in order to reach essential equipment, ammunition and rudimentary medical supplies. To build a base for the siege of Sevastopol, the support fleet had to be unloaded and despatched to the opposite shore of the Black Sea to harvest Turkish timber. The only saving grace was that the internal communications of feudal Russia were even worse.

One problem that the British and French engineers in the Crimea did resolve was how to get supplies, once unloaded at the port, up the steep surrounding hills and onto the plateau above, where the troops were fighting. The technology they deployed was to be central to the human activity of mass slaughter for the next century. They built a railway. Railways were in their infancy at this time. However, by a twisted chain of events, the Crimean War was to have a huge impact on the development of railways on the other side of the planet.

By 1853 Britain had become dependent upon imports to feed its growing urban population even in peacetime. With the outbreak of war, Britain was to engage in another aspect of warfare that would continue for the next century – the import of food and materials across the Atlantic from the USA. American farmers saw a big boost in demand as a result of the war, coming on top of the early industrialisation and urbanisation of US towns and cities. In order to profit from the growth in demand, the farmers had, somehow, to move their produce from the countryside to the cities and the ports.

Railways provided a potential solution. In 1837, however, the USA had been blighted by the collapse of an earlier railway investment bubble; so there was little appetite among investors for a repeat performance. It was at this point that one of the most seductive – and deadly – ideas ever entertained by a human rose to the forefront of our collective consciousness. As Timothy J Riddiough records:

“In the early 1850s the owners of the La Crosse & Milwaukee railroad hit on an idea. Why not approach local farmers, particularly those farmers whose property lay near the path of the railroad line and its depots, and ask them to mortgage their farm to the railroad in return for shares of stock in the railroad? The dividends from the stock would be at least equal to the interest required on the mortgage, where the dividend-interest swap negated any need for the farmer to come out of pocket for interest payments on the debt. In fact, no cash changed hands at all between the farmer and the railroad in this debt-for-equity swap…

“Now, the second step of the transaction was for the railroad to monetise the RRFMs so that it could purchase the track and equipment necessary to expand the line. The solution to this problem resulted in what we believe to be the first case of mortgage securitisation executed in the United States. It was a railroad farm mortgage-backed security – effectively a covered bond offered by the railroad to potential investors located on the east coast and in Europe.”

Instead of investing directly in the building of the railways, bondholders were investing in a derivative instrument that paid them a share of the growing income from the farms. Provided that the growing demand for farm produce continued, and provided that the railways continued to profit from transporting the produce from the farms to the cities and ports, investors could get rich. What could possibly go wrong?

We know to our cost today what goes wrong when the banking and finance corporations inflate derivative bubbles. But in the early 1850s this was a new idea. When the Crimean War came to an end in February 1856, European demand for US agricultural produce slumped. Farmers began to default on their mortgages; and the derivatives based upon them began to fail. In 1857 the inevitable panic took hold as investors desperately sought to cut their losses.

Had anyone had the sense to drive a stake through the heart of the idea of securitised derivatives in 1857 the world might have been spared untold suffering. But securitisation is too seductive; and its short-term rewards too great for it to remain in its tomb for long. As Cathy M Kaplan notes:

“While writers point to the origins of securitisation in a number of precedents, including the farm railroad mortgage bonds of the 1860s, the mortgage-backed bonds of the 1880s and a form of securitisation of mortgages before the 1929 crash, the modern era of securitisation began in 1970. That was when the Department of Housing and Urban Development created the first modern residential mortgage-backed security when the Government National Mortgage Association (Ginnie Mae or GNMA) sold securities backed by a portfolio of mortgage loans.”

In its modern form, securitisation is supposed to make the issuing of loans safe for the banks by standardising risks. To understand this, imagine that you are the manager of a bank. You have issued 100 loans to businesses and households that you believe to be creditworthy. However, you also know from the statistics that, in the course of these loans being repaid, three percent will default… but you do not know which. One way around this is to take your hundred loans (stage 1 below), which include the three that will default, and divide the income from them into 100 pieces (stage 2). These can then be repackaged into investment bonds – securitised derivatives – each made up of one percent of the income from each of your 100 loans (stage 3). You now have 100 derivatives that each carry the same risk, allowing you to sell them to third-party investors with a reasonable degree of certainty that they will deliver the promised return.

[Figure: Securitisation – stages 1 to 3]
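To make the arithmetic of that pooling step concrete, here is a minimal sketch in Python, using made-up round numbers rather than anything from the text: 100 loans of equal size, three of which default, compared with 100 bonds that each receive one per cent of the income from every loan. The loan value, the seed and the function names are illustrative assumptions, not a model of any real instrument.

```python
import random

LOAN_VALUE = 100_000   # face value of each loan (illustrative figure)
NUM_LOANS = 100
NUM_DEFAULTS = 3       # the statistical three per cent the bank expects

def simulate_repayments(seed=None):
    """Repayment of each loan: full face value, except three that default to zero."""
    rng = random.Random(seed)
    repayments = [LOAN_VALUE] * NUM_LOANS
    for i in rng.sample(range(NUM_LOANS), NUM_DEFAULTS):
        repayments[i] = 0  # a defaulted loan pays nothing (simplification)
    return repayments

def hold_single_loan(repayments, index=0):
    """An investor who buys one whole loan gets either everything or nothing."""
    return repayments[index]

def hold_securitised_bond(repayments):
    """An investor who buys one of the 100 bonds gets 1% of every loan's income."""
    return sum(r * 0.01 for r in repayments)

repayments = simulate_repayments(seed=1857)
print("One whole loan pays:      ", hold_single_loan(repayments))
print("One securitised bond pays:", round(hold_securitised_bond(repayments), 2))
# Every one of the 100 bonds pays exactly 97,000: the expected 3% loss is
# spread evenly across all of them, rather than falling entirely on whoever
# happened to hold the defaulting loans.
```

Whoever buys a single whole loan either gets repaid in full or loses everything; every one of the pooled bonds pays out the same 97 per cent, which is the "standardisation of risk" described above.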

Households and businesses apparently benefit from this arrangement because banks are able to be less conservative in their lending practices. Banks benefit because instead of waiting – sometimes for decades – for the return on their investment, they can be repaid more or less immediately. Investors also, apparently, benefit because of the standardisation of risk – the three percent default rate has already been built in.

Governments had, in fact, driven at least a partial stake through the heart of securitisation in the years following the 1929 crash. The impact of the Depression, and the rise of political extremism that plunged the world into a conflict that killed perhaps 85 million people, convinced politicians in the aftermath that it should never be allowed to happen again. The introduction of mortgage-backed securities in the USA in the 1970s was subject to close regulation, while elsewhere in the world they remained illegal.

This was to change in the 1980s as governments sought an alternative to the economics and politics of a post-war consensus that was rapidly breaking down. In the USA, this involved a gradual deregulation of the banking and finance sector. In the UK the change was far more abrupt, and can be seen in data compiled by Steve Keen showing Britain's historical debt-to-GDP ratio.

In 1980 the Thatcher government began its experiment in selling off public assets in order to kick-start economic growth. To begin with, it focused on the sale of Britain's then massive stock of public housing. For ideological reasons, Thatcher believed that if people owned their homes rather than renting them, they would have a greater stake in society and would be less likely to engage in radical politics or trade union activity. And, more cynically, it is much harder to go out on strike when you have a mortgage to repay every month. The problem for Thatcher was that the people she wanted to buy their homes were not the kind of people Britain's highly conservative banks would ever consider lending money to. So Thatcher had to follow the Americans and begin dismantling the regulations.

The big change came in 1986 with the “Big Bang” financial deregulation which finally removed the stake from the corpse of securitisation; paving the way for the banks to bring fire and brimstone down upon the people of the earth once more. At the time, the conservative nature of banking was regarded as sufficient to prevent a re-run of the financial chaos that banks have always created throughout our history. This, however, overlooked the fact that banks had only been conservative in their practices because of the regulation that Thatcher decided to remove. As Kaplan points out:

“During the late 1980s and the 1990s the securitisation market exploded. This was aided in the United States by the REMIC legislation and changes in SEC rules, and fuelled by the growth of money market funds, investment funds and other institutional investors, such as pension funds and insurance companies looking for product. In the 1990s commercial mortgages began to be securitised. Outside the US, countries including the UK and Japan adopted laws that allowed for securitisation. The vastly expanding global consumer culture, where access to credit to purchase everything from houses and cars to mobile phones and TVs was taken as a given, continued to stoke growth in the volume of securitisations.”

Those who lived through the period will remember the banking revolution that took place. At the end of the 1970s most people did not have a bank account. Wages were paid in cash, and any savings people had left over at the end of the week would be put in a building society – an organisation little different to a modern credit union. In the early 1980s, the Thatcher government used their control of public services and nationalised industries first to bribe and later to force millions of workers to open bank accounts in order to have their wages paid by bank transfer. These first bank accounts were mostly cash accounts – the account holder could only withdraw cash; they couldn’t write cheques or agree a loan or an overdraft. As the regulations were dismantled, however, access to credit became easier. Overdraft facilities were added to most accounts, and cheque books and cheque guarantee cards were distributed. Credit cards, loans and mortgages quickly followed. By the late 1990s Britain’s householders struggled to open their front doors because of the mountains of junk mail offering pre-approved loans, credit cards and mortgages.

(more…)

Read Full Post »