Channel: Evo and Proud

Survival of the nicest-smelling?


 
The Perfume Maker, Rudolf Ernst (1854-1932) (Wikicommons)

 

It has long been known that we vary not only in our sensitivity to different smells but also in our preferences for them—the degree to which they seem pleasant or unpleasant. This variability often contains a large genetic component (Gross-Isseroff et al., 1992; Karstensen and Tommerup, 2012; Keller et al., 2007; Keller et al., 2012; Weiss et al., 2011). In the case of one odor, a single gene explains over 96% of the variability in smell sensitivity (Jaeger et al., 2013). A twin study has similarly found two odorants to be 78% and 73% heritable (Gross-Isseroff et al., 1992). This hardwiring is selective, however, because sensitivity to other odors can show little or no heritable variation (Hubert et al., 1980). There is also selective hardwiring in smell preferences. Different individuals will perceive androstenone, for instance, as offensive ('sweaty, urinous'), pleasant ('sweet, floral'), or odorless (Keller et al., 2007). It seems that selection has produced specific algorithms in the human brain for specific smells and that these algorithms can differ from one individual to the next (Keller et al., 2007; Knaapila et al., 2012).

This genetic variability exists between men and women, and also between age groups (Keller et al., 2012). Does it also exist between different human populations? The sense of smell does seem to matter more in some than in others, particularly hunter-gatherers:

People pay attention to smells when they are important to their daily lives and are not just part of the sensory and emotional background. This is certainly the case with the Umeda of New Guinea: in a tropical rainforest scent plays as important a role as sight in terms of spatial orientation. The Waanzi in southeast Gabon use odors daily in fishing, hunting, and gathering, thanks to a kind of 'olfactory apprenticeship' in family life and rituals of initiation. In Senegal, the Ndut are even more skillful: they are able to distinguish the odors of the different parts of plants and they are able to give a name to these odors. In our society, of course, most of us are incapable of this. (Candau, 2004)

For hunter-gatherers farther away from the tropics, the sense of smell matters less because the land supports a lower diversity of plant species and has less plant life altogether per unit of land area. Parallel to this north-south trend, more food comes from hunting of game animals and less from gathering of plant items. The end point is Arctic tundra, where opportunities for gathering are limited even in summer and where most food takes the form of meat. There, the senses of sight and sound matter more, being of greater value for long-range detection and tracking of game animals.

Candau (2004) sees these differences between human groups as evidence for "cultural influences" rather than "genetic inheritance." The two are not mutually exclusive: culture itself can select for some heritable abilities over others. On this point, it may be significant that different human groups continue to show differences in smell sensitivity and preference long after their ancestors have moved to very different environments. Thus, in a study from New York City, Euro-American and African American participants were exposed to a wide range of odors. The two groups differed in their pleasantness ratings of 18 of the 134 stimuli, generally floral or vegetative odors. Moreover, in 14 of the 18, the African Americans were the ones who responded more positively (Keller et al., 2012).

In addition, the African Americans responded more readily to aromatic metabolites of the male hormone testosterone, i.e., androstadienone and androstenone. The authors note: "This is consistent with the finding from the National Geographic Smell Survey, which found that African respondents were more sensitive to androstenone than American respondents. This difference is undoubtedly at least partially caused by the fact that the functional RT variant of OR7D4 is more common in African-Americans than in Caucasians" (Keller et al., 2012).

Gene-culture coevolution during historic times

This coevolution did not stop with hunter-gatherers. Beginning 10,000 years ago, some of them became farmers and that change set off a lot of other changes: population growth, land ownership, creation of a class system, social inequality on a much greater scale, year-round settlement in villages and then in towns and cities ... And on and on. We entered new environments—not natural ones of climate and vegetation, but rather human-made ones.

These environments offered us new olfactory stimuli: salves, perfumes, incense, scented oils, aromatic baths ... Havlicek and Roberts (2013) argue that our sense of smell coevolved with human-made fragrances and that this coevolution went on for the longest in the Middle East and South Asia, where the use of perfumes is attested as early as the fourth millennium B.C. (Wikipedia, 2015). A sort of positive feedback then developed between use of these fragrances and praise of them in prose, song, and poetry, the two reinforcing each other and thereby strengthening the pressure of selection. This may be seen in the Bible:

The Hebrew Song of Songs furnishes a typical example of a very beautiful Eastern love-poem in which the importance of the appeal to the sense of smell is throughout emphasized. There are in this short poem as many as twenty-four fairly definite references to odors,—personal odors, perfumes, and flowers,—while numerous other references to flowers, etc., seem to point to olfactory associations. Both the lover and his sweetheart express pleasure in each other's personal odor. 

"My beloved is unto me," she sings, "as a bag of myrrh
That lieth between my breasts;
My beloved is unto me as a cluster of henna flowers
In the vineyard of En-gedi."

And again: "His cheeks are as a bed of spices [or balsam], as banks of sweet herbs." While of her he says: "The smell of thy breath [or nose] is like apples." (Ellis, 1897-1928)

In the 9th century the Arab chemist Al-Kindi wrote the Book of the Chemistry of Perfume and Distillations, which contained more than a hundred recipes for fragrant oils, salves, and aromatic waters (Wikipedia, 2015). Today, the names of our chief perfumes are often of Arabic or Persian origin: civet, musk, ambergris, attar, camphor …

Finally, the use of perfumes, like kissing and cosmetics in general, moved the center of sexual interest away from the genitals and toward the face, thereby creating a second channel of arousal:

[...] the focus of olfactory attractiveness has been displaced. The centre of olfactory attractiveness is not, as usually among animals, in the sexual region, but is transferred to the upper part of the body. In this respect the sexual olfactory allurement in man resembles what we find in the sphere of vision, for neither the sexual organs of man nor of woman are usually beautiful in the eyes of the opposite sex, and their exhibition is not among us regarded as a necessary stage in courtship. The odor of the body, like its beauty, in so far as it can be regarded as a possible sexual allurement, has in the course of development been transferred to the upper parts. The careful concealment of the sexual region has doubtless favored this transfer. (Ellis, 1897-1928)

Differences between human populations?

To recapitulate, we humans vary a lot in the degree to which we have been exposed to perfumes and to a perfume-friendly culture, a possible analogy being the degree to which we have been exposed to alcoholic beverages. There may thus have been selection against individuals whose own smell preferences or body chemistry failed to match the perfumes available, this selection being not only stronger in some populations but also qualitatively different:

[...] individual communities vary considerably in the substances they employ for perfume production (in most of the speculations below we deliberately ignore recent trends such as technological advancement in global transfer of goods and production of synthetic chemicals used in perfumery: these phenomena appeared only very recently and one might not expect their immediate effect on biological evolution which operates on a much longer time scale). The absence of a specific ingredient in the perfumes of a particular community could be due to the following reasons: (1) the source of the odour is unavailable in the area and is not traded from neighbours. For example, we know that aromatic plants were an important commodity in trading networks in Ancient Egypt or Greece, but some of the scents routinely employed in that era in India were rare or absent in Mediterranean cultures. (2) The community is constrained by a technology. Some of the aromas can be extracted only using a specific technology which might not be available for or discovered by the particular community. In ancient Greece, for instance, ethanol distillation was not used and perfumers instead used mechanic extraction or enfleurage (Brun 2000). (3) Particular scents or their source (e.g. a particular plant) are believed to be inappropriate for body adornment. Such beliefs might stem from religious considerations.

[...] considering that only some scent ingredients will complement particular body odours (i.e. particular genotypes) and that a particular community employs only a restricted variety of scents for perfuming, it is plausible that some individuals may not be able to select a perfume which complements their body odour and may therefore suffer a social disadvantage. In the long run, the frequency of genotypes of such individuals would decrease in the particular community. (Havlicek and Roberts, 2013)

It is disappointing that Havlicek and Roberts do not develop this argument further with plausible evidence for such gene-culture coevolution. For instance, Hall et al. (1968) discussed how smell and touch hold greater importance for Arabs than for Americans. This point has likewise been remarked upon with respect to the Gulf countries:

The importance of good smell in Qatari homes is inherent in the requirement of cleanness and purity (taharah) in Islam, both physical and spiritual (Sobh and Belk, 2010). Purity, cleanness, and good smell are central to Muslims everywhere in the world, but the obsession with perfuming bodies and homes is something of a fetish in Gulf countries and is very prominent in Qatar. (Sobh and Belk, 2011)

This heightened smell sensitivity is all the more striking because plant life is less abundant and less diverse in the Middle East, certainly in comparison with the tropics. It seems unlikely, then, that it had been acquired during the hunter-gatherer stage of cultural evolution, being instead a later development, possibly through coevolution with the development of perfumes in historic times.

References 

Candau, J. (2004). The olfactory experience: constants and cultural variables, Water Science & Technology, 49, 11-17.
https://halshs.archives-ouvertes.fr/halshs-00130924/ 

Ellis, H. (1897-1928). Studies in the Psychology of Sex, vol. IV, Appendix A. The origins of the kiss.
https://www.gutenberg.org/files/13613/13613-h/13613-h.htm 

Gross-Isseroff, R., D. Ophir, A. Bartana, H. Voet, and D. Lancet. (1992). Evidence for genetic determination in human twins of olfactory thresholds for a standard odorant, Neuroscience Letters, 141, 115-118.
http://www.sciencedirect.com/science/article/pii/030439409290347A 

Hall, E.T., R.L. Birdwhistell, B. Bock, P. Bohannan, A.R. Diebold, Jr., M. Durbin, M.S. Edmonson, J.L. Fischer, D. Hymes, S.T. Kimball, W. La Barre, F. Lynch, J.E. McClellan, D.S. Marshall, G.B. Milner, H.B. Sarles, G.L. Trager, and A.P. Vayda. (1968). Proxemics, Current Anthropology, 9, 83-108.
http://www.jstor.org/stable/2740724?seq=1#page_scan_tab_contents 

Havlícek, J., and S.C. Roberts (2013). The Perfume-Body Odour Complex: An Insightful Model for Culture-Gene Coevolution? Chemical Signals in Vertebrates, 12, 185-195.
http://www.scraigroberts.com/uploads/1/5/0/4/15042548/2013_csiv_perfume.pdf 

Hubert, H.B., R.R. Fabsitz, M. Feinleib, and K.S. Brown. (1980). Olfactory sensitivity in humans: genetic versus environmental control, Science, 208, 607-609.
http://www.sciencemag.org/content/208/4444/607.short 

Jaeger, S.R., J.F. McRae, C.M. Bava, M.K. Beresford, D. Hunter, Y. Jia, et al. (2013). A Mendelian Trait for Olfactory Sensitivity Affects Odor Experience and Food Selection, Current Biology, 23, 1601-1605.
http://www.cell.com/current-biology/abstract/S0960-9822(13)00853-1?_returnURL=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0960982213008531%3Fshowall%3Dtrue 

Karstensen, H.G. and N. Tommerup. (2012). Isolated and syndromic forms of congenital anosmia, Clinical Genetics, 81, 210-215. 

Keller, A., H. Zhuang, Q. Chi, L.B. Vosshall, and H. Matsunami. (2007). Genetic variation in a human odorant receptor alters odour perception, Nature, 449, 468-472.
http://vosshall.rockefeller.edu/reprints/KellerMatsunamiNature07.pdf

Keller, A., M. Hempstead, I.A. Gomez, A.N. Gilbert, and L.B. Vosshall. (2012). An olfactory demography of a diverse metropolitan population, BMC Neuroscience, 13, 122.
http://www.biomedcentral.com/1471-2202/13/122/ 

Knaapila, A., G. Zhu, S.E. Medland, C.J. Wysocki, G.W. Montgomery, N.G. Martin, M.J. Wright, and D.R. Reed. (2012). A genome-wide study on the perception of the odorants androstenone and galaxolide, Chemical Senses, 37, 541-552.
http://www.researchgate.net/profile/Danielle_Reed2/publication/221859158_A_genome-wide_study_on_the_perception_of_the_odorants_androstenone_and_galaxolide/links/02e7e52b367564a573000000.pdf

Sobh, R. and R. Belk. (2011). Domains of privacy and hospitality in Arab Gulf homes, Journal of Islamic Marketing, 2, 125-137
http://www.researchgate.net/profile/Rana_Sobh/publication/241699027_Domains_of_privacy_and_hospitality_in_Arab_Gulf_homes/links/54a1374b0cf257a63602614b.pdf

Weiss, J., M. Pyrski, E. Jacobi, B. Bufe, V. Willnecker, B. Schick, P. Zizzari, S.J. Gossage, C.A. Greer, T. Leinders-Zufall, et al. (2011). Loss-of-function mutations in sodium channel Nav1.7 cause anosmia, Nature, 472, 186-190. 

Wikipedia (2015). Perfume
https://en.wikipedia.org/wiki/Perfume 

In the wrong place at the wrong time?


Dick Turpin was convicted of horse theft but had also been guilty of a string of murders (Wikicommons)


In each generation from 1500 to 1750, between 1 and 2% of all English men were executed either by court order or extra-judicially (at the scene of the crime or while in prison). This was the height of a moral crusade by Church and State to punish the wicked so that the good may live in peace.

Meanwhile, the homicide rate fell ten-fold. Were the two trends related? In a recent paper, Henry Harpending and I argued that a little over half of the homicide decline could be explained by the high execution rate, and its steady removal of violent males from the gene pool. The rest could be partly explained by Clark-Unz selection—violent males lost out reproductively because they were increasingly marginalized in society and on the marriage market. Finally, this decline was also due to a strengthening of controls on male violence: judicial punishment (policing, penitentiaries); quasi-judicial punishment (in schools, at church, and in workplaces); and stigmatization of personal violence in popular culture.
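The quantitative logic of this argument can be sketched with standard truncation-selection arithmetic: if the top fraction p of a normally distributed trait is removed each generation, the survivors' mean drops by phi(z)/(1 - p) standard deviations, and the heritable part of that shift accumulates across generations. The removal rate, heritability, and generation count below are illustrative assumptions for the sketch, not the values estimated in Frost and Harpending (2015):

```python
from statistics import NormalDist

norm = NormalDist()

def response_per_generation(p: float, h2: float) -> float:
    """Genetic shift in SD units per generation (negative = more pacified)
    when the top fraction p of a normal trait is removed each generation."""
    z = norm.inv_cdf(1 - p)           # truncation point for the top p
    culled_mean = norm.pdf(z) / p     # mean trait value of the removed group
    survivor_shift = -p * culled_mean / (1 - p)   # shift in survivors' mean
    return h2 * survivor_shift        # only the heritable part responds

# Illustrative assumptions: 1.5% of men removed per generation,
# heritability 0.5, ten generations between 1500 and 1750.
total = 10 * response_per_generation(0.015, 0.5)
print(round(total, 3))   # cumulative shift of the mean, in SD units
```

Even under these crude assumptions the cumulative shift is a modest fraction of a standard deviation, which is why the argument also leans on Clark-Unz selection and social controls rather than execution alone.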

These controls drove the decline in the homicide rate, but they also tended over time to hardwire the new behavior pattern, by hindering the ability of violent males to survive and reproduce. The last half-century has seen a dramatic relaxation of these controls but only a modest rise in the homicide rate among young men of native English origin.

The above argument has been criticized on two grounds:

1. Executed offenders were not the worst of the worst. They were often people caught in the wrong place at the wrong time.

2. Executed offenders may have had children who survived to adulthood.

This week's column will address the first criticism. Did execution remove the most violent men? Or did it randomly remove individuals from, say, the most violent third?

Many genetic factors influence our propensity for personal violence: impulse control; violence ideation; pleasure from inflicting pain; etc. Regardless of how strong or weak these factors may be, the propensity itself should be normally distributed within the male population—it should follow a bell curve. If we move right or left from the population mean, the number of men should initially decline very little, with the result that over two-thirds of the men can be found within one standard deviation of the mean.

We really have to go one standard deviation to the right before the men begin to seem abnormally violent, and that remaining right-hand “tail” leaves us only 16% of the male population. What if we're looking for a man who stands out even within that group, at two standard deviations above the mean? He's in the far right 2%. In a single gene pool, violent men stand out not just because they are noticeably abnormal but also because they are much less common.
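These tail fractions follow directly from the normal curve and can be checked in a few lines; this is a purely numerical sketch, independent of the historical data:

```python
from statistics import NormalDist

# Fractions of a normally distributed trait, in standard-deviation units.
cdf = NormalDist().cdf

within_1sd = cdf(1) - cdf(-1)   # share within one SD of the mean
beyond_1sd = 1 - cdf(1)         # right-hand tail past +1 SD
beyond_2sd = 1 - cdf(2)         # right-hand tail past +2 SD

print(f"{within_1sd:.1%}  {beyond_1sd:.1%}  {beyond_2sd:.1%}")
# roughly 68% within one SD, 16% beyond +1 SD, 2% beyond +2 SD
```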

Identifying the most violent men. But how?

Were these men the ones that the English justice system executed between 1500 and 1750? Murder is violence taken to its logical extreme, yet most murder cases went unsolved in early modern England. The crime was difficult to prove for want of witnesses, either because none wished to come forward or because they had likewise been murdered. There were no police, no forensic laboratories, and much less of the investigative infrastructure that we have today. If you committed a one-time murder, your chances of not getting caught were good.

The criminal justice system in the eighteenth century [...] therefore operated on a rationale very different from that of a modern state, with its professional police forces, social services and a fully bureaucratised law-enforcement system. In the early eighteenth century at least, the enforcement of law and order depended largely on unpaid amateur officials, the justices of the peace and the parish constables and other local officers. (Sharpe, 2010, p. 92)

This is not to say that the justice system gave murder a lower priority. Rather, with the limited resources available, judges and juries engaged in "profiling." They scrutinized not only the offence but also the accused—his character and demeanor, his behavior during the crime and in the courtroom, and his previous offences. Juries could be lenient in cases of first-time offences for theft, but this leniency disappeared if the accused had a criminal history.

The justice system thus looked for signs that the accused had already committed worse crimes or would go on to do so. Ironically, our current system is the one that tends to catch people who were in the wrong place at the wrong time, i.e., inexperienced one-time murderers.

Hanged for robbery but guilty of murder

This may be seen in a book, published in London in 1735, that told the life stories of 198 executed criminals. Of the 198, only 34 (17%) had been sentenced to death for murder. A much larger number, 111 (56%), were charged with robbery, being described as highwaymen, footpads, robbers, and street robbers. Finally, another 37 (19%) were executed simply for theft (Hayward, 2013; see note). Robbery was punished more severely than simple theft because it threatened both life and property, especially if the victim failed to cooperate sufficiently or seemed to recognize the robber.

Robbery is the taking away violently and feloniously the goods or money from the person of a man, putting him in fear [...]. Yea, where there is a gang of several persons, only one of which robs, they are all guilty as to the circumstance of putting in fear, wherever a person attacks another with circumstances of terror [...] And in respect of punishment, though judgment of death cannot be given in any larceny whatsoever, unless the goods taken exceed twelve pence in value, yet in robbery such judgment is given, let the value of the goods be ever so small. (Hayward, 2013, p. 27)

Sooner or later, a robber ended up killing. We see this in the life story of Dick Turpin, who was hanged for horse theft, even though he had committed worse crimes:

The process of reconstruction may not tell us much about Turpin's personality, but it does give us the opportunity to put together a remarkable criminal biography, a tale of violent robberies, of murder, and, eventually, of the horse-thefts that led to his execution. (Sharpe, 2010, p. 8)

Allegations of murder came up in trials of robbers, but typically remained unproven because no witnesses could be produced. Nonetheless, the accused would sometimes confess to murder, either to clear his conscience or, in the wake of a death sentence, because he had nothing left to lose, like this man convicted for highway robbery: "This Reading had been concerned in abundance of robberies, and, as he himself owned, in some which were attended with murder" (Hayward, 2013, p. 91). A member of another gang, when caught, confessed to a long string of murders:

[...] he, without any equivocation, began to confess all the crimes of his life. He said that it was true they all of them deserved death, and he was content to suffer; he said, moreover, that in the course of his life he had murdered upwards of three-score with his own hands. He also carried the officers to an island in the river, which was the usual place of the execution of those innocents who fell into the hands of their gang [...] (Hayward, 2013, p. 1014)

In most cases, however, the accused would deny involvement in murders even after being condemned to death:

There has been great suspicions that he murdered the old husband to this woman, who was found dead in a barn or outhouse not far from Hornsey; but Wigley, though he confessed an unlawful correspondence with the woman, yet constantly averred his innocency of that fact, and always asserted that though the old man's death was sudden, yet it was natural. (Hayward, 2013, pp. 92-93)

At the place of execution he behaved with great composure and said that as he had heard he was accused in the world of having robbed and murdered a woman in Hyde Park, he judged it proper to discharge his conscience by declaring that he knew nothing of the murder, but said nothing as to the robbery. (Hayward, 2013, p. 96)

In the wrong place at the wrong time?

If we look at executed criminals, their profile is not that of unfortunates caught in the wrong place at the wrong time. Most were young men who had done their work in the company of likeminded young men. Those who operated alone were atypical, like this highwayman:

Though this malefactor had committed a multitude of robberies, yet he generally chose to go on such expeditions alone, having always great aversion for those confederacies in villainy which we call gangs, in which he always affirmed there was little safety, notwithstanding any oaths, by which they might bind themselves to secrecy. (Hayward, 2013, p. 93)

For most, long-term safety was a secondary concern. Their behavioral profile—fast life history, disregard for the future, desire to be with other young men and impress them with acts of bravado and violence—stood in contrast to the ascendant culture of early modern England. One example is this robber:

[...] when he returned to liberty he returned to his old practices. His companions were several young men of the same stamp with himself, who placed all their delight in the sensual and brutal pleasures of drinking, gaming, whoring and idling about, without betaking themselves to any business. Natt, who was a young fellow naturally sprightly and of good parts, from thence became very acceptable to these sort of people, and committed abundance of robberies in a very small space of time. The natural fire of his temper made him behave with great boldness on such occasions, and gave him no small reputation amongst the gang. [...] He particularly affected the company of Richard James, and with him robbed very much on the Oxford Road, whereon it was common for both these persons not only to take away the money from passengers, but also to treat them with great inhumanity [...] (Hayward, 2013, pp. 108-109)

This sort of description comes up repeatedly. Most condemned men struck observers as very atypical, and not merely among the worst third of society. In 1741, an observer described a hanging and the interactions between the condemned men and a crowd composed largely of their friends:

The criminals were five in number. I was much disappointed at the unconcern and carelessness that appeared in the faces of three of the unhappy wretches; the countenance of the other two were spread with that horror and despair which is not to be wondered at in men whose period of life is so near [...]

[...] the three thoughtless young men, who at first seemed not enough concerned, grew most shamefully wanton and daring, behaving themselves in a manner that would have been ridiculous in men in any circumstances whatever. They swore, laughed, and talked obscenely, and wished their wicked companions good luck with as much assurance as if their employment had been the most lawful.

At the place of execution the scene grew still more shocking, and the clergyman who attended was more the subject of ridicule than of their serious attention. The Psalm was sung amidst the curses and quarrelling of hundreds of the most abandoned and profligate of mankind, upon them (so stupid are they to any sense of decency) all the preparation of the unhappy wretches seems to serve only for subject of a barbarous kind of mirth, altogether inconsistent with humanity. And as soon as the poor creatures were half dead, I was much surprised to see the populace fall to hauling and pulling the carcasses with so much earnestness as to occasion several warm rencounters and broken heads. These, I was told, were the friends of the persons executed, or such as, for the sake of tumult, chose to appear so; as well as some persons sent by private surgeons to obtain bodies for dissection. The contests between these were fierce and bloody, and frightful to look at [...] The face of every one spoke a kind of mirth, as if the spectacle they beheld had afforded pleasure instead of pain, which I am wholly unable to account for. (Hayward, 2013, pp. 8-10)

The situation in early modern England was akin to a low-grade war, and it was not for nothing that its justice system seems to us so barbaric. The judges and juries were dealing with barbarians: gangs of young men who led a predatory lifestyle that made life miserable for people who ventured beyond the safety of their own homes.

Conclusion

We are still left with the original question: Were these criminals the most violent 1 to 2% or a random sample of a much larger proportion? In general, they behaved quite unlike most people, especially if they belonged to gangs, which seem to have been responsible for most homicides. It is hard to see how such people could correspond even to the most violent 16%—a range of individuals that begins one standard deviation to the right of the mean, at which point behavior just begins to seem "abnormal."

In all likelihood, execution removed individuals who were more than one standard deviation to the right of the mean, with a strong skew toward people more than two standard deviations to the right—in other words, something less than the most violent 16% with a strong skew toward the most violent 1%.

These assumptions differ from those of our model, which assumes that execution removed the most violent 1 to 2%. On the other hand, our model also assumes that each executed criminal would, in the absence of execution, have killed only one person over a normal lifetime. Clearly, many of the executed were already serial murderers, and they were to be found not so much among the convicted murderers as among the convicted robbers. It is difficult to say whether the two sources of error would balance each other out, since we need more information on (1) just how abnormal the executed were in terms of behavior and (2) how many people they would otherwise have killed over a normal lifetime.

Executed criminals were probably a heterogeneous group. A quarter of them (mostly the thieves) would have likely killed 0 to 1 people on average if allowed to live out their lives. Another quarter may have averaged 1 to 2 murders. Finally, the remaining half may have had an even higher score. Within this last group, we can be sure that a hard core of individuals would have each gone on to kill dozens of people, if they had not already done so.

Note

The other executed criminals were identified as 8 housebreakers, 7 forgers, 4 pirates, 2 incendiaries, 1 threatening letter writer, 1 ravisher, 1 thief-taker, and 1 releaser of prisoners. Wherever a single individual was charged with more than one crime, I classified him or her under the most serious offence, i.e., murder took precedence over robbery, and robbery took precedence over theft.

Of the 198 executed criminals, 10 were women. The book actually tells the life stories of 201 criminals, but three of them were not executed. I excluded the life stories in the appendix (7 murderers and 4 thieves) because they came from a much earlier time period and may have been less representative.

References

Frost, P. and H. Harpending. (2015). Western Europe, state formation, and genetic pacification, Evolutionary Psychology, 13, 230-243. http://www.epjournal.net/articles/western-europe-state-formation-and-genetic-pacification/  


Hayward, A.L. (2013[1735]). Lives of the Most Remarkable Criminals - who Have Been Condemned and Executed for Murder, the Highway, Housebreaking, Street Robberies, Coining Or Other Offences, Routledge.


Sharpe, J. (2010). Dick Turpin: The Myth of the English Highwayman, Profile Books. 

Guess who first came to America?


Semang from the Malayan Peninsula, Wikicommons

 

Before the Europeans came, the Americas were settled by three waves of people from northeast Asia: the oldest wave beginning some 12,000 to 15,000 years ago, which gave rise to most Amerindians, and two later waves, which gave rise respectively to the Athapaskan and Inuit peoples of northern Canada and Alaska. That's the conventional view.

Kennewick Man. An earlier form of Northeast Asian?

There is growing evidence, however, for earlier waves of settlement. There's Kennewick Man, who lived nine thousand years ago in the American northwest and who looked more European than Amerindian, the closest match being the Ainu of northern Japan. He also looked a lot like Patrick Stewart.

Nonetheless, a DNA study has found him to be closer to Amerindians than to any other existing population in the world (Rasmussen et al., 2015). He was apparently descended from the same Northeast Asians who would later become today's Native Americans. Those earlier Northeast Asians looked more European because they lived closer to the time when these two groups were one and the same people. It may be that the Ainu best preserve the appearance of this ancestral population that would later develop into present-day Europeans, East Asians, and Amerindians.

But why would Kennewick Man be closer anatomically to an Ainu while being closer genetically to an Amerindian? The answer is that the genes that shape our anatomy are a tiny subset of the entire genome. Most genes are of low selective value, often being junk DNA, so they change at a steady rate through random processes. Taken as a whole, the genome thus provides a "clock" that can measure how long two populations have been diverging from their common ancestors. Genealogically speaking, Kennewick Man is closer to present-day Native Americans than he is to the Ainu. Anatomically speaking, the reverse is true ... probably because his ancestors had escaped the extreme Ice Age conditions that affected northeast Asia 20,000-15,000 years ago by retreating to an ice age refugium on the Northwest Pacific Coast. The Ainu may have similarly sat out the Ice Age in another refugium on the other side of the Pacific.
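The "clock" reasoning can be made concrete with a back-of-the-envelope calculation. This is only a sketch of the neutral-clock idea; the mutation rate, generation time, and divergence figure below are illustrative assumptions, not values from Rasmussen et al. (2015):

```python
# Under a neutral molecular clock, two lineages accumulate differences
# independently, so they diverge at roughly twice the per-lineage rate:
# divergence time ≈ observed distance / (2 * mutation rate).

MUTATION_RATE = 1.2e-8   # assumed mutations per site per generation
GENERATION_YEARS = 25    # assumed human generation time

def divergence_years(pairwise_diff_per_site: float) -> float:
    """Years since two populations split, under a strict neutral clock."""
    generations = pairwise_diff_per_site / (2 * MUTATION_RATE)
    return generations * GENERATION_YEARS

# Illustrative only: a pairwise difference of 1.5e-5 per site.
print(round(divergence_years(1.5e-5)))
```

Real analyses are far more involved (they must account for ancestral polymorphism, admixture, and drift), but the principle is the same: genome-wide differences accumulate with time, whereas the handful of genes shaping anatomy can lag behind or respond to selection.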

[...]  ancient plant and animal remains found on several offshore islands provide evidence that some areas of land on the outer coast remained unglaciated and habitable during the Ice Age. These ice-free areas are called refugia, and evidence for their existence has been found off the Pacific coast from Alaska to southern British Columbia.

Although there is no direct evidence for human occupation of these refugia during the mid-glacial period, it is clear that a chain of habitable environments existed along the Pacific Northwest Coast, and that these environments could have supported people as they made their way down the coast.

If people moved down the West Coast, and then into the interior from there, where and when did this inward movement occur? Is there any archaeology suggesting that populations on the coast began moving inland? 

A few sites from the interior areas of Washington State, Oregon and Idaho may demonstrate this. Stemmed projectile points are found in a site along the Snake River in Washington State, with dates ranging from 8,800 to 10,800 years ago. Another site in south-central Oregon, Fort Rock Cave, contained a layer of gravel that had two obsidian points within it. Dates from this layer are as old as 13,000 years BP. Wilson Butte Cave from Idaho also contains human-made artifacts dating to between 14,500 and 13,000 years ago. Perhaps these sites are examples of early people moving inland; however, the small number of sites uncovered so far makes it hard to determine definitively whether the early settlers came from the coast, or from the east. (VMC, 2005)

Kennewick Man may thus have been part of an earlier wave of people moving into the Americas, which was long confined to the coastal Northwest. With the end of the ice age, circa 12,000 years ago, another wave of settlement opened up via an ice-free corridor running from Alaska to Montana along the eastern side of the Rockies. This second wave, associated with the Clovis culture, brought more people than the first one and ultimately contributed the most to present-day Amerindians.

There were humans even earlier

But the story doesn't end here. There seems to have been another people before the Amerindians, and even before the earlier, more European-looking population to which Kennewick Man belonged. A recent study has looked at the gene pool of Native Americans from the Amazon. Not surprisingly, most of it closely matches that of Northeast Asians. But a tiny portion is like what we see in the natives of Australia, Papua New Guinea, and Melanesia (Skoglund et al., 2015).

It would be easy to dismiss this finding as a fluke, were it not for other evidence of a very different people who once lived in the Amazon basin 16,000 to 9,000 years ago (Roosevelt et al., 1996). While overlapping in time with the Clovis culture, they show none of its emphasis on big game hunting, as seen in the well-known Clovis projectile point and other hunting tools. In fact, they were much like tropical foragers of central Africa or Papua New Guinea. And their earliest remains precede the Clovis culture by at least three thousand years, even though the Amazon rain forest should have been one of the last areas to be penetrated by former denizens of the Arctic.

There's more. A site in central Brazil has yielded several skulls dated to between 8,200 and 9,500 years ago. They don't look at all Amerindian: 

[...] they exhibit strong morphological affinities with present day Australians and Africans, showing no resemblance to recent Northern Asians and Native Americans. These findings confirm our long held opinion that the settlement of the Americas was more complicated in terms of biological input than has been widely assumed. The working hypothesis is that two very distinct populations entered the New World by the end of the Pleistocene, and that the transition between the cranial morphology of the Paleoindians and the morphology of later Native Americans, which occurred around 8-9 ka, was abrupt. [...] The similarities of the first South Americans with sub-Saharan Africans may result from the fact that the non-Mongoloid Southeast Asian ancestral population came, ultimately, from Africa, with no major modification in the original cranial bauplan of the first modern humans. (Neves et al., 2003)

Similar findings have emerged from analysis of skulls from Mexico dated to between 9,000 and 11,000 years ago and skulls from Colombia dated to between 7,500 and 8,300 years ago:

[...] only 6 out of 25 comparisons displayed in Table 3 tend to tie an early Mexican specimen to an Amerindian sample. Conversely, 19 of the 25 comparisons reflect the greatest similarity to Africans (6/25), Paleoindians (5/25), Australians (3/25), Polynesians (3/25), South Asians (1/25), or the Ainu (1/25). When first-place positions are explored, all five are circum-Pacific, either recent or early. Among second-place positions, 4 out of 5 are circum-Pacific, and the remaining one is African.

[...] To summarize, analyses of individual skulls against reference samples suggest that the early Mexican fossils studied do not share a common craniofacial morphology with Amerindians or East Asians, as reported elsewhere for South Paleoindians, some North Paleoindian specimens […] and some modern groups like Fuegian-Patagonians and the Pericúes from Baja California.

[...] This study does not support continuity between Early and Late Holocene groups in the Americas: Archaic remains from Colombia are not an intermediate point between Paleoamericans and modern groups. Moreover, the data presented here support the idea that the first settlers of the New World preceded the origin of the more specialized morphology observed in modern populations from Northeast Asia. (Gonzalez-Jose et al., 2005)

This shouldn't be too surprising. Here and there in Southeast Asia we find relic groups of small, dark-skinned, and woolly-haired hunter-gatherers: the Andamanese of India, the Semang of Malaysia, and the Aeta of the Philippines. They used to predominate throughout that region as late as four thousand years ago. Farther back, in prehistory, they may also have lived farther north, perhaps at one point occupying the entire East Asian littoral ... and, from there, spreading into the Americas. This would have been before the last ice age, and probably before another wave of modern humans moved into northern Eurasia.

The past is another country, just as the future is another country. We unthinkingly assume that a place has always been home to a people who look a certain way, behave a certain way, and organize their lives a certain way. This is as untrue for the Americas as it is for anywhere else. Going back in time, we see people who look more and more ancestral not only to Amerindians but also to Europeans and East Asians. Eventually, those ancestral Eurasians disappear and we meet a very different sort of human.

What happened to those first inhabitants of the Americas? Did they go peacefully into the night when the newcomers arrived, retreating farther and farther into more remote areas? Or did the two groups fight it out? There was probably a range of scenarios—perhaps small numbers of newcomers initially worked out a modus vivendi with the natives, which later broke down as they became more and more numerous. In any case, the process matters less than the result. Those first Americans went into the night, peacefully or not.

References

Gonzalez-Jose, R., W. Neves, M. Mirazon Lahr, S. Gonzalez, H. Pucciarelli, M. Hernandez Martinez, and G. Correal. (2005). Late Pleistocene/Holocene Craniofacial Morphology in Mesoamerican Paleoindians: Implications for the Peopling of the New World, American Journal of Physical Anthropology, 128, 772-780
http://www.hectorpucciarelli.com.ar/pdf/112.AJPA-Gonzalez-Jose%20et%20al.%202005(b).pdf

Neves, W.A., A. Prous, R. Gonzalez-Jose, R. Kipnis, and J. Powell. (2003). Early Holocene human skeletal remains from Santana do Riacho, Brazil: implications for the settlement of the New World, Journal of Human Evolution, 45, 19-42.
http://www.museunacional.ufrj.br/arqueologia/docs/papers/Prous/nevesprous2003skeletalremains.pdf 

Rasmussen, M., M. Sikora, A. Albrechtsen, T. Sand Korneliussen, J. Victor Moreno-Mayar, G. David Poznik, C.P.E. Zollikofer, M.S. Ponce de Leon, M.E. Allentoft, I. Moltke, H. Jonsson, C. Valdiosera, R.S. Malhi, L. Orlando, C.D. Bustamante, T.W. Stafford Jr., D.J. Meltzer, R. Nielsen, and E. Willerslev. (2015). The ancestry and affiliations of Kennewick Man. Nature, early view.
http://www.nature.com/nature/journal/vnfv/ncurrent/full/nature14625.html 

Roosevelt, A.C., M. Lima da Costa, C. Lopes Machado, M. Michab, N. Mercier, H. Valladas, J. Feathers, W. Barnett, M. Imazio da Silveira, A. Henderson, J. Silva, B. Chernoff, D.S. Reese, J.A. Holman, N. Toth, and K. Schick. (1996). Paleoindian cave dwellers in the Amazon: The peopling of the Americas, Science, 272, 373-384.
http://www.researchgate.net/profile/William_Barnett3/publication/235237012_Paleoindian_Cave_Dwellers_in_the_Amazon_The_Peopling_of_the_Americas/links/00b7d524c6599de57c000000.pdf 

Skoglund, P., S. Mallick, M.C. Bortolini, N. Chennagiri, T. Hunemeier, M.L. Petzl-Erler, F. Mauro Salzano, N. Patterson, and D. Reich. (2015). Genetic evidence for two founding populations of the Americas, Nature, early view.
http://www.nature.com/nature/journal/vnfv/ncurrent/full/nature14895.html 

VMC (2005). A journey to a new land. Coastal Refugia
http://www.sfu.museum/journey/an-en/postsecondaire-postsecondary/refuges_cotiers-coastal_refugia

In the eye of the ancient beholder


 
Egyptian painting of a Libyan, a Kushi, a Syrian, and an Egyptian.  In the Middle East, the Egyptians were seen as the Dark Other (Wikicommons)

 

Mention the term ‘skin color’ and people usually think of race or ethnicity. Yet this way of thinking became dominant only when Europeans began moving out and colonizing the rest of the world, beginning in the 16th century. Previously, physical features were less useful as ethnic markers. We knew about and quarrelled with those groups of people who lived within close range, and they tended to look a lot like us. People farther away looked more different, but we had less to do with them. Often, we didn’t even know they existed. So we separated "us" from "them" mainly on the basis of culture—language, religion, customs, and so on.

In those earlier times, skin color was used to distinguish among individuals of the same people and between the two sexes, women being paler and men ruddier and browner. A pale color also set infants apart, particularly in those societies where everyone else was much darker-skinned.

Skin color thus had meanings related to gender, age, or simply the identity of any one individual. This was true for all cultures. For example, in pre-Islamic writings from Arabia:

Human beings are frequently described by words which we might translate as black, white, red, olive, yellow, and two shades of brown, one lighter and one darker. These terms are usually used in a personal rather than an ethnic sense and would correspond to such words as "swarthy," "sallow," "blonde," or "ruddy" in our own modern usage more than to words like "black" and "white." (Lewis, 1990, p. 22)

Similarly, the Japanese would use the terms shiroi (white) and kuroi (black) to describe their gradations of skin color (Wagatsuma, 1967). The Igbo of precolonial Nigeria used ocha (white) and ojii (black) in the same way, so that nwoko ocha (white man) merely meant an African with a yellowish or reddish complexion (Ardener, 1954).

Jews of Antiquity

This older way of viewing skin color—personal, relativistic, and gender-oriented—has been studied by David Goldenberg with respect to the Jews of the ancient world.

The Jews considered their skin to be light brown. A second-century rabbi compared it to “the boxwood tree, neither black nor white, but in between" (Goldenberg, 2003, p. 95). In papyri from Ptolemaic Egypt, Jews are almost always described as "honey-colored" (Cohen, 1999, pp. 29-30).

Nonetheless, Jewish women were preferentially referred to as "white." This reflected the naturally lighter complexion of women, which was made lighter still by sun avoidance and various cosmetics. One rabbinic text advises, "He who wishes to whiten his daughter's complexion, let him give her milk and young fowl," while another recommends using olive oil as a body lotion for the same purpose. A Midrash recounts that after returning from exile in Babylon the men didn't wish to marry the women who came with them because the sun had darkened their faces on the long journey home (Goldenberg, 2003, p. 86). This preference is implicit in a rabbinic discussion of a vow "not to marry a particular woman who is ugly, but it turns out that the woman is beautiful; or black (dark; shehorah), but it turns out that she is white (fair; levanah); or short, but she is tall. Even if she was ugly, but became beautiful; or black, and became white" (Goldenberg, 2003, pp. 85-86).

"White" was also the preferred color of infants. According to a rabbinic tradition, if a woman was suspected of infidelity and found innocent, she would go through the following changes: "if she formerly bore ugly babies, she will now bear beautiful babies; if she formerly bore dark [shehorin] children, she will now bear fair [levanim] children" (Goldenberg, 2003, p. 96).

In the above cases, the terms "white" and "black" were projected onto individuals and onto the two sexes in a relative sense that is better translated by "light" and "dark." This relativism also held true when the same terms were projected onto ethnic groups. Hence, the Jews often called themselves "white" in relation to darker-skinned peoples, usually Egyptians or kushi (black Africans).

For example, in one parable a kushit maidservant claims she is the most beautiful of her household. Her matronah (a free woman of good family) replies: "Come the morning and we'll see who is black [shahor] and who is white [lavan]" (Goldenberg, 2003, p. 88). Interestingly, the Jews also considered themselves “white” in comparison to Arabs (Goldenberg, 2003, pp. 120-124).

There was also the reverse semantic process: the description of an individual’s skin color by a word that originally applied to an ethnic group. A lighter-skinned Jew could, for instance, be called a germani, and a darker-skinned Jew a kushi. There are even cases of the word kushi being used for inanimate objects, like dark wine (Goldenberg, 2003, p. 116).

Whatever the case, use of color terms in an ethnic sense tended to carry over values from the non-ethnic sense, specifically the aesthetic ones associated with the lighter skin of women and infants. We see this in a commentary on Gen 12:11 where Abraham enters Egypt and, fearing that the Egyptians will covet his wife, says: "Now I know that you are a beautiful woman." This is explained in the commentary as meaning: "Now we are about to enter a place of ugly and dark [people]" (Goldenberg, 2003, p. 86).

The Egyptians were the Dark Other. Depreciation of their darker skin became associated with negative values, not only ugliness but also uncleanliness and servility. In rabbinic writings, Egypt is called "a house of slaves" and the Pharaoh himself is said to be a "slave." In one text, Jacob debates whether to go to Egypt: "Shall I go to an unclean land, among slaves, the children of Ham?" (Goldenberg, 2003, pp. 160-161). This view is preserved in a homily by the third-century Christian writer Origen: 

But Pharao easily reduced the Egyptian people to bondage to himself, nor is it written that he did this by force. For the Egyptians are prone to a degenerate life and quickly sink to every slavery of the vices. Look at the origin of the race and you will discover that their father Cham, who had laughed at his father's nakedness, deserved a judgment of this kind, that his son Chanaan should be a servant to his brothers, in which case the condition of bondage would prove the wickedness of his conduct. Not without merit, therefore, does the discolored posterity imitate the ignobility of the race.
Homily on Genesis XVI

Most academics argue that dark skin became mentally associated with slavery through the Atlantic slave trade of the 16th to 19th centuries. Others, like Bernard Lewis, believe this mental association goes back to the expansion of the Muslim world into Africa in the seventh century (Lewis, 1990). Actually, it seems to go even farther back, at least to the third century and perhaps even to the establishment of Roman rule over the region (Goldenberg, 2003, pp. 155-156, 168-174). From that time onward, a pigmentocracy took shape in Egypt with Greeks, Jews, and Romans forming the dominant class. Meanwhile, a trade in slaves grew and developed between sub-Saharan Africa and the Middle East. Once the Roman Empire had stopped growing, and stopped taking large numbers of prisoners of war, trade became the main source of domestic servants. It is perhaps significant that the kushit maidservant appears as a recurring motif in rabbinic literature, since that period—Late Antiquity—would correspond to the time when the black slave trade was slowly but steadily growing (Goldenberg, 2003, pp. 126-128). 

This trade may have undermined the status of Egyptians as the Dark Other. Initially, the kushi were often seen as an especially dark sort of Egyptian, perhaps because they were usually encountered in the Middle East as subjects of the Pharaoh (Goldenberg, 2003, pp. 17, 109, 301 n111). In Late Antiquity, they emerged more and more as a distinct category, probably because they were becoming more and more numerous as slaves, particularly in the eastern provinces of the Empire. It was during this time that their dark skin came to be explained as a curse on their forefather Kush, whose father Ham had sinned either by seeing Noah naked or by copulating in the Ark. In one text, Noah curses Ham with the words: "May your progeny be dark and ugly" (Goldenberg, 2003, p. 97). This is not a specifically Jewish tradition, being also attested in early Christian and early Islamic writings (Goldenberg, 2003, pp. 150-177).

Conclusion

We perceive human skin color by means of mental algorithms that originally processed non-ethnic differences in pigmentation: 1) the minor variability that exists among individuals; 2) the difference between infants (who are born with little pigmentation) and older humans; and 3) the sex difference, female skin being paler than male skin because it has less melanisation and less blood flowing through its outer layers. This is a universal sex difference, although it is most visible in humans of medium color (Frost, 2007).

Initially, these algorithms focused on the second source of variability. At some point in evolution, human skin acquired a new meaning when the adult female body began to mimic the relative lightness of infant skin, as well as other visible, audible, and tangible aspects of infants—smoother, more pliable skin, a higher-pitched voice, and a more childlike face. This mimicry arose apparently as a means to provide the adult female with the psychological effects that these traits induce in other adults, particularly males, i.e., a lower level of aggressiveness and a greater desire to provide care and nurturance (Frost, 2011).

After being a sign of age difference and then gender difference, skin color took on a third meaning within historic times—to varying degrees in Antiquity and then overwhelmingly with the expansion of the European world from the sixteenth century onward. Today, this new meaning has eclipsed the older ones, at least at the level of conscious thought.

References 

Ardener, E.W. (1954). Some Ibo attitudes to skin pigmentation, Man, 54, 71-73.
http://www.jstor.org/stable/2793760?seq=1#page_scan_tab_contents

Cohen, S.J.D. (1999). The Beginnings of Jewishness, Berkeley. 

Frost, P. (2007). Comment on Human skin-color sexual dimorphism: A test of the sexual selection hypothesis, American Journal of Physical Anthropology, 133, 779-781.
http://www.researchgate.net/publication/6480611_Human_skin-color_sexual_dimorphism_A_test_of_the_sexual_selection_hypothesis  

Frost, P. (2011). Hue and luminosity of human skin: a visual cue for gender recognition and other mental tasks, Human Ethology Bulletin, 26(2), 25-34.
http://www.researchgate.net/publication/256296588_Hue_and_luminosity_of_human_skin_a_visual_cue_for_gender_recognition_and_other_mental_tasks/file/72e7e5223eb2c3eb3b.pdf

Goldenberg, D.M. (2003). The Curse of Ham. Race and Slavery in Early Judaism, Christianity, and Islam, Princeton: Princeton University Press.

Goldenberg, D.M. (2009). Racism, Color Symbolism, and Color Prejudice, in M. Eliav-Feldon, B. Isaac, and J. Ziegler (eds.) The Origins of Racism in the West, Cambridge.
http://www.researchgate.net/profile/David_Goldenberg2/publication/263161501_Racism_Color_Symbolism_and_Color_Prejudice/links/00b7d53a09ef919429000000.pdf 

Lewis, B. (1990). Race and Slavery in the Middle East, Oxford: Oxford University Press.

Origen (2010). Homilies on Genesis and Exodus, transl. by R.E. Heine., Washington D.C.: Catholic University of America Press
https://books.google.ca/books?id=X_mSBavPcq4C&pg=PA214&source=gbs_toc_r&cad=2#v=onepage&q&f=false  

Wagatsuma, H. (1967). The social perception of skin color in Japan, Daedalus, 96, 407-443.
http://www.jstor.org/stable/20027045?seq=1#page_scan_tab_contents  

The past is another country


 
Male figurine, pottery, c. 7,000–5,000 years ago, Greece, Archaeological Museum of Heraklion, Wikicommons

 

A very important recent finding is the recovery of the entire genomes of three prehistoric farmers who lived in northern Greece 7500-5500 years BP. The data have been analyzed and are expected to shed light on the ancestral relationships of the first Europeans and provide a wealth of information about functional and morphological characteristics. Already it is known that some of our Neolithic ancestors could not digest milk, i.e., they were intolerant to lactose, and had brown eyes and dark skin. (Anon, 2015)

This is one of several findings with a common theme: the farther back in time we go, the less familiar people look. And we don't have to go very far.

This fact came up in a column I wrote about the Americas. If we turn back the clock, Amerindians look more and more European, yet their genes say they're still Amerindian. We're just getting closer to the time when both groups were the same people. If we turn back the clock even farther, those "proto-Amerindians" give way to a very different sort of human, much like the inhabitants of Papua New Guinea (Frost, 2015).

What happened to those first Americans? They were "replaced." If you're looking for family entertainment, don't study history or prehistory.

Ironically, one of the comments on that column argued that European settlers had stolen this land from the Native Americans and had thus forfeited any moral right to complain about immigration. Well, one genocide doesn’t justify another. I would also venture to say that the universe cares little about our notions of morality. There is only survival or extinction. Everything else is sophistry.

Early and not-so-early Europeans

Ancient DNA is telling a similar story about early Europeans. As late as 8,000 years ago, only the hunting peoples of northern and eastern Europe had white skin and a diverse palette of hair and eye colors. Farther west and south, in Spain, Luxembourg, and Hungary, we find hunter-gatherers with a strange mix of brown skin and eyes of blue, green, or grey. Central Europe was also home to early farmers with white skin, dark hair, and brown eyes. If we go still farther south, beyond the Alps, we see faces and bodies that seem to evoke another continent (Gibbons, 2015; Olalde and Lalueza-Fox, 2015).

This is in line with earlier work on skeletal remains. Angel (1972) found that "one can identify Negroid (Ethiopic or Bushmanoid?) traits of nose and prognathism appearing in Natufian latest hunters [...] and in Anatolian and Macedonian first farmers." In the Middle East, the Natufians (15,000-12,000 BP) were anatomically more similar to present-day West Africans than to present-day Middle Easterners (Brace et al., 2006).

Many African-looking skulls and skeletons have been found in an arc of territory stretching from Brittany, through Switzerland and northern Italy, and into the Balkans. Most are from the Neolithic, but some are as recent as the Bronze Age and the early Iron Age (Boule and Vallois, 1957, pp. 291-292).

Does this mean that prehistoric Greek farmers were more closely related to sub-Saharan Africans than to present-day Greeks? The genome analysis isn't complete, but I think not. They may have looked un-European, but their genomes would probably place them a lot closer to present-day Europeans than to anyone else. We saw the same thing with Kennewick Man. His skull looked European, yet genetically he was closer to Amerindians.

Those prehistoric Greeks were descended from a wave of modern humans that entered Europe some 40,000 years ago. In the north and east, the new settlers encountered selection pressures that recolored and reshaped their most visible features, making them look very different from their African-like relatives to the south and west. Yet this new look came about through changes to just a tiny subset of the genome.

This is not to argue that "we're all pretty much the same under the skin." One could just as well say that humans and chimps are pretty much the same under the skin. They are, actually, if one looks only at flesh and blood. Nonetheless, a human is not a chimp with a body shave.

A second look at the spread of farming

This portrait of early Europeans is still incomplete, and some findings seem contradictory. For instance, why did those Greek farmers lack the alleles for white skin and lactose tolerance when the same alleles were present in Central European farmers from the same time period? In fact, it now seems that both traits evolved in Europe (Gibbons, 2015). A year ago, almost everyone pointed to those Central Europeans as proof that white skin and lactose tolerance must have come from the Middle East, along with farming itself.

It has become popular to argue that farming spread out of the Middle East and into Central Europe through a process of population replacement. The argument seems logical. Because farming supports a larger population per unit of land area, immigrants from the Middle East should have overwhelmed the native hunter-gatherers of Europe by force of numbers. Apparently, things weren’t so simple. Early European farmers were a mixed bunch, and their relationship to the Middle East looks just as problematic. Farming did spread out of the Middle East, but the extent to which this diffusion was genetic or cultural is far from clear. Even the hard evidence looks soft when given a second look.

For instance, we know that a sharp genetic boundary separates late hunter-gatherers from early farmers in Europe. That's good evidence for population replacement. But when a Danish team used a more complete time series of ancient DNA samples, they found that the genetic boundary actually separated early farmers from somewhat later farmers. Haplogroup U, the supposed genetic signature of Europe's ancient hunter-gatherers, reached its current low level after the Neolithic, according to that time series (Melchior et al., 2010). The genetic boundary must therefore be partly due to something else than population replacement, perhaps new selection pressures.

Another piece of hard evidence is the cultural conservatism of hunter-gatherers, who generally prefer to die out rather than embrace farming and who especially dislike having to plan their lives over a yearly cycle. But that finding is based on tropical hunter-gatherers. Northern hunter-gatherers already plan ahead over the coming year and are better able to make the leap. If we take the Mississippian culture of the American Midwest and Southeast (c. 800-1600), we find that small groups of hunter-gatherers had little trouble making the shift not only to large-scale intensive maize farming but also to life in large towns of up to 40,000 people—all this in half a millennium.

Indeed, if we look at pre-Columbian America, we see that farming first developed in Mesoamerica and then spread north through cultural diffusion. There were very few cases of farmers demographically replacing hunter-gatherers. Why would the situation have been so different in prehistoric Europe? As a general rule, it seems that population replacement occurs only when there is a profound difference in mental makeup that cannot be easily changed.

A final question

Southern Europe and the Middle East were initially home to dark, African-like people, who were then replaced by European-like people, apparently from the north, beginning around 12,000 years ago. The process of replacement was still incomplete, however, during the time of those northern Greek farmers 7,500 to 5,500 years ago. That last date is very close to the dawn of history. Only a millennium and a half later, the Minoans were building the palace of Knossos. Are those African-like people remembered in European myths, legends, and folk tales?

h/t to Dienekes

References

Angel, J.L. (1972). Biological relations of Egyptian and eastern Mediterranean populations during Pre-dynastic and Dynastic times, Journal of Human Evolution, 1, 307-313. 

Anon. (2015). The Archeological Museum of Thessaloniki. Report on ancient DNA - Learn what eye color your ancestor had and what he ate in the Neolithic! Iefimerida
http://www.iefimerida.gr/news/219751/ekthesi-gia-arhaio-dna-mathe-ti-hroma-matia-eihe-kai-ti-etroge-o-neolithikos-progonos

Boule, M. & Vallois, H.V. (1957). Fossil Men. New York: Dryden Press. 

Brace, C.L., N. Seguchi, C.B. Quintyn, S.C. Fox, A.R. Nelson, S.K. Manolis, and P. Qifeng. (2006). The questionable contribution of the Neolithic and the Bronze Age to European craniofacial form, Proceedings of the National Academy of Sciences U.S.A., 103, 242-247
http://www.pnas.org/content/103/1/242.full 

Dienekes. (2015). Prehistoric farmers from northern Greece had lactose intolerance, brown eyes, dark skin, Dienekes' Anthropology Blog, August 7
http://dienekes.blogspot.ca/2015/08/prehistoric-farmers-from-northern.html 

Frost, P. (2015). Guess who first came to America? The Unz Review
http://www.unz.com/pfrost/guess-who-first-came-to-america/ 

Gibbons, A. (2015). How Europeans evolved white skin, Science, Latest News, April 2
http://news.sciencemag.org/archaeology/2015/04/how-europeans-evolved-white-skin 

Melchior, L., N. Lynnerup, H.R. Siegismund, T. Kivisild, J. Dissing. (2010). Genetic diversity among ancient Nordic populations, PLoS ONE, 5(7): e11898
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0011898#pone-0011898-g002 

Olalde, I. and C. Lalueza-Fox. (2015). Modern humans' paleogenomics and the new evidences on the European prehistory, Science and Technology of Archaeology Research, 1
http://www.maneyonline.com/doi/pdfplus/10.1179/2054892315Y.0000000002

Wishing you a merry ... something


 
Yale was founded by English Congregationalist ministers. Today, only 22% of its student body has a Christian European background of any sort.

 

Last year, around this time, friends and acquaintances offered me all sorts of religiously neutral salutations: Seasons Greetings! Happy Holidays! Joyeuses fêtes! Meilleurs vœux! Only two people wished me Merry Christmas.

One was Muslim, the other was Jewish.

They meant well. After all, isn't that the culturally correct greeting? In theory, yes. In practice, most Christians feel uncomfortable affirming their identity. And this self-abnegation gets worse the closer you are to the cultural core of Anglo-America. Immigrants of Christian background enjoy being wished Merry Christmas. Black people likewise. Catholics seem to split half and half, depending on how traditional or nominal they are.

But the WASPs. Oh, the WASPs! With them, those two words are a faux pas. The response is usually polite but firm: "And a very happy holiday season to you!"

Things weren’t always that way. The situation calls to mind a Star Trek episode where Capt. Kirk persuades an alien robot to destroy itself. "That which excludes is evil. If you affirm your identity, you are excluding those who don't share your identity. You are therefore evil."

I could question this logic. What about other cultural groups? Why single out just one? But I’ve heard the answer already. WASPs and their culture dominate North America. The path to power, or simply a better life, runs through their institutions. Minorities can affirm their own identities without restricting the life choices of others, but the same does not hold true for WASPs. Their identity affects everyone and must belong to everyone.

I’m still not convinced. Yes, WASPs did create the institutions of Anglo-America, but their influence in them is now nominal at best. The U.S. Supreme Court used to be a very WASPy place. Now, there's not a single White Protestant on it. That's a huge underrepresentation for a group that is still close to 40% of the population. We see the same thing at the Ivy League universities, which originally trained Protestant clergy for the English colonists. Today, how many of their students have any kind of Christian European background? The proportions are estimated to be 20% at Harvard, 22% at Yale, and 15% at Columbia (Unz, 2012).

Sometimes reality is not what is commonly believed.  WASPs are not at all privileged. In fact, they have been largely pushed aside in a country that was once theirs.

Whenever this ethnic displacement comes up for discussion (it usually doesn't), it gets put down to meritocracy. In the past, WASPs were the best people for the job of running the country. Now it's a mix of Jews, Asians, and other high-performing groups. A cynic might ask whether merit is the only factor ... and whether the U.S. is better run today than it was a half-century ago. Indeed, the latest Supreme Court appointee had little experience as a solicitor general and a scanty record of academic scholarship.

Merit isn't the whole story. There is also networking. In most parts of the world, an individual gets ahead in life by forming bonds of reciprocal assistance with family and kinfolk. "You scratch my back, I'll scratch yours." That's how most of the world works.

But not all of the world. Northwest Europeans have diverged the most from this pattern, at least since the 12th century (Macfarlane, 1978a, 1978b, 1992, 2012). Their kinship ties have been weaker and their sense of individualism correspondingly stronger. As a result, their cultural evolution has to a large degree been emancipated from the restraints of kinship, and this emancipation has facilitated other ways of organizing social relations: the nation-state, ideology, the market economy ... not to mention the strange idea of personal advancement through personal merit alone. This model of society has succeeded economically, militarily, and geopolitically, but it's vulnerable to people who don't play by the rules, since the threat of kin retaliation is insufficient to keep them in line. Societal survival is possible only to the extent that rule-breakers are ostracized and immigration restricted from cultures that play by other rules. 

This brings us to the dark side of traditional WASP culture: the busybodiness, the judgmentalism, the distrust of foreigners no matter how nice or refined they may seem. That mentality still exists, but it has been turned against itself. The people to be excluded are now those who exclude. The cultural programming for survival has been turned into a self-destruct mechanism ... as in that Star Trek episode.

Even if we could somehow abort this self-destruct sequence, it's hard to see how WASPs can survive on the current playing field. WASPs believe in getting ahead through rugged individualism. Most of the other groups believe in using family and ethnic connections. Guess who wins.

Anyway, I wish all of you a merry end of 2014! Far be it from me to exclude anyone from the merriment.
 

References
 

Macfarlane, A. (1978a). The origins of English individualism: Some surprises, Theory and Society: Renewal and Critique in Social Theory, 6, 255-277.
http://www.alanmacfarlane.com/TEXTS/Origins_HI.pdf

Macfarlane, A. (1978b). The Origins of English Individualism: The Family, Property and Social Transition, Oxford: Blackwell.

Macfarlane, A. (1992). On individualism, Proceedings of the British Academy, 82, 171-199.
http://www.alanmacfarlane.com/TEXTS/On_Individualism.pdf

Macfarlane, A. (2012). The invention of the modern world. Chapter 8: Family, friendship and population, The Fortnightly Review, Spring-Summer serial
http://fortnightlyreview.co.uk/2012/07/invention-8/

Unz, R. (2012). The myth of American meritocracy, The American Conservative, November 28
http://www.theamericanconservative.com/articles/the-myth-of-american-meritocracy/

Sometimes the consensus is phony


Migrants arriving on the island of Lampedusa (Wikicommons). The NATO-led invasion of Libya has opened a huge breach in Europe's defences.

 

A synthesis has been forming in the field of human biodiversity. It may be summarized as follows: 

1. Human evolution did not end in the Pleistocene or even slow down. In fact, it speeded up with the advent of agriculture 10,000 years ago, when the pace of genetic change rose over a hundred-fold. Humans were no longer adapting to relatively static natural environments but rather to faster-changing cultural environments of their own making. Our ancestors thus directed their own evolution. They created new ways of life, which in turn influenced who would survive and who wouldn't.

2. When life or death depends on your ability to follow a certain way of life, you are necessarily being selected for certain heritable characteristics. Some of these are dietary—an ability to digest milk or certain foods. Others, however, are mental and behavioral, things like aptitudes, personality type, and behavioral predispositions. This is because a way of life involves thinking and behaving in specific ways. Keep in mind, too, that most mental and behavioral traits have moderate to high heritability.

3. This gene-culture co-evolution began when humans had already spread over the whole world, from the equator to the arctic. So it followed trajectories that differed from one geographic population to another. Even when these populations had to adapt to similar ways of life, they may have done so differently, thus opening up (or closing off) different possibilities for further gene-culture co-evolution. Therefore, on theoretical grounds alone, human populations should differ in the genetic adaptations they have acquired. The differences should generally be small and statistical, being noticeable only when one compares large numbers of individuals. Nonetheless, even small differences, when added up over many individuals and many generations, can greatly influence the way a society grows and develops.

4. Humans have thus altered their environment via culture, and this man-made environment has altered humans via natural selection. This is probably the farthest we can go in formulating a unified theory of human biodiversity. For Gregory Clark, the key factor was the rise of settled, pacified societies, where people could get ahead through work and trade, rather than through violence and plunder. For Henry Harpending and Greg Cochran, it was the advent of agriculture and, later, civilization. For J. Philippe Rushton and Ed Miller, it was the entry of humans into cold northern environments, which increased selection for more parental investment, slower life history, and higher cognitive performance. Each of these authors has identified part of the big picture, but the picture itself is too big to reduce to a single factor.

5. Antiracist scholars have argued against the significance of human biodiversity, but their arguments typically reflect a lack of evolutionary thinking. Yes, human populations are open to gene flow and are thus not sharply defined (if they were, they would be species). It doesn't follow, however, that the only legitimate objects of study are sharply defined ones. Few things in this world would pass that test.

Yes, genes vary much more within human populations than between them, but these two kinds of genetic variation are not comparable. A population boundary typically coincides with a geographic or ecological barrier, such as a change from one vegetation zone to another or, in humans, a change from one way of life to another. It thus separates not only different populations but also differing pressures of natural selection. This is why genetic variation within a population differs qualitatively from genetic variation between populations. The first kind cannot be ironed out by similar selection pressures and thus tends to involve genes of little or no selective value. The second kind occurs across population boundaries, which tend to separate different ecosystems, different vegetation zones, different ways of life ... and different selection pressures. So the genes matter a lot more.

This isn't just theory. We see the same genetic overlap between many sibling species that are nonetheless distinct anatomically and behaviorally. Because such species have arisen over a relatively short span of time, like human populations, they have been made different primarily by natural selection, so the genetic differences between them are more likely to have adaptive, functional consequences ... as opposed to "junk variability" that slowly accumulates over time.
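The aggregation argument above can be illustrated with a toy simulation (all numbers here are hypothetical, chosen only for illustration): at any single locus, two simulated populations overlap almost completely, yet summing small allele-frequency differences across many loci separates individuals almost perfectly.

```python
import random

random.seed(0)  # deterministic, for reproducibility

N_LOCI = 1000   # number of loci (hypothetical)
N_IND = 200     # individuals per population (hypothetical)
DELTA = 0.10    # small per-locus allele-frequency difference

# Allele frequencies for two hypothetical populations,
# differing only slightly at each locus.
freq_a = [random.uniform(0.2, 0.8) for _ in range(N_LOCI)]
freq_b = [min(0.95, p + DELTA) for p in freq_a]

def genotype(freqs):
    """Diploid allele count (0, 1, or 2) at each locus."""
    return [sum(random.random() < p for _ in range(2)) for p in freqs]

pop_a = [genotype(freq_a) for _ in range(N_IND)]
pop_b = [genotype(freq_b) for _ in range(N_IND)]

# Classify each individual by total allele count, using the midpoint
# of the two populations' expected totals as the decision threshold.
threshold = sum(pa + pb for pa, pb in zip(freq_a, freq_b))
correct = sum(sum(g) < threshold for g in pop_a) + \
          sum(sum(g) >= threshold for g in pop_b)
accuracy = correct / (2 * N_IND)
print(f"classification accuracy over {N_LOCI} loci: {accuracy:.3f}")
```

Per locus, the two groups are nearly indistinguishable; aggregated over a thousand loci, classification approaches 100%. This is the statistical point usually credited to A.W.F. Edwards: greater within-group variation at each locus does not prevent between-group differences from being informative in the aggregate.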

Why is the above so controversial?

The above synthesis should not be controversial. Yet it is. In fact, it scarcely resembles acceptable thinking within academia and even less so within society at large. There are two main reasons.

The war on racism 

In the debate over nature versus nurture, the weight of opinion shifted toward the latter during the 20th century. This shift began during the mid-1910s and was initially a reaction against the extreme claims being made for genetic determinism. In reading the literature of the time, one is struck by the restraint of early proponents of environmental determinism, especially when they argue against race differences in mental makeup. An example appears in The Clash of Colour (1925), whose author condemned America's Jim Crow laws and the hypocrisy of proclaiming the rights of Europeans to self-determination while ignoring those of Africans and Asians. Nonetheless, like the young Franz Boas, he was reluctant to deny the existence of mental differences:

I would submit the principle that, although differences of racial mental qualities are relatively small, so small as to be indistinguishable with certainty in individuals, they are yet of great importance for the life of nations, because they exert throughout many generations a constant bias upon the development of their culture and their institutions. (Mathews, 1925, p. 151)

That was enlightened thinking in the 1920s. The early 1930s brought a radical turn with Hitler's rise to power and a growing sense of urgency that led many Jewish and non-Jewish scholars to declare war on "racism." The word itself was initially a synonym for Nazism, and even today Nazi Germany still holds a central place in antiracist discourse.

Why didn't the war on racism end when the Second World War ended? For one thing, many people feared a third global conflict in which anti-Semitism would play a dominant role. For another, antiracism took on a life of its own during the Cold War, when the two superpowers were vying for influence over the emerging countries of Asia and Africa.

Globalism

The end of the Cold War might have brought an end to the war on racism, or at least a winding down, had socialism not been replaced by an even more radical project: globalism. This is the hallmark of "late capitalism," a stage of historical development when the elites no longer feel restrained by national identity and are thus freer to enrich themselves at their host society's expense, mainly by outsourcing jobs to low-wage countries and by insourcing low-wage labor for jobs that cannot be relocated, such as those in construction and services. That's globalism in a nutshell.

This two-way movement redistributes wealth from owners of labor to owners of capital. Businesses get not only a cheaper workforce but also weaker labor and environmental standards. To stay competitive, workers in high-wage countries have to accept lower pay and a return to working conditions of another age. The top 10% are thus pulling farther and farther ahead of everyone else throughout the developed world. They're getting richer ... not by making a better product but by making the same product with cheaper and less troublesome inputs of labor. This is not a win-win situation, and the potential for revolutionary unrest is high.

To stave off unrest, economic systems require legitimacy, and legitimacy is made possible by ideology: a vision of a better future; how we can get there from here; and why we're not getting there despite our best efforts. Economic systems don't create ideology, but they do create conditions that favor some ideologies over others. With the collapse of the old left in the late 1980s, and the rise of market globalization, antiracism found a new purpose ... as a source of legitimacy for the globalist project.

I saw this up close in an antiracist organization during the mid to late 1980s. Truth be told, we mostly did things like marching in the May Day parade, agitating for a higher minimum wage, denouncing the U.S. intervention in Panama, organizing talks about Salvador Allende and what went wrong in Chile ... you get the drift. Antiracism was subservient to the political left. This was not a natural state of affairs, since the antiracist movement—like the Left in general—is a coalition of ethnic/religious factions that prefer to pursue their own narrow interests. This weakness was known to the political right, many of whom tried to exploit it by supporting Muslim fundamentalists in Afghanistan and elsewhere and black nationalists in Africa, Haiti, and the U.S. Yes, politics makes strange bedfellows.

With the onset of the 1990s, no one seemed to believe in socialism anymore and we wanted to tap into corporate sources of funding. So we reoriented. Leftist rhetoric was out and slick marketing in. Our educational materials looked glossier but now featured crude "Archie Bunker" caricatures of working people, and the language seemed increasingly anti-white. I remember feeling upset, even angry. So I left.

Looking back, I realize things had to happen that way. With the disintegration of the old socialist left, antiracists were freer to follow their natural inclinations, first by replacing class politics with identity politics, and second by making common cause with the political right, especially for the project of creating a globalized economy. Antiracism became a means to a new end.

This is the context that now frames the war on racism. For people in a position to influence public policy, antiracism is not only a moral imperative but also an economic one. It makes the difference between a sluggish return on investment of only 2 to 3% (which is typical in a mature economy) and a much higher one.

What to do?

Normally, I would advise caution. People need time to change their minds, especially on a topic as emotional as this one. When tempers flare, it's usually better to let the matter drop and return later. That's not cowardice; it's just a recognition of human limitations. Also, the other side may prove to be right. So, in a normal world, debate should run its course, and the policy implications discussed only when almost everyone has been convinced one way or the other.

Unfortunately, our world is far from normal. A lot of money is being spent to push a phony political consensus against any controls on immigration. This isn't being done in the dark by a few conspirators. It's being done in the full light of day by all kinds of people: agribusiness, Tyson Foods, Mark Zuckerberg, the U.S. Chamber of Commerce, and small-time operations ranging from landscapers to fast-food joints. They all want cheaper labor because they're competing against others who likewise want cheaper labor. It's that simple ... and stupid.

This phony consensus is also being pushed at a time when the demographic cauldron of the Third World is boiling over. This is particularly so in sub-Saharan Africa, where the decline in fertility has stalled and actually reversed in some countries. The resulting population overflow is now following the path of least resistance—northward, especially with the chaos due to the NATO-led invasion of Libya. In the current context, immigration controls should be strengthened, and yet there is lobbying to make them even weaker. The idiocy is beyond belief.

For these reasons, we cannot wait until even the most hardboiled skeptics are convinced. We must act now to bring anti-globalist parties to power: the UKIP in Britain, the Front national in France, the Partij voor de Vrijheid in the Netherlands, the Alternative für Deutschland in Germany, and the Sverigedemokraterna in Sweden. How, you may ask? It's not too complicated. Just go into the voting booth and vote. You don't even have to talk about your dirty deed afterwards. 

It looks like such parties will emerge in Canada and the United States only when people have seen what can be done in Europe. Until then, the tail must wag the dog. We in North America can nonetheless prepare the way by learning to speak up and stand up, and by recognizing that the "Right" is just as problematic as the "Left."
 

References


Clark, G. (2007). A Farewell to Alms. A Brief Economic History of the World, Princeton University Press, Princeton and Oxford. 

Clark, G. (2009a). The indicted and the wealthy: surnames, reproductive success, genetic selection and social class in pre-industrial England,
http://www.econ.ucdavis.edu/faculty/gclark/Farewell%20to%20Alms/Clark%20-Surnames.pdf 

Clark, G. (2009b). The domestication of Man: The social implications of Darwin, ArtefaCTos, 2(1), 64-80.
http://campus.usal.es/~revistas_trabajo/index.php/artefactos/article/viewFile/5427/5465 

Clark, G. (2010). Regression to mediocrity? Surnames and social mobility in England, 1200-2009
http://www.econ.ucdavis.edu/faculty/gclark/papers/Ruling%20Class%20-%20EJS%20version.pdf

Cochran, G. and H. Harpending. (2010). The 10,000 Year Explosion: How Civilization Accelerated Human Evolution, New York: Basic Books. 

Frost, P. (2011a). Human nature or human natures? Futures, 43, 740-748.
http://www.researchgate.net/publication/251725125_Human_nature_or_human_natures/file/504635223eaf8196f0.pdf  

Frost, P. (2011b). Rethinking intelligence and human geographic variation, Evo and Proud, February 11
http://evoandproud.blogspot.ca/2011/02/rethinking-intelligence-and-human.html 

Harpending, H., and G. Cochran. (2002). In our genes, Proceedings of the National Academy of Sciences U.S.A., 99, 10-12.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC117504/  

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution, Proceedings of the National Academy of Sciences U.S.A., 104, 20753-20758.
http://www.researchgate.net/publication/5761823_Recent_acceleration_of_human_adaptive_evolution/file/9c9605240c4bb57b55.pdf

Mathews, B. (1925). The Clash of Colour. A Study in the Problem of Race, London: Edinburgh House Press. 

Miller, E. (1994). Paternal provisioning versus mate seeking in human populations, Personality and Individual Differences, 17, 227-255.
http://www.prometheism.net/paternal/  

Rushton, J. P. (2000). Race, Evolution, and Behavior, 3rd ed., Charles Darwin Research Institute.
http://lazypawn.com/wordpress/wp-content/uploads/Race_Evolution_Behavior.pdf

French lesson



A burning car during the 2005 riots. (Wikicommons: Strologoff)

 

The gruesome attack on Charlie Hebdo has earned condemnation around the world. It has been called "cowardly" and "evil" by Barack Obama, "a barbaric act" by Stephen Harper, and an "infamy" by François Hollande.

Yes, violence is serious. It's a crime when done by an individual and war when done by a country. It's a grave breach of the rules that govern our society. Whatever differences we may have, they are to be settled peacefully, through the courts if need be. Violence is just not to be done.

Except it increasingly is. The attack on Charlie Hebdo is not an isolated incident. It's part of a worsening trend of violence by people described as jeunes [youths] or simply not described at all. That was not the case in the recent attack; the victims were too well known. But it is generally the case, and this conspiracy of silence has become something of a social norm, particularly in the media.

Yet statistics do exist, notably those compiled by the Gendarmerie. According to French criminologist Xavier Raufer:

The criminality we are talking about is the kind that is making life unbearable for the population: burglaries, thefts of all sorts, assaults, violent thefts without firearms, etc. In these specific cases, 7 out of 10 of these crimes are committed by people who in one way or another have an immigrant background, either directly (first generation on French territory, with or without a residence permit) or indirectly (second generation). (Chevrier and Raufer, 2014)

The word "immigrant" is misleading. Many if not most are French-born, and they tend to come much more from some immigrant groups than from others. In general, they are young men of North African or sub-Saharan African background, plus smaller numbers of Roma and Albanians. 

This criminality, when not being denied, is usually put down to social marginalization and lack of integration. Yet the reverse is closer to the truth. The typical French person is an individual in a sea of individuals, whereas immigrant communities enjoy strong social networks and a keen sense of solidarity. This is one of the reasons given for why the targets of the crime wave are so often Français de souche [old-stock French]. "Whites don't stick up for each other."


Personal violence in human societies

In France, as in other Western countries, personal violence is criminalized and even pathologized. The young violent male is said to be "sick." Or "deprived." He has not had a chance to get a good job and lead a nice quiet life.

Yet this is not how young violent males perceive themselves or, for that matter, how most human societies have perceived them down through the ages. Indeed, early societies accepted the legitimacy of personal violence. Each adult male had the right to defend himself and his kin with whatever violence he deemed necessary. The term "self-defence" is used loosely here—a man could react violently to a lack of respect or to slurs on his honor or the honor of his ancestors. There were courts to arbitrate this sort of dispute but they typically had no power, enforcement of court rulings being left to the aggrieved party and his male kin. In general, violence was a socially approved way to prove one’s manhood, attract potential mates, and gain respect from other men.

Things changed as human societies developed. The State grew in power and increasingly monopolized the legitimate use of violence, thus knocking down the violent young male from hero to zero. This course of action was zealously pursued in Northwest Europe from the 11th century onward (Carbasse, 2011, pp. 36-56). There were two reasons. First, the end of the Dark Ages brought a strengthening of State power, a resumption of trade and, hence, a growing need and ability by the authorities to pacify social relations. Second, the main obstacle to criminalization of personal violence—kin-based morality and the desire to avenge wrongs committed against kin—seems to have been weaker in Northwest Europe than elsewhere. There was correspondingly a greater susceptibility to more universal and less kin-based forms of morality, such as the Christian ban on murder in almost all circumstances. 

Murder was increasingly punished not only by the ultimate penalty but also by exemplary forms of execution, e.g., burning at the stake, drawing and quartering, and breaking on the wheel (Carbasse, 2011, pp. 52-53). This "war on murder" reached a peak from the 16th to 18th centuries when, out of every two hundred men, one or two would end up being executed (Taccoen, 1982, p. 52). A comparable number of murderers would die either at the scene of the crime or in prison while awaiting trial (Ireland, 1987).


Gene-culture co-evolution?

The cultural norm thus shifted toward nonviolence. There was now strong selection against people who could not or would not lead peaceful lives, their removal from society being abrupt, via the hangman's noose, or more gradual, through ostracism by one's peers and rejection on the marriage market. As a result, the homicide rate fell from between 20 and 40 homicides per 100,000 in the late Middle Ages to between 0.5 and 1 per 100,000 in the mid-20th century (Eisner, 2001, pp. 628-629).

Was this decline due solely to legal and cultural restraints on personal violence? Or were there also changes to the gene pool? Was there a process of gene-culture co-evolution whereby Church and State created a culture of nonviolence, which in turn favored some genotypes over others? We know that aggressive/antisocial behavior is moderately to highly heritable. In the latest twin study, heritability was 40% when the twins had different evaluators and 69% when they had the same one (Barker et al., 2009). The actual neural basis is still uncertain. Perhaps a predisposition to violence is due to stronger impulsiveness and weaker internal controls on behavior (Niv et al., 2012). Perhaps the threshold for expression of violence is lower. Perhaps ideation comes easier (van der Dennen, 2006). Or perhaps the sight and smell of blood is more pleasurable (Vanden Bergh and Kelly, 1964).

It was probably a mix of cultural and genetic factors that caused the homicide rate to decline in Western societies. Even if culture alone were responsible, we would still be facing the same problem. Different societies view male violence differently:

In Algerian society for example, children are raised according to their sex. A boy usually receives an authoritarian and severe type of upbringing that will prepare him to become aware of the responsibilities that await him in adulthood, notably responsibility for his family and for the elderly. This is why a mother will allow her son to fight in the street and will scarcely be alarmed if the boy has a fall or if she sees a bruise. The boy of an Algerian family is accustomed from an early age to being hit hard without whimpering too much. People orient him more toward combat sports and group games in order to arm him with courage and endurance—virtues deemed to be manly. (Assous, 2005)

In Algeria and similar societies, a shaky equilibrium contains the worst excesses of male violence. Men think twice before acting violently, for fear of retaliation from the victim's brothers and other kinsmen. Of course, this "balance of terror" does not deter violence against those who have few kinsmen to count on.

Problems really begin, however, when a culture that legitimizes male violence coexists with one that delegitimizes it. This is France’s situation. Les jeunes perceive violence as a legitimate way to advance personal interests, and they eagerly pursue this goal with other young men. Conversely, les Français de souche perceive such violence as illegitimate and will not organize collectively for self-defence. The outcome is predictable. The first group will focus their attacks on members of the second group—not out of hate but because the latter are soft targets who cannot fight back or get support from others. 

But what about the obviously Islamist motives of the Charlie Hebdo attackers? Such motives can certainly channel violent tendencies, but those tendencies would exist regardless. Even if we completely eradicated radical Islam, les jeunes would still be present and still engaging in the same kind of behavior that is becoming almost routine. At best, there would be fewer high-profile attacks—the kind that make the police pull out all the stops to find and kill the perps. It is this "high end" that attracts the extremists, since they are the least deterred by the risks incurred. The "low end" tends to attract devotees of American hip hop. Keep in mind that less than two-thirds of France's Afro/Arab/Roma population is even nominally Muslim.


Conclusion

Modern France is founded on Western principles of equality, human betterment, and universal morality. Anyone anywhere can become French. That view, the official one, seems more and more disconnected from reality. Many people living in France have no wish to become French in any meaningful sense. By "French" I don't mean having a passport, paying taxes, or agreeing to a set of abstract propositions. I mean behaving in certain concrete ways and sharing a common culture and history.

This reality is sinking in, and with it a loss of faith in the official view of France. Faith can be restored, on the condition that outrageous incidents stop happening. But they will continue to happen. And they will matter a lot more than the much more numerous incidents tout court—the rising tide of thefts, assaults, and home invasions that are spreading deeper and deeper into areas that were safe a few years ago. The attack on Charlie Hebdo matters more because it cannot be hidden from public view and public acknowledgment. How does one explain the disappearance of an entire newspaper and the mass execution of its editorial board? 

The Front national will be the beneficiary, of course. It may already have one third of the electorate, but that's still not enough to take power, especially with all of the other parties from the right to the left combining to keep the FN out. Meanwhile, the Great Replacement proceeds apace, regardless of whether the government is "left-wing" or "right-wing."



Moving on ...

Dear readers,

I've decided to complete my move to The Unz Review (http://www.unz.com/author/peter-frost/), so there will be no further blogging at this site. I'm doing this partly to reduce my workload of supervising two websites and partly to gain more control over my posts at TUR (commenting, correction of errors in the post, etc.). Thanks to Ron, my exposure on the Internet has greatly increased and my audience now includes a silent readership of mainstream journalists and, perhaps, newspaper editors.

Early last year, I had thoughts of closing up shop. I had the feeling of making the same points over and over again. I still have that feeling, but it no longer troubles me so much. For one thing, the same point can have many different applications in real life. For another, a lot of people have short memories, and it doesn't hurt to repeat a point I may have made two or three years ago.

Thank you for your loyalty! This isn't the end; it's just a move onward and upward.

Coming home

Photo by Shawn


Dear readers,

I have returned to my old website, after being expelled from The Unz Review. The immediate cause was my decision to close commenting on my last column. A catfight was developing between myself and Ron Unz in the comments, and I wanted to give the two of us time to cool off. I also needed time to write my next column, which would have replied in detail to two of his criticisms. Ron felt I was violating his freedom of speech and promptly blocked access to my author's account.

This may be all for the best. In the heat of anger, people say certain things they had previously kept quiet about. Ron was still resentful over my criticisms of his article "Race, IQ, and Wealth," which he had published some three years ago (Unz, 2012; Frost, 2012a, 2012b, 2012c). I thought his thinking had evolved since then, particularly with the latest findings by Piffer (2013, see also Frost, 2014). Apparently not. 

It's important not to lose sight of the big picture. The Unz Review is still doing good work by assisting writers like Steve Sailer and Razib Khan, and others may join them in the future. But it is very unlikely that I will return there.


Note 

I don't wish to be drawn into a tedious argument over "I say, he says." For what it's worth, I did not delete any comments that Ron made in reply to my criticisms of his 2012 article. He may be thinking of an email exchange between the two of us, which was partly reproduced in Frost (2012b). In the time I've known him, I've only deleted one of his comments, and that was the one he left after I had closed commenting on my last column.


References 

Frost, P. (2012a). Ron Unz on Race, IQ, and Wealth, Evo and Proud, July 21,
http://www.evoandproud.blogspot.ca/2012/07/ron-unz-on-race-iq-and-wealth.html 

Frost, P. (2012b). More on Race, IQ, and Wealth, Evo and Proud, July 28
http://www.evoandproud.blogspot.ca/2012/07/more-on-race-iq-and-wealth.html 

Frost, P. (2012c). He who pays the piper, Evo and Proud, August 18
http://www.evoandproud.blogspot.ca/2012/08/he-who-pays-piper.html 

Frost, P. (2014). Population differences in intellectual capacity: a new polygenic analysis, Evo and Proud, March 8
http://evoandproud.blogspot.ca/2014/03/population-differences-in-intellectual.html 

Piffer, D. (2013). Factor analysis of population allele frequencies as a simple, novel method of detecting signals of recent polygenic selection: The example of educational attainment and IQ, Interdisciplinary Bio Central, provisional manuscript
http://www.ibc7.org/article/journal_v.php?sid=312 

Unz, R. (2012). Race, IQ, and Wealth, The American Conservative, July 18.
http://www.theamericanconservative.com/articles/race-iq-and-wealth/

In the wrong place at the wrong time?


Dick Turpin was convicted of horse theft but had also been guilty of a string of robberies and of murder (Wikicommons)


In each generation from 1500 to 1750, between 1 and 2% of all English men were executed either by court order or extra-judicially (at the scene of the crime or while in prison). This was the height of a moral crusade by Church and State to punish the wicked so that the good might live in peace.

Meanwhile, the homicide rate fell ten-fold. Were the two trends related? In a recent paper, Henry Harpending and I argued that a little over half of the homicide decline could be explained by the high execution rate, and its steady removal of violent males from the gene pool. The rest could be partly explained by Clark-Unz selection—violent males lost out reproductively because they were increasingly marginalized in society and on the marriage market. Finally, this decline was also due to a strengthening of controls on male violence: judicial punishment (policing, penitentiaries); quasi-judicial punishment (in schools, at church, and in workplaces); and stigmatization of personal violence in popular culture.

These controls drove the decline in the homicide rate, but they also tended over time to hardwire the new behavior pattern, by hindering the ability of violent males to survive and reproduce. The last half-century has seen a dramatic relaxation of these controls but only a modest rise in the homicide rate among young men of native English origin.

The above argument has been criticized on two grounds:

1. Executed offenders were not the worst of the worst. They were often people caught in the wrong place at the wrong time.

2. Executed offenders may have had children who survived to adulthood.

This week's column will address the first criticism. Did execution remove the most violent men? Or did it randomly remove individuals from, say, the most violent third?

Many genetic factors influence our propensity for personal violence: impulse control, violence ideation, pleasure from inflicting pain, and so on. Whatever the strength of these factors, the propensity itself should be normally distributed within the male population, i.e., it should follow a bell curve. As we move right or left from the population mean, the number of men at first declines only gradually, with the result that about two-thirds of all men lie within one standard deviation of the mean.

We have to go a full standard deviation to the right before men begin to seem abnormally violent, and that right-hand "tail" holds only 16% of the male population. What if we are looking for a man at least twice as violent as the normal two-thirds? He lies in the far-right 1%. In a single gene pool, violent men stand out not only because they are noticeably abnormal but also because they are much less common.
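The tail percentages cited above follow directly from the standard normal distribution. A quick check, using only Python's standard library (note that the 2.33 SD cutoff for the top 1% is the standard normal quantile, not a figure given in the text):

```python
import math

def norm_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Share of men within one standard deviation of the mean (~68%)
within_1sd = norm_cdf(1) - norm_cdf(-1)

# Share in the right-hand tail beyond +1 SD (~16%)
beyond_1sd = 1 - norm_cdf(1)

# Share beyond +2.33 SD, i.e., roughly the most violent 1%
beyond_top1 = 1 - norm_cdf(2.33)

print(round(within_1sd, 3), round(beyond_1sd, 3), round(beyond_top1, 3))
```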

Identifying the most violent men. But how?

Were these men the ones that the English justice system executed between 1500 and 1750? Murder is violence taken to its logical extreme, yet most murder cases went unsolved in early modern England. The crime was difficult to prove for want of witnesses, either because none wished to come forward or because they had likewise been murdered. There were no police, no forensic laboratories, and much less of the investigative infrastructure that we have today. If you committed a one-time murder, your chances of not getting caught were good.

The criminal justice system in the eighteenth century [...] therefore operated on a rationale very different from that of a modern state, with its professional police forces, social services and a fully bureaucratised law-enforcement system. In the early eighteenth century at least, the enforcement of law and order depended largely on unpaid amateur officials, the justices of the peace and the parish constables and other local officers. (Sharpe, 2010, p. 92)

This is not to say that the justice system gave murder a lower priority. Rather, with the limited resources available, judges and juries engaged in "profiling." They scrutinized not only the offence but also the accused—his character and demeanor, his behavior during the crime and in the courtroom, and his previous offences. Juries could be lenient in cases of first-time offences for theft, but this leniency disappeared if the accused had a criminal history.

The justice system thus looked for signs that the accused had already committed worse crimes or would go on to do so. Ironically, our current system is the one that tends to catch people who were in the wrong place at the wrong time, i.e., inexperienced one-time murderers.

Hanged for robbery but guilty of murder

This may be seen in a book, published in London in 1735, that told the life stories of 198 executed criminals. Of the 198, only 34 (17%) had been sentenced to death for murder. A much larger number, 111 (56%), were charged with robbery, being described as highwaymen, footpads, robbers, and street robbers. Finally, another 37 (19%) were executed simply for theft (Hayward, 2013; see note). Robbery was punished more severely than simple theft because it threatened both life and property, especially if the victim failed to cooperate sufficiently or seemed to recognize the robber.

Robbery is the taking away violently and feloniously the goods or money from the person of a man, putting him in fear [...]. Yea, where there is a gang of several persons, only one of which robs, they are all guilty as to the circumstance of putting in fear, wherever a person attacks another with circumstances of terror [...] And in respect of punishment, though judgment of death cannot be given in any larceny whatsoever, unless the goods taken exceed twelve pence in value, yet in robbery such judgment is given, let the value of the goods be ever so small. (Hayward, 2013, p. 27)

Sooner or later, a robber ended up killing. We see this in the life story of Dick Turpin, who was hanged for horse theft, even though he had committed worse crimes:

The process of reconstruction may not tell us much about Turpin's personality, but it does give us the opportunity to put together a remarkable criminal biography, a tale of violent robberies, of murder, and, eventually, of the horse-thefts that led to his execution. (Sharpe, 2010, p. 8)

Allegations of murder came up in trials of robbers, but typically remained unproven because no witnesses could be produced. Nonetheless, the accused would sometimes confess to murder, either to clear his conscience or, in the wake of a death sentence, because he had nothing left to lose, like this man convicted for highway robbery: "This Reading had been concerned in abundance of robberies, and, as he himself owned, in some which were attended with murder" (Hayward, 2013, p. 91). A member of another gang, when caught, confessed to a long string of murders:

[...] he, without any equivocation, began to confess all the crimes of his life. He said that it was true they all of them deserved death, and he was content to suffer; he said, moreover, that in the course of his life he had murdered upwards of three-score with his own hands. He also carried the officers to an island in the river, which was the usual place of the execution of those innocents who fell into the hands of their gang [...] (Hayward, 2013, p. 1014)

In most cases, however, the accused would deny involvement in murders even after being condemned to death:

There has been great suspicions that he murdered the old husband to this woman, who was found dead in a barn or outhouse not far from Hornsey; but Wigley, though he confessed an unlawful correspondence with the woman, yet constantly averred his innocency of that fact, and always asserted that though the old man's death was sudden, yet it was natural. (Hayward, 2013, pp. 92-93)

At the place of execution he behaved with great composure and said that as he had heard he was accused in the world of having robbed and murdered a woman in Hyde Park, he judged it proper to discharge his conscience by declaring that he knew nothing of the murder, but said nothing as to the robbery. (Hayward, 2013, p. 96)

In the wrong place at the wrong time?

If we look at executed criminals, their profile is not that of unfortunates caught in the wrong place at the wrong time. Most were young men who had done their work in the company of likeminded young men. Those who operated alone were atypical, like this highwayman:

Though this malefactor had committed a multitude of robberies, yet he generally chose to go on such expeditions alone, having always great aversion for those confederacies in villainy which we call gangs, in which he always affirmed there was little safety, notwithstanding any oaths, by which they might bind themselves to secrecy. (Hayward, 2013, p. 93)

For most, long-term safety was a secondary concern. Their behavioral profile—fast life history, disregard for the future, desire to be with other young men and impress them with acts of bravado and violence—stood in contrast to the ascendant culture of early modern England. One example is this robber:

[...] when he returned to liberty he returned to his old practices. His companions were several young men of the same stamp with himself, who placed all their delight in the sensual and brutal pleasures of drinking, gaming, whoring and idling about, without betaking themselves to any business. Natt, who was a young fellow naturally sprightly and of good parts, from thence became very acceptable to these sort of people, and committed abundance of robberies in a very small space of time. The natural fire of his temper made him behave with great boldness on such occasions, and gave him no small reputation amongst the gang. [...] He particularly affected the company of Richard James, and with him robbed very much on the Oxford Road, whereon it was common for both these persons not only to take away the money from passengers, but also to treat them with great inhumanity [...] (Hayward, 2013, pp. 108-109)

This sort of description comes up repeatedly. Most condemned men struck observers as very atypical, and not merely among the worst third of society. In 1741, an observer described a hanging and the interactions between the condemned men and a crowd composed largely of their friends:

The criminals were five in number. I was much disappointed at the unconcern and carelessness that appeared in the faces of three of the unhappy wretches; the countenance of the other two were spread with that horror and despair which is not to be wondered at in men whose period of life is so near [...]

[...] the three thoughtless young men, who at first seemed not enough concerned, grew most shamefully wanton and daring, behaving themselves in a manner that would have been ridiculous in men in any circumstances whatever. They swore, laughed, and talked obscenely, and wished their wicked companions good luck with as much assurance as if their employment had been the most lawful.

At the place of execution the scene grew still more shocking, and the clergyman who attended was more the subject of ridicule than of their serious attention. The Psalm was sung amidst the curses and quarrelling of hundreds of the most abandoned and profligate of mankind, upon them (so stupid are they to any sense of decency) all the preparation of the unhappy wretches seems to serve only for subject of a barbarous kind of mirth, altogether inconsistent with humanity. And as soon as the poor creatures were half dead, I was much surprised to see the populace fall to hauling and pulling the carcasses with so much earnestness as to occasion several warm rencounters and broken heads. These, I was told, were the friends of the persons executed, or such as, for the sake of tumult, chose to appear so; as well as some persons sent by private surgeons to obtain bodies for dissection. The contests between these were fierce and bloody, and frightful to look at [...] The face of every one spoke a kind of mirth, as if the spectacle they beheld had afforded pleasure instead of pain, which I am wholly unable to account for. (Hayward, 2013, pp. 8-10)

The situation in early modern England was akin to a low-grade war, and it was not for nothing that its justice system seems to us so barbaric. The judges and juries were dealing with barbarians: gangs of young men who led a predatory lifestyle that made life miserable for people who ventured beyond the safety of their own homes.

Conclusion

We are still left with the original question: Were these criminals the most violent 1 to 2% or a random sample of a much larger proportion? In general, they behaved quite unlike most people, especially if they belonged to gangs, which seem to have been responsible for most homicides. It is hard to see how such people could correspond even to the most violent 16%—a range of individuals that begins one standard deviation to the right of the mean, at which point behavior just begins to seem "abnormal."

In all likelihood, execution removed individuals who were more than one standard deviation to the right of the mean, with a strong skew toward people more than two standard deviations to the right—in other words, something less than the most violent 16% with a strong skew toward the most violent 1%.

These assumptions differ from those of our model, which assumes that execution removed the most violent 1 to 2%. On the other hand, our model also assumes that each executed criminal would, in the absence of execution, have killed only one person over a normal lifetime. Clearly, many people among the executed were already serial murderers, not so much among the convicted murderers as among the convicted robbers. It is difficult to say whether the two sources of error would balance each other out, since we need more information on (1) just how abnormal the executed were in terms of behavior and (2) how many people they would have otherwise killed over a normal lifetime.

Executed criminals were probably a heterogeneous group. A quarter of them (mostly the thieves) would likely have killed 0 to 1 people on average if allowed to live out their lives. Another quarter may have averaged 1 to 2 murders. Finally, the remaining half may have had an even higher score. Within this last group, we can be sure that a hard core of individuals would each have gone on to kill dozens of people, if they had not already done so.

Note

The other executed criminals were identified as 8 housebreakers, 7 forgers, 4 pirates, 2 incendiaries, 1 threatening letter writer, 1 ravisher, 1 thief-taker, and 1 releaser of prisoners. Wherever a single individual was charged with more than one crime, I classified him or her under the most serious offence, i.e., murder took precedence over robbery, and robbery took precedence over theft.

Of the 198 executed criminals, 10 were women. The book actually tells the life stories of 201 criminals, but three of them were not executed. I excluded the life stories in the appendix (7 murderers and 4 thieves) because they came from a much earlier time period and may have been less representative.

References

Frost, P. and H. Harpending. (2015). Western Europe, state formation, and genetic pacification, Evolutionary Psychology, 13, 230-243. http://www.epjournal.net/articles/western-europe-state-formation-and-genetic-pacification/  


Hayward, A.L. (2013[1735]). Lives of the Most Remarkable Criminals - who Have Been Condemned and Executed for Murder, the Highway, Housebreaking, Street Robberies, Coining Or Other Offences, Routledge.


Sharpe, J. (2010). Dick Turpin: The Myth of the English Highwayman, Profile Books. 

How many were already fathers?


Hanging outside Newgate Prison (Wikicommons)


In England, executions peaked between 1500 and 1750 at 1 to 2% of all men of each generation. Were there genetic consequences? Were propensities for violence being removed from the gene pool? Did the English population become kinder and gentler? Such is the argument I made in a recent paper with Henry Harpending.

In this column, I will address a second criticism made against this argument: Many executed criminals already had children, so execution came too late in their lives to change the makeup of the next generation.

Reproductive success

Hayward (2013) provides a sample of 198 criminals who were executed in the early 1700s. Of this total, only 32 (16%) had children at the time of execution, and 12 of them had one child each. Their reproductive success breaks down as follows:

Family size — # executed criminals (out of 198)

1 child - 12
2 children - 3
3 children - 3
3-4 children - 1
5 children - 3
9 children - 1
"children" (number unstated) - 9

Although the above figures include illegitimate children, some executed criminals may have had offspring that they were unaware of or didn't wish to acknowledge. So we may be underestimating their reproductive success. But what were the chances of such children surviving to adulthood and reproducing? In pre-1840 England, 30% of all children were dead by the age of 15; in pre-1800 London, only 42% of all boys reached the age of 25 (Clark and Cummins, 2009). Chances of survival were undoubtedly even lower for children raised by single parents.

Here and there, we find references to high infant mortality among the progeny of executed criminals. The coiner John Johnson regretted "the heavy misfortune he had brought upon himself and family, two of his children dying during the time of his imprisonment, and his wife and third child coming upon the parish." Prospects seemed better for childless widows, as noted in the life story of the thief Robert Perkins: "He said he died with less reluctance because his ruin involved nobody but himself, he leaving no children behind him, and his wife being young enough to get a living honestly" (Hayward, 2013).

Reproductive success was also curbed by marital instability. The footpad Joseph Ward was married for all of two days:

The very next morning after their wedding, Madam prevailed on him to slip on an old coat and take a walk by the house which she had shown him for her uncle's. He was no sooner out of doors, but she gave the sign to some of her accomplices, who in a quarter of an hour's time helped her to strip the lodging not only of all which belonged to Ward, but of some things of value that belonged to the people of the house. (Hayward, 2013)

In these life stories, the word "wife" is often qualified: "lived as wife," "whom he called his wife," "who passed for his wife," "he at that time owned for his wife," etc. Overall, only 40% of the executed criminals had been married: 38% of the men and 80% of the women.

Age structure

The age composition of the executed criminals suggests another reason for their low reproductive success. More than half were put to death before the age of 30. Since the mean age of first marriage for English men at that time was 27 (Wikipedia, 2015b), it's likely that most of these criminals were still trying to amass enough resources to get married and start a family.

Ages  — # executed criminals (out of 198)

10-19 years - 18
20-29 years - 88
30-39 years - 41
40-49 years - 20
50-59 years - 6
60-69 years - 0
70+ years - 1
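Summing the age table confirms that more than half were executed before the age of 30. A sketch (note that the listed counts total 174, so the ages of the remaining 24 were presumably unrecorded):

```python
# Age distribution of the 198 executed criminals in Hayward's sample.
age_counts = {
    "10-19": 18,
    "20-29": 88,
    "30-39": 41,
    "40-49": 20,
    "50-59": 6,
    "60-69": 0,
    "70+": 1,
}

total_executed = 198
under_30 = age_counts["10-19"] + age_counts["20-29"]
known_ages = sum(age_counts.values())

print(under_30, known_ages)              # 106 executed before age 30, 174 with known ages
print(under_30 / total_executed > 0.5)   # True: more than half of the full sample
```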

Many criminals may have planned to steal enough money to give up crime and lead a straight life. Such plans came to nought for the thief John Little:

[...] the money which they amass by such unrighteous dealings never thrives with them; that though they thieve continually, they are, notwithstanding that, always in want, pressed on every side with fears and dangers, and never at liberty from the uneasy apprehensions of having incurred the displeasure of God, as well as run themselves into the punishments inflicted by the law. To these general terrors there was added, to Little, the distracting fears of a discovery from the rash and impetuous tempers of his associates, who were continually defrauding one another in their shares of the booty, and then quarrelling, fighting, threatening, and what not, till Little sometimes at the expense of his own allotment, reconciled and put them in humour. (Hayward, 2013)

Nonetheless, it is possible that others would have saved up a "nest egg," started a family, and moved on to a respectable life. Dick Turpin, for instance, was able to abandon highway robbery and pose as a horse trader. His ruse ultimately failed because he continued to run afoul of the law (Wikipedia, 2015a). The extent of this life strategy is difficult to measure because the existing information almost wholly concerns those criminals who were caught and executed.

Conclusion

Clearly, some of the executed criminals had already reproduced, but their overall reproductive success was very low, and probably lower still once we adjust for infant mortality. Instead of arguing that executions had little impact on the gene pool because too many of the executed had already reproduced, one could argue the opposite: the genetic impact was inconsequential because so few would have reproduced anyway, even if allowed to live out their lives.

Reproductive success was highly variable in the criminal underclass. Many would have had few children with or without being sent to the gallows. But some would have done much better. At the age of 26, the highwayman William Miller already had two children by two wives, and many other women gravitated around him, even as he prepared for death: "Yet in the midst of these tokens of penitence and contrition several women came still about him." At the age of 25, the murderer Captain Stanley had fathered three or four children by one woman and was looking for a new wife. One might also wonder about some of the executed teenagers. At the age of 19, the footpad Richard Whittingham was already married, though still childless, and the thief William Bourne likewise at the age of 18.

In an earlier England, such young men would have done well reproductively, as leaders of warrior bands. But that England no longer existed, and criminal gangs offered the only outlet for engaging in plunder, violence, and debauchery with other young men.

References

Clark, G. and N. Cummins. (2009). Disease and Development: Historical and Contemporary Perspectives. Urbanization, Mortality, and Fertility in Malthusian England, American Economic Review: Papers & Proceedings, 99(2), 242-247.
http://neilcummins.com/Papers/AER_2009.pdf

Frost, P. and H. Harpending. (2015). Western Europe, state formation, and genetic pacification, Evolutionary Psychology, 13, 230-243. http://www.epjournal.net/articles/western-europe-state-formation-and-genetic-pacification/  

Hayward, A.L. (2013[1735]). Lives of the Most Remarkable Criminals - who Have Been Condemned and Executed for Murder, the Highway, Housebreaking, Street Robberies, Coining Or Other Offences, Routledge.
http://www.gutenberg.org/files/13097/13097-h/13097-h.htm

Wikipedia. (2015a). Dick Turpin
http://en.wikipedia.org/wiki/Dick_Turpin

Wikipedia. (2015b). Western European Marriage Pattern
http://en.wikipedia.org/wiki/Western_European_marriage_pattern

The hidden past of Claude Lévi-Strauss




Claude Lévi-Strauss, 1973 (Wikicommons)


The anthropologist Claude Lévi-Strauss died six years ago, leaving behind a treasure trove of correspondence and unpublished writings. We can now trace where his ideas came from and how they evolved.

I admired Lévi-Strauss during my time as an anthropology student because he asked questions that Marxist anthropologists would never ask. That's why I preferred to call myself a Marxisant, and not a full-blown Marxist. I especially admired him for addressing the issue of nature versus nurture, which had once been a leading issue in anthropology but was now studiously ignored. Only he, it seemed, could defy this omertà and not suffer any ill effects, perhaps because of his age and status.

In his best-known work, The Elementary Structures of Kinship, this issue dominated the first chapter:

Man is both a biological being and a social individual. Among his responses to external or internal stimuli, some are wholly dependent upon his nature, others upon his social environment.

Lévi-Strauss admitted that the two were not always easy to separate:

Culture is not merely juxtaposed to [biological] life nor superimposed upon it, but in one way serves as a substitute for life, and in the other, uses and transforms it, to bring about the synthesis of a new order.
 
He reviewed the different ways of disentangling one from the other:

The simplest method would be to isolate a new-born child and to observe its reactions to various stimuli during the first hours or days after birth. Responses made under such conditions could then be supposed to be of a psycho-biological origin, and to be independent of ulterior cultural syntheses.

[Nonetheless,] the question always remains open whether a certain reaction is absent because its origin is cultural, or because, with the earliness of the observation, the physiological mechanisms governing its appearances are not yet developed. Because a very young child does not walk, it cannot be concluded that training is necessary, since it is known that a child spontaneously begins to walk as soon as it is organically capable of doing so.

His interest in the interactions between culture and biology went further. The gene pool of a population will influence its culture, which in turn will alter the gene pool:

The selection pressure of culture—the fact that it favors certain types of individuals rather than others through its forms of organization, its ideas of morality, and its aesthetic values—can do infinitely more to alter a gene pool than the gene pool can do to shape a culture, all the more so because a culture's rate of change can certainly be much faster than the phenomena of genetic drift. (Lévi-Strauss, 1979, pp. 24-25)

This is of course gene-culture co-evolution. He may have given the idea to L.L. Cavalli-Sforza, who first began to propound it while teaching a cultural evolution class in 1978-1979. Two of his students, Robert Boyd and Peter Richerson, went on to popularize the idea in their book Culture and the Evolutionary Process (1985) (Stone and Lurquin 2005, p. 108). Lévi-Strauss had in fact mentioned the same idea long before in a UNESCO lecture:

When cultures specialize, they consolidate and favor other traits, like resistance to cold or heat for societies that have willingly or unwillingly had to adapt to extreme climates, like dispositions to aggressiveness or contemplation, like technical ingenuity, and so on. In the form these traits appear to us on the cultural level, none can be clearly linked to a genetic basis, but we cannot exclude that they are sometimes linked partially and distantly via intermediate linkages. In this case, it would be true to say that each culture selects for genetic aptitudes that, via a feedback loop, influence the culture that had initially helped to strengthen them. (Lévi-Strauss, 1971)

In the same lecture, he made another point:

[Humanity] will have to relearn that all true creation implies some deafness to the call of other values, which may go so far as to reject or even negate them. One cannot at the same time melt away in the enjoyment of the Other, identify oneself with the Other, and keep oneself different. If fully successful, complete communication with the Other will doom its creative originality and my own in more or less short time. The great creative ages were those when communication had increased to the point that distant partners stimulated each other but not so often and rapidly that the indispensable obstacles between individuals, and likewise between groups, dwindled to the point that excessively easy exchanges would equalize and blend away their diversity. (Lévi-Strauss, 1971)

His audience was taken aback, according to fellow anthropologist Wiktor Stoczkowski:

These words shocked the listeners. One can easily imagine how disconcerted UNESCO employees were, who, meeting Lévi-Strauss in the corridor after the lecture, expressed their disappointment at hearing the institutional articles of faith to which they thought they had the merit of adhering called into question. René Maheu, the Director General of UNESCO, who had invited Lévi-Strauss to give this lecture, seemed upset. (Stoczkowski, 2008; Frost, 2014)

Where his ideas came from

Since his death in 2009, we have gained a clearer picture of his intellectual evolution. His published writings had already provided an answer:

When I was about sixteen, I was introduced to Marxism by a young Belgian socialist, whom I had got to know on holiday, and who is now one of his country's ambassadors abroad. I was all the more delighted by Marx in that the reading of the works of the great thinker brought me into contact for the first time with the line of philosophical development running from Kant to Hegel; a whole new world was opened up to me. Since then, my admiration for Marx has remained constant [...] (Lévi-Strauss, 2012 [1973])

Looking through Lévi-Strauss' published and unpublished writings, Wiktor Stoczkowski tried to learn more about this episode but found nothing:

It suffices however to look closely at the milieus that Lévi-Strauss frequented in the 1920s and 1930s, or to reread the articles he published during that period to realize that his references to Marx were at that time astonishingly rare, in flagrant contradiction with his declarations […] In contrast, another name often came up during that time in the writings of the young Lévi-Strauss: that of Henri De Man. And that name, curiously, Lévi-Strauss would never mention after the war. (Stoczkowski, 2013)

As a young leftist disenchanted with Marxism, Lévi-Strauss was especially fascinated by De Man's book Au-delà du marxisme (Beyond Marxism), published in 1927. One of his friends invited De Man to Paris to present his ideas to French socialists. Lévi-Strauss was given the job of organizing the lecture and wrote to De Man about the difficulties encountered:

We have run into many difficulties, which we scarcely suspected and which have sadly shed light on the conservative and sectarian spirit of a good part of French socialism [...]. We thought that the best means to give this [lecture] all of the desirable magnitude would be to make it public [...] [but] to obtain the key support of the Socialist Students, we have agreed to make your lecture non-public, and to reserve admission to members of socialist organizations. Thus, we have learned that Marxism is a sacrosanct doctrine in our party, and that to study theories that stray from it, we have to shut ourselves in very strongly, so that no one on the outside will know (Stoczkowski, 2013)

The lecture was held the next year. Stoczkowski describes the letter that Lévi-Strauss wrote to the invitee afterwards:

"Thanks to you," he wrote, "socialist doctrines have finally emerged from their long sleep; the Party is undergoing, thanks to you, a revival of intellectual activity ...." But there is more. Speaking on his behalf and on behalf of his young comrades, Lévi-Strauss informed De Man that his book Au-delà du marxisme had been for them "a genuine revelation..." Speaking personally, Lévi-Strauss added that he was "profoundly grateful" to De Man's teachings for having "helped me get out of an impasse I believed to have no way out." (Stoczkowski, 2013)

Nothing indicates that Lévi-Strauss had ever been a Marxist in his youth. Both he and his friends saw it as a pseudo-religion that stunted the development of socialism.

But who was Henri De Man?

He was a Belgian Marxist who had lived in Leipzig, Germany, where he became the editor of a radical socialist journal, Leipziger Volkszeitung, that ran contributions by Rosa Luxemburg, Pannekoek, Radek, Trotsky, Karl Liebknecht, and others. In 1907, he helped found the Socialist Youth International. He later returned to Belgium and enlisted when war broke out, seeing the Allied side as a progressive alternative to German authoritarianism.

His views changed during the 1920s, while he was teaching at the University of Frankfurt. He came to feel that Marxists erred in seeing themselves as an antithesis to the current system; such a perspective made them oppose all traditional values, particularly Christianity and national identity. He now argued that laws, morality, and religion are not bourgeois prejudices, but rather things that are necessary to make any society work. Marxists also erred, he felt, in their narrow focus on economic determinism and their disregard for psychology and the will to act. Although De Man acknowledged the self-destructive tendencies of capitalism, he argued that these tendencies do not inevitably lead to revolution. Rather, revolution will happen only when enough people realize that current conditions are neither tolerable nor inevitable. Above all, revolution cannot happen unless it respects existing cultural, religious, and national traditions:

If one sees in socialism something other than and more than an antithesis to modern capitalism, and if one relates it to its moral and intellectual roots, one will find that these roots are the same as those of our entire Western civilization. Christianity, democracy, and socialism are now, even historically, merely three forms of one idea.

De Man returned to Belgium during the 1930s, becoming vice-president and then president of the Belgian Labour Party. In 1935, with the formation of a government of national unity to fight the Great Depression, he was made minister of public works and job creation. In this role, he pushed for State planning and looked to Germany and Italy as examples to be followed. He became increasingly disillusioned with parliamentary democracy and began to call for an "authoritarian democracy" where decisions would be made primarily through the executive and referendums, rather than through the legislature and party politics (Tremblay, 2006).

When Germany overran Belgium in 1940, De Man issued a manifesto to Labour Party members and advised them to collaborate: "For the working classes and for socialism, this collapse of a decrepit world, far from being a disaster, is a deliverance" (Wikipedia, 2015). Over the next year, he served as de facto prime minister before falling into disfavor with the German authorities. He spent the rest of the war in Paris and then fled to Switzerland, where he lived out his final years. Meanwhile, a Belgian court convicted him in absentia of treason.

Conclusion

Like many people after the war, Claude Lévi-Strauss had to invent a new past. It didn't matter that he had admired Henri de Man at a time when the Belgian socialist was not yet a fascist or a collaborator. As Stoczkowski notes, guilt by association would have been enough to ruin his academic career. Ironically, if he had really been a loyal Marxist during the late 1920s and early 1930s, he would have been denying, at that very time, the crimes being committed in the name of Marxism: the Ukrainian famine, Stalin's purges ... Yet for that he never faced any criticism.

References

De Man, H. (1927). Au-delà du marxisme, Brussels: L'Églantine.

Frost, P. (2014). Negotiating the gap. Four academics and the dilemma of human biodiversity, Open Behavioral Genetics, June 20.
http://openpsych.net/OBG/2014/06/negotiating-the-gap/ 

Lévi-Strauss, C. (1969 [1949]). The Elementary Structures of Kinship, Beacon Press.

Lévi-Strauss, C. (1971). Race et culture, conférence de Lévi-Strauss à L'UNESCO le 22 mars 1971

Lévi-Strauss, C. (2012[1973]). Tristes Tropiques, New York: Penguin

Lévi-Strauss, C. (1985). Claude Lévi-Strauss à l'université Laval, Québec (septembre 1979), prepared by Yvan Simonis, Documents de recherche no. 4, Laboratoire de recherches anthropologiques, Département d'anthropologie, Faculté des Sciences sociales, Université Laval.

Stoczkowski, W. (2008). Claude Lévi-Strauss and UNESCO, The UNESCO Courrier, no. 5, pp. 5-8.

Stoczkowski, W. (2013). Un étrange socialisme de Claude Lévi-Strauss / A weird socialism of Claude Lévi-Strauss, Europe, 91, no. 1005-1006, 37-53.

Stone, L. and P.F. Lurquin. (2005). A Genetic and Cultural Odyssey. The Life and Work of L. Luca Cavalli-Sforza. New York: Columbia University Press.

Tremblay, J-M. (2006). Henri de Man, 1885-1953, Les classiques des sciences sociales, UQAC

Wikipedia. (2015). Henri de Man


A genetic marker for empathy?


 
The Starry Night, Vincent van Gogh (1853-1890). The more you empathize with the world, the more you feel its joy and pain, but too much can lead to overload.

 

One of my interests is affective empathy, the involuntary desire not only to understand another person's emotional state but also to make it one's own—in short, to feel the pain and joy of other people. This mental trait has a heritability of 68% and is normally distributed along a bell curve within any one population (Chakrabarti and Baron-Cohen, 2013). Does it also vary statistically among human populations? This is possible. Different cultures give varying importance to affective empathy, and humans have adapted much more to their cultural environments than to their natural environments. This is why human genetic evolution accelerated over 100-fold about 10,000 years ago when humans began to abandon hunting and gathering for farming, which in turn led to increasingly diverse forms of social organization (Hawks et al., 2007).

I have argued previously that Europeans to the north and west of the Hajnal Line (an imaginary line running from Trieste to Saint Petersburg) have adapted to a cultural environment of weaker kinship and, conversely, greater individualism. In such an environment, the reciprocal obligations of kinship are insufficient to ensure compliance with social rules. This isn't a new situation. Weak kinship is inherent to the Western European Marriage Pattern, which goes back to at least the 12th century, if not earlier.

This cultural environment has selected for a package of mental adaptations:

- capacity to internalize punishment for disobedience of social rules (guilt proneness)

- capacity to simulate and then transfer to oneself the emotional states of people who may be affected by rule-breaking (affective empathy)

- desire to seek out and expel rule-breakers from the moral community (ideological intolerance).

The above mental package has enabled Northwest Europeans to free themselves from the limitations of kinship and organize their societies along other lines, notably the market economy, the modern State, and political ideology. They have thus managed to meet the threefold challenge of creating larger societies, ensuring greater compliance with social rules, and making possible a higher level of personal autonomy.

So much for the theory. What direct evidence do we have that affective empathy is stronger on average in Northwest Europeans? We know that a higher capacity for affective empathy is associated with a larger amygdala, which seems to control our response to facial expressions of fear and other signs of emotional distress (Marsh et al., 2014). Two studies, one American and one English, have found that "conservatives" tend to have a larger right amygdala (Kanai et al., 2011; Schreiber et al., 2013). In both cases, my hunch is that "conservatives" are disproportionately drawn from populations that have, on average, a higher capacity for affective empathy.

But testing this kind of hunch would require a large-scale comparative study, which in turn would require cutting up a lot of cadavers or doing a lot of MRIs. It would be nicer to have a genetic marker that shows up on a simple test. It would also be cheaper.

We may now have that marker: a deletion variant of the ADRA2b gene. Carriers remember emotionally arousing images more vividly and for a longer time, and they also show more activation of the amygdala when viewing such images (Todd and Anderson, 2009; Todd et al., 2015). This is not to say that the ADRA2b deletion variant is the sole reason or even the major reason why some people have increased capacity for affective empathy. As with intelligence, an increase in capacity seems to have come about through changes of small effect at many genes.

Nor can we say that "emotional memory" is equivalent to affective empathy. Instead, it seems to be one component, albeit a critical one: the capacity to imagine an emotional state based on visual information (a picture of a person's face, a puppy dog, etc.) and then keep it as part of one's current emotional experience. Emotional memory may be upstream to affective empathy, being perhaps closer to cognitive empathy—the ability to imagine how another person feels without involuntarily making that feeling one's own.

Does its incidence differ among human populations?

This variant was first studied in the United States. Small et al. (2001) found a higher incidence in Caucasians (31%) than in African Americans (12%). Belfer et al. (2005) likewise found a higher incidence in Caucasians (37%) than in African Americans (21%).

In a press release, the authors of the latest study noted that this variant is not equally common in all humans:

The ADRA2b deletion variant appears in varying degrees across different ethnicities. Although roughly 50 per cent of the Caucasian population studied by these researchers in Canada carry the genetic variation, it has been found to be prevalent in other ethnicities. For example, one study found that just 10 per cent of Rwandans carried the ADRA2b gene variant. (UBC News, 2015)

Curiously, its incidence seems higher among “Canadian Caucasians” (50%) than among "American Caucasians” (31-37%). This may reflect differences in participant recruitment or in ethnic mix between the two countries. Indeed, the "Caucasian" category may prove to be problematic because it includes people from both sides of the Hajnal Line. If the average incidence is 31% to 50%, there may be populations that score much higher.

I have found only three studies on specific European ethnicities. The first study found an incidence of 50% in Swiss participants (de Quervain, 2007). The second one found an incidence of 56% in Dutch participants (Cousijn et al., 2010). The third one had two groups of participants: Israeli Holocaust survivors and a control group of European-born Israelis who had emigrated with their parents to the British Mandate of Palestine. The incidence was 48% in the Holocaust survivors and 63% in the controls (Fridman et al., 2012).

From East Asia, a study on Chinese participants reported an incidence of 68% (Zhang et al., 2005). This is surprising because Chinese seem less likely to distinguish between cognitive empathy and affective empathy (Siu and Shek, 2005). Japanese participants had an incidence of 56% in one study (Suzuki et al., 2003) and 71% in another (Ishii et al., 2015). Among the Shors, a Turkic people of Siberia, the incidence was 73%. Curiously, the incidence was higher in men (79%) than in women (69%). It may be that male non-carriers had a higher death rate, since the incidence increased with age (Mulerova et al., 2015).

Conclusion

The picture is still incomplete but the incidence of the ADRA2b deletion variant seems to range from a low of 10% in some sub-Saharan African groups to a high of 50-65% in some European groups and 55-75% in some East Asian groups. Given the high values for East Asians, I suspect this variant is not a marker for affective empathy per se but rather for empathy in general (cognitive and affective).

It may be significant that a high incidence was found among the Shors, who were largely hunter-gatherers until recent times. This suggests that empathy reached high levels in Eurasia long before the advent of complex societies, or even farming. The example of the Shors also suggests that non-carriers of the deletion variant suffer from higher mortality—a somewhat surprising finding, given the evidence that carriers have a higher risk of heart disease.

More research is needed on how this variant interacts with variants at other genes. For instance, it has been found that people with at least one copy of the short allele of 5-HTTLPR tend to be too sensitive to negative emotional information. This effect seems to be attenuated by the deletion variant of ADRA2b, which either keeps one from dwelling too much on a bad emotional experience or helps one anticipate and prevent repeat experiences (Naudts et al., 2012). Nonetheless, too much affective empathy may lead to an overload where one ends up helping others to the detriment of oneself and one’s family and kin.

References 

Belfer, I., B. Buzas, H. Hipp, G. Phillips, J. Taubman, I. Lorincz, C. Evans, R.H. Lipsky, M.-A. Enoch, M.B. Max, and D. Goldman. (2005). Haplotype-based analysis of alpha 2A, 2B, and 2C adrenergic receptor genes captures information on common functional loci at each gene. Journal of Human Genetics, 50, 12-20.
http://www.researchgate.net/profile/Mary_Anne_Enoch/publication/8134892_Haplotype-based_analysis_of_alpha_2A_2B_and_2C_adrenergic_receptor_genes_captures_information_on_common_functional_loci_at_each_gene/links/02e7e53559c67a2c02000000.pdf 

Chakrabarti, B. and S. Baron-Cohen. (2013). Understanding the genetics of empathy and the autistic spectrum, in S. Baron-Cohen, H. Tager-Flusberg, M. Lombardo. (eds). Understanding Other Minds: Perspectives from Developmental Social Neuroscience. Oxford: Oxford University Press.
http://books.google.ca/books?hl=fr&lr=&id=eTdLAAAAQBAJ&oi=fnd&pg=PA326&ots=fHpygaxaMQ&sig=_sJsVgdoe0hc-fFbzaW3GMEslZU#v=onepage&q&f=false

Cousijn, H., M. Rijpkema, S. Qin, H.J.F. van Marle, B. Franke, E.J. Herman, G. van Wingen, and G. Fernández. (2010). Acute stress modulates genotype effects on amygdala processing in humans. Proceedings of the National Academy of Sciences U.S.A., 107, 9867-9872.
http://www.pnas.org/content/107/21/9867.full.pdf

de Quervain, D.J.F., I.-T. Kolassa, V. Ertl, L.P. Onyut, F. Neuner, T. Elbert, and A. Papassotiropoulos. (2007). A deletion variant of the alpha2b-adrenoceptor is related to emotional memory in Europeans and Africans. Nature Neuroscience, 10, 1137-1139.

Fridman, A., M.H. van IJzendoorn, A. Sagi-Schwartz, and M.J. Bakermans-Kranenburg. (2012). Genetic moderation of cortisol secretion in Holocaust survivors: A pilot study on the role of ADRA2B. International Journal of Behavioral Development, 36, 79.
http://www.researchgate.net/profile/Abraham_Sagi-Schwartz/publication/230887396_Genetic_moderation_of_cortisol_secretion_in_Holocaust_survivors__A_pilot_study_on_the_role_of_ADRA2B/links/0912f505c75bb4a01d000000.pdf 

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA), 104, 20753-20758.
http://harpending.humanevo.utah.edu/Documents/accel_pnas_submit.pdf

Ishii, M., H. Katoh, T. Kurihara, and S. Shimizu. (2015). Catechol-O-methyl transferase gene polymorphisms in Japanese patients with medication overuse headaches. JSM Genetics and Genomics, 2(1), 1-4.
http://www.researchgate.net/profile/Masakazu_Ishii/publication/273587694_Catechol-O-methyl_transferase_gene_polymorphisms_in_Japanese_patients_with_medication_overuse_headaches/links/5507e87a0cf26ff55f7f719d.pdf

Kanai, R., T. Feilden, C. Firth, and G. Rees. (2011). Political orientations are correlated with brain structure in young adults. Current Biology, 21, 677 - 680.
http://www.cell.com/current-biology/abstract/S0960-9822(11)00289-2

Marsh, A.A., S.A. Stoycos, K.M. Brethel-Haurwitz, P. Robinson, J.W. VanMeter, and E.M. Cardinale. (2014). Neural and cognitive characteristics of extraordinary altruists. Proceedings of the National Academy of Sciences, 111, 15036-15041.
http://www.pnas.org/content/111/42/15036.short 

Mulerova, T.A., A.Y. Yankin, Y.V. Rubtsova, A.A. Kuzmina, P.S. Orlov, N.P. Tatarnikova, V.N. Maksimov, M.I. Voevoda, and M.Y. Ogarkov. (2015). Association of ADRA2B polymorphism with risk factors for cardiovascular diseases in native population of mountain Shoria. Bulletin of Siberian Medicine, 14, 29-34.

Naudts, K.H., R.T. Azevedo, A.S. David, K. van Heeringen, and A.A. Gibbs. (2012). Epistasis between 5-HTTLPR and ADRA2B polymorphisms influences attentional bias for emotional information in healthy volunteers. International Journal of Neuropsychopharmacology, 15, 1027-1036.

Schreiber, D., Fonzo, G., Simmons, A.N., Dawes, C.T., Flagan, T., et al. (2013). Red Brain, Blue Brain: Evaluative Processes Differ in Democrats and Republicans. PLoS ONE, 8(2): e52970.
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0052970

Siu, A.M.H. and D.T. L. Shek. (2005). Validation of the Interpersonal Reactivity Index in a Chinese Context. Research on Social Work Practice, 15, 118-126.
http://rsw.sagepub.com/content/15/2/118.short

Small, K.M., and S.B. Liggett. (2001). Identification and functional characterization of alpha(2)-adrenoceptor polymorphisms. Trends in Pharmacological Sciences, 22, 471-477.

Suzuki, N., T. Matsunaga, K. Nagasumi, T. Yamamura, N. Shihara, T. Moritani, et al. (2003). Alpha2B-adrenergic receptor deletion polymorphism associates with autonomic nervous system activity in young healthy Japanese. The Journal of Clinical Endocrinology & Metabolism, 88, 1184-1187.
http://www.researchgate.net/profile/Tetsuro_Matsunaga/publication/10863380_Alpha(2B)-adrenergic_receptor_deletion_polymorphism_associates_with_autonomic_nervous_system_activity_in_young_healthy_Japanese/links/00b7d530379a0f06bf000000.pdf

Todd, R.M. and A.K. Anderson. (2009). The neurogenetics of remembering emotions past. Proceedings of the National Academy of Sciences U.S.A., 106, 18881-18882
http://www.pnas.org/content/106/45/18881.short

Todd, R.M., M.R. Ehlers,  D. J. Muller, A. Robertson, D.J. Palombo, N. Freeman, B. Levine, and A.K. Anderson (2015). Neurogenetic Variations in norepinephrine availability enhance perceptual vividness. The Journal of Neuroscience, 35, 6506-6516.

UBC News. (2015). How your brain reacts to emotional information is influenced by your genes, May 6
http://news.ubc.ca/2015/05/06/how-your-brain-reacts-to-emotional-information-is-influenced-by-your-genes/ 

Zhang, H., X. Li, J. Huang, Y. Li, L. Thijs, Z. Wang, X. Lu, K. Cao, S. Xie, J.A. Staessen, J-G. Wang. (2005). Cardiovascular and metabolic phenotypes in relation to the ADRA2B insertion/deletion polymorphism in a Chinese population. Journal of Hypertension, 23, 2201-2207.
http://www.staessen.net/publications/2001-2005/05-30-P.pdf

Hiatus


 
The Second Class Carriage, Honoré Daumier (1808-1879)

 

I'll be on vacation until October and will probably have little time for my weekly column. I hope to profit from this hiatus to rethink my priorities for the next twelve months.

That rethink will include this column. Is it reaching its target audience? Are changes needed? A recurring suggestion is that I should write more simply and in a less pedantic style. Yes, plain language is best. A lot of academic writing suffers from turgid jargon, not to mention silly attempts to imitate the syntax of French deconstructionists. But it’s not as if I write my columns first and later try to impress folks by inserting “organizing principle,” “evolutionary trajectory,” and other bafflegab. That’s how I think. Jargon also allows me to squeeze complex ideas into a few words. A certain amount is unavoidable, unless you want to read columns that are twice as long.

Russia, Russia, Russia ...

Another suggestion is that I should write more pieces about foreign politics, like "Impressions of Russia." In The Unz Review that column got me 246 comments (only one other column, "The Jews of West Africa," scored higher). Yet I wrote it off the top of my head.

So why not write about Russia? To be honest I don't feel qualified. I remember my first impressions of that country and how so many turned out to be incomplete or dead wrong. Nonetheless, those same first impressions turn up again and again in pieces by journalists and other writers.

Like the ones who go on about "grim-faced Russians weighed down by centuries of oppression." I've read that refrain so often it's no longer funny. Russians dislike smiling at strangers because it’s considered rude—and also because a stranger with attitude might take it the wrong way. But among friends and family they laugh and smile like anyone else. This is changing, to be sure. On my last visit I noticed many store employees flashing American-style smiles at customers.

Then there's that travel writer who said he knew he was being spied upon because the hotel maid looked like a top model. Uh, that's just the local demographics, and the fact that many young women work in services to pay for their university education. In the West, students are supposed to work as unpaid "interns."

Finally, many journalists have been writing that Russia is hell on earth for gays and lesbians. The real situation is like that of the West in the 1970s: homosexuality is no longer illegal but most people still consider it wrong. So gays and lesbians get disowned by their parents and beaten up by young toughs. On the other hand, they form a large and very visible community with its own bars, magazines, and festivals. I remember going to a night club where about a third of the clientele were openly gay or lesbian. It was no hole-in-the-wall either.

So if some journalists think Russia today is evil, they should also think the West in the 1970s was evil. Maybe they do.

Of course, there is a big difference between us in the 1970s and Russians today. We had to wait forty years to see how things would turn out. They don’t have to wait. They can just look at us. That cuts two ways. On the one hand, Russian gays and lesbians look at the West and feel frustrated. They want change to happen faster. On the other hand, traditional Russians look at the West and feel dismayed. They want no part of this change.

Can you blame them? In the 1980s I supported gay rights on the principle of "live and let live." Gays weren't asking to be accepted by people who didn't accept them, least of all religious conservatives. They just wanted to be left alone, as consenting adults, and who could be against what consenting adults do in private?

The next three decades then saw a ratcheting upward of gay rights. For example, since 2012 all Ontario schools have had to allow gay/lesbian clubs on their premises, even Catholic and elementary schools. So much for freedom of religion. So much for "consenting adults."  Gays and lesbians seem to be like any pressure group: they make whatever promises are necessary to get what they want and then forget them when they get what they want.

So Russia is a bit like our past. Only it's a past where people have a better idea of the future.

Punditry, left vs. right, and globalism

That's about all I have to say about Russia. If you want to know more, ask someone from that country.

What about punditry on other topics? Again, I don't feel qualified, and there are columnists far better at that than me.

I also have mixed feelings about punditry. It aims not so much to change how people think as to confirm what they think. So the net effect is to polarize public opinion. Liberals become more self-assured about their ideology and conservatives likewise. Yet, as I see it, both groups are equally wrong, and both have betrayed their original principles. 

As I see it (again), the worst threat comes from the right. It’s the right that best articulates globalism and is best able to persuade everyone that it's for their own good. And globalism will be much more far-reaching—and devastating—than communism ever was. It is literally the abolition of all barriers to the free flow of capital, trade, and labor. In the best scenario, wages and working conditions will be levelled downward throughout the West. In the worst scenario, the whole world will be worse off because the conditions most suitable to wealth creation are in the high-trust societies of the West.

Those societies are not high-trust because of laws, constitutions, or charters of rights. They are that way because of their cultural, behavioral, and psychological characteristics—low levels of personal violence, high levels of affective empathy and guilt proneness, strong orientation toward the future rather than the present, and so on. It was that mental package that made the rise of the West possible.

That mental package is now being dissolved, not so much by "cultural Marxists" as by business interests that want to cut labor costs and increase GDP. They feel no animosity toward the West and its national identities. They just feel those identities have had their day. In their opinion, this is how we'll all move into a better and more prosperous future.

People are entitled to their opinions, but this one—globalism—isn’t competing with the others on a level playing field. It dominates the media, the think tanks, and even the entertainment industry. And it dominates both the left and the right. It’s an opinion that has succeeded not on its own merits but because it has much more money behind it.

This has always been a problem in open, democratic societies. It has gotten worse, however. This is partly because the top 1% have proportionately more money nowadays and partly because they have less sense of national loyalty nowadays. They’ll say it out loud: “Why should I feel more loyal to someone who works here than to someone who works in another country?” This sort of view is promoted by eminently conservative groups, like the Fraser Institute here in Canada.

Punditry becomes part of the problem to the degree it shores up the false dichotomy of “left” versus “right.” Today, the real one is globalism versus the forces it opposes.

Reference 

Ostroff, J. (2015). How Canada got its first Catholic elementary school gay-straight alliance, Huffington Post, May 11
http://www.huffingtonpost.ca/2015/05/11/polly-quinn-gsa-catholic-elementary-school_n_7226896.html 

Déjà vu?


 
Goths Crossing a River (Goths traversant une rivière), Évariste-Vital Luminais (1822-1896). The Goths came en masse and unopposed, as immigrants. They discovered that Roman civilians would not defend themselves and had not done so for a long time.

 


When discussing the influx of Syrian refugees into Europe, we often ignore one thing: most of them are neither Syrians nor refugees. The majority are Iraqis, Iranians, Afghans, Pakistanis, or even Bangladeshis. They live crummy lives but are in no immediate danger, their motive being simply the prospect of a better life in the West.

A Pakistani identity card in the bushes, a Bangladeshi one in a cornfield. A torn Iraqi driver's license bearing the photo of a man with a Saddam-style mustache, another one with a scarfed woman displaying a shy smile.

Documents scattered only metres from Serbia's border with Hungary provide evidence that many of the migrants flooding Europe to escape war or poverty are scrapping their true nationalities and likely assuming new ones, just as they enter the European Union. Serbian border police say that 90 percent of those arriving from Macedonia, some 3,000 a day, claim they are Syrian, although they have no documents to prove it. [...]

"You can see that something is fishy when most of those who cross into Serbia enter January first as the date of their birth," said border police officer Miroslav Jovic. "Guess that's the first date that comes to their mind." (The New Zealand Herald, 2015)

A breach has opened up in the defenses of Europe, and large numbers of people are pouring through. Meanwhile, another breach has been made in Libya.

Steve Sailer has compared this influx to the entry of the Goths into the Roman Empire (Sailer, 2015). They too came en masse and unopposed, as refugees. Is the comparison justified? There are both similarities and dissimilarities, but the latter, I will argue, are such that the current crisis may actually be the worse one.

Let's begin with the similarities:

Demographic imbalance

Contemporary observers (Augustus, Tacitus, Pliny the Younger, Plutarch, Stobaeus) believed that birth rates had fallen considerably, largely because too many people were postponing marriage and resorting to abortion or infanticide within marriage (Harris, 1982; Rawson, 1986). This opinion is supported by archaeological evidence that Roman towns and cities lost population between the 2nd and 5th centuries, with no signs of population growth elsewhere. Using this evidence, Latouche (1947) argued that birth rates were falling throughout the Empire by the 3rd century. Nonetheless, other historians tend to be dismissive, saying that the contemporary observers in question had pro-family biases.

There is agreement on one point: infant mortality was high, particularly in urban areas. Even a modest fertility decline would therefore have led to a shrinking population (Frost, 2010b). Just to keep the population stable, each Roman mother would have had to bear at least five children (Parkin, 1992).
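The five-children figure follows from simple replacement arithmetic. The numbers below are illustrative assumptions for the sake of the calculation, not Parkin's own estimates:

```python
# Each mother must leave about 2 children who themselves survive to
# reproduce, just to hold the population steady.
survival_to_adulthood = 0.5   # assumed: half of live births reach adulthood
births_needed = 2 / survival_to_adulthood
print(births_needed)          # 4.0 births per mother from child mortality alone

# A further margin covers women who die before or during their
# childbearing years, and unions that produce no children at all.
margin = 1.25                 # assumed 25% extra margin
print(births_needed * margin) # 5.0, roughly the figure cited above
```

Under these assumed rates, high infant mortality alone pushes the replacement threshold to four births per mother, and the remaining hazards push it to about five.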

Conditions were better for growth just outside the Empire, where people enjoyed increased opportunities for trade without the cultural influences that tend to delay family formation and reduce fertility (Wells, 1999, p. 225). The result would have been an increasing pressure of population on the Empire's borders.

Controlled immigration: beginnings of the mass influx

The fall of Rome evokes horrific images of death and plunder. At first, however, the barbarians came peacefully, being recruited as soldiers and rewarded with land grants at the end of their military service (Goffart, 1980). It was with this precedent in mind that the Goths, fleeing the advance of the Huns in the 4th century, showed up along the Danube and begged to be allowed in. This time the influx would be much greater, perhaps in the hundreds of thousands. They were admitted nonetheless, and the Emperor's entourage saw the whole affair as business as usual.

Optimism among the elites

By that time, Rome's capacity to assimilate was at its height. Ethnic and regional identities were dissolving throughout the Empire and being replaced by a common identity of Roman civilization or humanitas. This broader identity seemed to be spreading even beyond the Empire:

The most profound effect of the interactions was to spread Roman goods, practices, and values beyond the provinces out to regions far removed from the territories conquered by Rome. When auxiliary soldiers returned home to regions such as Denmark or Poland, they brought with them not only their weapons and perhaps Roman bronze vessels and ornate pottery, but also personal familiarity with large-scale political organization, cities, writing, and all of the myriad other features that distinguished Roman civilization from the cultures of the peoples of northern Europe. (Wells, 1999, p. 225)

Christianity was likewise making inroads. For all these reasons, the northern barbarians didn't form a rival civilization like the Persian Empire to the east. They were merely disparate tribes being drawn economically and culturally into Rome's orbit and apparently destined to become future Romans. This should be kept in mind when we read about the optimism of the Emperor's entourage, who considered the Gothic influx to be a godsend of future soldiers and loyal subjects (Pohl, 1997, p. 4). Goffart (1980, p. 35) is not far off the mark when he states, "what we call the fall of the Western Roman Empire was an imaginative experiment that got a little out of hand."

Why things went wrong

To some degree, optimism was justified. Large numbers of barbarians had become useful citizens, particularly soldiers. But past success is no guarantee against future failure. First, a demographic pressure cooker was developing beyond the Empire's borders, and many more barbarians would soon follow the example of the Goths. Second, even as longtime Roman soldiers, they often felt greater loyalty to their own people than to abstract principles of humanitas. Third, many had trouble accepting the Roman idea that only the State may use violence.

Barbarians considered violence to be legitimate. In their eyes, every adult male had the right to use violent means when and if appropriate, even to the point of committing murder. In barbarian society, a victim of violence could go to a court of law, but the court's decision had to be enforced by the victim and his kinsmen. In short, no one had an inherent right to life and property. That right had to be continually earned through one's ability to defend oneself and rally support from friends and family (Frost, 2010a). 

Things were very different within the Empire, as summed up by the term Pax Romana. Only the State had the right to use violence, and people who usurped that right were branded as bandits and treated as such. It was this pacification of social relations that made possible the creation of a large complex society where people could live, trade, and come and go in relative peace.

The barbarian influx would destroy the Pax Romana. If we take the case of the Goths, so many were allowed to enter that the Empire lacked the means to feed them. The resulting famine pushed them to plunder towns and villages for food. At that point, they saw with their own eyes the defenselessness of the average Roman, who in any altercation would not defend himself and would typically flee.

The Romans did have a system of collective defense. By the 4th century, there was an extensive network of walls, forts, and watchtowers along the border, as well as defense in depth—legions stationed farther behind to contain any incursions. But this system failed to allow for a situation where large numbers of barbarians would be invited to cross the militarized border zone with no opposition whatsoever. At that point, they entered the so-called 'civil zone,' where defenses were much weaker.

The resulting crisis tended to feed on itself. When large numbers of barbarians were invited in, even more decided to invite themselves. The border ceased to exist. There was no longer any barrier between the barbaric outer world and the pacified Roman world, which was home to millions of people who didn't know how to defend themselves and who had not done so for generations.

And so the inevitable happened. The barbarians didn't wish to destroy Roman society—they just wanted to help themselves to its wealth—but their very presence made its survival impossible. This is not to say they obliterated the heritage of Rome. They came to plunder, not to destroy; moreover, they were already semi-Roman and semi-Christian, and in time the kingdoms they founded would preserve some of that heritage. But the Empire did collapse, as a French historian has wryly pointed out:

For the decisive point is that Rome had shown its weakness by admitting peoples onto its territory whom it had been unable to subordinate and whose presence it had regularized without having vanquished them in the field. Contrary to what is commonly said today, the invasions really did happen. The Barbarians were in no way "invited" to settle in the empire. They entered in large numbers by immigration and also, at least in equal numbers, by violent invasion, by piercing the defense lines, plundering the cities, and massacring people as much in Italy and Greece as in Gaul, Spain, and Africa. (Voisin, 2014)

And now the differences

While the fall of Rome resembles the current crisis, there are differences. First, the demographic imbalance between Romans and barbarians was hardly comparable to the one that now exists between Europeans, on the one hand, and Muslims and Africans, on the other. When the Roman Empire collapsed, barbarians replaced the native population in only a few areas: England, Flanders, southern Germany, parts of Switzerland, and Austria (furthermore, the case of England is disputed by some historians). Elsewhere, they were no more than 5 to 10% of the local population. The population replacement now under way—which is merely in its initial stages—promises to be much greater.

Second, the peoples of Africa and the Muslim world may covet Europe's higher standard of living, but they don't see themselves as future Europeans. They see themselves as Africans and Muslims, and that's not going to change. The difference is crucial. Whereas Europe was still European when the Dark Ages ended, it may be something else when this is all over.

The outcome will depend on what you do or fail to do. When people look on and say nothing, they hand over the keys of history to those who have no such inhibitions.

References

Frost, P. (2010a). The Roman State and genetic pacification, Evolutionary Psychology, 8(3), 376-389.

Frost, P. (2010b). Are empires bad for your health? Evo and Proud, January 14 http://evoandproud.blogspot.ru/2010/01/are-empires-bad-for-your-health.html

Goffart, W. (1980). Barbarians and Romans A.D. 418-584. The Techniques of Accommodation, Princeton University Press.
https://books.google.ru/books?id=_oooA4QiDY8C&printsec=frontcover&hl=ru#v=onepage&q&f=false

Harris, W.V. (1982). The theoretical possibility of extensive infanticide in the Graeco-Roman world, The Classical Quarterly, 32, 114-116.
http://www.jstor.org/stable/638743?seq=1#page_scan_tab_contents

Latouche, R. (1947). Aspect démographique de la crise des grandes invasions, Population, 2, 681-690.
http://www.persee.fr/web/revues/home/prescript/article/pop_0032-4663_1947_num_2_4_1870

Parkin, T.G. (1992). Demography and Roman Society, Ancient Society and History, Baltimore: Johns Hopkins University Press.

Pohl, W. (1997). Kingdoms of the Empire. The Integration of Barbarians in Late Antiquity, Brill.
https://books.google.ru/books?id=f1eWQ8w9m7AC&printsec=frontcover&hl=ru#v=onepage&q&f=false

Rawson, B. (1986). The Roman Family, in B. Rawson (ed.) The Family in Ancient Rome. New Perspectives, Cornell University Press.
https://books.google.ru/books?id=85Gdul_43DEC&printsec=frontcover&hl=ru#v=onepage&q&f=false

Sailer, S. (2015). Civilization capitulates to barbarism at the Danube in "The Decline and Fall of the Roman Empire," The Unz Review, September 7
http://www.unz.com/isteve/civilization-capitulates-to-barbarism-at-the-danube-in-the-decline-and-fall-of-the-roman-empire/

The New Zealand Herald. (2015). The big migrant passport scam, September 7 http://m.nzherald.co.nz/world/news/article.cfm?c_id=2&objectid=11509101 

Voisin, J.L. (2014). Ce que nous enseigne la chute de l'empire romain, Le Figaro, October 17
http://www.lefigaro.fr/vox/histoire/2014/10/17/31005-20141017ARTFIG00353-ce-que-nous-enseigne-la-chute-de-l-empire-romain.php 

Wells, P.S. (1999). The Barbarians Speak. How the Conquered Peoples Shaped Roman Europe, Princeton University Press.
https://books.google.ru/books?id=vru5XzGXkuAC&printsec=frontcover&hl=ru#v=onepage&q&f=false

The adaptive value of "Aw shucks!"


Solitude, Frederic Leighton (1830-1896)
 

In a mixed group, women become quieter, less assertive, and more compliant. This deference is shown only to men and not to other women in the group. A related phenomenon is the sex gap in self-esteem: women tend to feel less self-esteem in all social settings. The gap begins at puberty and is greatest in the 15-18 age range (Hopcroft, 2009).

Do women learn this behavior? Why, then, do they learn it just as easily in Western societies where constraints on female behavior are much weaker and typically stigmatized?

In U.S. society most of the formal institutional constraints on women have been removed, and ideologies of the inferiority of women are publicly frowned on. Sexual jealousy is also publicly disapproved, however much private expectation there may be of the phenomenon. Resources inequalities between men and women have also been reduced, although not eradicated. Certainly, male violence against women is still a reality and may play a role promoting deference behaviors in college-aged women. However, it seems unlikely that fear of physical violence is enough to explain why young women typically defer to men when involved in non-sex typed tasks in experimental settings. (Hopcroft, 2009)

Moreover, why would this behavior be learned mainly between 15 and 18 years of age?

[...] by many measures, girls at this age in the United States are doing objectively better than boys — they get better grades, have fewer behavioral and disciplinary problems, and are more likely to go to college than boys (Fisher 1999: 82). Qualitative studies also show the decline in female confidence and certainty at adolescence (Brown and Gilligan 1992). Brown and Gilligan's (1992) study was done in an elite private girls' school among girls who were likely to have every opportunity in life. Why would their self confidence be eroded at puberty? Certainly, there are few differences in resources between teenage boys and girls. Brown and Gilligan (1992) argue that our sexist culture strikes at girls during puberty, stripping girls of their self esteem. It seems odd that our patriarchal culture should wait until that precise moment to ensnare girls. (Hopcroft, 2009)

Female self-esteem seems to be hormonally influenced. It declines at puberty, reaches its lowest levels in late adolescence, gradually increases during adulthood, and peaks after menopause.

[...] evidence from many cultures [shows that] post-menopausal women often enjoy a status equal to that of men: they become in effect "honorary men." [...] Even in the most gender restrictive societies they are freed from menstrual taboos and purdah, often begin to inherit property and acquire wealth, and in general have increased freedom, status, power and influence in society. A recent experimental study of influence in small groups showed that older women (50 and older) do not defer to older men, and that older men do not display lack of deference to older women. (Hopcroft, 2009)

Female deference varies not only over a woman's lifetime but also from one woman to the next, i.e., some women are more predisposed than others. This variability may exist for one or more reasons:

- Not enough time has elapsed for selection to remove contrary predispositions (non-deference) from the gene pool.

- The selection pressure is relatively weak: contrary predispositions appear through mutation as fast as they are removed through selection.

- The strength or weakness of selection may vary among human populations. Gene flow may reintroduce contrary predispositions from populations where the selection pressure against them is relatively weak.

- There may be frequency-dependent selection. Non-deferring women may be better liked when less common.
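The second mechanism above, mutation-selection balance, can be made concrete with a toy calculation. The parameter values are purely illustrative assumptions, not estimates for any real predisposition:

```python
# Classic deterministic result: for a deleterious allele expressed in
# carriers, selection removes it at a per-generation rate of roughly
# s * q, while mutation recreates it at a rate of roughly mu. The two
# balance at an equilibrium frequency of about mu / s.
mu = 1e-5   # assumed mutation rate toward the "contrary" predisposition
s = 0.01    # assumed (weak) selection coefficient against carriers

q_eq = mu / s
print(q_eq)  # ~0.001: weak selection never quite purges the allele
```

The weaker the selection (smaller s), the higher the equilibrium frequency, which is the point made above: under weak selection, contrary predispositions keep circulating in the gene pool.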

Sexual selection?

For all these reasons, evolutionary psychologist Rosemary Hopcroft (2009) argues that female deference is an innate predisposition and not a learned behavior. It has become widespread because sexual selection has favored deferential women. When women compete on the mate market, success goes to the more deferential ones.

One might point out that deferential behavior would be advantageous not only at the time of mating but also later—during pregnancy and infant care. So, strictly speaking, the selection pressure wouldn’t be just sexual selection.

But Hopcroft's argument is vulnerable to a more serious objection: sexual selection of females is the exception and not the rule in most animal species, especially mammals. The males are the ones that have to compete for mates. This reflects differing contributions to procreation, the female being saddled with the tasks of pregnancy, nursing, and early infant care. Meanwhile, the male is usually free to go back on the mate market, with the result that mateable males outnumber mateable females at any one time.

Hopcroft knows this but argues that the human species is a special case because "human fathers often invest heavily in their children." But often they don't. What about societies where men do very little to raise their offspring? This point doesn't disprove Hopcroft's argument. In fact, it suggests a way to test it: female deference should be stronger where paternal investment is higher.

If we look at hunter-gatherers, paternal investment tends to follow a north-south cline. It's low in the tropical zone where women gather food year-round and can thus provide for themselves and their children with little male assistance. It's higher farther away from the equator, where winter limits food gathering and makes women dependent on food that men provide through hunting. Paternal investment is highest in the Arctic: almost all food is provided by men, and women specialize in tasks unrelated to food procurement (garment making, shelter building, meat processing).

This north-south cline was maintained and in some cases accentuated when hunting and gathering gave way to farming. In the tropical zone, farming developed out of female food gathering and thus became women's work, as is still the case in sub-Saharan Africa and Papua New Guinea. This sexual division of labor also explains why tropical farmers preferred to domesticate plants for food production. Only one animal species, the guinea fowl, has been domesticated in sub-Saharan Africa, and it was apparently domesticated by women. All other forms of livestock have come from elsewhere.

How universal is female deference?

Female deference should therefore vary within our species. In particular, it should correlate with the degree of paternal investment in offspring and, relatedly, the intensity of female-female competition for mates. This doesn't mean that women are actually more deferential in societies where men are providers. It simply means that they create an impression of deference, while continuing to do much of the real decision-making.

This issue is sidestepped by Hopcroft, who speaks only of 'women' and 'men'—as if all human groups show the same pattern of female deference. She cites many studies to prove her point, but this literature is overwhelmingly based on Euro-American or European participants. There is one study on African Americans, but it was limited to boys and girls 11 to 14 years old (Weisfeld et al., 1982).

In fact, this presumed universality of female deference was already disproven by a study published two years earlier:

Much feminist literature has described the relative silence of girls in classrooms and a concomitant drop in self-esteem for girls in their early teens (Sadker & Sadker, 1994; American Association of University Women, 1992). But other work has noted that Black girls maintain their self-esteem and their classroom "voice" into adolescence despite the fact that they may feel neglected in education (Orenstein, 1994; Taylor et al., 1995). (Morris, 2007)

Over a period of two years, Morris (2007) studied African American girls in grades 7 and 9 of an American middle school referred to as "Matthews." The student body was 46% African American, and two-thirds of the teachers were African American.

He found that African American girls seemed to feel little inhibition in the presence of boys:

Indeed, at Matthews I often observed girls—particularly Black girls—dominating classroom discussion.

[...] I noticed this active participation of girls to a greater extent in English classrooms, particularly when, as in this example, the subject concerned gender issues or relationships. However, the topic in this example also concerned computers and technology, areas more commonly dominated by boys. Furthermore, girls at Matthews, especially Black girls, spoke out to ask and answer questions in science and math classes as well, although to a lesser extent than in English and history classes. This willingness of African American girls to compete and stand up to others also emerged in their non-academic interactions with boys.

[...] Black girls at Matthews often challenged physical contact initiated by boys by hitting and chasing them back. They did not yield to and accept this behavior from boys, nor did they tend to seek adult authority to protect themselves and punish the boys.

[...] Thus, most African American girls in my observations did not hesitate to speak up in classrooms, and stand up to boys physically. Few Black girls I observed created disruptions in classrooms, but most consistently competed with boys and other girls to gain teachers' positive attentions.

[...] I observed this outspokenness at Matthews. Black girls there appeared less restrained by the dominant, White middle-class view of femininity as docile and compliant, and less expectant of male protection than White girls in other educational research.

These observations were consistent with those of the teachers, who generally described African American girls as being confrontational, loud, and unladylike:

Teachers, particularly women, often scolded Black girls for supposedly subverting their authority in the classroom. 

[...] By far the most common description and criticism of African American girls by all teachers at Matthews was that they were too "loud."

[...] For many adults at Matthews, the presumed loud and confrontational behavior of African American girls was viewed as a defect that compromised their very femininity. This emerged most clearly in educators castigating Black girls to behave like "ladies."

Morris attributed this behavioral pattern to America's history of slavery and race relations. It would be useful to examine comparable data from sub-Saharan Africa. Do African women show less deference to men in mixed-gender settings?

According to a study of Akan society in Ghana, wives traditionally deferred to their husbands, but such deference was less common than in European society because social interactions were less frequent between husband and wife, being limited to certain areas of family life:

Traditional norms stipulated, for example, that the wife should not eat with the husband; that she alone must carry the foodstuffs from the farm; take water for the husband to the bathroom; sweep the compound; do the cooking; clean her husband's penis after sexual intercourse; and show deference to him in speech and action. (van der Geest, 1976)

Husbands and wives seldom made decisions jointly:

Joint decision-making is believed to be a departure from the past when decisions were made in a much more autocratic way by the husband alone or when spouses decided over their own matters separately (van der Geest, 1976).

Things were very different in mixed-gender settings outside the family. In the larger community, African women of all ages showed little deference to men, the situation being similar to that of older women in European societies.

Despite these outward rules, however, women held considerable power and commanded wide respect. They played a role in traditional politics and religion and were nearly always economically independent of their husbands. Moreover, women enjoyed a high degree of freedom to enter and to terminate marital unions, and in the matrilineal society of the Akan they were the focal points of descent lines. (van der Geest, 1976)

It is unclear to what degree modernization has changed these social dynamics. Van der Geest (1976) found much interest among younger Akan in the European model of family life, i.e., husband and wife eating and socializing together, and making decisions together. His own study, however, failed to find a significant difference between older and younger Akan in this respect. He concluded that the elite were moving toward European models of behavior, but not the majority of the population:

There are indications that—contrary to the situation in elite circles—marriage in lower socioeconomic groups remains an institution of secondary importance. Spouses have relatively low expectations of their marriage partners and of marriage in general. Men are often reluctant or unable to provide sufficient financial support for their families, and not infrequently women bear the burden of parenthood alone. [...] Wives remain more attached to their families of origin than to their partners, and in almost half of all cases husband and wife do not even constitute a residential unit. The relatively low status of marriage in Kwahu is perhaps best reflected in the high incidence of divorce and extramarital sex. (van der Geest, 1976)

This is consistent with findings from other studies. The pair bond is relatively weak in sub-Saharan Africa. Husband and wife tend to feel greater attachment to their respective kin. The husband is more certain that his sister's offspring are his blood relatives, whereas the wife sees her mother, sisters, and other female relatives as more reliable sources of child care.

Poewe found in her fieldwork that the marriage institution was highly flexible and discouraged strong, intense, or lasting solidarity between husband and wife. The male in these matrilineal societies did not produce for his progeny or for himself, but usually for a matrician with whom he might or might not reside. His role, as husband, was to sexually satisfy and impregnate his wife and to take care of her during her pregnancies, but under no circumstances should a man be the object of "exclusive emotional investment or focus of attention. Instead, women are socialized to invest their emotions and material wealth in their respective matrilineages." (Saidi, 2010, p. 16).

For this reason, European outsiders see parental neglect of children where Africans see no neglect at all—simply another system of child care. As Africans move to other parts of the world, they tend to recreate the African marriage system in their host countries by using local people and institutions as "surrogate kin." This is the case in England, where young African couples often place their children in foster homes:

The foster parents interpret the infrequent visiting of their wards' "real" parents as signs of parental neglect and become strongly attached to the foster children. This sometimes results in legal suits for transfer of custody to the foster parents (Ellis 1977). Meanwhile, the African parents make no comparable assumption that the delegation of care means they have surrendered formal rights in children. They consider that by having made safe and reliable arrangements for the care of children and by regular payment of fees, they are dispatching their immediate responsibility. (Draper, 1989, p. 164)

In recent years, there has been much talk of an "adoption crisis" in Africa, where millions of children are not being raised by both parents and thus purportedly need to be placed in Western homes. Yet this situation is far from new. In fact, it's unavoidable in a culture where women cannot count on male assistance and have to make other arrangements:

In most African communities, the concept of "adoption" does not exist in the western sense. Children are fostered, a prevalent, culturally sanctioned procedure whereby natal parents allow their children to be reared by adults other than the biological parent [35] [36]. Child fostering is a reciprocal arrangement and contributes to mutually recognised benefits for both natal and fostering families [37]. In Tanzania, less than one quarter of children being fostered by relatives other than their biological parent were orphans. (Foster and Williamson, 2000).

Conclusion

Evolutionary psychologists believe that all human populations share the same genetic influences on behavior. They defend this belief by pointing to the complexity of behavior and the presumably long time it would take for corresponding genetic influences to evolve coherently from scratch. But why do they have to evolve from scratch? Evolution usually proceeds through minor modifications to what already exists. This is no less true for genetic determinants of behavior. For instance, an innate mental algorithm may be partially or completely deactivated. Or its range of targets may be broadened. Or it may deactivate more slowly with increasing age.

To the extent that human groups differ genetically in mental makeup, the differences are not due to some groups having completely new mental algorithms. Instead, the differences are due to the same algorithms being modified in various ways, often subtly so. For example, learning is primarily an infant behavior that becomes more difficult with increasing age. People may differ in learning capacity not because their learning algorithms differ but because these algorithms remain fully active for a longer time in some people than in others.

Another example may be female deference. In early modern humans, women tended to feel deferential in the presence of men, but this tendency was weak because a woman's interactions with her husband were infrequent and less important for her survival and the survival of her children. This is still the case in human groups that never left the tropical zone.

As humans spread beyond the tropics, this behavioral tendency became more easily triggered, particularly during the ages of 15 to 18 when young women entered the mate market. This evolutionary change came about because women in non-tropical environments were more dependent on men for food, particularly in winter. Women were, so to speak, in a weaker bargaining position than men, first of all on the mate market and later during pregnancy and infant care.

References 

Brown, L.M., and C. Gilligan. (1992). Meeting at the Crossroads: Women's Psychology and Girls' Development, Harvard University Press.
http://fap.sagepub.com/content/3/1/11.short 

Draper, P. (1989). African marriage systems: Perspectives from evolutionary ecology, Ethology and Sociobiology, 10, 145-169.
http://www.sciencedirect.com/science/article/pii/0162309589900174

Fisher, H. (1999). The First Sex, Random House. 

Foster, G., and J. Williamson. (2000). A review of current literature of the impact of HIV/AIDS on children in sub-Saharan Africa, AIDS 2000, 14: S275-S284.
http://www.hsrc.ac.za/uploads/pageContent/1670/AreivewofcurrentliteratureontheimpactoforphansinAfrica.pdf 

Hopcroft, R.L. (2009). Gender inequality in interaction - An evolutionary account, Social Forces, 87, 1-28.
https://www.researchgate.net/publication/236707130_Gender_Inequality_in_Interaction__An_Evolutionary_Account 

Morris, E.W. (2007). "Ladies" or "Loudies"? Perceptions and experiences of black girls in classrooms, Youth & Society, 20, 1-26.
http://www.researchgate.net/profile/Edward_Morris7/publication/258200296_Ladies_or_Loudies_Perceptions_and_Experiences_of_Black_Girls_in_Classrooms/links/54be6b4e0cf218d4a16a60ac.pdf 

Saidi, C. (2010). Women's Authority and Society in Early East-Central Africa, University of Rochester Press.
https://books.google.ru/books?id=_dQcIsFvkfwC&printsec=frontcover&hl=ru#v=onepage&q&f=false 

van der Geest, S. (1976). Role relationships between husband and wife in rural Ghana, Journal of Marriage and the Family, 38, 572-578.
http://sjaakvandergeest.socsci.uva.nl/pdf/ghana/kwahu_marriage.pdf 

Weisfeld, C.C., G.E. Weisfeld, and J.W. Callaghan. (1982). Female inhibition in mixed-sex competition among young adolescents, Ethology and Sociobiology, 3, 29-42.

No, blacks aren't all alike. Who said they were?


 
In 1915, Paul Robeson became the third African American ever enrolled at Rutgers College and was one of four students selected for its Cap and Skull honor society. His father was of Igbo descent (Wikicommons)

 

Chanda Chisala has written another piece on IQ and African immigrants to the UK:

One of the biggest problems I had with the commenters were readers who apparently were only exposed to the statistical concept of Regression to the Mean from outside the IQ debate. [...]. The problem is not that the black immigrant children were not regressing to the point of equaling their source population mean IQ (that's also not what hereditarians predict either), but that they were clearly not even moving (or being pulled) towards that extremely low IQ, as hereditarians predict.

The correct term is not "regression to the mean." It's "non-inheritance of acquired characteristics." In other words, each person has a single genotype and a range of possible phenotypes. A culture can push its members to either limit of this range, thus creating a phenotype unlike that of other people with the same genetic endowment. But this phenotype has to be recreated with each succeeding generation. For instance, there used to be a Chinese custom of binding a girl's foot to make it four inches long and of limited use for walking. When the custom was outlawed, the next generation of women had normal feet. The phenotype bounced back to its initial form, so to speak, much like an elastic band when you stop stretching it (see note 1).

Regression to the mean is something else. It happens because of genetic change. For instance, a man with above-average IQ will likely marry a woman with above-average IQ. But only part of their above-averageness is genetic. The rest is due to favorable circumstances. Or simply luck. So their children's IQ will likely be a bit closer to the mean of the overall population. That second generation will in turn marry people with similar IQs. And their children will likewise be closer still to the population mean. Eventually, several generations later, the descendants of that original couple will have a mean IQ that matches the population mean.

That's regression to the mean. It's a multigenerational genetic change. It's not what happens when genes stay constant and culture changes.
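The multigenerational pattern described above can be sketched numerically. In this hypothetical illustration (the heritability value of 0.6 and the starting IQ of 130 are assumptions for the sake of the example, not figures from the post), only the heritable fraction of each generation's deviation from the population mean is transmitted, assuming each generation marries people of similar IQ, so the deviation shrinks geometrically toward zero:

```python
# Hypothetical sketch of regression to the mean over several generations.
# Assumed values: narrow-sense heritability h2 = 0.6, population mean IQ = 100,
# founding couple 30 points above the mean.
h2 = 0.6
pop_mean = 100.0
deviation = 30.0  # founding couple's deviation from the population mean

for gen in range(1, 6):
    # Only the heritable fraction of the deviation is passed on;
    # the non-genetic part (favorable circumstances, luck) is not.
    deviation *= h2
    print(f"generation {gen}: expected IQ {pop_mean + deviation:.1f}")
```

Under these assumed values the expected IQ falls from 118 in the first generation to about 102 by the fifth, i.e., the descendants' mean converges on the population mean, which is the multigenerational genetic change the paragraph above describes.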

Chanda is really talking about what happens when a culture stops pushing people to excel. The phenotype reverts to its usual state, and the pressure to excel comes only from within. This is a legitimate argument, and it may have great explanatory value. When people from certain cultures move to Western countries, the second and third generations do a lot worse than the first generation on many indicators—academic achievement, crime rates, family stability, etc. This is a frequent outcome when people move from an environment where behavior is tightly controlled by family and community to one where behavior is left much more to self-control.

Such social atomization is less toxic for people of Northwest European descent because they have adapted to it over a longer time. For at least the past millennium, they have had weaker kinship ties and stronger tendencies toward individualism than any other human population. This cultural environment has favored individuals who rely less on external means of behavior control and more on internal means, specifically guilt proneness and affective empathy (Frost, 2014).

But that isn't Chanda's argument. That's the argument he attributes to something called "the HBD position." In reality, there are at least three HBD positions:

1. African immigrants to the UK perform better than whites academically because they are a select group, either because they have elite backgrounds or because they tend to be more motivated than the people who stay behind.

2. African immigrants perform better than whites academically, but this academic performance is weakly linked to the heritable component of IQ, especially in modern Britain. Teachers tend to "over-reward" black students who satisfy basic requirements (regular attendance, assignments turned in on time, non-disruptive behavior, etc.). African parents also invest in private tutoring to improve exam results.

3. Most African immigrants perform worse than whites academically. Only certain African groups excel, notably the Igbo of Nigeria. Igbo excellence is due to their specific evolutionary history and cannot be generalized to all sub-Saharan Africans.

Are African immigrants better than the Africans left behind?

Chanda attacks the first argument, saying that the average African immigrant is very average:

I actually know that the average African immigrants to the UK from any nation or tribe are not from the African elite class, economically or intellectually (even if there is a small segment from the super-professional class)

He also points to the example of African American families. The children of middle-class and even upper-class African Americans do worse on IQ tests than the children of lower-class Euro-American families. So even if you select from the black elite, the next generation will still underperform whites.

One could counter that the African American middle class largely works for the government. In Africa, the middle class is more likely to be self-made men and women. Also, a selection effect may exist despite the averageness of most African immigrants to the UK. Even if most are average, it may be that fewer are below-average. Below a certain level of ability, many Africans may not bother to emigrate.

Fuerst (2014) has studied this question and found that black immigrants to the U.S. have a mean IQ that is one third of a standard deviation above the mean IQ of their home countries. So there is a selection effect. But it seems too weak to explain the difference in IQ—more than one standard deviation and possibly two—between African immigrants to the UK and Africans back home, unless one assumes that migration to the UK is a lot more selective than migration to the US.

What does the GCSE actually measure?

We now come to the second explanation. It is assumed in this debate that the GCSE (General Certificate of Secondary Education) is a good proxy for IQ, which in turn is a proxy for the heritable component of intelligence. Is this true? Or does the GCSE largely measure something that is culturally acquired rather than heritable? Perhaps something as simple as showing up for class, doing one's assignments, or having a private tutor. This point is raised by one commenter:

[...] black Africans in London, even if poor and living in social housing, hire private tutors for their children. White British do not, especially the working class. This much better explains the GCSE results, a very tuition friendly test [...]

Furthermore, many African immigrants may be targeting those exams they can do best on and avoiding those they are less sure about:

[...] one needs to know how many children from each racial group take the exams. For example, the pass rate for Higher Mathematics is very high, not because the exams are easy, but because they are hard, and frighten off most applicants.

Interestingly, Chanda replies to this GCSE skepticism by pointing out that the same "Nigerians" (Igbos) who do well on the GCSE also do well in Nigeria:

For example, the subgroups within the Nigerian group that are the best in Nigeria or even in the US etc are also the best on the GCSEs. Also, the Traveller white (or whatever precise race) groups are placed by the GCSEs exactly where you would expect to find them.

The Igbo factor

This brings us to the third explanation. It's the one I favor, although the other two probably play a role. African excellence in the UK seems largely driven by a single high-performing people: the Igbo of southeastern Nigeria. Let's begin with the example of Harold Ekeh, whom Chanda describes in glowing terms:

Harold Ekeh showing off his acceptance letters to all 8 Ivy League Schools. He was born in Nigeria and migrated with his parents at age 8.

Ekeh is an Igbo name, and the Igbo (formerly known as Ibo) have a long history of academic success within Nigeria (Frost, 2015). Chanda himself referred to this success in his first article:

The superior Igbo achievement on GCSEs is not new and has been noted in studies that came before the recent media discovery of African performance. A 2007 report on "case study" model schools in Lambeth also included a rare disclosure of specified Igbo performance (recorded as Ibo in the table below) and it confirms that Igbos have been performing exceptionally well for a long time (5 + A*-C GCSEs); in fact, it is difficult to find a time when they ever performed below British whites. (Chisala, 2015a)

This superior achievement was widely known in Nigeria by the time of independence:

All over Nigeria, Ibos filled urban jobs at every level far out of proportion to their numbers, as laborers and domestic servants, as bureaucrats, corporate managers, and technicians. Two-thirds of the senior jobs in the Nigerian Railway Corporation were held by Ibos. Three-quarters of Nigeria's diplomats came from the Eastern Region. So did almost half of the 4,500 students graduating from Nigerian universities in 1966. The Ibos became known as the "Jews of Africa," despised—and envied—for their achievements and acquisitiveness. (Baker, 1980)

The term "Jews of Africa" recurs often in the literature. Henry Kissinger used it back in the 1960s:

The Ibos are the wandering Jews of West Africa — gifted, aggressive, Westernized; at best envied and resented, but mostly despised by the mass of their neighbors in the Federation. (Kissinger, 1969) 

To what degree is African success Igbo success? If we go back to Chanda's first article, we see that high African achievers are overwhelmingly "Nigerians" (Chisala, 2015a). This is evident in a chart that lists mean % difference from the mean English GCSE score in 2010-2011 by ethnicity:

Nigerian: +21.8
Ghanaian: +5.5
Sierra Leone: +1.4
Somali: -23.7
Congolese: -35.3

Clearly, high academic achievement is due to something that is very much present in Nigeria, a little bit in Ghana, and not at all in Somalia and Congo. Could this something be the Igbo? The Igbo make up 18% of Nigeria's population and form a large diaspora elsewhere in West Africa and farther afield. In fact, they seem to be disproportionately represented in overseas Nigerian communities, making up most of the Nigerian community in Japan and a large portion of China's Nigerian community (Wikipedia, 2015). Statistics are unfortunately lacking for the UK.

Conclusion

What happens when we remove Igbo students from the GCSE results? How well do the other Africans do? To some degree, Chanda answered that question in his first article. African excellence seems to be overwhelmingly Igbo excellence.

So why doesn't he speak of Igbo excellence? Probably because he assumes that all sub-Saharan Africans are fundamentally the same. Or maybe he assumes that all humans are fundamentally the same. Both assumptions are wrong, and neither can be construed as an "HBD position." 

We are all genetically different, even within our own families. So why the surprise that different African peoples are ... different? The Igbo have long specialized in a trading lifestyle that favors a certain mental toolkit: future time orientation, numeracy, and abstract reasoning. This is gene-culture coevolution. When circumstances push people to excel in a certain way, there will be selection for people who can naturally excel in that way, without the prodding of circumstances. And it doesn't take eons of time for such evolution to work.
 
Will we hear more about the Igbo in this debate? Probably not. There is a strong desire, especially in the United Kingdom, to show that blacks are converging toward white norms of behavior, including academic performance. There is indeed some convergence, but almost all of it can be traced to the growing numbers of high-performing "Nigerians" (Igbos) and the growing numbers of biracial children (the census now has a mixed-race category, but most biracial people still self-identify as "black"). In the UK, 55% of Black Caribbean men and 40% of Black Caribbean women have a partner from another ethnic background. It's very likely that half of all "black" children in the UK are at least half-white by ancestry (Platt, 2009, p. 7).

Nor is it likely that we'll hear more about the Igbo from Chanda. As he sees it, the debate should be over. The academic excellence of Igbo students proves that the black/white IQ gap in the U.S. cannot have a genetic basis:

[It is not] a function of global racial evolution (Sub-Saharan African genes versus European genes), as most hereditarians believe, especially those who identify with the Human Biodiversity or HBD intellectual movement (generally known as "scientific racism" in academic circles, but we are avoiding such unkind terms).

Thank you, Chanda, for avoiding unkind terms. Well, I know a bit about HBD. The term was coined by Steve Sailer in the late 1990s for an email discussion group that included myself and various academics who may or may not want their names disclosed. It's hard to generalize but we were all influenced by findings that genetic evolution didn’t slow down as cultural evolution speeded up in our species. In fact, the two seemed to feed into each other. This is why genetic evolution accelerated over 100-fold about 10,000 years ago when humans began to abandon hunting and gathering for farming, which in turn led to ever more diverse societies. Our ancestors thus adapted much more to their cultural environments than to their natural environments. These findings were already circulating within our discussion group before being written up in a paper by Hawks et al. (2007) and later in a book by Greg Cochran and Henry Harpending (2009).

Yes, previously it was thought that genetic evolution slowed to a crawl with the advent of culture. Therefore, groups like the Igbo couldn't possibly differ genetically from other sub-Saharan Africans, at least not for anything culture-related. But that kind of thinking wasn't HBD or even racialist. It was simply the old anthropological narrative, and it's still accepted by many anthropologists, most of whom aren't "scientific racists."

Oh sorry, I forgot we promised to avoid that term.

Note

(1) Of course, if the cultural pressure is maintained long enough, there may be selection for individuals who naturally produce the new phenotype—with no prodding and pushing. Let’s suppose that foot binding had never been outlawed in China. Through chance mutations, some Chinese women might be born with tiny feet, and their descendants would become more and more numerous because of their better life prospects. So what began as a new phenotype could end up becoming a new genotype. Culture pushes the limits of phenotypic plasticity, and then favors genotypes that don't have to be pushed. That's gene-culture coevolution.

References 

Baker, P.H. (1980). Lurching toward unity, The Wilson Quarterly, 4, 70-80
http://archive.wilsonquarterly.com/sites/default/files/articles/WQ_VOL4_W_1980_Article_01_2.pdf 

Chisala, C. (2015b). Closing the Black-White IQ gap debate. Part I, The Unz Review, October 5
http://www.unz.com/article/closing-the-black-white-iq-gap-debate-part-i/ 

Chisala, C. (2015a). The IQ gap is no longer a black and white issue, The Unz Review, June 25
http://www.unz.com/article/the-iq-gap-is-no-longer-a-black-and-white-issue/

Cochran, G. and H. Harpending. (2009). The 10,000 Year Explosion: How Civilizations Accelerated Human Evolution, Basic Books, New York. 

Frost, P. (2015). The Jews of West Africa? Evo and Proud, July 4
http://evoandproud.blogspot.ca/2015/07/the-jews-of-west-africa.html 

Frost, P. (2014). How universal is empathy? Evo and Proud, June 28
http://evoandproud.blogspot.ca/2014/06/how-universal-is-empathy.html 

Fuerst, J. (2014). Ethnic/race differences in aptitude by generation in the United States: An exploratory meta-analysis, June 29, Open Differential Psychology
http://openpsych.net/ODP/wp-content/uploads/2014/07/U.S.-Ethnic-Race-Differences-in-Aptitude-by-Generation-An-Exploratory-Meta-analysis-John-Fuerst-2014-07262014FINAL.pdf 

Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution, Proceedings of the National Academy of Sciences (USA), 104, 20753-20758.
http://harpending.humanevo.utah.edu/Documents/accel_pnas_submit.pdf 

Kissinger, H.A. (1969). Memorandum, January 28. U.S. Department of State Archive
http://2001-2009.state.gov/r/pa/ho/frus/nixon/e5/55258.htm 

Platt, L. (2009). Ethnicity and family. Relationships within and between ethnic groups: An analysis using the Labour Force Survey. Equality and Human Rights Commission.
http://www.equalityhumanrights.com/sites/default/files/documents/raceinbritain/ethnicity_and_family_report.pdf 

Wikipedia (2015). Igbo people
https://en.wikipedia.org/wiki/Igbo_people

The end of Indian summer


 
Antifas, Switzerland (Wikicommons). Today, antifas are becoming an extrajudicial police, just as human rights commissions are becoming a parallel justice system.

 

Until three years ago, Canada’s human rights commissions had the power to prosecute and convict individuals for "hate speech." This power was taken away after two high-profile cases: one against the magazine Maclean's for printing an excerpt from Mark Steyn's book America Alone; and the other against the journalist Ezra Levant for publishing Denmark’s satirical cartoons of the prophet Mohammed. Both cases were eventually dismissed, largely because the accused were well known and popular. As Mark Steyn observed:

[...] they didn't like the heat they were getting under this case. Life was chugging along just fine, chastising non-entities nobody had ever heard about, piling up a lot of cockamamie jurisprudence that inverts the principles of common law, and nobody paid any attention to it. Once they got the glare of publicity from the Maclean's case, the kangaroos decided to jump for the exit. I've grown tired of the number of Canadian members of Parliament who've said to me over the last best part of a year now, "Oh, well of course I fully support you, I'm fully behind you, but I'd just be grateful if you didn't mention my name in public.” (Brean, 2008)

Despite the dismissals, both cases had a chilling effect on Canadian journalism. Maclean's made this point in a news release:

Though gratified by the decision, Maclean's continues to assert that no human rights commission, whether at the federal or provincial level, has the mandate or the expertise to monitor, inquire into, or assess the editorial decisions of the nation's media. And we continue to have grave concerns about a system of complaint and adjudication that allows a media outlet to be pursued in multiple jurisdictions on the same complaint, brought by the same complainants, subjecting it to costs of hundreds of thousands of dollars, to say nothing of the inconvenience. (Maclean's, 2008)

This situation had come about gradually in Canada. At first, human rights commissions fought discrimination only in employment and housing, and there was strong resistance to prosecution of people simply for their ideas. This situation changed from the 1970s onward. Human rights took the place in society that formerly belonged to religion, and human rights advocates acquired the immunity from criticism that formerly belonged to the clergy. Discrimination was no longer wrong in certain cases and under certain circumstances. It became evil, and people who condoned it in any form and for any reason were likewise evil.

This view of reality progressively transformed human rights commissions. On the one hand, they were given an ever longer list of groups to protect. On the other, their scope of action grew larger, expanding to include not only the job and housing markets but also the marketplace of ideas. Their power increased until they became a parallel justice system, the key difference being that they denied the accused certain rights that had long existed in traditional courts of law, particularly the presumption of innocence and the right to know one’s accuser. All of this was made possible by section 13 of the Human Rights Act (1977):

Section 13 ostensibly banned hate speech on the Internet and left it up to the quasi-judicial human rights commission to determine what qualified as "hate speech." But, unlike a court, there was no presumption of innocence of those accused of hate speech by the commission. Instead, those accused had to prove their innocence. (Akin, 2013)

In 2012, the House of Commons repealed section 13. The ensuing three years brought a return to normal and a dissipation of the chill that had descended on Canadian journalists and writers.

Today, our Indian summer is coming to an end. In Alberta, the human rights commission is pushing to see how far it can go, and Ezra Levant is again being prosecuted:

This October I will be prosecuted for one charge of being "publicly discourteous or disrespectful to a Commissioner or Tribunal Chair of the Alberta Human Rights Commission" and two charges that my "public comments regarding the Alberta Human Rights Commission were inappropriate and unbecoming and that such conduct is deserving of sanction."

Because last year I wrote a newspaper editorial calling Alberta's human rights commission "crazy". (Levant, 2015)

Last month in Quebec, the government passed a bill that greatly expands the powers of its human rights commission to prosecute "hate."

Bill 59, introduced by Quebec Premier Philippe Couillard's Liberal government, would make it illegal to promote hate speech in Quebec, without defining what hate speech is. Despite this, it would expand the definition of hate speech to include "political convictions" for any speech deemed by Quebec's human rights bureaucracy to promote "fear of the other", an absurdly vague term which could easily lead to prosecutorial abuses.

Bill 59 would empower Quebec's human rights commission to investigate anonymous complaints, or to launch investigations on its own, without any complaint, culminating in charges before Quebec's Human Rights Tribunal. The tribunal would be able to impose fines of up to $10,000 for first offenders, $20,000 for repeat offenders. Those found to have violated the legislation would be named and shamed on a publicly accessible list of offenders, maintained by the government. (Editorial, 2015)

The new law also casts a wider net by defining two forms of complicity in hate speech, direct and indirect:

Engaging in or disseminating the types of speech described in section 1 is prohibited.

Acting in such a manner as to cause such types of speech to be engaged in or disseminated is also prohibited. (Gouvernement du Québec, 2015)

"Hate speech" is supposedly defined in section 1 of Bill 59, but this section merely repeats the same term:

The Act applies to hate speech and speech inciting violence that are engaged in or disseminated publicly and that target a group of people sharing a characteristic identified as prohibited grounds for discrimination under section 10 of the Charter of human rights and freedoms (chapter C-12). (Gouvernement du Québec, 2015)
 
In short, "hate speech" will be defined by the Quebec Human Rights Commission, the only limitation being that the speech must target a protected group.

How did this piece of legislation come to be? It had been sold to the public as a means to fight Islamist terrorism and, as such, gained the support of many people, including right-wing politicians who thought its "anti-hate" language was just window dressing to make it more palatable. In its final form, however, there are no references at all to Islamism or terrorism. As columnist Joanne Marcotte points out:

Nowhere in the bill is this goal mentioned. It doesn't seem that this is the intention of the Liberal Party, which is perhaps more concerned about a supposedly Islamophobic current of opinion than about the pressure that radical religious fundamentalists are exerting on our values of individual freedom.

Indeed, no mention of the following words appear in the bill: fundamentalism, fundamentalist, radicalism, radicalization, terrorism, religious (as in "religious fundamentalism").

So it isn't surprising that only two groups to date have supported the bill: The Canadian Muslim Forum and the Muslim Council of Montreal. (Marcotte, 2015)

As Joanne Marcotte notes ironically, this bill was pushed through by a center-right government that claims to believe in individual freedom. Even more ironically, the strongest support for the new law comes from the far left. A demonstration in Montreal against Bill 59 was broken up by a hundred antifas. The police were there but not one antifa was arrested (Kamel, 2015).

This is a growing trend in Western countries: a strange alliance between center-right regimes and far-left antifas. For all intents and purposes, the latter are becoming an extrajudicial police, just as human rights commissions are becoming a parallel justice system. 

Conclusion

After a brief lull, a new offensive has begun against "hate speech" in Canada. Quebec is leading the way with legislation that is not only punitive but also broadly worded. Hate speech is whatever the human rights commission considers to be hate speech.

Outside Quebec, existing laws are likewise being interpreted more punitively and more broadly, as seen in the prosecution of Ezra Levant for "disrespectful" speech. This trend may lead to new legislation in other provinces and perhaps at the federal level, especially if the Liberal Party takes power on October 19.

Although the Liberal Party of Canada is legally distinct from the Liberal Party of Quebec, the two work together and cater to the same clientele. The major difference is that the former defines itself as center-left and the latter as center-right. In practice, the difference is trivial, "left" and "right" referring more and more to the same ideology. Today, the left pushes for cultural globalism (multiculturalism, antiracism), while the right pushes for economic globalism (outsourcing to low-wage countries, insourcing of low-wage labor).

Quebec's Bill 59 may thus become a template for federal legislation. The Liberal leader, Justin Trudeau, has in fact promised to amend the Human Rights Act while not spelling out his plans, other than to say he will recognize transgendered individuals as a protected group.

So will I be packing my bags and going south of the border? No, I love my country too much and, frankly, I don’t envy Americans. The U.S. doesn’t have anti-hate laws because it doesn’t need them. Most Americans have fully internalized the antiracist ethos and can be counted on to be willing partners in their own dispossession.

The situation is different in Canada, especially in Quebec: the new ethos is more recent, has a weaker hold on people, and cannot be counted on “to do its job.” This is why we have legislation like Bill 59. It’s a sign of weakness, not of strength.

References 

Akin, D. (2013). Hate speech provision in Human Rights Act struck down, The Toronto Sun, June 26.
http://www.torontosun.com/2013/06/26/hate-speech-provision-in-human-rights-act-struck-down

Brean, J. (2008). Maclean's wins third round of hate fight, National Post, October 11 

Editorial (2015). Quebec's Bill 59 attacks free speech, The Toronto Sun, September 4
http://www.torontosun.com/2015/09/04/quebecs-bill-59-attacks-free-speech 

Gouvernement du Québec (2015). Bill no. 59: An Act to enact the Act to prevent and combat hate speech and speech inciting violence and to amend various legislative provisions to better protect individuals, Assemblée Nationale du Québec.
http://www.assnat.qc.ca/en/travaux-parlementaires/projets-loi/projet-loi-59-41-1.html

Kamel, Z. (2015). Blows exchanged between anti-Bill 59 and anti-fascist demos. No arrests made despite physical altercations, The Link, September 28
http://thelinknewspaper.ca/article/blows-exchanged-between-anti-bill-59-and-anti-fascist-demos 

Levant, E. (2015). I'm being prosecuted for calling human rights commissions "crazy," Stand with Ezra
http://www.standwithezra.ca/?utm_campaign=mrg_before_july&utm_medium=email&utm_source=standwithezra 

Maclean's. (2008). Maclean's responds to recent decision from the Canadian Human Rights Commission, June 26, News Release
http://archive.newswire.ca/fr/story/210185/maclean-s-responds-to-recent-decision-from-the-canadian-human-rights-commission 

Marcotte, J. (2015). Projet de loi 59: liberticide, dangereux, inutile, Le Huffington Post, September 21
http://quebec.huffingtonpost.ca/joanne-marcotte/projet-loi-59-liberticide-dangereux-propos-haineux-liberte-expression_b_8159532.html

Polygyny makes men bigger, tougher ... and meaner


 
Hadza men are smaller, less robust, and less aggressive than the more polygynous Datoga (Wikicommons - Idobi).

 

Humans differ in paternal investment—the degree to which fathers help mothers care for their offspring. They differ in this way between individuals, between populations, and between stages of cultural evolution.

During the earliest stage, when all humans were hunter-gatherers, men invested more in their offspring with increasing distance from the equator. Longer, colder winters made it harder for women to gather food for themselves and their children. They had to rely on meat from their hunting spouses. Conversely, paternal investment was lower in the tropics, where women could gather food year-round and provide for themselves and their children with little male assistance.

This sexual division of labor influenced the transition to farming. In the tropics, women were the main providers for their families as gatherers of fruits, berries, roots, and other wild plant foods. They were the ones who developed farming, thereby biasing it toward domestication of wild plants.

This may be seen in sub-Saharan Africa, where farming arose near the Niger's headwaters and gave rise to the Sudanic food complex—a wide range of native crops now found throughout the continent (sorghum, pearl millet, cow pea, etc.) and only one form of livestock, the guinea fowl (Murdock, 1959, pp. 44, 64-68). Many wild animal species could have been domesticated for meat production, but women were much less familiar with them. Men knew these species as hunters but had little motivation to domesticate them. Why should they? Women were the main providers. 

And so women shouldered even more of the burden of providing for themselves and their offspring. Men, in turn, found it easier to go back on the mate market and take second or third wives. As a result, men had to compete much more against each other for fewer unmated women.
 
There was thus a causal chain: female dominance of farming => female reproductive autonomy => male polygyny => male-male rivalry for access to women. Jack Goody (1973) in his review of the literature says: "The desire of men to attract wives is seen as correlated with the degree of women's participation in the basic productive process." The more women produce, the lower the cost of polygyny.

In sub-Saharan Africa, the cost was often negative. Goody quotes a 17th century traveler on the Gold Coast: the women till the ground "whilst the man only idly spends his time in impertinent tattling (the woman's business in our country) and drinking of palm-wine, which the poor wives are frequently obliged to raise money to pay for, and by their hard labour maintain and satisfie these lazy wretches their greedy thirst after wines."

Goody cites data from southern Africa showing that the polygyny rate fell when the cost of polygyny rose:

In Basutoland one in nine husbands had more than one wife in 1936; in 1912, it was one in 5.5 (Mair 1953: 10). Hunter calculates that in 1911 12 per cent of Pondo men were plurally married and the figure was slightly lower in 1921. In 1946, the Tswana rate was 11 per cent; according to a small sample collected by Livingstone in 1850 it was 43 per cent. The figures appear to have changed drastically over time and the reasons are interesting. 'The large household is now not a source of wealth, but a burden which only the rich can bear' (Mair 1953: 19). Not only is there a specific tax for each additional wife, but a man's wives now no longer give the same help in agriculture that they did before. One reason for this is that the fields are ploughed rather than hoed. Among the Pondo, 'the use of the plough means that the amount of grain cultivated no longer depends on women's labour' (Goody, 1973)

Although polygynous marriage has become less common in southern Africa, polygynous behavior seems as frequent as ever. To a large degree, polygynous marriage has given way to more transient forms of polygyny: prostitution and other informal arrangements. Goody also notes that polygyny rates have remained high in the Sahel, even though pastoralism there has increased male participation in food production. He gives the example of Ghana: polygyny rates are about the same in the north and the south, yet in the north men participate much more in farming.

So what is going on? Goody concludes that "female farming and polygyny are clearly associated in a general way" but ultimately the "reasons behind polygyny are sexual and reproductive rather than economic and productive." It would be more parsimonious to say that the polygyny rate increases when the cost of providing for a woman and her children decreases for men. Over time, low-cost polygyny selects for men who are more motivated to exploit sexual opportunities. This new mindset influences the subsequent course of gene-culture coevolution.

Such gene-culture coevolution has gone through four stages in the evolutionary history of sub-Saharan Africans:

First stage

Tropical hunter-gatherers were already oriented toward low paternal investment. Men had a lesser role in child rearing because year-round food gathering provided women with a high degree of food autonomy. Women were thus selected for self-reliance and men for polygyny. Pair bonding was correspondingly weak in both sexes.

Second stage

This mindset guided tropical hunter-gatherers in their transition to farming. In short, female-dominated food gathering gave way to female-dominated horticulture—hoe farming of various crops with almost no livestock raising. Women became even more autonomous, and men even more polygynous. There was thus further selection for a mindset of female self-reliance, male polygyny, and weak pair bonding.

Third stage

A similar process occurred with the development of trade. Female-dominated horticulture tended to orient women, much more than men, toward the market economy. This has particularly been so in West Africa, where markets are overwhelmingly run by women. Trade has thus become another means by which African women provide for themselves and their children.

Fourth stage

Female-dominated horticulture has given way to male-dominated pastoralism in some regions, such as the Sahel. Despite higher male participation in farming, the pre-existing mindset has tended to maintain high polygyny rates. We see a similar tendency in southern Africa, where polygyny rates have fallen over the past century, and yet polygynous behavior persists in the form of prostitution and less formal sexual arrangements.

The Hadza and the Datoga

Mode of subsistence, mating system, and mindset are thus interrelated. These interrelationships are discussed by Butovskaya et al. (2015) in their study of two peoples in Tanzania: the largely monogamous Hadza (hunter-gatherers) and the highly polygynous Datoga (pastoralists). In their review of previous studies, the authors note:

In hunter-gatherer societies, such as the monogamous Hadza of Tanzania (Africa), men invest more in offspring than in small-scale pastoralist societies, such as the polygynous Datoga of Tanzania [12-14]. Polygyny and between-group aggression redirect men's efforts from childcare toward investment in male-male relationships and the pursuit of additional mates [15]. When men participate in childcare, their testosterone (T) level decreases [15-18]. Muller et al. [19] found that, among the monogamous, high paternally investing Hadza, T levels were lower for fathers than for non-fathers. This effect was not observed among the polygynous, low paternally investing Datoga. (Butovskaya et al., 2015).

Butovskaya et al. (2015) confirmed these previous findings in their own study:

Datoga males reported greater aggression than Hadza men—a finding in line with previous reports [29,30]. It is important to mention several striking differences between these two cultures. There is a negative attitude toward aggression among the Hadza but not among the Datoga. In situations of potential aggression, the Hadza prefer to leave [30]. In contrast, aggression is an instrument of social control—both within the family and in outgroup relations in Datoga society. Datoga men are trained to compete with each other and to act aggressively in particular circumstances [30].

The authors also confirmed differences in reproductive behavior between the two groups: 

Our research indicates a difference in the number of children in Hadza and Datoga men achieved after the age of 50. This may be interpreted as differences attributable to different life trajectories and marriage patterns. Beginning in early childhood, boys in the two societies are subjected to different social and environmental pressures (e.g., it is typical for Datoga parents to punish children for misbehavior, while parental violence is much less typical for Hadza parents). Hadza men start reproducing in the early 20s, but their reproductive success later in life is associated with their hunting skills [15]. In the Datoga, men marry later, typically in their 30s. Male status and, consequently, social and reproductive success in the Datoga are positively correlated with fighting abilities and risk-taking in raiding expeditions among younger men, and with wealth, dominance, and social skills among older men. In the Datoga, as in other patrilineal societies, fathers do not invest directly in child care, but children do benefit from their father's investment in the form of wealth and social protection, as well as various services provided by father's patrilineal male relatives [56]. In polygynous societies, spending resources on attracting additional wives may be more beneficial [40,57,58]. It would be difficult for some men to invest directly in providing for all their children, given that men with multiple wives can father a considerable number of children, and that households with wives may be located at substantial distance from one another.

This behavioral difference seems to be mediated by differing levels of androgens, such as testosterone:

The effect of androgens, such as T, operates through stimulation of androgen receptors [21-23]. The androgen receptor (AR) gene contains a polymorphic and functional locus in exon 1, comprising two triplets (CAG and GGN). This locus supports a regulatory function that responds to T, with fewer CAG repeat clusters being more effective in transmitting the T signal [22]. Moreover, the length of the GGN repeat predicts circulating and free T in men.

At the androgen receptor gene, the authors found fewer CAG repeats in the Datoga than in the Hadza. The number of repeats was also more variable in the Datoga. The Datoga's higher and more variable polygyny rates thus seem to correlate with higher and more variable levels of testosterone.

The authors also wished to see whether these differing levels of testosterone correlate with differing levels of aggressiveness. To this end, they interviewed the Hadza and Datoga participants:

They were asked to provide information including their age, sex, marital status, number of children, ethnicity and aggression history (especially fights with other tribal members). All questions were read aloud in one-to-one dialogues and further explanations were provided, if necessary. Self-reported aggression was assessed with the Buss-Perry Aggression Questionnaire (BPAQ; [48]). The BPAQ includes 29 statements, grouped into four subscales—physical aggression (9 items), verbal aggression (5 items), anger (7 items), and hostility (8 items)—answered on a Likert scale anchored by 1 (extremely uncharacteristic of me) and 5 (extremely characteristic of me).
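The arithmetic of BPAQ scoring is simple: each subscale score is the sum of its items, and total aggression is the sum of all 29 responses. A minimal sketch, in Python, of that tallying as described above; the item-to-subscale ordering used here is illustrative, not the actual BPAQ answer key:

```python
# Sketch of BPAQ score tallying: 29 Likert items (1-5) summed into
# four subscales plus a total. Item ordering below is illustrative;
# the real item-to-subscale key is given in the BPAQ itself.

SUBSCALES = {
    "physical_aggression": 9,
    "verbal_aggression": 5,
    "anger": 7,
    "hostility": 8,
}

def score_bpaq(responses):
    """Sum Likert responses (1-5) into subscale scores and a total."""
    if len(responses) != 29:
        raise ValueError("BPAQ has 29 items")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 Likert scale")
    scores, i = {}, 0
    for name, n_items in SUBSCALES.items():
        scores[name] = sum(responses[i:i + n_items])
        i += n_items
    scores["total"] = sum(responses)
    return scores

# Example: a respondent answering 3 ("neither") on every item
print(score_bpaq([3] * 29))
# {'physical_aggression': 27, 'verbal_aggression': 15, 'anger': 21, 'hostility': 24, 'total': 87}
```

The "total aggression" figure that the authors correlate with CAG repeat number is the last value, the sum over all four subscales.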

Total aggression was found to correlate negatively with CAG repeat number. Age group did not predict aggression.

More polygyny = stronger sexual selection of men

Finally, the authors suggest that Datoga men, with their higher polygyny rate and fiercer competition for access to women, have undergone greater sexual selection. They have thus become bigger and more masculine than Hadza men. Although this selection pressure also exists among the Hadza, the driving force of sexual selection has been weaker because Hadza men are more monogamous and less sexually competitive:

Our findings are in concordance with other research, demonstrating that even among the relatively egalitarian Hadza there is selection pressure in favor of more masculine men [59-62]. At the same time, preference for more masculine partners, with greater height and body size, is culturally variable and influenced by the degree of polygyny, local ecology, and other economic and social factors [59-62]. Many Datoga women commented that they would like to avoid taller and larger men as marriage partners, as they may be dangerously violent [44,62]. Only 2% of Hadza women listed large body size as an attractive mate characteristic [63]. Hadza marriages in which the wife is taller than the husband are common, and as frequent as would be expected by chance [64]. (Butovskaya et al., 2015)

This is consistent with what we see in nonhuman polygynous species. Successful males tend to be the ones that are better not only at attracting the opposite sex but also at fighting off rivals. They thus become bigger, tougher, and meaner.

This is also consistent with what we see generally in the highly polygynous farming peoples of sub-Saharan Africa. They and their African-American descendants exceed European-descended subjects in weight, chest size, arm girth, leg girth, muscle fiber properties, and bone density (Ama et al., 1986; Ettinger et al., 1997; Himes, 1988; Hui et al., 2003; Pollitzer and Anderson, 1989; Todd and Lindala, 1928; Wagner and Heyward, 2000; Wolff and Steggerda, 1943; Wright et al., 1995).

References 

Ama, P.F.M., J.A. Simoneau, M.R. Boulay, O. Serresse, G. Thériault, and C. Bouchard. (1986). Skeletal muscle characteristics in sedentary Black and Caucasian males, Journal of Applied Physiology, 61, 1758-1761.
http://www.educadorfisicoadinis.com.br/download/artigos/Skeletal%20muscle%20characteristics%20in%20sedentary%20black%20and%20Caucasian%20males.pdf 

Butovskaya M.L., O.E. Lazebny, V.A. Vasilyev, D.A. Dronova, D.V. Karelin, A.Z.P. Mabulla, et al. (2015). Androgen receptor gene polymorphism, aggression, and reproduction in Tanzanian foragers and pastoralists. PLoS ONE 10(8): e0136208. 
https://www.researchgate.net/publication/281170838_Androgen_Receptor_Gene_Polymorphism_Aggression_and_Reproduction_in_Tanzanian_Foragers_and_Pastoralists 

Ettinger, B., S. Sidney, S.R. Cummings, C. Libanati, D.D. Bikle, I.S. Tekawa, K. Tolan, and P. Steiger. (1997). Racial differences in bone density between young adult black and white subjects persist after adjustment for anthropometric, lifestyle, and biochemical differences, Journal of Clinical Endocrinology & Metabolism, 82, 429-434.
http://press.endocrine.org/doi/abs/10.1210/jcem.82.2.3732 

Goody, J. (1973). Polygyny, economy and the role of women, in J. Goody (ed.) The Character of Kinship, Cambridge: Cambridge University Press.
https://books.google.ca/books?hl=fr&lr=&id=TFjf4mUHqv4C&oi=fnd&pg=PA175&dq=Polygyny,+economy+and+the+role+of+women&ots=Lf9w6wxY9D&sig=3d0RBnoMbGUd2OYqghzhwdsO3BA#v=onepage&q&f=false

Himes, J. H. (1988). Racial variation in physique and body composition, Canadian Journal of Sport Sciences, 13, 117-126.
http://europepmc.org/abstract/med/3293730 

Hui, S.L., L.A. DiMeglio, C. Longcope, M. Peacock, R. McClintock, A.J. Perkins, and C.C. Johnston Jr. (2003). Difference in bone mass between Black and White American children: Attributable to body build, sex hormone levels, or bone turnover? Journal of Clinical Endocrinology & Metabolism, 88, 642-649. 
https://www.researchgate.net/profile/Conrad_Johnston/publication/10911942_Difference_in_bone_mass_between_black_and_white_American_children_attributable_to_body_build_sex_hormone_levels_or_bone_turnover/links/548f36af0cf225bf66a7fdc1.pdf

Murdock, G.P. (1959). Africa. Its Peoples and Their Culture History, New York: McGraw-Hill. 

Pollitzer, W.S. and J.J. Anderson. (1989). Ethnic and genetic differences in bone mass: a review with a hereditary vs environmental perspective, American Journal of Clinical Nutrition, 50, 1244-1259.
http://ajcn.nutrition.org/content/50/6/1244.short 

Todd, T.W., and A. Lindala. (1928). Dimensions of the body: Whites and American Negroes of both sexes, American Journal of Physical Anthropology, 12, 35-101.
http://onlinelibrary.wiley.com/doi/10.1002/ajpa.1330120104/abstract 

Wagner, D.R. and V.H. Heyward. (2000). Measures of body composition in blacks and whites: a comparative review, American Journal of Clinical Nutrition, 71, 1392-1402.
http://ajcn.nutrition.org/content/71/6/1392.short

Wolff, G. and M. Steggerda. (1943). Female-male index of body build in Negroes and Whites: An interpretation of anatomical sex differences, Human Biology, 15, 127-152.

Wright, N.M., J. Renault, S. Willi, J.D. Veldhuis, J.P. Pandey, L. Gordon, L.L. Key, and N.H. Bell. (1995). Greater secretion of growth hormone in black than in white men: possible factor in greater bone mineral density-a clinical research center study, Journal of Clinical Endocrinology & Metabolism, 80, 2291-2297.
http://press.endocrine.org/doi/abs/10.1210/jcem.80.8.7543111 