Your Book Review: How Language Began

[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]

I. THE GOD

You may have heard of a field known as "linguistics". Linguistics is supposedly the "scientific study of language", but this is completely wrong. To borrow a phrase from elsewhere, linguists are those who believe Noam Chomsky is the rightful caliph. Linguistics is what linguists study.

I'm only half-joking, because Chomsky’s impact on the study of language is hard to overstate. Consider the number of times his books and papers have been cited, a crude measure of influence that we can use to get a sense of this. At the current time, his Google Scholar page says he's been cited over 500,000 times. That’s a lot. 

It isn’t atypical for a hard-working professor at a top-ranked institution to, after a career’s worth of work and many people helping them do research and write papers, have maybe 20,000 citations (= 0.04 Chomskys). Generational talents do better, but usually not by more than a factor of 5 or so. Consider a few more citation counts:

Yes, fields vary in ways that make these comparisons not necessarily fair: fields have different numbers of people, citation practices vary, and so on. There is also probably a considerable recency bias; for example, most biologists don’t cite Darwin every time they write a paper whose content relates to evolution. But 500,000 is still a mind-bogglingly huge number. 

Not many academics do better than Chomsky citation-wise. But there are a few, and you can probably guess why:

  • Human-Genome-Project-associated scientist Eric Lander (685,000 = 1.37 Chomskys)

…well, okay, maybe I don’t entirely get Foucault’s number. Every humanities person must have an altar of him by their bedside or something.    

Chomsky has been called “arguably the most important intellectual alive today” in a New York Times review of one of his books, and was voted the world’s top public intellectual in a 2005 poll. He’s the kind of guy that gets long and gushing introductions before his talks (this one is nearly twenty minutes long). All of this is just to say: he’s kind of a big deal.

This is what he looks like. According to Wikipedia, the context for this picture is: 

“Noam Chomsky speaks about humanity's prospects for survival”

Since around 1957, Chomsky has dominated linguistics. And this matters because he is kind of a contrarian with weird ideas. 

Is language for communicating? No, it’s mainly for thinking: (What Kind of Creatures Are We? Ch. 1, pg. 15-16)

It is, indeed, virtual dogma that the function of language is communication. ... there is by now quite significant evidence that it is simply false. Doubtless language is sometimes used for communication, as is style of dress, facial expression and stance, and much else. But fundamental properties of language design indicate that a rich tradition is correct in regarding language as essentially an instrument of thought, even if we do not go as far as Humboldt in identifying the two. 

Should linguists care about the interaction between culture and language? No, that’s essentially stamp-collecting: (Language and Responsibility, Ch. 2, pg. 56-57)

Again, a discipline is defined in terms of its object and its results. Sociology is the study of society. As to its results, it seems that there are few things one can say about that, at least at a fairly general level. One finds observations, intuitions, impressions, some valid generalizations perhaps. All very valuable, no doubt, but not at the level of explanatory principles. … Sociolinguistics is, I suppose, a discipline that seeks to apply principles of sociology to the study of language; but I suspect that it can draw little from sociology, and I wonder whether it is likely to contribute much to it. … You can also collect butterflies and make many observations. If you like butterflies, that’s fine; but such work must not be confounded with research, which is concerned to discover explanatory principles of some depth and fails if it has not done so.

Did the human capacity for language evolve gradually? No, it suddenly appeared around 50,000 years ago after a freak gene mutation: (Language and Mind, third edition, pg. 183-184)

An elementary fact about the language faculty is that it is a system of discrete infinity, rare in the organic world. Any such system is based on a primitive operation that takes objects already constructed, and constructs from them a new object: in the simplest case, the set containing them. Call that operation Merge. Either Merge or some equivalent is a minimal requirement. With Merge available, we instantly have an unbounded system of hierarchically structured expressions. 

The simplest account of the “Great Leap Forward” in the evolution of humans would be that the brain was rewired, perhaps by some slight mutation, to provide the operation Merge … There are speculations about the evolution of language that postulate a far more complex process … A more parsimonious speculation is that they did not, and that the Great Leap was effectively instantaneous, in a single individual, who was instantly endowed with intellectual capacities far superior to those of others, transmitted to offspring and coming to predominate. At best a reasonable guess, as are all speculations about such matters, but about the simplest one imaginable, and not inconsistent with anything known or plausibly surmised. It is hard to see what account of human evolution would not assume at least this much, in one or another form.

I think all of these positions are kind of insane for reasons that we will discuss later. (Side note: Chomsky’s proposal is essentially the hard takeoff theory of human intelligence.)  

Most consequential of all, perhaps, are the ways Chomsky has influenced (i) what linguists mainly study, and (ii) how they go about studying it. 

Naively, since language involves many different components—including sound production and comprehension, intonation, gestures, and context, among many others—linguists might want to study all of them. And they do. But Chomsky and his followers view grammar as by far the most important component of humans’ ability to understand and produce language, and accordingly make it their central focus. Roughly speaking, grammar refers to the set of language-specific rules that determine whether a sentence is well-formed. It goes beyond specifying word order (or ‘surface structure’, in Chomskyan terminology), since one needs to know more than just where words are placed in order to modify or extend a given sentence.

Consider a pair of sentences Chomsky uses to illustrate this point in Aspects of the Theory of Syntax (pg. 22), his most cited work:

      (1a) I expected John to be examined by a specialist.

      (2a) I persuaded John to be examined by a specialist.

The words “expected” and “persuaded” appear in the same location in each sentence, but imply different ‘latent’ grammatical structures, or ‘deep structures’. One way to show this is to observe that a particular way of rearranging the words produces a sentence with the same meaning in the first case (1a = 1b), and a different meaning in the second (2a != 2b):

      (1b) I expected a specialist to examine John.

      (2b) I persuaded a specialist to examine John.

In particular, the target of persuasion is “John” in the case of (2a), and “the specialist” in the case of (2b). A full Chomskyan treatment of sentences like this would involve hierarchical tree diagrams, which permit a precise description of deep structure.
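
To make the contrast concrete, here is a rough, non-authoritative sketch in Python (my own bracketing, not Chomsky's tree notation) of the two different structures: in (1a), "John" belongs to the embedded clause that "expected" takes as its complement, while in (2a), "John" is a direct object of "persuaded", with the embedded clause attached separately.

```python
# A rough sketch of the two 'deep structures' as nested bracketings.
# The bracketing is illustrative only, not Chomsky's actual notation.

# (1a) "expected" takes a whole clause as its complement;
#      "John" is the subject of that embedded clause.
expected = ["I", ["expected",
                  [["John"], ["to be examined by a specialist"]]]]

# (2a) "persuaded" takes "John" as its object, plus a separate
#      embedded clause whose understood subject is John.
persuaded = ["I", [["persuaded", "John"],
                   ["to be examined by a specialist"]]]

print(expected)
print(persuaded)
```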

You may have encountered the famous sentence: “Colorless green ideas sleep furiously.” It first appeared in Chomsky’s 1957 book Syntactic Structures, and the point is that even nonsense sentences can be grammatically well-formed, and that speakers can quickly assess the grammatical correctness of even nonsense sentences that they’ve never seen before. To Chomsky, this is one of the most important facts to be explained about language.

A naive response to Chomsky’s preoccupation with grammar is: doesn’t real language involve a lot of non-grammatical stuff, like stuttering and slips of the tongue and midstream changes of mind? Of course it does, and Chomsky acknowledges this. To address this point, Chomsky has to move the goalposts in two important ways. 

First, he famously distinguishes competence from performance, and identifies the former as the subject of any serious theory of language: (Aspects of the Theory of Syntax, Ch. 1, pg. 4)

The problem for the linguist, as well as for the child learning the language, is to determine from the data of performance the underlying system of rules that has been mastered by the speaker-hearer and that he puts to use in actual performance. Hence, in the technical sense, linguistic theory is mentalistic, since it is concerned with discovering a mental reality underlying actual behavior. Observed use of language or hypothesized dispositions to respond, habits, and so on, may provide evidence as to the nature of this mental reality, but surely cannot constitute the actual subject matter of linguistics, if this is to be a serious discipline. 

Moreover, he claims that grammar captures most of what we should mean when we talk about speakers’ linguistic competence: (Aspects of the Theory of Syntax, Ch. 1, pg. 24)

A grammar can be regarded as a theory of a language; it is descriptively adequate to the extent that it correctly describes the intrinsic competence of the idealized native speaker. 

Another way Chomsky moves the goalposts is by distinguishing E-languages, like English and Spanish and Japanese, from I-languages, which only exist inside human minds. He claims that serious linguistics should be primarily interested in the latter. In a semi-technical book summarizing Chomsky’s theory of language, Cook and Newson write: (Chomsky’s Universal Grammar: An Introduction, pg. 13)

E-language linguistics … aims to collect samples of language and then describe their properties. … I-language linguistics, however, is concerned with what a speaker knows about language and where this knowledge comes from; it treats language as an internal property of the human mind rather than something external … 

Not only should linguistics primarily be interested in studying I-languages, but to try and study E-languages at all may be a fool’s errand:  (Chomsky’s Universal Grammar: An Introduction, pg. 13)

Chomsky claims that the history of generative linguistics shows a shift from an E-language to an I-language approach; ‘the shift of focus from the dubious concept of E-language to the significant notion of I-language was a crucial step in early generative grammar’ (Chomsky, 1991b, pg. 10). … Indeed Chomsky is extremely dismissive of E-language approaches: ‘E-language, if it exists at all, is derivative, remote from mechanisms and of no particular empirical significance, perhaps none at all’ (Chomsky, 1991b, pg. 10).1

I Am Not A Linguist (IANAL), but this redefinition of the primary concern of linguistics seems crazy to me. Is studying a language like English as it is actually used really of no particular empirical significance? 

And this doesn’t seem to be a one-time hyperbole, but a representative claim. Cook and Newson continue: (Chomsky’s Universal Grammar: An Introduction, pg. 14)

The opposition between these two approaches in linguistics has been long and acrimonious, neither side conceding the other’s reality. … The E-linguist despises the I-linguist for not looking at the ‘real’ facts; the I-linguist derides the E-linguist for looking at trivia. The I-language versus E-language distinction is as much a difference of research methods and of admissible evidence as it is of long-term goals.

So much for what linguists ought to study. How should they study it? 

The previous quote gives us a clue. Especially in the era before Chomsky (BC), linguists were more interested in description. Linguists were, at least in one view, people who could be dropped anywhere in the world, and emerge with a tentative grammar of the local language six months later. (A notion like this is mentioned early in this video.) Linguists catalog the myriad of strange details about human languages, like the fact that some languages don’t appear to have words for relative directions, or “thank you”, or “yes” and “no”.

After Chomsky's domination of the field (AD), there were a lot more theorists. While you could study language by going out into the field and collecting data, this was viewed as not the only, and maybe not even the most important, way to work. Diagrams of sentences proliferated. Chomsky, arguably the most influential linguist of the past hundred years, has never done fieldwork. 

In summary, to Chomsky and many of the linguists working in his tradition, the scientifically interesting component of language is grammar competence, and real linguistic data only indirectly reflects it. 

All of this matters because the dominance of Chomskyan linguistics has had downstream effects in adjacent fields like artificial intelligence (AI), evolutionary biology, and neuroscience. Chomsky has long been an opponent of the statistical learning tradition of language modeling, essentially claiming that it does not provide insight about what humans know about languages, and that engineering success probably can’t be achieved without explicitly incorporating important mathematical facts about the underlying structure of language. Chomsky’s ideas have motivated researchers to look for a “language gene” and “language areas” of the brain. Arguably, no one has yet found either—but more on that later. 

How Chomsky attained this stranglehold on linguistics is an interesting sociological question, but not our main concern in the present work2. The intent here is not to pooh-pooh Chomsky, either; brilliant and hard-working people are often wrong on important questions. Consider that his academic career began in the early 1950s—over 70 years ago!—when our understanding of language, anthropology, biology, neuroscience, and artificial intelligence, among many other things, was substantially more rudimentary. 

Where are we going with this? All of this is context for understanding the ideas of a certain bomb-throwing terrorist blight on the face of linguistics: Daniel Everett. How Language Began is a book he wrote about, well, what language is and how it began. Everett is the anti-Chomsky.

II. THE MISSIONARY

We all love classic boy-meets-girl stories. Here’s one: boy meets girl at a rock concert, they fall in love, the boy converts to Christianity for the girl, then the boy and girl move to the Amazon jungle to dedicate the rest of their lives to saving the souls of an isolated hunter-gatherer tribe. 

Daniel Everett is the boy in this story. The woman he married, Keren Graham, is the daughter of Christian missionaries and had formative experiences living in the Amazon jungle among the Sateré-Mawé people. At seventeen, Everett became a born-again Christian; at eighteen, he and Keren married; and over the next few years, they started a family and prepared to become full-fledged missionaries like Keren’s parents.

First, Everett studied “Bible and Foreign Missions” at the Moody Bible Institute in Chicago. After finishing his degree in 1975, the natural next step was to train more specifically to follow in the footsteps of Keren’s parents. In 1976, he and his wife enrolled in the Summer Institute of Linguistics (SIL) to learn translation techniques and more viscerally prepare for life in the jungle: 

They were sent to Chiapas, Mexico, where Keren stayed in a hut in the jungle with the couple’s children—by this time, there were three—while Everett underwent grueling field training. He endured fifty-mile hikes and survived for several days deep in the jungle with only matches, water, a rope, a machete, and a flashlight.

Everett apparently had a gift for language-learning. This led SIL to invite Everett and his wife to work with the Pirahã people (pronounced pee-da-HAN), whose unusual language had thwarted all previous attempts to learn it. In 1977, Everett’s family moved to Brazil, and in December they met the Pirahã for the first time. As an SIL-affiliated missionary, Everett’s explicit goals were to (i) translate the Bible into Pirahã, and (ii) convert as many Pirahã as possible to Christianity. 

But Everett’s first encounter with the Pirahã was cut short for political reasons: (Don’t Sleep There Are Snakes, Ch. 1, pg. 13-14)

In December of 1977 the Brazilian government ordered all missionaries to leave Indian reservations. … Leaving the village under these forced circumstances made me wonder whether I’d ever be able to return. The Summer Institute of Linguistics was concerned too and wanted to find a way around the government’s prohibition against missionaries. So SIL asked me to apply to the graduate linguistics program at the State University of Campinas (UNICAMP), in the state of São Paulo, Brazil. It was hoped that UNICAMP would be able to secure government authorization for me to visit the Pirahãs for a prolonged period, in spite of the general ban against missionaries. … My work at UNICAMP paid off as SIL hoped it would.

Everett became a linguist proper sort of by accident, mostly as an excuse to continue his missionary work. But he ended up developing a passion for it. In 1980, he completed Aspects of the Phonology of Pirahã, his master’s thesis. He continued on to get a PhD in linguistics, also from UNICAMP, and in 1983 finished The Pirahã Language and Theory of Syntax, his dissertation. He continued studying the Pirahã and working as an academic linguist after that. In all, Everett spent around ten years of his life living with the Pirahã, spread out over some thirty-odd years. As he notes in Don’t Sleep, There Are Snakes: (Prologue, pg. xvii-xviii)

I went to the Pirahãs when I was twenty-six years old. Now I am old enough to receive senior discounts. I gave them my youth. I have contracted malaria many times. I remember several occasions on which the Pirahãs or others threatened my life. I have carried more heavy boxes, bags, and barrels on my back through the jungle than I care to remember. But my grandchildren all know the Pirahãs. My children are who they are in part because of the Pirahãs. And I can look at some of those old men (old like me) who once threatened to kill me and recognize some of the dearest friends I have ever had—men who would now risk their lives for me. 

Everett interviewing some Pirahã people. (source)

Everett did eventually learn their language, and it’s worth taking a step back to appreciate just how hard that task was. No Pirahã spoke Portuguese, apart from some isolated phrases they used for bartering. They didn’t speak any other language at all—just Pirahã. How do you learn another group’s language when you have no languages in common? The technical term is monolingual fieldwork. But this is just a fancy label for some combination of pointing at things, listening, crude imitation, and obsessively transcribing whatever you hear. For years.

It doesn’t help that the Pirahã language seems genuinely hard to learn in a few different senses. First, it is probably conventionally difficult for Westerners to learn since it is a tonal language (two tones: high and low) with a small number of phonemes (building block sounds) and a few unusual sounds3. Second, there is no written language. Third, the language has a variety of ‘channels of discourse’, or ways of talking specialized for one or another cultural context. One of these is ‘whistle speech’; Pirahãs can communicate purely in whistles. This feature appears to be extremely useful during hunting trips: (Don’t Sleep, There Are Snakes, Ch. 11, pg. 187-188)

My first intense contact with whistle speech came one day when the Pirahãs had given me permission to go hunting with them. After we’d been walking for about an hour, they decided that they weren’t seeing any game because I, with my clunking canteens and machete and congenital clumsiness, was making too much noise. “You stay here and we will be back for you later.” Xaikáibaí said gently but firmly. …

As I tried to make the best of my solitary confinement, I heard the men whistling to one another. They were saying, “I’ll go over there; you go that way,” and other such hunting talk. But clearly they were communicating. It was fascinating because it sounded so different from anything I had heard before. The whistle carried long and clear in the jungle. I could immediately see the importance and usefulness of this channel, which I guessed would also be much less likely to scare away game than the lower frequencies of the men’s normal voices. 

Fourth, important aspects of the language reflect core tenets of Pirahã culture in ways that one might not a priori expect. Everett writes extensively about the ‘immediacy of experience principle’ of Pirahã culture, which he summarizes as the idea that: (Don’t Sleep, There Are Snakes, Ch. 7, pg. 132)

Declarative Pirahã utterances contain only assertions related directly to the moment of speech, either experienced by the speaker or witnessed by someone alive during the lifetime of the speaker.

One way the language reflects this is that the speaker must specify how they know something by affixing an appropriate suffix to verbs: (Don’t Sleep, There Are Snakes, Ch. 12, pg. 196)

Perhaps the most interesting suffixes, however (though these are not unique to Pirahã), are what linguists call evidentials, elements that represent the speaker’s evaluation of his or her knowledge of what he or she is saying. There are three of these in Pirahã: hearsay, observation, and deduction.

To see what these do, let’s use an English example. If I ask you, “Did Joe go fishing?” you could answer, “Yes, at least I heard that he did,” or “Yes, I know because I saw him leave,” or “Yes, at least I suppose he did because his boat is gone.” The difference between English and Pirahã is that what English does with a sentence, Pirahã does with a verbal suffix. 

Everett also convincingly links this cultural principle to the lack of Pirahã number words and creation myths. On the latter topic, Everett recalls the following exchange: (Don’t Sleep, There Are Snakes, Ch. 7, pg. 134)

I sat with Kóhoi once and he asked me, after hearing about my god, “What else does your god do?” And I answered, “Well, he made the stars, and he made the earth.” Then I asked, “What do the Pirahãs say?” He answered, “Well, the Pirahãs say that these things were not made.”

And all of this is to say nothing of the manifold perils of the jungle: malaria, typhoid fever, dysentery, dangerous snakes, insects, morally gray river traders, and periodic downpours. If Indiana Jones braved these conditions for years, we would consider his stories rousing adventures. Everett did this while also learning one of the most unusual languages in the world.

People on the bank of the Maici river. (source)

By the way, he did eventually sort of achieve his goal of translating the Bible. Armed with a solid knowledge of Pirahã, he was able to translate the New Testament’s Gospel of Mark. Since the Pirahã have no written language, he provided them with a recorded version, but did not get the reaction he expected: (Don’t Sleep, There Are Snakes, Ch. 17, pg. 267-268)

When we returned to the village, I recorded Mark’s gospel in my own voice for the Pirahãs to listen to. I then brought in a wind-up tape recorder to play the recording, and I taught the Pirahãs how to use it, which, surprisingly enough, some of the children did. Keren and I left the village and returned a few weeks later. The people were still listening to the gospel, with children cranking the recorder. I was initially quite excited about this, until it became clear that the only part of the book that they paid attention to was the beheading of John the Baptist. “Wow, they cut off his head. Play that again!”

One reaction to hearing the gospel caught Everett even more off-guard: (Don’t Sleep, There Are Snakes, Ch. 17, pg. 269)

"The women are afraid of Jesus. We do not want him."

"Why not?" I asked, wondering what had triggered this declaration.

"Because last night he came to our village and tried to have sex with our women. He chased them around the village, trying to stick his large penis into them."

Kaaxaóoi proceeded to show me with his two hands held far apart how long Jesus's penis was—a good three feet.

But the Pirahã had an even more serious objection to Jesus: (Don’t Sleep, There Are Snakes, Ch. 17, pg. 265-266)

Part of the difficulty of my task began to become clear to me. I communicated more or less correctly to the Pirahãs about my Christian beliefs. The men listening to me understood that there was a man named Hisó, Jesus, and that he wanted others to do what he told them.

"The Pirahã men then asked, "Hey Dan, what does Jesus look like? Is he dark like us or light like you?" I said, "Well, I have never actually seen him. He lived a long time ago. But I do have his words." "Well, Dan, how do you have his words if you have never heard him or seen him?"

They then made it clear that if I had not actually seen this guy (and not in any metaphorical sense, but literally), they weren't interested in any stories I had to tell about him. Period. This is because, as I now knew, the Pirahãs believe only what they see. Sometimes they also believe in things that someone else has told them, so long as that person has personally witnessed what he or she is reporting.

In the end, Everett never converted a single Pirahã. But he did even worse than converting zero people—he lost his own faith after coming to believe that the Pirahã had a good point. After keeping this to himself for many years, he revealed his loss of faith to his family, which led to a divorce and his children breaking contact with him for a number of years afterward. 

But Everett losing his faith in the God of Abraham was only the beginning. Most importantly for us, he also lost his faith in the God of Linguistics—Noam Chomsky. 

III. THE WAR

In 2005, Everett’s paper “Cultural constraints on grammar and cognition in Pirahã: Another look at the design features of human language” was published in the journal Cultural Anthropology. An outsider might expect an article like this, which made a technical observation about the apparent lack of a property called ‘recursion’ in the Pirahã language, to receive an ‘oh, neat’ sort of response. Languages can be pretty different from one another, after all. Mandarin lacks plurals. Spanish sentences can omit an explicit subject. This is one of those kinds of things.

But the article ignited a firestorm of controversy that follows Everett to this day. Praise for Everett and his work on recursion in Pirahã:

He became a pure charlatan, although he used to be a good descriptive linguist. That is why, as far as I know, all the serious linguists who work on Brazilian languages ignore him.

  • Noam Chomsky, MIT professor and linguist

You, too, can enjoy the spotlight of mass media and closet exoticists! Just find a remote tribe and exploit them for your own fame by making claims nobody will bother to check!

  • Andrew Nevins, UCL professor and linguist (Harvard professor at quote time)

I think he knows he’s wrong, that’s what I really think. I think it’s a move that many, many intellectuals make to get a little bit of attention. 

  • Tom Roeper, U. Mass. Amherst professor and linguist 

Everett is a racist. He puts the Pirahã on a level with primates. 

  • Cilene Rodrigues, PUC-Rio professor and linguist

Is Daniel Everett the village idiot of linguistics? 

Apparently he struck a nerve. And there is much more vitriol like this; see Pullum for the best (short) account of the beef I’ve found, along with sources for each quote except the last. On the whole affair, he writes:

Calling it a controversy or debate would be an understatement; it was a campaign of vengeance and career sabotage. 

I’m not going to rehash all of the details, but the conduct of many in the pro-Chomsky faction is pretty shocking. Highly recommended reading. Substantial portions of the books The Kingdom of Speech and Decoding Chomsky are also dedicated to covering the beef and related issues, although I haven’t read them. 

What’s going on? Assuming Everett is indeed acting in good faith, why did he get this reaction? As I said in the beginning, linguists are those who believe Noam Chomsky is the rightful caliph. Central to Chomsky’s conception of language is the idea that grammar reigns supreme, and that human brains have some specialized structure for learning and processing grammar. In the writing of Chomsky and others, this hypothetical component of our biological endowment is sometimes called the narrow faculty of language (FLN); this is to distinguish it from other (e.g., sensorimotor) capabilities relevant for practical language use.

A paper by Hauser, Chomsky, and Fitch titled “The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?” was published in the prestigious journal Science in 2002, just a few years earlier. The abstract contains the sentence:

We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language.

Some additional context is that Chomsky had spent the past few decades simplifying his theory of language. A good account of this is provided in the first chapter of Chomsky’s Universal Grammar: An Introduction. By 2002, arguably not much was left: the core claims were that (i) grammar is supreme, and (ii) all grammar is recursive and hierarchical. More elaborate aspects of previous versions of Chomsky’s theory, like the idea that each language might be identified with different parameter settings of some ‘global’ model constrained by the human brain (the core idea of the so-called ‘principles and parameters’ formulation of universal grammar), were by now viewed as helpful and interesting but not necessarily fundamental.

Hence, it stands to reason that evidence suggesting not all grammar is recursive could be perceived as a significant threat to the Chomskyan research program. If not all languages had recursion, then what would be left of Chomsky’s once-formidable theoretical apparatus? 

Everett’s paper inspired a lively debate, with many arguing that he is lying, or misunderstands his own data, or misunderstands Chomsky, or some combination of all of those things. The most famous anti-Everett response is “Pirahã Exceptionality: A Reassessment” by Nevins, Pesetsky, and Rodrigues (NPR), which was published in the prestigious journal Language in 2009. This paper got a response from Everett, which led to an NPR response-to-the-response.

To understand how contentious even the published form of this debate became, I reproduce in full the final two paragraphs of NPR’s response-response:

We began this commentary with a brief remark about the publicity that has been generated on behalf of Everett's claims about Pirahã. Although reporters and other nonlinguists may be aware of some ‘big ideas’ prominent in the field, the outside world is largely unaware of one of the most fundamental achievements of modern linguistics: the three-fold discovery that (i) there is such a thing as a FACT about language; (ii) the facts of language pose PUZZLES, which can be stated clearly and precisely; and (iii) we can propose and evaluate SOLUTIONS to these puzzles, using the same intellectual skills that we bring to bear in any other domain of inquiry. This three-fold discovery is the common heritage of all subdisciplines of linguistics and all schools of thought, the thread that unites the work of all serious modern linguists of the last few centuries, and a common denominator for the field.

In our opinion, to the extent that CA and related work constitute a ‘volley fired straight at the heart’ of anything, its actual target is no particular school or subdiscipline of linguistics, but rather ANY kind of linguistics that shares the common denominator of fact, puzzle, and solution. That is why we have focused so consistently on basic, common-denominator questions: whether CA’s and E09’s conclusions follow from their premises, whether contradictory published data has been properly taken into account, and whether relevant previous research has been represented and evaluated consistently and accurately. To the extent that outside eyes may be focused on the Pirahã discussion for a while longer, we would like to hope that NP&R (and the present response) have helped reinforce the message that linguistics is a field in which robustness of evidence and soundness of argumentation matter.

Two observations here. First, another statement about “serious” linguistics; why does that keep popping up? Second, wow. That’s the closest you can come to cursing someone out in a prestigious journal. 

Polemics aside, what’s the technical content of each side’s argument? Is Pirahã recursive or not? Much of the debate appears to hinge on two things:

  • what one means by recursion

  • what one means by the statement “All natural human languages have recursion.”

Everett generally takes recursion to refer to the following property of many natural languages: one can construct sentences or phrases from other sentences and phrases. For example:

“The cat died.” -> “Alice said that [the cat died].” -> “Bob said that [Alice said that [the cat died.]]”

In the above example, we can in principle generate infinitely many new sentences by writing “Z said X,” where X is the previous sentence and Z is some name. For clarity’s sake, one should probably distinguish between different ways to generate new sentences or phrases from old ones; Pullum mentions a few in the context of assessing Everett’s Pirahã recursion claims:

Everett reports that there are no signs of multiple coordination (It takes [skill, nerve, initiative, and courage]), complex determiners ([[[my] son’s] wife’s] family), stacked modifiers (a [nice, [cosy, [inexpensive [little cottage]]]]), or—most significant of all—reiterable clause embedding (I thought [ you already knew [that she was here ] ]). These are the primary constructions that in English permit sentences of any arbitrary finite length to be constructed, yielding the familiar argument that the set of all definable grammatical sentences in English is infinite.

Regardless of the details, a generic prediction should be that there is no longest sentence in a language whose grammar is recursive. This doesn’t mean that one can say an arbitrarily long sentence in real life4. Rather, one can say that, given a member of some large set of sentences, one can always extend it.
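
To make the "no longest sentence" point concrete, here is a minimal sketch (in Python, with illustrative names and a reporting-clause frame of my own choosing) of recursion in the sense Everett uses: any sentence can be embedded inside a new sentence, so the process never has to stop.

```python
# A minimal sketch of recursion-as-clause-embedding: each new sentence
# is built by wrapping the previous one in a reporting clause.
def embed(sentence: str, speaker: str) -> str:
    """Wrap an existing sentence inside a new reporting clause."""
    return f"{speaker} said that {sentence.rstrip('.')}."

s = "the cat died."
for speaker in ["Alice", "Bob", "Carol"]:
    s = embed(s, speaker)
    print(s)
# Alice said that the cat died.
# Bob said that Alice said that the cat died.
# Carol said that Bob said that Alice said that the cat died.
```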

Everett takes the claim “All natural human languages have recursion.” to mean that, if there exists a natural human language without recursion, the claim is false. Or, slightly more subtly, if there exists a language which uses recursion so minimally that linguists have a hard time determining whether a corpus of linguistic data falsifies it or not, sentence-level recursion is probably not a bedrock principle of human languages. 

I found the following anecdote from a 2012 paper of Everett’s enlightening:

Pirahã speakers reject constructed examples with recursion, as I discovered in my translation of the gospel of Mark into the language (during my days as a missionary). The Bible is full of recursive examples, such as the following, from Mark 1:3:

‘(John the Baptist) was a voice of one calling in the desert…’

I initially translated this as:

‘John, the man that put people in the water in order to clean them for God, that lived in a place like a beach with no trees and that yelled for people to obey God’.

The Pirahãs rejected every attempt until I translated this as:

‘John cleaned people in the river. He lived in another jungle. The jungle was like a beach. It had no trees. He yelled to people. You want God!’

The non-recursive structure was accepted readily and elicited all sorts of questions. I subsequently realized looking through Pirahã texts that there were no clear examples involving either recursion or even embedding. Attempts to construct recursive sentences or phrases, such as ‘several big round barrels', were ultimately rejected by the Pirahãs (although initially they accepted them to be polite to me, a standard fieldwork problem that Jeanette Sakel and I discuss).

He does explicitly claim (in the aforementioned paper and elsewhere) that Pirahã probably has no longest sentence, which is about the most generic anti-recursion statement one can make. 

Chomsky and linguists working in his tradition sometimes write in a way consistent with Everett’s conception of recursion, but sometimes don’t. For example, consider this random 2016 blogpost I found by a linguist in training: 

For generative linguistics the recursive function is Merge, which combines two words or phrases to form a larger structure which can then be the input for further iterations of Merge. Any expression larger than two words, then, requires recursion, regardless of whether there is embedding in that expression. For instance the noun phrase “My favourite book” requires two iterations of Merge, (Merge(favourite, book)= [Favourite book], Merge(my, [favourite book])= [my [favourite book]]) and therefore is an instance of recursion without embedding.
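
For what it's worth, here is a minimal sketch (mine, not the blogpost's) of Merge as the quote describes it: a binary operation that takes two words or phrases and returns a new phrase, which can then feed further applications of Merge.

```python
# Merge as a binary structure-building operation, per the quote above.
# Two applications build "my favourite book"; no clause embedding is
# involved, yet under this usage the result already counts as recursion.
def merge(a, b):
    return [a, b]

favourite_book = merge("favourite", "book")      # ['favourite', 'book']
my_favourite_book = merge("my", favourite_book)  # ['my', ['favourite', 'book']]
print(my_favourite_book)
```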

To be clear, this usage of ‘recursion’ seems consistent with how many other Chomskyan linguists have used the term. And with all due respect to these researchers, I find this notion of recursion completely insane, because it would imply that (i) any language with more than one word in its sentences has recursion, and (ii) all sentences are necessarily constructed recursively. 

The first implication means that “All natural human languages have recursion.” reduces to the vacuously true claim that “All languages allow more than one word in their sentences.”5 The second idea is more interesting, because it relates to how the brain constructs sentences, but as far as I can tell this claim cannot be tested using purely observational linguistic data. One would have to do some kind of experiment to check the order in which subjects mentally construct sentences, and ideally make brain activity measurements of some sort. 

Aside from sometimes involving a strange notion of recursion, the Chomskyan response to Everett also leans on the distinction we discussed earlier between so-called E-languages and I-languages. Consider the following exchange from a 2012 interview with Chomsky:

NS: But there are critics such as Daniel Everett, who says the language of the Amazonian people he worked with seems to challenge important aspects of universal grammar.

Chomsky: It can't be true. These people are genetically identical to all other humans with regard to language. They can learn Portuguese perfectly easily, just as Portuguese children do. So they have the same universal grammar the rest of us have. What Everett claims is that the resources of the language do not permit the use of the principles of universal grammar.

That's conceivable. You could imagine a language exactly like English except it doesn't have connectives like "and" that allow you to make longer expressions. An infant learning truncated English would have no idea about this: they would just pick it up as they would standard English. At some point, the child would discover the resources are so limited you can't say very much, but that doesn't say anything about universal grammar, or about language acquisition.

Chomsky makes claims like this elsewhere too. The argument is that, even if there were a language without a recursive grammar, this is not inconsistent with his theory, since his theory is not about E-languages like English or Spanish or Pirahã. His theory only makes claims about I-languages, or equivalently about our innate language capabilities. 

But this is kind of a dumb rhetorical move. Either the theory makes predictions about real languages or it doesn’t. The statement that some languages in the world are arguably recursive is not a prediction; it’s an observation, and we didn’t need the theory to make it. What does it mean for the grammar of thought languages to be recursive? How do we test this? Can we test it by doing experiments involving real linguistic data, or not? If not, are we even still talking about language?

To this day, as one might expect, not everyone agrees with Everett that (i) Pirahã lacks a recursive hierarchical grammar, and that (ii) such a discovery would have any bearing at all on the truth or falsity of Chomskyan universal grammar. Given that languages can be pretty weird, among other reasons, I am inclined to side with Everett here. But where does that leave us? We do not just want to throw bombs and tell everyone their theories are wrong. 

Does Everett have an alternative to the Chomskyan account of what language is and where it came from? Yes, and it turns out he’s been thinking about this for a long time. How Language Began is his 2017 offering in this direction. 

IV. THE BOOK

So what is language, anyway? 

Everett writes: (How Language Began, Ch. 1, pg. 15)

Language is the interaction of meaning (semantics), conditions on usage (pragmatics), the physical properties of its inventory of sounds (phonetics), a grammar (syntax, or sentence structure), phonology (sound structure), morphology (word structure), discourse conversational organizational principles, information, and gestures. Language is a gestalt—the whole is greater than the sum of its parts. That is to say, the whole is not understood merely by examining individual components.

Okay, so far, so good. To the uninitiated, it looks like Everett is just listing all of the different things that are involved in language; so what? The point is that language is more than just grammar. He goes on to say this explicitly: (How Language Began, Ch. 1, pg. 16)

Grammar is a tremendous aid to language and also helps in thinking. But it really is at best only a small part of any language, and its importance varies from one language to another. There are tongues that have very little grammar and others in which it is extremely complex.

His paradigmatic examples here are Pirahã and Riau Indonesian; the latter appears to lack a hierarchical grammar and, moreover, apparently lacks a clear noun/verb distinction. You might ask: what does that even mean? I’m not 100% sure, since the linked Gil chapter appears formidable, but Wikipedia gives a pretty good example in the right direction:

For example, the phrase Ayam makan (lit. 'chicken eat') can mean, in context, anything from 'the chicken is eating', to 'I ate some chicken', 'the chicken that is eating' and 'when we were eating chicken'

Is “chicken” the subject of the sentence, the object of the sentence, or something else? Well, it depends on the context. 

What’s the purpose of language? Communication: (How Language Began, Introduction, pg. 5)

Indeed, language changes lives. It builds society, expresses our highest aspirations, our basest thoughts, our emotions and our philosophies of life. But all language is ultimately at the service of human interaction. Other components of language—things like grammar and stories—are secondary to conversation. 

Did language emerge suddenly, as it does in Chomsky’s proposal, or gradually? Very gradually: (How Language Began, Introduction, pg. 7-8)

There is a wide and deep linguistic chasm between humans and all other species. … More likely, the gap was formed by baby steps, by homeopathic changes spurred by culture. Yes, human languages are dramatically different from the communication systems of other animals, but the cognitive and cultural steps to get beyond the ‘language threshold’ were smaller than many seem to think. The evidence shows that there was no ‘sudden leap’ to the uniquely human features of language, but that our predecessor species in the genus Homo and earlier, perhaps among the australopithecines, slowly but surely progressed until humans achieved language. This slow march taken by early hominins resulted eventually in a yawning evolutionary chasm between human language and other animal communication. 

So far, we have a bit of a nothingburger. Language is for communication, and probably—like everything else!—emerged gradually over a long period of time. While these points are interesting as a contrast to Chomsky, they are not that surprising in and of themselves. 

But Everett’s work goes beyond taking the time to bolster common sense ideas on language origins. Two points he discusses at length are worth briefly exploring here. First, he offers a much more specific account of the emergence of language than Chomsky does, and draws on a mix of evidence from paleoanthropology, evolutionary biology, linguistics, and more. Second, he pretty firmly takes the Anti-Chomsky view on whether language is innate: (Preface, pg. xv)

… I deny here that language is an instinct of any kind, as I also deny that it is innate, or inborn. 

These two points are not unrelated. Everett’s core idea is that language should properly be thought of as an invention rather than an innate human capability. You might ask: who invented it? Who shaped it? Lots of people, collaboratively, over a long time. In a word, culture. As Everett notes in the preface, “Language is the handmaiden of culture.”

In any case, let’s discuss these points one at a time. First: the origins of language. There are a number of questions one might want to answer about how language began:

  • In what order did different language-related concepts and components emerge?

  • When did language proper first arise?

  • What aspects of human biology best explain why and how language emerged?

To Everett, the most important feature of language is not grammar or any particular properties of grammar, but the fact that it involves communication using symbols. What are symbols? (Ch. 1, pg. 17)

Symbols are conventional links to what they refer to. They … need not bear any resemblance to nor any physical connection to what they refer to. They are agreed upon by society.

There are often rules for arranging symbols, but given how widely they can vary in practice, Everett views such rules as interesting but not fundamental. One can have languages with few rules (e.g., Riau) or complex rules (e.g., German); the key requirement for a language is that symbols are used to convey meaning. 

Where did symbols come from? To address this question, Everett adapts a theory due to the (in his view underappreciated) American polymath Charles Sanders Peirce: semiotics, the theory of signs. What are signs? (Ch. 1, pg. 16)

A sign is any pairing of a form (such as a word, smell, sound, street sign, or Morse code) with a meaning (what the sign refers to). 

Everett, in the tradition of Peirce, distinguishes between various different types of signs. The distinction is based on (i) whether the pairing is intentional, and (ii) whether the form of the sign is arbitrary. Indexes are non-intentional, non-arbitrary pairings of form and meaning (think: dog paw print). Icons are intentional, non-arbitrary pairings of form and meaning (think: a drawing of a dog paw print). Symbols are intentional, arbitrary pairings (think: the word “d o g” refers to a particular kind of real animal, but does not resemble anything about it). 
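
Since the three sign types are defined by just two features, a toy lookup table (illustrative only, not from the book) captures the classification:

```python
# Peirce's sign types as Everett presents them, keyed by the two
# features discussed above: (intentional, arbitrary) -> type of sign.
SIGN_TYPES = {
    (False, False): "index",   # e.g., a dog's paw print
    (True,  False): "icon",    # e.g., a drawing of a paw print
    (True,  True):  "symbol",  # e.g., the word "dog"
}

print(SIGN_TYPES[(True, True)])  # symbol
```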

Everett argues that symbols did not appear out of nowhere, but rather arose from a natural series of abstractions of concepts relevant to early humans. The so-called ‘semiotic progression’ that ultimately leads to symbols looks something like this:

indexes (dog paw print) -> icons (drawing of dog paw print) -> symbols (“d o g”) 

This reminds me of what little I know about how written languages changed over time. For example, many Chinese characters used to look a lot more like the things they represented (icon-like), but became substantially more abstract (symbol-like) over time:

Eight examples of how Chinese characters have changed over time. (source)

For a given culture and concept, the icon-to-symbol transition could’ve happened any number of ways. For example, early humans could’ve mimicked an animal’s cry to refer to it (icon-like, since this evokes a well-known physical consequence of some animal’s presence), but then gradually shifted to making a more abstract sound (symbol-like) over time. 

The index (non-intentional, non-arbitrary) to icon transition must happen even earlier. This refers to whatever process led early humans to, for example, mimic a given animal’s cry in the first place, or to draw people on cave walls, or to collect rocks that resemble human faces.  

Is there a clear boundary between indexes, icons, and symbols? It doesn’t seem like it, since things like Chinese characters changed gradually over time.  But Everett doesn’t discuss this point explicitly. 

Why did we end up with certain symbols and not others? Well, there’s no good a priori reason to prefer “dog” over “perro” or “adsnofnowefn”, so Everett attributes the selection mostly to cultural forces. Everett suggests these forces shape language in addition to practical considerations, like the fact that, all else being equal, we prefer words that are not hundreds of characters long, because they would be too annoying to write or speak.

When did language—in the sense of communication using symbols—begin? Everett makes two kinds of arguments here. One kind of argument is that certain feats are hard enough that they probably required language in this sense. Another kind of argument relates to how we know human anatomy has physically changed on evolutionary time scales.

The feats Everett talks about are things like traveling long distances across continents, possibly even in a directed rather than random fashion; manufacturing nontrivial hand tools (e.g., Oldowan and Mousterian); building complex settlements (e.g., the one found at Gesher Benot Ya'aqov); controlling fire; and using boats to successfully navigate treacherous waters. Long before sapiens arose, paleoanthropological evidence suggests that our predecessors Homo erectus did all of these things. Everett argues that they might have had language over one million years ago6.

This differs from Chomsky’s proposal by around an order of magnitude, time-wise, and portrays language as something not necessarily unique to modern humans. In Everett’s view, Homo sapiens probably improved on the language technology bestowed upon them by their erectus ancestors, but did not invent it.

Everett’s anatomy arguments relate mainly to the structure of the head and larynx (our ‘voice box’, an organ that helps us flexibly modulate the sounds we produce). Over the past two million years, our brains got bigger, our face and mouth became more articulate, our larynx changed in ways that gave us a clearer and more elaborate inventory of sounds, and our ears became better tuned to hearing those sounds. Here’s the kind of thing Everett writes on this topic: (Ch. 5, pg. 117)

Erectus speech perhaps sounded more garbled relative to that of sapiens, making it harder to hear the differences between words. … Part of the reason for erectus’s probably mushy speech is that they lacked a modern hyoid (Greek for ‘U-shaped’) bone, the small bone in the pharynx that anchors the larynx. The muscles that connect the hyoid to the larynx use their hyoid anchor to raise and lower the larynx and produce a wider variety of speech sounds. The hyoid bone of erectus was shaped more like the hyoid bones of the other great apes and had not yet taken on the shape of sapiens’ and neanderthalensis’ hyoids (these two being virtually identical).

Pretty neat and not something I would’ve thought about.

What aspects of biology best explain all of this? Interestingly, at no point does Everett require anything like Chomsky’s faculty of language; his view is that language was primarily enabled by early humans being smart enough to make a large number of useful symbol-meaning associations, and social enough to perpetuate a nontrivial culture. Everett thinks cultural pressures forced humans to evolve bigger brains and better communication apparatuses (e.g., eventually giving us modern hyoid bones to support clearer speech), which drove culture to become richer, which drove yet more evolution, and so on.

Phew. Let’s go back to the question of innateness before we wrap up. 

Everett’s answer to the innateness question is complicated and in some ways subtle. He agrees that certain features of the human anatomy evolved to support language (e.g., the pharynx and ears). He also agrees that modern humans are probably much better than Homo erectus at working with language, if indeed Homo erectus did have language. 

He mostly seems to take issue with the idea that some region of our brain is specialized for language. Instead, he thinks that our ability to produce and comprehend language is due to a mosaic of generally-useful cognitive capabilities, like our ability to remember things for relatively long times, our ability to form and modify habits, and our ability to reason under uncertainty. This last capability seems particularly important since, as Everett points out repeatedly, most language-based communication is ambiguous, and it is important for participants to exploit cultural and contextual information to more reliably infer the intended messages of their conversation partners. Incidentally, this is a feature of language Chomskyan theory tends to neglect7.

Can’t lots of animals do all those things? Yes. Everett views the difference as one of degree, not necessarily of quality. 

What about language genes like FOXP2 and putative language areas like Broca’s and Wernicke’s areas? What about specific language impairments? Aren’t they clear evidence of language-specific human biology? Well, FOXP2 appears to be more related to speech control—a motor task. Broca’s and Wernicke’s areas are both involved in coordinating motor activity unrelated to speech. Specific language impairments, contrary to their name, also involve some other kind of deficit in the cases known to Everett.

I have to say, I am not 100% convinced by the brain arguments. I mean, come on, look at the videos of people with Broca’s aphasia or Wernicke’s aphasia. Also, I buy that Broca’s and Wernicke’s areas (or whatever other putative language areas are out there) are active during non-language-related behavior, or that they represent non-language-related variables. But this is also true of literally every other area we know of in the brain, including well-studied sensory areas like the primary visual cortex. It’s no longer news when people find variable X encoded in region Y-not-typically-associated-with-X.

Still, I can’t dismiss Everett’s claim that there is no language-specific brain area. At this point, it’s hard to tell. The human brain is complicated, and there remains much that we don’t understand.

Overall, Everett tells a fascinatingly wide-ranging and often persuasive story. If you’re interested in what language is and how it works, you should read How Language Began. There’s a lot of interesting stuff in there I haven’t talked about, especially for someone unfamiliar with at least one of the areas Everett covers (evolution, paleoanthropology, theoretical linguistics, neuroanatomy, …). Especially fun are the chapters on aspects of language I don’t hear people talk about as much, like gestures and intonation. 

As I’ve tried to convey, Everett is well-qualified to write something like this, and has been thinking about these topics for a long time. He’s the kind of linguist most linguists wish they could be, and he’s worth taking seriously, even if you don’t agree with everything he says. 

V. THE REVELATIONS

I want to talk about large language models now. Sorry. But you know I had to do this.

Less than two years ago at the time of writing, the shocking successes of ChatGPT put many commentators in an awkward position. Beyond all the quibbling about details (Does ChatGPT really understand? Doesn’t it fail at many tasks trivial for humans? Could ChatGPT or something like it be conscious?), the brute empirical fact remains that it can handle language comprehension and generation pretty well. And this is despite the conception of language underlying it—language use as a statistical learning problem, with no sentence diagrams or grammatical transformations in sight—being somewhat antithetical to the Chomskyan worldview.

Chomsky has frequently criticized the statistical learning tradition; his main criticisms seem to be that (i) statistical learning produces systems with serious defects, and (ii) succeeding at engineering problems does not tell us anything interesting about how the human brain handles language. These are reasonable criticisms, but I think they are essentially wrong.

Statistical approaches succeeded where more directly Chomsky-inspired approaches failed, and it was never close. Large language models (LLMs) like ChatGPT are not perfect, but they’re getting better all the time, and the onus is on the critics to explain where they think the wall is. It’s conceivable that a completely orthogonal system designed according to the principles of universal grammar could outperform LLMs built according to the current paradigm—but that possibility looks increasingly remote.

Why do statistical learning systems handle language so well? If Everett is right, the answer is in part because (i) training models on a large corpus of text and (ii) providing human feedback both give models a rich collection of what is essentially cultural information to draw upon. People like talking with ChatGPT not just because it knows things, but because it can talk like them. And that is only possible because, like humans, it has witnessed and learned from many, many, many conversations between humans. 

Statistical learning also allows these systems to appreciate context and reason under uncertainty, at least to some extent, since both of these are crucial factors in many of the conversations that appear in training data. These capabilities would be extremely difficult to implement by hand, and it’s not clear how a more Chomskyan approach would handle them, even if some kind of universal-grammar-based latent model otherwise worked fairly well.

Chomsky’s claim that engineering success does not necessarily produce scientific insight is not uncommon, but a large literature speaks against it. And funnily enough, given that he is ultimately interested in the mind, engineering successes have provided some of our most powerful tools for interrogating what the mind might look like. 

The rub is that artificial systems engineered to perform some particular task well are not black boxes; we can look inside them and tinker as we please. Studying the internal representations and computations of such networks has provided neuroscience with crucial insights in recent years, and such approaches are particularly helpful given how costly neuroscience experiments (which might involve, e.g., training animals and expensive recording equipment) can be. Lots of recent computational neuroscience follows this blueprint: build a recurrent neural network to solve a task neuroscientists study, train it somehow, then study its internal representations to generate hypotheses about what the brain might be doing.
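
To make that blueprint concrete, here is a minimal sketch of the train-a-network-then-peek-inside workflow (in PyTorch; the toy task, network size, and training details are all illustrative, not taken from any particular study): fit a small RNN on an evidence-integration task, then project its hidden states onto a few principal components, the way a neuroscientist would analyze recorded population activity.

```python
import torch
import torch.nn as nn

class TinyRNN(nn.Module):
    def __init__(self, n_in=2, n_hidden=64, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)                 # hidden states at every time step
        return self.readout(h[:, -1]), h   # decision read out from the final state

# Toy evidence-integration task: was the cumulative sum of a noisy
# input stream positive or negative by the end of the trial?
def make_batch(batch=128, T=50):
    x = torch.randn(batch, T, 2)
    y = (x[:, :, 0].sum(dim=1) > 0).float().unsqueeze(1)
    return x, y

model = TinyRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):                   # "train it somehow"
    x, y = make_batch()
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()

# Open the box: collect hidden states and look at their low-dimensional structure.
with torch.no_grad():
    x, _ = make_batch(batch=32)
    _, h = model(x)                        # shape: (batch, T, n_hidden)
    H = h.reshape(-1, h.shape[-1])
    _, _, V = torch.pca_lowrank(H, q=3)    # top 3 principal components
    trajectories = (H @ V).reshape(32, 50, 3)
print(trajectories.shape)                  # torch.Size([32, 50, 3])
```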

In principle, (open-source) LLMs and their internal representations can be interrogated in precisely the same way. I’m not sure what’s been done already, but I’m confident that work along these lines will become more common in the near future. Given that high-quality recordings of neural dynamics during natural language use are hard to come by, studying LLMs might be essential for understanding human-language-related neural computations.
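
For open-weights models, this kind of interrogation takes only a few lines. A minimal sketch using the Hugging Face transformers library (GPT-2 is just an illustrative stand-in for whichever open model you care about, and the example sentence is mine):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # any open-weights causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)
model.eval()

sentence = "The keys that the man lost are on the table."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple: (embedding layer, layer 1, ..., layer N),
# each of shape (batch, sequence_length, hidden_size). These activations are
# the artificial analogue of recorded neural activity: you can probe them for
# syntactic depth, agreement, context effects, and so on, layer by layer.
for layer, h in enumerate(outputs.hidden_states):
    print(layer, tuple(h.shape))
```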

When we peer inside language-competent LLMs, what will we find? This is a topic Everett doesn’t have much to say about, and on which Chomsky might actually be right. Whether we’re dealing with the brain or artificial networks, we can talk about the same thing at many different levels of description. In the case of the brain, we might talk in terms of interacting molecules, networks of electrically active neurons, or very many other effective descriptions. In the case of artificial networks, we can either talk about individual ‘neurons’ or use some higher-level description that better captures the essential character of the underlying algorithm.

Maybe LLMs, at least when trained on data from languages whose underlying rules can be parsimoniously described using universal grammar, effectively exploit sentence diagrams or construct recursive hierarchical representations of sentences using an operation like Merge. If anything like that is true, formalisms like Chomsky’s might still provide a useful way of talking about what LLMs do. Such descriptions might be said to capture the ‘mind’ of an LLM, since from a physicalist perspective the ‘mind’ is just a useful way of talking about a complex system of interacting neurons.
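
Merge itself, for what it's worth, is almost embarrassingly easy to write down; the hard question is whether anything functionally like it falls out of a trained network. A toy sketch (the example phrase and the use of Python sets are my illustration, not Chomsky's notation):

```python
# Merge, as usually described: take two syntactic objects and form a new object
# containing exactly those two, yielding unordered, binary-branching,
# recursively nested structure.
def merge(x, y):
    return frozenset([x, y])

the_book = merge("the", "book")           # {the, book}
read_the_book = merge("read", the_book)   # {read, {the, book}}

print(read_the_book)
# e.g. frozenset({'read', frozenset({'the', 'book'})})  (sets are unordered)
```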

Regardless of who’s right and who’s wrong, the study of language is certainly interesting and we have a lot more to learn. Something Chomsky wrote in 1968 seems like an appropriate summary of the way forward: (Language and Mind, pg. 1)

I think there is more of a healthy ferment in cognitive psychology—and in the particular branch of cognitive psychology known as linguistics—than there has been for many years. And one of the most encouraging signs is that skepticism with regard to the orthodoxies of the recent past is coupled with an awareness of the temptations and dangers of premature orthodoxy, an awareness that, if it can persist, may prevent the rise of new and stultifying dogma.

It is easy to be misled in an assessment of the current scene; nevertheless, it seems to me that the decline of dogmatism and the accompanying search for new approaches to old and often still intractable problems are quite unmistakable, not only in linguistics but in all of the disciplines concerned with the study of mind.

1

Chomsky 1991b refers to “Linguistics and adjacent fields: a personal view”, a chapter of The Chomskyan Turn. I couldn’t access the original text, so this quote-of-a-quote will have to do.

2

Chomsky’s domination of linguistics is probably due to a combination of factors. First, he is indeed brilliant and prolific. Second, Chomsky’s theories promised to ‘unify’ linguistics and make it more like physics and other ‘serious’ sciences; for messy fields like linguistics, I assume this promise is extremely appealing. Third, he helped create and successfully exploited the cognitive zeitgeist that for the first time portrayed the mind as something that can be scientifically studied in the same way that atoms and cells can. Moreover, he was one of the first to make interesting connections between our burgeoning understanding of fields like molecular biology and neuroscience on the one hand, and language on the other. Fourth, Chomsky was not afraid to get into fights, which can be beneficial if you usually win.

3

One such sound is the bilabial trill, which kind of sounds like blowing a raspberry.

4

This reminds me of a math joke.

5

Why is this vacuously true? If, given some particular notion of ‘sentence’, the sentences of any language could only have one word at most, we would just define some other notion of ‘word collections’.

6

He and archaeologist Lawrence Barham provide a more self-contained argument in this 2020 paper.

7

A famous line at the beginning of Chomsky’s Aspects of the Theory of Syntax goes: “Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech community, who knows its language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of the language in actual performance.”


Bad News for Universal Basic Income


The largest study into the real-world consequences of giving people an extra $1,000 per month, with no strings attached, has found that those individuals generally worked less, earned less, and engaged in more leisure time activities.

It's a result that seems to undercut some of the arguments for universal basic income (UBI), which advocates say would help lower- and middle-class Americans become more productive. The idea is that a UBI would reduce the financial uncertainty that might keep some people from pursuing new careers or entrepreneurial opportunities. Andrew Yang, the businessman and one-time Democratic presidential candidate who popularized the idea during his 2020 primary campaign, believes that a $1,000 monthly UBI would "enable all Americans to pay their bills, educate themselves, start businesses, be more creative, stay healthy, relocate for work, spend time with their children, take care of loved ones, and have a real stake in the future."

In theory, that sounds great. In reality, that's not what most people do, according to a working paper published this month.

The five researchers who published the paper tracked 1,000 people in Illinois and Texas over three years who were given $1,000 monthly gifts from a nonprofit that funded the study. The average household income for the study's participants was about $29,000 in 2019, so the monthly payments amounted to about a 40 percent increase in their income.

Relative to a control group of 2,000 people who received just $50 per month, the participants in the UBI group were less productive and no more likely to pursue better jobs or start businesses, the researchers found. They also reported "no significant effects on investments in human capital" due to the monthly payments.

Participants receiving the $1,000 monthly payments saw their income fall by about $1,500 per year (excluding the UBI payments), due to a two percentage point decrease in labor market participation and the fact that participants worked about 1.3 hours less per week than the members of the control group.

"You can think of total household income, excluding the transfers, as falling by more than 20 cents for every $1 received," wrote Eva Vivalt, a University of Toronto economist who co-authored the study, in a post on X. "This is a pretty substantial effect."

But if those people are working less, the important question to ask is how they spent the extra time—time that was, effectively, purchased by the transfer payments.

Participants in the study generally did not use the extra time to seek new or better jobs—even though younger participants were slightly more likely to pursue additional education. There was no clear indication that the participants in the study were more likely to take the risk of starting a new business, although Vivalt points out that there was a significant uptick in "precursors" to entrepreneurialism. Instead, the largest increases were in categories that the researchers termed social and solo leisure activities.

Some advocates for UBI might argue that the study shows participants were better off, despite the decline in working hours and earnings. Indeed, maybe that's the whole point?

"While decreased labor market participation is generally characterized negatively, policymakers should take into account the fact that recipients have demonstrated—by their own choices—that time away from work is something they prize highly," the researchers note in the paper's conclusion.

If you give someone $1,000 a month so they have more flexibility to live as they choose, there's nothing wrong with the fact that most people will choose leisure over harder work.

"So, free time is good [and] guaranteed income recipients use some of the money to free up time," argued Damon Jones, a professor at the University of Chicago's school of public policy, on X. "The results are bad if you want low-income people to be doing other things with their time, for example working."

Of course, if the money being used to fund a UBI program was simply falling from the sky, policy makers would have no reason to care about things like labor market effects and potential declines in productivity. If a program like this is costless, then the only goal is to see as many individuals self-actualize as much as possible. One person wants to learn new skills or start a business? Great! Others want to play video games all day? Awesome.

In reality, however, a UBI program is not costless and policy makers deciding whether to implement one must decide if the benefits will be worth the high price tag—Yang's proposal for a national UBI, for example, is estimated to cost $2.8 trillion annually.

That's why a study like this one matters, and why it's so potentially damaging to the case for a UBI. A welfare program—which is ultimately what this is—that encourages people to work less and earn less is not a successful public policy. Taxpayers should not be expected to fund an increase in individuals' leisure time, regardless of the mechanism used to achieve it.

In theory, substituting a UBI in place of the myriad, overlapping, and often inefficient welfare systems operated by the federal and state governments is an intriguing idea. In practice, this new study suggests those tradeoffs might not be as desirable.



FCC Will Cap the Cost of Prison Phone Calls


The Federal Communications Commission (FCC) voted this week to lower the cost of keeping in touch with the incarcerated.

"The Federal Communications Commission today voted to end exorbitant phone and video call rates that have burdened incarcerated people and their families for decades," the agency announced in a Thursday press release. "The new call rates will be $0.06 per minute for prisons and large jails, $0.07 for medium jails, $0.09 for small jails, and $0.12 for very small jails, and as low as $0.11/minute for video calls—with a requirement that per-minute rates be offered." As a result, "the cost of a 15-minute phone call will drop to $0.90 from as much as $11.35 in large jails and, in small jails, to $1.35 from $12.10."

This affects more than just phone calls: "The new rules also, for the first time, address the exorbitant cost of video visitation calls, dropping those prices to less than a quarter of current prices and requiring per-minute rate options based on consumers' actual usage," per the announcement.

The FCC's three Democratic commissioners voted to approve the rules, as did Republican Commissioner Nathan Simington; Commissioner Brendan Carr, the remaining Republican, voted to approve the order in part and to concur in part.

In January 2023, President Joe Biden signed the Martha Wright-Reed Just and Reasonable Communications Act into law. The act clarified the FCC's authority to regulate the rates of in-state calls from prisons, after the U.S. Court of Appeals for the D.C. Circuit ruled in 2017 that under existing law, the agency could only regulate calls that crossed state lines.

Many of the over 1.2 million Americans incarcerated at any given time depend upon prison calls to maintain ties to friends and family. "Research shows frequent phone calls to family members increase jail safety, promote positive mental health outcomes, and help maintain connections with loved ones," wrote Nicole Loonstyn and Alice Galley of the Urban Institute in 2023.

Part of the reason is distance: "A majority of parents in both State (62%) and Federal (84%) prison were held more than 100 miles from their last place of residence," according to a 2000 report by the Bureau of Justice Statistics. When a loved one is incarcerated too far away, it's much easier to just pick up the phone.

But all too often, prison officials leverage their position to take advantage of desperate families. In a 2022 survey of four states, Peter Wagner and Wanda Bertram of the Prison Policy Initiative found that some prisons and jails charged as much as $8 for a 20-minute video call.

Earlier this year, two lawsuits accused Michigan sheriff's offices of banning in-person jail visits and forcing families to use phone calls and video chats, starting at $10 for a 25-minute video call. According to the lawsuits, the technology companies facilitating the calls enticed officials with over $200,000 per year, plus a 20 percent monthly commission from call revenue.

A 2015 Prison Policy Initiative report found that the trend of banning in-person contact in favor of video visitation was on the rise in county jails, where the majority of inmates have yet to be convicted of a crime and whose families are more likely to live locally.



J.D. Vance Turned an Inspiring Personal Story Into an Unsatisfying Political Sales Pitch


In the pages of his best-selling memoir, Hillbilly Elegy, Sen. J.D. Vance (R–Ohio) uses his life story as a model for how the children of down-on-their-luck Americans from outside the country's political and cultural power centers can find success.

It is, sincerely, a compelling personal story. One that Vance retold in vivid detail to cap the third night of the Republican National Convention (RNC) in Milwaukee. He got out of his childhood home of Middletown, Ohio—"a place that had been cast aside and forgotten by America's ruling class in Washington," he said—to join the Marines, attend college, graduate from Yale Law School, and become a husband and father. 

"Some people tell me I've lived the American Dream, and, of course, they are right," said Vance, who accepted the nomination to be Donald Trump's running mate this November. 

Vance is still telling his personal story, but eight years after Hillbilly Elegy was published (and Trump roared onto the political scene), the lessons of the tale have changed. Back then, he wrote, "These problems were not created by government or corporations or anyone else. We created them, and only we can fix them."

That's not the case anymore. Instead of encouraging an escape from the cycle of poverty and drug addiction that is holding back the Americans who might otherwise follow Vance's example, it's now more politically expedient for Vance to encourage those individuals to find someone to blame for their problems.

In his RNC speech, there were plenty of scapegoats. To highlight just a few:

  • "NAFTA, a bad trade deal that sent countless good jobs to Mexico."
  • "From Iraq to Afghanistan, from the financial crisis to the Great Recession, from open borders to stagnating wages, the people who govern this country have failed and failed again."
  • "Wall Street barons crashed the economy, and American builders went out of business."
  • "Thanks to these policies that [President Joe] Biden and other out-of-touch politicians in Washington gave us, our country was flooded with cheap Chinese goods, with cheap foreign labor, and in the decades to come, deadly Chinese fentanyl."

And the answer to those problems is no longer to grab the most readily available rung of the economic ladder and start climbing. It's not to join the Marines, work hard, and get good enough grades to earn a spot at Yale (and grab the corresponding ticket to the elite world that Vance now inhabits).

Now, Vance posits that the answer is to vote for Trump, naturally.

This is a pretty unsatisfying answer if your goal is to provide an actual economic and cultural lifeline to people who have been left behind. They've probably tried voting before! In fact, many of them have probably voted for Trump before—possibly twice.

This is the uncomfortable contradiction at the center of Trump's third campaign for the presidency. He's still campaigning as the ultimate populist outsider on a mission to overturn the political system and rewrite the American economic order—even though he controls one of the two major political parties and was literally the president for four years.

That contradiction is reflected in Trump's decision to put Vance on the ticket. Trump is a wealthy heir from New York City who embraced low-brow culture to become a champion of the working class. Vance is the child of the working class who embraced the political and cultural elite to get to a place that he said Wednesday he "never in my wildest imagination" believed he'd end up at. 

They're both compelling stories, and Trump and Vance are an undeniably fascinating set of characters playing roles that break some of the usual archetypes in the political system. But being a fascinating character is not the same as having good ideas or workable solutions.

There will be plenty of time later in the campaign to dig into the specifics of those policies. For now, this much should be clear: Voting harder isn't going to save the people of eastern Ohio, or anywhere else. Vance escaped that life and parlayed his empowering personal story of success into the chance to be one heartbeat away from the presidency. Do as he does, not as he says.



Your Book Review: The Family That Couldn’t Sleep


[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]


“You wake up screaming, frightened by memories,

You’re plagued by nightmares, do we haunt all of your dreams?”

The Family That Couldn’t Sleep by D. T. Max was published in late 2006. This glues it to a very particular era. A spectre was haunting Europe – the spectre of mad cow disease. Something was tearing through Britain’s cows, turning them inside out, eating their brains and thrashing their souls. It had been doing so since, DTM thinks, “the late 1970s”. When you look back at a timeline like this, knowing everything we know now, you can’t help but feel a shiver down your spine.

By 2006, a few hundred people had developed variant Creutzfeldt-Jakob disease, colloquially “mad cow disease”. They were almost all young adults in the United Kingdom. The disease was a nightmare – healthy young people succumbing to a terrifying dementia, caused by naught but a beef dinner they had enjoyed years ago. Few diseases were more horrific. Even the worst neurodegenerative disorders rarely struck people so young. The fact it was caused by – and covered up by – the cattle industry made it all the worse.

The numbers were down by this point; the fear of a mad cow pandemic seemed to flicker, then die. The dust was settling, as it were, and it was just now possible to write a history. Simultaneously, it was still in the spotlight. Prion diseases gripped people’s souls with fear. You couldn’t sell a book about mad cow so well ten years later; people were much less scared of it. “That thing people were panicking about in 1999? Wasn’t it a nothingburger?”

(forgive me for “nothingburger”)

The Family That Couldn’t Sleep comes from this era of...optimism? Yeah, let’s say optimism. The wildest predictions – that hundreds of thousands of people across Britain would be struck by vCJD around the turn of the millennium – were clearly wrong. The disease was severe enough to strike the fear of prion diseases into people’s hearts; the name, entirely unfamiliar a few years earlier, now defines a bogeyman cluster of The Worst Diseases Possible. It seemed possible they could cause human epidemics, if small ones. This was enough to be scary. But it wasn’t quite as scary as a Game Over.

Having done all this work to set the scene, let’s talk about the book itself. It’s great! I really dig it.

Like all the best nonfiction, it’s a cavalcade of characters. Fiction is leashed by verisimilitude. We have some loose expectations of “how the world works”, and dismiss fiction as unrealistic if its events are too bizarre, its coincidences too forced, or its characters just that much larger than life. God does not care about any of this, and works with the trust that you will believe what he says.

Accordingly, The Family That Couldn’t Sleep is beneath all else a “character-driven narrative”. It first introduces us to, of all poetic things, a fallen noble-blooded Venetian family. The money ran out, you see – not because of profligate spendthrifts or revolutionary uprisings, but because of whispers, taunts, that its members were cursed to go mad. In midlife, it seemed, a strangely high fraction of them were struck by a specific sort of insanity. It started with a fever that never quite let up, even after any supposed illness should have run its course. A little trouble sleeping – but is that so unusual, for someone feverish in the languid Italian summers?

At first, perhaps, this could permit a paradoxical productivity. D. T. Max traces (he thinks) the first description of the family’s disease to a doctor who died in 1765. For a scholarly man in that era, less time spent sleeping may well permit more time pursuing one’s plans. “In the beginning,” he writes, “the feeling might not have been unpleasant—he could stay up all night playing cards or maybe read Morgagni’s famous comparisons of the body to a machine, published just a few years before.” Many of us scheme against the God of Sleep, trying to fight its teeth and claws, eke out more power from days and nights that would otherwise slip away. But we let it win, eventually. What happens if you can’t let it win?

The first part of the book is DTM’s pseudobiography of this doctor, and it presents a fascinating story – all hypotheticals, but all driven by what a learned man in the mid-eighteenth century could have thought while watching his body and mind betray him in a way no one else’s ever had. The story is beautiful, crossing the streams of fiction and nonfiction in an impossible way. As the disease wore on, DTM speculates, our physician friend would start thinking it a curse rather than a blessing. As he soaked through his clothes, his servants would find themselves rifling through his wardrobe several times a day for new shirts. He would guzzle wine, the supposed treatment for insomnia, and find himself drunken but no more capable of sleep; his limbs would grow heavy and his mind exhausted, but he could never cross the divide. He might try to leave the noisy city of Venice for quieter pastures, and find himself no more relieved. He could consult with his colleagues, and none of them would know a word more than he did; in all likelihood they would know far less, the curse of everyone interested in medicine and experiencing something beneath its umbrella.

The disease would wear on inexorably, no matter what he tried. He would find himself trapped in illucid places between waking and sleeping, never quite dreaming, never quite not dreaming. His fever would never abate, but it would gyrate – the fevers typical of the disease, we know, are marked not by consistent high temperatures but by impossible fluctuations, jumping rapidly between every possible extreme. Even today, they look like measurement errors.

When he died, no one would know what to call it. They didn’t know what to call it in his nephew1, or in any of his nephew’s children or grandchildren. As the disease spread across generations, it took on thousands of names – every wasting disease, infection, or psychosis you could find. It wasn’t exceptionally good for the family’s prospects; the repeated deaths of able-bodied adults made the family poorer, and neighbours refused to marry into the “mad” bloodline.

A point about prion diseases that D. T. Max likes emphasizing is that they don’t steal your reason. Everyone was unanimous that across multiple prion diseases – fatal familial insomnia itself, but also many forms of Creutzfeldt-Jakob, and plenty of other things you could grant such a name – the afflicted were consistently aware of their fates, even in the worst reaches of the illness. Many people with FFI never lost the ability to talk at all, and could express this very well for themselves. Others did, but seemed to know their surroundings infallibly. There is a famous case report about a man with FFI who managed to slow the disease’s progression with a slew of treatments; he could consistently describe his state in his most “incapacitated” periods when remitting. I’ll let him speak for himself:

(This report was, as it happens, published in the exact same month as The Family That Couldn’t Sleep.)

DTM came to know the family well. He befriended them by way of two members of their younger generation, Lisi – a woman terrified by the shadow of the disease, and Ignazio – the doctor she had married, who was more terrified by the shadow of the disease. Ignazio put together the pieces of the family puzzle, consolidating all the disparate diagnoses into a single disorder and filling out a lot of blank spots on family trees. When DTM came along, he was able to help Ignazio make the case that the family would benefit from the spotlight – that greater awareness of FFI could lead to a cure both for them and for a slew of other prion diseases.

As it so happens, he is one of those nonfiction authors who serve as a character in their own story. DTM has some form of progressive muscular palsy. He is, or at least was in 2006, not entirely sure what it is. The relatively unimpressive state of genetics at the time had not identified his causative mutation, though it looked a lot like one of the rarer forms of Charcot-Marie-Tooth disease2. DTM is pragmatic about this, the way everyone chronically ill is either pragmatic or doomed. Whatever he has, it is a defect in protein structure; his peripheral nerves decay not because of a problem with the nerves themselves but an inability of their scaffolding to hold them together, as he puts it. The last chapter of the book dwells on this, on the web of connections popping up between a thousand disorders. DTM’s disease is something vaguely similar, if you squint, to an exceptionally slow-progressing motor neurone disease; if you jump another level out, you see amyloid plaque diseases like Huntington’s and Alzheimer’s, and if you jump yet another level out, you see something like prions. His interest in the Venetian family was driven by this. Some of its members thought this a beautiful act of sympathy; others thought him a grotesque parody of themselves, an onlooker, a gawker, peddling their tragedy to salve his relatively insignificant problems. They are, he thinks, both right.

That’s the beginning, and that’s the end. What happens in the middle?

---------------------------------------------------------

The Venetian family lends the book its title, but they’re really more of a framing device. The Family That Couldn’t Sleep is separated into four parts, of which the first and fourth – the shortest by far – deal with the family. Part 2 is kuru, the king of fucked up diseases you read about in clickbait Weird Medicine listicles. Let’s talk about kuru!

Kuru is, famously, the prion disease you get if you eat another person’s brain. Well, not quite. It’s a prion disease that became endemic amongst women in the Fore society, who ritually ate brains, one of which had an inherited or spontaneous prion disease. This is an important note – there’s a tendency (which the book’s later chapters engage in) to assume cannibalism just has a Prion Disease Generator attached. If you eat people who don’t have prion diseases, you won’t suddenly get one. Uh, don’t eat people.

Anyway, part 2 is DTM’s historiography of Fore-Westerner first contact. It’s hilarious. Papua New Guinea is a frankly ridiculous place; one of the all-time best Lyttle Lytton winners (worst first sentence from a hypothetical or, in this case, real work) was “Papua New Guinea is so violent that more than 820 languages are spoken there”. The native residents were so hostile to outsiders that all the colonial empires had cut their losses – and when you think about the places they colonized, that says something. After the First World War, PNG was ripped from its nominal German ‘owners’, but no one else wanted the place.

So, of course, they gave it to the Australians.

It was thirty years and another war before we actually made contact. 1940s Australia was as ‘settled’ as it’d ever be; the cities were bustling and the interior was mapped. The kind of explorer who two centuries before would be heading to new continents had to console himself with Pacific islands. Console he did. The native peoples of the PNG coasts were hostile enough to the wannabe-colonialists that the Australians, flying planes overhead, were the first people to discover that the island’s inland was populated too. No one had broken through on land.

In all this deep and angry rainforest, the Fore were the furthest out. They lived far into the island’s mountainous interior; DTM describes their territory as “nearly vertical”. Calling people primitives is a bit passe these days for understandable reasons, but no other term comes to mind. The Fore had no name for themselves; we call them by an exonym, “the people to the south”. They weren’t, to be clear, hunter-gatherers – they were slash-and-burn agriculturalists, but very well-fed ones. Despite the tendency in grain-focused cultures for poor agriculturalists to be stunted/malnourished, the Fore were a remarkably healthy people.

Well, except for the famous bit.

The first remarkable thing about the Fore was just how quickly they wanted to assimilate. Most PNG tribes weren’t particularly enthused by Western offers of injections/tractors/radios/Christianity. Yet as soon as the Australians arrived, the Fore made ceasefires in their wars with other tribes, volunteered to help large-scale Australian projects on the coast, started planting and trading coffee, and enthusiastically participated in censuses. It’s the only first-contact narrative I’ve seen where the colonizers were concerned about how badly the other guys wanted to be colonized.

The next was the one that got their names in the history books. Australian officials started to notice a remarkable lack of women in Fore camps. Some tribes sequestered their women, particularly when Westerners were around, so at first they thought nothing of it. The high rate of unpartnered young men, though, was way out of PNG norms.

DTM tells this part fantastically. The Fore chapters drip with the dread of dramatic irony. When the first breakthrough comes, you have to catch your breath:

“Tiny” Carey noted something in the middle of August 1950 that deepened this mystery. He noticed that near the village of Henganofi there had been an unusual number of deaths. “It appears,” he wrote his superiors, “natives suffer from stomach trouble, get violent shivering, as with the ague, and die fairly rapidly.” [...] McArthur investigated a little more [...] One day in August 1953 he ran into more of the shivering people Tiny Carey had seen several years before: “Nearing one of the dwellings, I observed a small girl sitting down beside a fire. She was shivering violently and her head was jerking spasmodically from side to side.”

It would be quite some time before anyone figured out what caused it – but the problem, as DTM notes, was that its cause wasn’t possible. Everyone priored that the weird undescribed disease in the Fore lands was some nocebo sorcery-sickness. Vincent Zigas, the first actual doctor sent to work with the Fore, tried to placebo-effect them and failed miserably:

On the way, Apekono stopped at a hut and showed Zigas his first kuru victim. “On the ground in the far corner sat a woman of about thirty,” the doctor wrote. “She looked odd, not ill, rather emaciated, looking up with blank eyes with a mask-like expression. There was an occasional fine tremor of her head and trunk, as if she were shivering from cold, though the day was very warm.” It was almost exactly the tableau McArthur had witnessed in 1953. Zigas, though, was a doctor. He could do more than look—or so he thought: “I decided I might as well try my own variety of magic,” he remembered. He rubbed Sloan’s Liniment, a balm for sore muscles, on her and declared to her family and his guide: “The sorcerer has put a bad spirit inside the woman. I am going to burn this spirit so that it comes out of her and leaves her. You will not see the fire, but she will feel it. The bad spirit will leave her and she will not die.”

The lotion penetrated the woman’s skin and she writhed in pain. “Get up! Walk!” Zigas commanded theatrically. “The woman struggled feebly as if to rise, then, exhausted, started to tremble more violently, making a sound of foolish laughter, akin to a titter.” That evening Apekono asked Zigas not to try to cure any more kuru victims; “Don’t use your magic medicine anymore. It will not win our strong sorcery.”

This was a disaster. The Fore were so cooperative precisely because they hoped “Western magic” could conquer theirs. As it became clear it couldn’t, they turned hostile. The Australians had hoped to “modernize a Stone Age people”; now all their subjects were dropping dead before their eyes, from what they could only assume was a “hysterical reaction” to colonization itself.

So, to solve this, they needed a batshit insane American.


Carleton Gajdusek is one of the characters who dominates The Family That Couldn’t Sleep. He couldn’t not. You could put him in a car commercial and he’d dominate it.

Gajdusek was a physician with a rare, intense combination of science and practice. He was a romanticist, a field worker, and a lover of everything strange. He’d been an army doctor, a government conspiracy-cover-upper, and a postdoc under Linus Pauling who described his intent as “to straighten out Pauling’s ideas about proteins”. He hated civilization, in a slightly-to-Ted’s-centre sense, and was passionate about “primitives and isolates”. He jumped at the chance to work in Papua New Guinea; he planned to conduct a multi-site study on child development in such cultures, and relished the opportunity to live in a “primitive” environment himself.

He did all this so he could rape kids. Oh, he did it for the scientific curiosity and love of medicine, but he also did it so he could rape kids. Gajdusek was a pedophile in the actual-lifelong-exclusive-paraphilia sense, as opposed to the “metonym for child molester” sense. Some people who roll snake-eyes on the Sexuality Dice repress it, but some are perfectly happy to act on it; Gajdusek was #2 in its fullest form, the kind of guy who believes that a well-lived life includes raping some kids. DTM doesn’t shy from this, not for a moment. It’s the first thing he tells you about Gajdusek. It couldn’t not be; you couldn’t talk about why he went to PNG otherwise.

When Gajdusek landed in PNG, he first found the place too civilized. He’d been promised a land of “cannibal savages” – where were they? After some traipsing, he found them, right where he was promised. The Fore were perfect for Gajdusek. They had some kind of medical mystery that’d been lost on everyone else. They ate each other, in exactly the way he loved detailing in his diaries (“Women and children, particularly, partake of the human flesh,” he noted with pleasure). As kuru cases popped up, he aggressively recorded them. He wrote lovingly detailed notes that he sent back to his Australian advisor. He wrote with intensity, with exclamation marks, with the joie de vivre of a man just where he wanted to be. Gajdusek smothered the Fore with ‘cures’ that never worked, but they didn’t get angry at him. As DTM dryly puts it: “Their children trusted him, and that was enough for them.”

At some point, someone suggested sending an anthropologist...or an epidemiologist...or literally anyone with more credentials than Gajdusek and Zigas3. Gajdusek threw a shitfit, convinced this one-and-a-half-man team was enough to Solve The Problem Forever. But he got bored eventually – running off with another tribe with, as his diary notes at length, an apparent custom of youths ritually fellating older men – and Zigas, I dunno, the book neglects him a bit here. So they managed to sneak in some anthropologists.

The husband-and-wife team of Robert Glasse and Shirley Lindenbaum4 were the first involved parties to give a shit about the Fore as people, rather than as colonial subjects/medical mysteries/walking sex toys. What they uncovered was fascinating. The Fore were cannibals, yes, but they were recent cannibals.

They didn’t have an ancient tradition of eating their dead, like the other visitors assumed. They happened to be in contact with some cannibal groups, and after a Fore man died of “sorcery”, they thought: well, what would happen if we ate him? “People tasting it expressed their approval. ‘This is sweet,’ they said. ‘What is the matter with us, are we mad? Here is good food and we have neglected to eat it.’” If not for the wild coincidence that the first Fore cannibalism victim had a prion disease, kuru would never have existed.

Glasse and Lindenbaum started to put together the pieces. They’d been sent down to rule out a genetic explanation – to track the kinship ties of the Fore and see how the disease ran through families. It didn’t run through families in any coherent sense, but it sure did run through cannibalism. The clincher was the age distribution. The Fore, ever enthused by colonialism, quit eating each other as soon as the Australians arrived. Children stopped dying of kuru shortly after; they simply weren’t exposed to the infectious agent.

The couple sent the news to Gajdusek, who was off raping kids somewhere else. In the next part of the book, DTM runs through Gajdusek’s many conjectures of kuru’s cause – more like sketches or abstract paintings than like true hypotheses. Gajdusek was annoyed that someone else was doing something he “totally could’ve done”, and even more annoyed that another lab was running similar experiments – an attempt at a vaccine for a particular sheep disease had accidentally created a prion generator. But he was happy to swoop in and claim the credit for what he was starting to think of as “slow viruses”, an infection that somehow lays dormant for years.

DTM portrays Gajdusek perfectly, in that “real life has no need for verisimilitude” way. Gajdusek was at once a brilliant man, an all-consuming narcissist, an entertaining character, and a monster beyond redemption. A lesser book might pick one or two. The Family That Couldn’t Sleep portrays him as all four, and on a personality level (as opposed to a scientific one), the Gajdusek-focused parts are some of the most gripping.

---------------------------------------------------------

Outside of the jumps between the Venetian family and everything else, The Family That Couldn’t Sleep is not siloed. The narratives of all prion diseases are deeply intertwined. This is what makes it a great book. It’s 300 pages of dramatic irony. You read the whole thing, waiting for the eureka moment – the point everyone realizes they’re looking at the same cause. It does, however, make it a tad difficult to review or synopsize. The book’s story is so weird – and, often, so at odds with conventional wisdom that trickles down about the Fore et al – that you have to recap quite a bit, and the book steadfastly resists recapping.

The next couple chapters after we depart from Gajdusek’s credit-claiming are mostly about experiments with various prion diseases. They’re scientifically fascinating. Unlike some medical-books-for-general-audiences (cough, How Not to Study a Disease), DTM never talks down to the reader. He assumes someone reading a 300-page book about prions is smart and wants to learn about prions. He also has – you can feel it in his words – the agonizing experience of spending his life on the other side of the doctor’s desk, trying to beat into whoever he’s talking to that no, seriously, you don’t need to lie to him or try to explain a complex disease at a fourth-grade level.

The first prion disease studied was scrapie. Scrapie was a big deal – it starved and killed large shares of British sheep flocks, making it a serious economic problem. Veterinary researchers had tried to prevent or cure it for centuries. It was a veritable graveyard of ambitions:

Quintessential was D. R. Wilson at the Moredun Institute in Scotland, who worked in the middle of the last century for more than a decade trying, with mounting frustration, to kill the scrapie agent. He found that it survived desiccation; dosing with chloroform, phenol, and formalin; ultraviolet light; and cooking at 100 degrees centigrade for thirty minutes. The scrapie researcher Alan Dickinson told me he remembered Wilson at the end of his career as “very, very, very quiet. Of course, that was after his breakdown.”

“Now it is our turn to study prions. Perhaps we should approach the subject cautiously.”

The problem, as DTM explains, is that prion diseases were impossible. They violated 20th-century understandings of biology. Proteins “were no more alive, and no more infectious, than bone”. Prion diseases seemed to have too many causes – genetic, infectious, and sporadic. They looked infection-like in some ways, but patients didn’t produce virus antibodies. Sheep exposed to scrapie, or chimps infected with kuru, took years to develop symptoms. Their facts did not fit together.

In the 1960s, people started wondering. The unifying trait of prion agents was that they had to be denatured to be destroyed. Was this a particularly small virus defined by its protein coating? Or – even more outre – was it pure protein, no DNA at all? No one could figure out quite how the latter worked, but it was tempting. Gajdusek, by now a major figure in this field, kept a foot in both worlds. He didn’t want to stake his reputation on a no-DNA hypothesis, but he certainly sympathized.

Enter Prusiner. Stanley Prusiner was Gajdusek’s counterpart. Where Gajdusek seemed permanently manic, Prusiner was deliberate and exacting. He entered Gajdusek’s “slow viruses” field in the early 1970s after a chance encounter with a CJD patient. He relished the laboratory in a way Gajdusek didn’t at all, and set out to optimize the hell out of his projects.

Prusiner set out to isolate the smallest infectious particle in the scrapie agent. He injected tons of hamsters (hamsters got sick faster than mice) with increasingly tiny scrapie proteins, hoping to determine whether the Minimum Viable Scrapie was DNA. By the mid-1980s, he’d produced something so small it couldn’t possibly be a virus. Denaturing it destroyed it; exposing it to nucleic acid dissolvers actually made it stronger.

Emboldened by this discovery, Prusiner set out to anoint himself the King of Prions. Here emerges something of a Voldemort-Umbridge distinction – the difference between cartoonish villainy and banal evil. Gajdusek is a bad guy because he rapes kids. Prusiner is a bad guy because he is the most grotesque stereotype of the Advisor/Peer Reviewer from Hell made flesh. Everything Prusiner did was to build his reputation atop a pile of skulls. When recruited as a peer reviewer for other prion papers, he wrote negative reviews to undermine their authors. He worked his grad students to the bone and intentionally destroyed their careers, telling them he’d “ruin them” if they entered prion research as competitors. He lied about the origin of the protein-only hypothesis, claiming he originated it a decade after it was actually conjectured. But hey, he was good at getting grants.

I was surprised reading a lot of this, because for all the time I’ve been aware of it, the cause of prion disease has seemed settled. “Oh yeah, it’s a protein that gets all fucked up.” But DTM goes through just how unsettled it was right up through to The Family That Couldn’t Sleep’s publication. Serious confirmation only arrived a couple years later. Many people were deeply critical of the prion hypothesis – often, it seemed, because they loathed Prusiner too much to go along. Throughout the book, he cuts an uncharismatic figure.

Gajdusek and Prusiner both won the Nobel for discovering prions, decades apart. This tells you something – the “discovery” of prions can be construed quite a few ways. Gajdusek formulated the hypothesis; Prusiner proved it. Gajdusek was grievously offended by Prusiner’s Nobel, perceiving his rival – not inaccurately – as a follower who never originated any ideas of his own. But Gajdusek was offended from a federal prison cell, so how’d that work out for him?


Fascinating as all this is, no one published a book about prions in the mid-2000s because it was about kuru or FFI. They published books about prions because teenagers were dying, and people wanted to know why.

DTM sows the seeds for part 3 – the mad cow section – in part 1. This is a discussion of scrapie, the longstanding prion disease of sheep. Scrapie was a medical mystery for centuries (remember poor D. R. Wilson), precisely because of the intuitive implausibility of prions. The scrapie chapter is a great history-of-science piece, covering the agricultural productivity revolutions of the 18th century, the surfeit of bizarre origin theories veterinarians concocted, and the treatments that never worked.

Scrapie is not transmissible to humans – well, we hope. It’s concerningly transmissible to primates. But it’s been around for a long, long time, and it doesn’t epidemiologically look like humans get it...we hope. Anyway, you ever tried to generalize from one example?

The British government did! In the mid-1980s, strange reports started coming out of the UK’s farms. Farmers were describing a new disease where dairy cows – incredibly docile creatures, under normal circumstances – turned hostile, kicking them as they went into the milking stalls. The symptoms looked to all the world like scrapie. Epidemiologists tracing the outbreaks found a unifying link with “cake” – animal protein feed sweetened with molasses. The scrapie-like symptoms must have traced to an infected sheep. But scrapie doesn’t transmit to humans, so it must be okay to keep slaughtering them, right?

We all know how this ended.

The best term for the British response to the mad cow outbreak is “cacklingly evil conspiracy”. The agricultural industry really, really didn’t need a huge zoonotic outbreak – so it decided it didn’t have one. They first suppressed all mentions that the disease looked like scrapie, then – when this became impossible – hyped up that scrapie doesn’t transmit to humans, so there’s nothing to worry about. The formal name of the disease, “bovine spongiform encephalopathy”, was supposedly chosen to optimize for unfamiliarity – it wouldn’t fit well in a headline. They emphasized, extensively, that there was nothing to worry about. Ever.

At some point, people started asking questions. If there was nothing to worry about, why was the agricultural industry panicking so hard? As things became ever more worry-inducing, this turned down ludicrously twisting paths:

Meanwhile, the Southwood Working Party and the experts who advised it were learning on the job. They learned, for instance, that the BSE agent entered the animal through the mouth and then followed the digestive tract into the organs that try to filter out infections—the tonsils, the guts, and the spleen—and from there traveled into the peripheral and central nervous system, and finally arrived at the brain. They also learned that pasties, meat pies, and even some baby foods contained tissues from a lot of those organs. So the Southwood Working Party recommended banning these organs, but only from baby food. This started a chain reaction of consumer doubt: if infected cow organs were unsafe for babies, how could they be good for adults? The government then banned offal, as the organs were collectively called, in all human food but gave the industry a grace period to get it out of the feed supply. Then pet food manufacturers began to wonder if what drove cows mad might not also drive dogs, cats, and parrots mad. The feed they sold came from concentrate made of the same sick animals that had previously made up the meat and bone meal farmers used. Their trade group decided to put a similar ban in place—immediately. So for five months it was safer to be a dog than a human in Britain.

DTM spends pretty much this whole section of the book making fun of the British government. To be fair, they deserved it. They killed hundreds of kids in agonizing and preventable ways – they could take some ribbing.

All of this spans the mid-1980s to the early-to-mid 1990s. Through this period, it wasn’t yet clear that mad cow could spread to humans. The panic was clear, and deserved, but it didn’t yet have a match for its powder keg.

It would ignite. The first suspected case of vCJD – human mad cow – was in 1994. Fifteen-year-old Vicky Rimmer developed a sudden, strange disease. Doctors gave her months to live...until she died in 1998. A couple other suspected cases trickled in through the mid-90s, including a young man who made meat pies for a living, whose grieving mother received a letter from the Prime Minister that “humans do NOT get mad cow disease”. (That must’ve been fun.)

Soon, they couldn’t deny it any longer.

On March 20, 1996, Stephen Dorrell, the health secretary, stood up in Parliament to announce the news that had already appeared as a tentative conclusion in scientific journals and as rumor in newspapers for the previous two years: British beef was killing British teenagers. The first confirmed death was that of Stephen Churchill, a nineteen-year-old student from Wiltshire, who died in May 1995. Back in 1989, at the Southwood Working Party’s suggestion, the government had set up a surveillance unit in Edinburgh to watch for any evidence that BSE had crossed to humans. One worry had been that if BSE passed to humans, how would anyone know it? How would you recognize something you had never seen? It turned out to be easy: Churchill and the nine other teenagers who had gotten sick had spectacular amyloid plaques in their brains, chunks of dead protein almost visible to the naked eye. If sporadic CJD was a whisper, BSE-caused prion disease was a shout. The investigators sat open-mouthed looking at slides whose damage, they feared, portended the most severe epidemic in modern British history.

This part of the book is not fun. It lacks the insane personalities and duelling careers of the other entries. It is an honest chronology of the vCJD epidemic – a gruesome failure of the agricultural industry, the one system that everyone is vulnerable to. The government and industry had completely violated their duty of care to citizens and consumers. They were paying the price. No one would buy British beef anymore – not while they watched their children die.

Now here’s the thing: this is ethnography, not historiography. The Family That Couldn’t Sleep is a book from the mid-2000s. The epidemic was not at all in the rear view mirror. There were piles of unanswered questions that DTM constantly alludes to. We have eighteen years more hindsight than he did then. What do we know now?

---------------------------------------------------------

In 2006, the vCJD epidemic looked like it was going to be a lot better than the worst fears. BSE itself was a huge problem for the cattle industry, but honestly, no one is too sympathetic to the cattle industry. People were not going to die in anywhere near the numbers believed. We had all sorts of reassuring data coming out about this, which DTM chronicles. We were learning that only some genotypes seemed susceptible to vCJD. We didn’t see any older people die of the disease. We were seeing numbers drop, such that vCJD must have a pretty short incubation period.

Anyway, all of this is wrong!

The Family That Couldn’t Sleep was written in the candidate gene era. Back then, the nascent field of human genetics was sure it was about to Solve Polygenism. Yes, the simple Mendelian monogenic patterns popular a few decades back clearly didn’t apply to common diseases, but how many variants could there be? We were about to discover the five genes influencing 20% of Alzheimer’s risk each, the five genes influencing 20% of heart disease risk each, etc., and once we were done we’d just do gene therapy and cure Alzheimer’s. A paper on autism genetics from 1999 was so outre as to speculate there might be as many as fifteen genes involved. The fact we are now using the term “omnigenic model” should tell you roughly how well this worked out.

Do you remember SNPedia? If you were a 2014 Slate Star Codex reader, you might. 2014 was still pretty candidate gene. People were out there publishing papers saying a single variant could increase your life expectancy by 15 years. SNPedia was a site that beautifully categorized all of these, so you could do 23andme or whatever, look up your results on SNPedia, and make horrible life choices.[5] It was eventually bought out by one of the consumer DNA companies, so no one ever edited it again, making it a great time capsule of early-mid 2010s behavioural/medical genetics takes.

SNPedia will excitedly explain to you that common genetic variants make you immune to vCJD. They cite a 2009 post from the now-archived 23andme blog titled “No Good Evidence That Potential Pool of Mad Cow Disease Victims Is Expanding”, explaining how fears of late-onset vCJD are clearly debunked by new Scientific Knowledge. Everyone who developed vCJD in the 1990s and 2000s had an M/M genotype at a particular position in the PRNP prion gene, so the roughly half of the population with M/V or V/V genotypes were immune.
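
(A quick aside on where “roughly half” comes from: codon 129 genotype frequencies fall out of the M-allele frequency in the usual Hardy-Weinberg way. The sketch below is mine, not the book’s or SNPedia’s, and the allele frequency in it is an assumed ballpark figure for European populations.)

```python
# Hardy-Weinberg sketch of PRNP codon 129 genotype frequencies.
# The M-allele frequency is an assumed ballpark value for European
# populations, not a figure taken from the book or from SNPedia.
p_M = 0.65          # assumed frequency of the methionine (M) allele
p_V = 1 - p_M       # valine (V) allele

genotypes = {
    "M/M": p_M ** 2,        # the only genotype seen in 1990s-2000s confirmed vCJD cases
    "M/V": 2 * p_M * p_V,
    "V/V": p_V ** 2,
}
for genotype, freq in genotypes.items():
    print(f"{genotype}: {freq:.0%}")
# Prints roughly M/M: 42%, M/V: 46%, V/V: 12% – hence "roughly half"
# of the population being written off as not at risk.
```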

The Family That Couldn’t Sleep buys this, too. In fact, it buys it in an even more agonizingly 2000s way. The first sign that transmissible prion diseases weren’t genotype-restricted should’ve been the growth hormone kids. You might have heard this story – from the late 1950s through the mid-1980s, human growth hormone extracted from cadaver pituitary glands was used as a treatment for pituitary dwarfism, until it turned out to spread CJD if the donor was infected.

DTM discusses this, to set the scene for the genetics thing. He mentions what was the state of the art at the time – that a disproportionate share of both the growth hormone kids and sporadic CJD cases were V/V homozygotes. This, uh – so the book was written in the mid-2000s, yeah?

Yeah.

The conclusion DTM drew – and this was a common conclusion at the time – was that homozygosity somehow made you more vulnerable to CJD, and M/M homozygosity made you vulnerable to BSE-borne CJD in particular. We cannot criticise the author for not predicting the future, but we live in the future, and can say how this worked out. Turns out, nope, M/V heterozygotes totally get vCJD. A British man in his 30s who died of CJD in 2016 was found to have vCJD and an M/V genotype. He was tested for vCJD only because he was exceptionally young for someone with a sporadic prion disease – meaning people developing it later in life would be missed.[6]

Did you know up to 1 in 2000 people in the UK have latent vCJD?

There is one line in The Family That Couldn’t Sleep that stopped me dead in my tracks when I read it:

What happens to the Italian family in the end depends less on their own actions than on the world’s interest in prion diseases, which they cannot control. If lots of people are afraid of getting variant CJD, the family benefits. If fear of prion disease goes the way of the fear of swine flu or Ebola, then they will be orphaned again.

THIS BOOK IS FROM 2006! Three years before the swine flu pandemic! Eight years before the Ebola epidemic! “If you’re looking for a sign, this is it.”

---------------------------------------------------------

The last section of The Family That Couldn’t Sleep addresses BSE fears in America and a nascent internet subculture DTM calls “Creutzfeldt Jakobins” – people who track American CJD cases, trying to spot vCJD patterns. When reading his description of the Creutzfeldt Jakobins, my mind constantly, uncontrollably turned to covid. Here it was – an online community of people deeply skeptical about a disease’s official story, tracking every contradiction, every implausibility, every statistic that failed to apply to the individual. Self-described “redneck hippies” and “soccer mom Republicans” teaming up to find the truth hidden behind an impossible world. You know what they’re doing now.

I’ve always combined a deep interest in medicine with a healthy distrust for it. People who are constitutionally inquisitive, anti-authoritarian, and suspicious about official narratives tend to end up skeptical of at least some mainstream claims in the field. This is not to say I think you should take bleach enemas or something, just that I understand the impulse behind concluding the US government was covering up a local vCJD wave.

Traditionally, sporadic prion diseases are said to strike about one in a million people per year. (Hold on to that for a second.) The last section of the book is a chronology of Americans finding bizarrely more than one in a million of their friends dying of sporadic CJD, often at inexplicably young ages, sometimes in geographical clusters. This is understandably suspicious. Then DTM goes on to reassure us by saying none of these cases were confirmed to have an M/M genotype, which OH GOD OH FUCK

A number of high-profile people in the prion world, including Gajdusek, are identified as not believing sporadic prion diseases exist. You get the impression DTM doesn’t, either. Now, how common are prion diseases?

Eric Vallabh Minikel has an answer for you! Eric and his wife Sonia are prion researchers with an unusual background – after Sonia was found to carry a single-gene mutation with ~100% penetrance for prion disease, they left their previous jobs to dedicate their lives to curing it. It turns out, when you run the numbers, you get not one in a million but roughly 1 in 5000 people dying of prion diseases.

This is best described as “nightmarishly high”. I’m normed on genetic disorders. A genetic disorder that affects one in five thousand people is pretty common! I have known, in person, completely unselected, just from “random people I’ve met in my life in a non-medical context”, someone with a ~1/250k syndrome and someone with a ~1/50k-100k syndrome. I don’t think anyone in my extended family knows someone who died of a prion disease. I feel like it would’ve come up if they did!
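For those who want to see the arithmetic behind that 1-in-5000 figure, here is a minimal back-of-the-envelope sketch. The gap between “one in a million” and “1 in 5000” is mostly the gap between an annual rate and a lifetime risk; every number below is my own illustrative assumption, not a figure from Minikel or from DTM.

```python
# Back-of-the-envelope: how a "one in a million per year" disease can still
# claim roughly 1 in 5000 people. All numbers are illustrative assumptions.

annual_incidence = 1.2e-6    # assumed sporadic CJD cases per person per year
years_at_risk = 80           # assumed life expectancy, roughly

# Naive lifetime risk if that incidence applied flatly across a lifetime:
lifetime_risk = annual_incidence * years_at_risk
print(f"naive lifetime risk: ~1 in {1 / lifetime_risk:,.0f}")

# The "what fraction of deaths" framing: prion deaths over all deaths.
prion_deaths_per_year = 500          # assumed, order of magnitude (US-scale)
total_deaths_per_year = 2_800_000    # assumed, order of magnitude (US-scale)
print(f"share of all deaths: ~1 in {total_deaths_per_year / prion_deaths_per_year:,.0f}")
```

Neither calculation is exact – incidence is not flat across a lifetime, and surveillance keeps finding more cases – but the order of magnitude is the point: “one in a million” was never the same claim as “one in a million of us will die of this”.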

Prion diseases have distinctive phenotypes. Not distinctive enough, apparently, to avoid a lot of CJD being misdiagnosed as Alzheimer’s – but the amount of misdiagnosis is consistently insane. Something DTM reiterates throughout The Family That Couldn’t Sleep is just what prion dementia looks like. The characteristic dementia in prion diseases spares something – “self” or “recognition” or “reflection” – that is not spared by Alzheimer’s, or by most common dementias. Shouldn’t this be, uh, noticeable?[7] They kill rapidly, often over the course of months, and often begin in midlife. ALS shares this pattern and is way, way more common than prion diseases; you hear about ALS far more in the “disorder people actually have” sense. What am I missing here?

Anyway: 1 in 2000 prevalence of latent vCJD in the UK + extreme lack of clarity over whether scrapie is human-transmissible + blood transfusions known to spread vCJD + sporadic CJD incidence that keeps going up = ???

(Yes, I am annoyed that most countries have lifted their ban on UK blood donors, thank you for asking!)

---------------------------------------------------------

But back to the book. The “American chapter” is one-third about the country’s response to vCJD, one-third about the Creutzfeldt Jakobins, and one-third about chronic wasting disease. The last part is the most interesting.

Chronic wasting disease is a prion disease of deer. Like scrapie, it “probably, we hope” isn’t human-transmissible (eat venison at your own risk). Under natural circumstances, deer shouldn’t get prion diseases:

A prion plague should not be possible among ruminants in the wild. Deer are not cannibals, as the cows that spread BSE were forced to be; and, because deer and elk are not domesticated, they do not have enough contact with one another to spread a prion infection the way sheep are thought to spread scrapie. But deer do not live as they used to live, humans having once again brought their ambitions to bear on the natural course of things.

The Family That Couldn’t Sleep is a book of medical anthropology. Anthropology of the Veneto, anthropology of Papua New Guinea, anthropology of 1990s Britain. Here, it is an anthropology of America. Americans, having won the world, still fight to win their own backyard. The North American continent is geographically diverse, cutting through rain-snow-shine, mountains jutting over plains, cities sprawling into wilderness, habitations criss-cross dotted with surprisingly few empty zones. Go somewhere like Denver, the Mile High City, three million people fighting against nature. Few other countries have anything like this; geographically vast polities usually have uninhabitable blocks. Australians are twenty-five million people clustered against the shore. It still surprises me, after all this time, how every US state has a meaningful city.[8]

Midcentury Denver, growing and sprawling out across its mountains, started to run into their natural inhabitants – deer.

Starvation is one way nature adjusts the deer population to the available food supply. People did not usually see this process, but in the 1950s and 1960s Colorado became more densely settled, reducing forested areas and forcing deer to look longer and harder for food. At the same time, the state enacted conservation laws, limiting when and where hunters could shoot. Soon emaciated deer began wandering onto the lawns and through suburban streets looking for a meal. People began to feed them, only to find that they died anyway. They would drop dead by haystacks, along highways, and in flower beds.

In the late 1960s, a young biologist named Gene Schoonveld tried to figure out why the deer starved even when they were fed.[9] He deprived some deer of food for a while; “[h]e cut windows in their stomachs to see what went on inside, and then he began to feed them”. While this was going on, he had a control group of healthy, well-fed deer as backups in case anything went wrong. It did...but not to the experimental group.

The pen in which the deer were kept also housed sheep, which, it turned out, were scrapie carriers. The deer somehow acquired scrapie – there’s a huge unanswered question here, which DTM doesn’t address. How did they get scrapie? They didn’t eat the sheep, presumably. Did it somehow transmit from casual contact? This is not supposed to happen. And yet: the deer in the sheep pen started dying of a mysterious scrapie-like disease, one never reported before, that would go on to infect thousands.

These deer were released into the wild. Ten years later, the first reports of chronic wasting disease came out. The disease spread across deer and elk in the western half of the country. By the turn of the millennium, cases were exploding – and had lost all geographical restriction. DTM’s account runs up to 2005, at which point the disease was floating around upstate New York.

This kind of spread doesn’t track natural deer migration. That’s irrelevant, because nothing about CWD’s spread is natural. We shift gears into an anthropology of the American hunter. The hunter wants to shoot the most impressive buck, to bag himself one with as many “points” as possible – one whose antlers branch out most. A “ten-point buck” has five points on each antler:

[Photo: a ten-point buck. Original by Ric McArthur]

Nature doesn’t make enough bucks with perfectly symmetrical ten-point antlers. To fill the demand, the market had to step in. Thus was born the deer farm industry, which raises captive deer in better genetic and nutritional conditions than Nature permits, then ships them across the country so hunters who couldn’t get legit ten-point bucks get the taxidermy piece for their wall. These are controversial amongst hunters and illegal in numerous states – but the industry is big enough to spread CWD. (The kind of hunter who needs a deer shipped to his house is the kind of hunter who will fumble killing it.) Another problem is supplemental feeding – leaving out protein-enriched food for deer to eat. This produces “trophy class animals at an earlier age”, but again, what’s in that protein? (“It is much like feeding your cows 41 percent protein cottonseed cake during the winter to raise the protein level in the cow’s diet to a level that will maintain acceptable production”, says that article from 1991.)[10]

The book segues into a vignette. CWD was new in Wisconsin in the early 2000s, and the state’s Department of Natural Resources was optimistic it could eradicate it. In a state with a love of hunting, you could, in theory, recruit people to kill every single deer in a 400-square-mile area:

In many states, the state would have had to call out the National Guard for such an onslaught, but hunting is a passion in Wisconsin. Hunters shoot 450,000 deer every year, more than in any other state. “I’m looking for ardent hunters to help us, unless fear or their wives keep them away,” one DNR official told a Milwaukee magazine. The state extended the normal hunting season and waived the usual limit of one buck per hunter, and the hunters came out in force.

The whole affair was gruesome – one official called it “hunting for slob hunters”. If you’re trying to eradicate a prion disease, you can’t very well let people take the carcasses home to eat. Bodies piled up in control stations, decomposition mingling with bleach. The 2002 hunt established a base rate of 2% for chronic wasting disease in Wisconsin deer, with the most affected areas getting up to 10%. Further hunts in 2003, 2004, and 2005 spread to wider and wider areas – and didn’t move the needle one bit.

This is to say that CWD is quite a bit more common in the American deer population than BSE ever was in British cattle. It has also turned up in Norway and South Korea. Notably, Norway doesn’t allow the import of cervids, raising numerous questions about how it got there. There are no unambiguous cases of CWD transmission to humans, and in vivo/in vitro primate studies have mixed results. There sure are some unusually young hunters with sporadic CJD, though. But don’t worry, most of them aren’t M/M homozygotes!


There is an absolute ton going on in this book. I’ve had to skim over whole sections. Parts that couldn’t be easily slotted into a narrative review include:

  1. When Gajdusek was invited to a party at Prusiner’s house, he was horrified to find his rival had purchased hundreds of New Guinean statues – all with the genitals removed.

  2. Elio Lugaresi, the neurologist who clinically identified FFI, shipped his patient’s brain to his former student Pierluigi Gambetti. Gambetti at this point ran a neuropathology lab at Case Western Reserve University, in Cleveland. Lugaresi was absolutely bewildered as to why his student would leave Italy for Cleveland.

  3. The only time DTM ever saw Lugaresi upset was when Lugaresi took him out for dinner at a restaurant specializing in wine, and DTM had to tell him he didn’t drink it.

  4. The British agricultural minister attempted to force-feed his four-year-old daughter a burger as a photo-op. Later, a family friend’s daughter died of vCJD.

  5. BSE origin theories include: space dust, Gajdusek being a secret CIA operative who infected cows, CJD-infected human remains accidentally getting shipped to Britain as animal feed.

  6. George Glenner, an Alzheimer’s expert and collaborator of Prusiner’s, died of cardiac amyloidosis, where amyloid plaques (similar to those seen in the brain in both prion diseases and Alzheimer’s) build up in the heart. Several of their collaborators suspect he was somehow infected.

  7. Prusiner experimented with quinacrine, an antimalarial drug, as a potential CJD treatment. He gave a bunch of it to a young woman whose father heard about his research. It stopped her symptoms from progressing – and exploded her liver. “By now, more than three hundred prion disease sufferers have tried quinacrine and, according to Graham Steel, the head of the CJD Alliance in the United Kingdom, “they’re all dead.””

  8. There were internet fora full of people convinced they had vCJD. Minor details like “symptoms persisting for several years without decline” or “spontaneous recovery” were not treated as evidence against the diagnosis. Some claimed to be cured of vCJD, and shilled various alt-med solutions. “[T]he woman who said she’d been cured responded with accidental ambiguity, “I don’t believe my information gives false hope—at the moment, what else is there?””

“What else is there”, indeed.

I’ve also skimmed over most vignettes of the family. Some of these are real highlights, particularly the explication of Silvano’s illness – the man who brought FFI to medical attention after decades of misdiagnosis. It is impressively distinctive; his illness does not look much like anything else:

During good moments, Silvano could still read. He wore his glasses on the end of his nose. He ticked off the days on a pad so he wouldn’t get disoriented. He dressed in black silk pajamas with a pocket square and continued to receive visitors.

The nights were not so smooth. At night, Silvano dreamed, re-enacting memories of his old life, just as his sisters had. FFI strips its sufferers not just physically but psychologically. Silvano had always loved social life. He had even found the old crest of the Venetian doctor—black and red with a gold star—and hung it outside his bedroom. Now, in his dreams, Silvano carefully combed his hair, as if for a party. Once he saluted as if he were part of the changing guard at Buckingham Palace. He picked an orchid and offered it to the Queen of England.

During lucid moments, Silvano could laugh with Ignazio and Lisi over what was happening—he joked that the brain-sensor cap on his head made him look like Celestine V, and that, like the thirteenth-century pope, he wanted to renounce his crown—but such joking did not disguise his terror. Two months into his stay at Bologna, he was howling in the night, his arms and legs wrapped around themselves, the tiny pocket square still in place. In the last days of his life he lay in a twitchy, exhausted nothingness.

My thoughts turn back to DF, the FFI patient who swung between total lucidity and introverted serenity. Everyone was enraptured by Silvano’s illness precisely because he didn’t look “demented” in a traditional way, or even like a traditional dementia on fast-forward. This is supposed to be true of CJD as well – some kind of lucidity persisting unusually far into the illness, even at a stage that in any other disease would be marked by confusion and delirium.

So, let’s close on a question: is Alzheimer’s a prion disease?

Remember, earlier, that we mentioned how pituitary growth hormone treatment can spread CJD. Turns out – probably – that it can also spread Alzheimer’s. Tauopathies – which include Alzheimer’s – seem to be...prion-like, in some way. Prusiner has been banging this particular drum since the 1980s, but hey, that’s part of what “good at getting grants” means.

What is Alzheimer’s? No, seriously, that’s an important question. “Alzheimer’s disease” originally referred to a rare, early-onset dementia. It’s always been characterised by a specific neuropathology, but the value of “specific” here is, uh, not specific, and Alzheimer’s-like pathology is seen in surprisingly many healthy elderly people. The neuropathology of autosomal dominant Alzheimer’s is not always identical to that of the sporadic disease, nor is that of the dementia seen in Down syndrome (also, people with Down’s develop Alzheimer’s neuropathology 15+ years before symptom onset, and some people with Down’s never get dementia despite having the pathology). It’s tricky to avoid the conclusion that modern Alzheimer’s is a wastebasket diagnosis.

Does this mean there are diseases that can operate by the “genetic-infectious-sporadic” prion triad without being prion diseases per se? It’s possible to transmit cancer via organ donation, maternal-fetal transmission, or unhinged experiment; cancer can be genetic (e.g. BRCA, Li-Fraumeni) or sporadic (e.g. most of it). “Prion disease” is probably not the best way to think about cancer. It’s also apparent, when you look at it this way, that what we’re calling “sporadic” is not “you always get it by bad luck with no external influence”; there are unambiguously environmental factors in many cancer cases.

What I’m saying is that we need to redo the pituitary HGH experiments with more diseases. (My advisor has informed me not to say this.)

This is interesting to dwell on. DTM mentions early in the book that modern medicine owes its existence to a fluke of prionlessness:

In 1862, Louis Pasteur boiled broth to kill the microscopic life in it, put some of the liquid in a goose-necked flask, and showed that if nothing living ever reached the liquid, no life would ever grow there. The experiment had enormous practical impact; it gave doctors the knowledge they still use to save lives by showing that because infections were living, reproducing things, if you could keep an environment sterile, you could keep a patient healthy. Had infectious prions been in Pasteur’s flask, curative medicine would never have gotten started. Doctors would still be competing with shamans and medicasters.

Perhaps prion research owes its existence to a similar fluke. Prions are recognizably genetic-infectious-sporadic in a sense untrue of most diseases. But the mainstream take on prion infection is that it’s actually pretty tricky – animal prion research involves injecting the proteins directly into the brain, because anything else won’t work well enough. If we had tried harder to shoot prisoners full of cancer in the 1950s, would it have scooped the recognition of prions? What if Alzheimer’s had become a significant research area decades before it did in our world? (Maybe if a 1920s starlet rather than a 1940s one had an early-onset case.) It’s possible prions could never have been recognized as “unique”. We might still be running in circles with scrapie. We might – just might – have had a much bigger problem with BSE.

So goes the Land of Hypotheses. But we don’t live there, not in this life. In this life, we have a thousand unanswered questions. We can’t – yet – simulate all the universes where the prion question went different ways, where Alzheimer’s was a research focus in the 1950s, where we didn’t spend decades chasing the M/M homozygosity illusion, where the first Fore cannibalism victim died of something else, where prions were in Pasteur’s flask. But we can think, and dwell, and dream, and chase possibilities. The Family That Couldn’t Sleep is a great book, precisely because it inspires possibilities. Even the ways it’s become wrong with time are usefully wrong – illustrative, if you will.

“If prion fears go the way of swine flu or Ebola”, indeed.

[1] Fatal familial insomnia is an autosomal dominant disorder. This implies that if the doctor had FFI, his sibling must have also had it, tracing the origin of the gene another generation back – a mutation in one of their parents. The book doesn’t really grapple with this; it seems that whoever the mutation really originated in must’ve died of unrelated causes before developing FFI, which is common in autosomal dominant neurodegenerative disorders and makes it a real pain to construct family histories.

[2] I think DTM’s explanation is erroneous. He identifies his disorder as resembling “a form of CMT caused by a mutation on Chromosome 21”, then goes on to describe the specific gene affected. That particular gene is on 8p21, so...understandable game of telephone. There is a reported form of CMT caused by a mutation on 21q22, but it seems to have been first described in 2019.

[3] Despite the rhetorical description of Zigas as an “actual doctor”, quite a few people – including Gajdusek – were sure he was lying about having a medical degree.

[4] They divorced at some point in the 1960s, and Shirley reverted to her maiden name. The book refers to them as “the Glasses” and uses the same surname for both, but most sources write Lindenbaum.

[5] Draw your preferred parallels to modern companies offering polygenic embryo screening.

[6] Realistically, this wasn’t the first case. However, the man in 2009 with vCJD and an M/V genotype wasn’t confirmed at postmortem, so the medical community didn’t accept it until 2016.

[7] Gerstmann-Sträussler-Scheinker disease is apparently more Alzheimer’s-like, in that DTM says the misdiagnosis is particularly common this way (I get the impression the Vallabh variant is phenotypically most GSS-like, from what I’ve read of Eric and Sonia’s work). GSS is really rare – or really underdescribed, one of those – and it’s hard to find good detailed case descriptions, the kind you’d need to compare it to FFI or CJD on this axis.

[8] Let’s pretend for rhetorical purposes that somewhere like Cheyenne, Wyoming is a meaningful city. The metro area is 100k people – it’s meaningful enough. The equivalent spot in Australia has a population of “no one”.

[9] With the benefit of hindsight, this is known as refeeding syndrome. If someone is deprived of food for long enough, they can’t instantly return to a normal diet. The best-case scenario is that your digestive system is very unhappy with you for a while; the worst case is sudden death.

[10] DTM refers to this as Quality Deer Management, but I think he’s wrong? QDM seems to be a particular attitude towards hunting that avoids shooting young bucks to optimize antler development, while shooting more does than a pure “kill all the cool-looking ones” strategy would, to avoid overpopulation. You can do QDM with or without supplemental feeding. I might be wrong – I know very little about deer hunting.




The MTV News Archive Is Gone—and That's OK

[Image: MTV logo | Illustration: Lex Villena]

My first ever job in journalism was at something called Hollywood Crush, a young Hollywood news and gossip site that was part of the larger MTV News ecosystem. 

Although the MTV brand still had a certain cool-kid cachet, left over from a time before it became associated primarily with teen reality shows, I was not exactly pounding the pavement for the groundbreaking stories that would change the world. By the time I left—laid off in one of those periodic mass purges that were as much a hallmark of 2010s journalism as the Buzzfeed-style listicle—my greatest contribution to the discourse was a week-long dragging on Twitter by outraged Disney adults, who didn't like a joke I'd made about casting Vanessa Hudgens in the upcoming live-action reboot of Mulan.

But if my early journalistic efforts were not cosmically significant, they were nevertheless real. When Twilight took the country by storm; when Jennifer Lawrence fell down at the Oscars; when one of the Jonas brothers had a messy breakup—I was there, laptop at the ready, documenting it all.

…Or was I? 

Alas, all evidence of my early career has now been stricken from the record. Last month, the MTV News site vanished in its entirety from the internet, and with it every last article, interview, and top-ten list in GIFs produced by its journalists over the course of nearly three decades. 

Granted, the most iconic content still survives elsewhere: A clip of O.G. MTV newsman Kurt Loder breaking the news of Kurt Cobain's suicide, for instance, remains available on YouTube. But for those of us whose beat was, shall we say, less crucial to the public discourse, years of our professional output have disappeared down the memory hole, lost in time, like tears in rain. 

Many of my former colleagues were dismayed by this, and I understand why: Imagine seeing an entire decade of your professional output callously erased in an instant, just so some corporate overseer can save a few pennies on server space. Sites like the Internet Archive, excellent as they are, still cannot catalog everything; some of these articles are well and truly gone. 

Nevertheless, I've been unable to muster the same righteous indignation, as if this were an unimaginable loss. So much of what we—what I—produced was utterly frivolous and intentionally disposable, in a way that certain types of journalism have always been. The listicles and clickbait of early aughts culture may differ in many ways from the penny press tabloids of the 1800s, but in this, they are the same: They are meant to be thrown away. 

While some of MTV's old archives still survive, they can be difficult to unearth unless you have the precise URL where an article once lived. I was able to find some of my own old stuff preserved on an archived version of my MTV News author page, but only after an hour of scrolling through old versions of the site that resembled one of the crumbling dreamscapes from Inception, all dead links and broken images and blocks of HTML in a state of terminal decay. 

Here's the thing: Once I did, my first thought was to wonder why I'd bothered. 

As it turns out, there was very little gold in them thar hills of the 2010s media landscape. My work at MTV appears to have consisted mainly of clickbaity blog posts with titles like "Ben Affleck Seems To Have Gotten a Giant Divorce Tattoo," or "Game Of Thrones Has Spawned a Giant Gingerbread Landscape," or (and I swear I am not making this up) "7 Pic Pairs That Prove AnnaSophia Robb Is a Kitten Disguised As a Human Being." 

This is how it was, in a journalistic paradigm that favored quantity—and virality—over quality. 

If the media today is in existential crisis, this was arguably the moment when that crisis began. There had never been more people competing for fewer scraps, to the point where just getting paid to write was, itself, a coup of sorts. (A running joke amongst journalists at the time was how many so-called writing jobs came with no money at all. Instead, you were told, you would be paid in "exposure.") 

The magazines that used to pay $2 per word had collapsed en masse; so had the local news outlets, with the shoe-leather reporting jobs that had launched the careers of journalists in previous generations. Every outlet was trying to do more with less, which invariably meant less reporting, and more opinion, the latter being comparatively cheap to produce. 

For an enterprising writer, the most remunerative option was to turn into a one-man content machine, churning out ditzy blog posts and aggregated news stories for tens of dollars at a time. If you did enough of these, you could almost make a living, which may be why so many outlets made writers meet quotas that, as I recall, could be as much as 20 posts per day. 

This is not to underrate the distressing impact of MTV News being wiped from existence, especially for writers whose old articles might have been valuable for reasons beyond their contribution to the discourse. Having a portfolio of clips, even very stupid ones, is, after all, how we get work. But how reasonable was it to think those archives would live forever? 

Long before the era of the news aggregator or the listicle in GIFs, "yesterday's news" was a euphemism for worthlessness for a reason, and disposability was built into the physical medium. Yesterday's newspaper was what you used to line a birdcage, or build a fire, or stuff your shoes to keep them from losing shape. Sure, the occasional paper might make it into a library archive, or onto a microfiche spool, but how diligent were those archival efforts? And how often has anyone even bothered to look at them since? 

But to acknowledge that digital media is just as disposable as its physical counterpart isn't just a blow to the egos of the people who make it. It also cuts hard against the common wisdom that the internet is forever—or indeed, that what is publicly posted online is not just permanent but important. This has long been a crucial subtext to the punitive culture of cancellation: To ruin someone's life over a ten-year-old tweet requires the conviction that said tweet is not just some ephemeral sentiment, but a personal artifact, capturing an essential truth about the character of the person who made it. 

Maybe it's better, actually, that this illusion of permanence be shattered. Maybe so much internet chatter—including what passed for journalism in the era of news being aggregated rather than reported—is less like a precious historical artifact, and more like the eighth-grade burn book that molders forgotten on a shelf in your childhood bedroom until your parents throw it away.

And maybe, when it comes to something like the archival content of MTV News, we can rely on a truth that predates the internet: that things worth preserving tend to be preserved, if not through the ubiquity of the mass market then through the discernment of individual people. Sometimes, even when doing so is against the law.

Humans have always had the instinct—even a sort of sixth sense—to save things, whether it's an unpublished manuscript, a bootleg recording, a sheet of newsprint, or an old magazine with a particularly interesting story. This is true of digital content, too: The best articles of the internet era have a way of taking root and replicating, even if the publication where they originally appeared goes bust. They live on in archived snapshots, in forum discussions, on college syllabi, in PDFs printed out, or posted to Listservs. (Well, most of them do; my painstaking curation of kittens who look like actress AnnaSophia Robb somehow slipped through the cracks, but trust me, the resemblance was uncanny.)

It's kind of nice, actually. In an era where so much content is curated by algorithm, and where our archivists are as likely to be AI as human, the stories with the most staying power are still the ones some person, somewhere, thought were worth remembering. 

And the rest? Maybe it's not just forgettable, but also best forgotten.

The post The MTV News Archive Is Gone—and That's OK appeared first on Reason.com.
