Mark Dery is guest blogger du jour until August 17. He is the author of Culture Jamming, Flame Wars, Escape Velocity, and The Pyrotechnic Insanitarium. He’s at work on The Pathological Sublime, a philosophical investigation into the paradox of horrible beauty and the politics of “just looking.”
In February 2009, I approached Steven Pinker, a deep thinker about linguistics and cognitive science who fishes where the two streams flow together, with a request for an interview. I was on assignment for the cultural studies journal Cabinet, writing a personal essay that would intertwine my own fraught relationship to the notion of intelligence with a historically informed critique of the cultural politics of the IQ test, specifically the Stanford-Binet and its successor the Wechsler.
A professor in the Department of Psychology at Harvard University (until 2003, he taught in the Department of Brain and Cognitive Sciences at MIT), Pinker has popularized his theories of language and cognition through articles in the popular press and via critically acclaimed books such as The Language Instinct, How the Mind Works, Words and Rules, and The Stuff of Thought: Language as a Window into Human Nature. The furthest thing from a vulgar Darwinian—he rejects the term “genetic determinism” as a social-constructionist slur—Pinker is nonetheless a vigorous opponent of what he contends is the ideologically inspired insistence (often from the academic left, he maintains, and typically from those in the humanities rather than the hard sciences) that we are exclusively products of cultural influences, rather than, as he puts it, “an evolutionarily shaped human nature.” In his popular critique of this assumption, The Blank Slate, he takes up the sword for evolutionary psychology, behavioral genetics, and cognitive science against social constructionism.
Exhaustively knowledgeable about the science of cognition, and a foeman who gives as good as he gets (if not better) in the nature-versus-nurture culture wars, Pinker seemed the perfect foil for some of my ideas about the IQ test. Thus, I was delighted when he agreed to an informal e-mail exchange that lasted through much of February and into early March. I was equally chagrined when I had to inform him that his thoughtfully considered, sharply argued quotes didn’t make it into my published essay. Happily, my guest-blogger stint offered the perfect solution: publish our spirited exchange as a Boing Boing exclusive. I owe Professor Pinker a debt of gratitude for allowing me to publish our interview on Boing Boing. I’m very much the beneficiary of his deeply insightful, eloquently argued ideas; the privilege of sharpening my ideas on the whetstone of his intellect is a rare one, and I’m delighted to share that opportunity with Boing Boing’s readers…
Mark Dery: I’m interested in the relationship between a facility with language—eloquence, by any other name—and intelligence. I’m especially interested in the question of whether people possessed of a certain facility with language can use language as a sort of simulation engine to create the illusion of a greater intelligence than they actually possess, whether through eloquence or, more crudely, the strategic use of a large vocabulary (specifically, arcane words or rarified jargon), highbrow allusions, and the like.
Thanks for taking the time to read and consider this query.
Steven Pinker: Unfortunately, there has not been much systematic work on the relation between language fluency and psychometric measures of intelligence. There are some neuropsychological and genetic syndromes in which retarded children and adults can speak deceptively well, fooling onlookers into thinking that there is nothing wrong with them. I discuss one case of hydrocephalus, and another of a child with Williams Syndrome, in chapter 2 of The Language Instinct.
Within the normal range, the word “glib” pretty much captures the common-sense intuition that it is possible to be verbally fluent without saying anything intelligent. On the other hand, even if fluency, high vocabulary and the like can momentarily fool listeners into overestimating the person’s intelligence (or at least the quality of his thought, which is not perfectly correlated with intelligence—smart people can say foolish things), I suspect that the vast majority of verbally fluent people are also intelligent by standard measures. Vocabulary, as you probably know, is highly g-loaded, and on average, people who test well in verbal intelligence also test well in all other measures of intelligence (that is the basis for “g”).
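Pinker’s parenthetical about “g” can be made concrete. Operationally, g rests on the observation that scores on disparate mental tests all correlate positively (the “positive manifold”), so a single common factor absorbs much of their shared variance. The following Python sketch uses toy data—the subtest names and loadings are invented for illustration, not drawn from any real battery—to simulate that pattern and recover a common factor as the first principal component:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Toy model: one latent "g" plus independent noise drives five subtests
# (think vocabulary, analogies, arithmetic, matrices, digit span).
# Loadings are made up; each simulated test has unit variance.
g = rng.normal(size=n)
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5])
noise = rng.normal(size=(n, 5)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise

# Every pair of subtests correlates positively: the positive manifold
corr = np.corrcoef(scores, rowvar=False)
print(np.all(corr[np.triu_indices(5, k=1)] > 0))  # True

# The first principal component acts as a proxy for the common factor:
# every subtest loads on it with the same sign
eigvals, eigvecs = np.linalg.eigh(corr)
first_pc = eigvecs[:, -1]  # eigenvector of the largest eigenvalue
print(np.all(first_pc > 0) or np.all(first_pc < 0))  # True
```

Nothing here bears on why the correlations exist—that is the contested question—it only shows what g is operationally: the dominant dimension in a matrix of all-positive correlations.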
On your cultural critique of IQ tests: I’m not sure if you plan to reiterate the arguments of Stephen Jay Gould in The Mismeasure of Man (and similar critiques from Leon Kamin and others) but I assume you know that his arguments were considered highly inaccurate (to the point of dishonesty) by the scientists who study intelligence even when the book was published, and by now have been pretty much discredited.
MD: I take your point about glibness, although I’d argue that, in popular parlance, the usage implies a certain mendaciousness or sophistry or flipness—a use of language to hoodwink. Then again, perhaps that’s just the Judeo-Christian mistrust of the silver tongue reasserting itself: Satan the special pleader, and all that. You write, “I suspect that the vast majority of verbally fluent people are also intelligent by standard measures.” But, as you note, there’s no hard data to support that presumption, correct? Obviously, your gut instinct is surely better than virtually anyone’s, on this subject, but I wanted to be sure I had that right. By the way, mightn’t the phenomenon I’ve described be a species of savantism, at least in theory? Or do I misunderstand savantism?
As for my critique of IQ tests, I’m not out to mailbox-whack the IQ test, although I am inclined to believe that no psychometric instrument is immune to the intellectual climate of the moment that gave birth to it, or uninflected by the prejudices and presumptions of the culture around it. In a previous essay, I delved deep into the MMPI, ultimately taking the test myself. I was simply flabbergasted, and not a little alarmed, to discover that such a risibly biased artifact—a perfectly preserved fossil of antediluvian attitudes toward women and gays—was still in use in corporate and military contexts. It was like learning that Harvard medical school’s core curriculum still included mesmerism and phrenology!
In any event, I am aware that the battle lines are drawn between social constructionists and genetic determinists. I believe we can give Darwin his due while rendering unto Gould what is his.
SP: There are no hard data (that I know of) showing that verbal glibness or sophistry is correlated with general intelligence, mainly because there are no standardized tests (that I know of) for verbal glibness or sophistry. There are measures of “verbal fluency” (e.g., how many synonyms a person can come up with in some unit of time), and they are correlated with intelligence, but that’s not exactly the same thing.
Retarded people with excellent language fluency are generally not thought of as savants because their abilities do not exceed those of normals, in the same sense as, say, autistic people who can do lightning-fast calculation or calendrical computation. There are some autistics who seem to be “savants” at acquiring multiple languages, but again that is not the same as silver-tongued patter.
It is not accurate to write that “the battle lines are drawn between social constructionists and genetic determinists.” When it comes to psychological traits, there are no “genetic determinists” (i.e., people who believe that genes determine psychological traits such as intelligence and personality with probability = 1). The term is a debating tactic designed to misdescribe those who note that genes probabilistically influence psychological traits—that is, it is a crude form of straw-manning.
MD: Surely the sheer size of something as unsubtle as vocabulary could be quantified, and perhaps even its “breadth,” i.e., the extent to which esoteric words are represented within that data set (although esoteric is, of course, an inescapably subjective judgment)? Obviously, determining how many synonyms a person can come up with in a limited time period may measure fluency, though it may not sound the depths of the subject’s vocabulary or map its outer bounds, if that makes sense. Is there a psychometric test that accurately gauges vocabulary size in, say, the same way that some such tests gauge mnemonic ability? In the popular mind, a large vocabulary is shorthand for being smart; hence the tendency to characterize eggheads as people who use arcane or polysyllabic words while the rest of us stumble along with garden-variety vocabularies. Hence, too, the ever-expanding pop-psych business literature of “vocabulary building,” which attempts to arm the corporate warrior with “word power.” Both presume that words = smarts; the bigger the word, the smarter the speaker. (Of course, class animosities in American culture, manifest in the American preference for “plain speaking” and g-droppin’ “tough talk,” also ensure that public speakers with big vocabularies are tarred with the charge of elitism—Palin and McCain tried to make that charge stick to the more verbally fluent Obama, in the recent race.) Of course, popular perceptions are hardly hard science, so the connection may simply be a figment of the mass imagination. My point is that we may be able to isolate testable aspects of verbal ability—vocabulary—as opposed to glibness, which as you say may not be testable. And we may be able to isolate vocabulary in a manner that tests not only for the ability to use it, on the fly, but that also gauges its size and breadth (variety).
Interesting to hear you say that “when it comes to psychological traits, there are no ‘genetic determinists’ (i.e., people who believe that genes determine psychological traits such as intelligence and personality with probability = 1).” I just finished interviewing a source who told me, unequivocally, that “about 50% of our personalities” is attributable to genes. Clearly, there are those who are pushing the position beyond the one you’ve outlined.
In any event, point taken that your use of genetic influences isn’t simplistically deterministic. But your statement that I should be wary of Gould’s Mismeasure—specifically, the pre-emptive nature of that statement, coming as it did before I had expressed any affinity with Gould’s ideas—only reaffirms my perception that the culture war rages on, even within the secured perimeters of “value-neutral” hard science. I’m no postmodern practitioner of science studies who thinks empirical truth (small “t”) is some Baudrillardian mirage (although I do believe that the Alan Sokal-style mischaracterization of the postmodern position as the belief that there’s “no such thing as reality” is, to use your phrase, “a crude form of straw-manning”). But neither do I think that science, in everyday practice, can possibly remain untouched by the culture that surrounds it or the historical moment in which it is embedded. The history of science, from social Darwinism to eugenics, Victorian “cures” for “hysterical women” to the midcentury vogue for lobotomies, is instructive, I think. Let me be emphatic: I am not, repeat not, using the cheap, guilt-by-association tactic to tar your work or the work of any who reject Gould, here. Rather, I’m simply noting that both sides of the debate see their positions as based on incontrovertible scientific evidence and their opponents as blinded by ideology. My position is that ideology is part of the cultural air we breathe; that humans are creatures of nature and culture; and that we must give both Darwin and the devil (Gould?) their due.
Of course, we part company on the question of human nature. By my lights, nothing could be more unnatural than the human animal. The naked ape is unquestionably a product of evolution, to be sure, and much of what we once thought was imbued in us by nurture or the wider society is turning out to be genetically influenced. Yet, at the same time, culture is nature, for naked apes, and is only becoming more so, as we live more and more of our lives immersed, headfirst, in the world on the other side of our screens, coming up for air at ever more rare intervals.
But now I’m talking poetry.
Thanks for hearing me out, nonetheless.
SP: Vocabulary generally makes up a substantial portion of verbal IQ tests, as well as the verbal component of SAT and GRE. My understanding is that it correlates highly with other measures of intelligence. Indeed, there is a quickie IQ test called the Wordsum that consists simply of ten multiple-choice vocabulary questions, and it correlates > .8 with IQ measured by traditional tests. Likewise, a commonly used test for intelligence in children consists of matching words to pictures; again it correlates well with other measures. There are several possible reasons why this could be so: rarer vocabulary involves more sophisticated concepts; learning words from context requires powerful inferential abilities; smart people read more and are exposed to more words. I don’t know enough of this literature to know whether any have been confirmed or disconfirmed.
The degree to which words are “esoteric” is easily quantified. The frequencies of words in large corpora (occurrences per million words of text) have been tallied, and are routinely used in psycholinguistics experiments.
If your informant literally said that “50% of our personalities is attributable to genes,” he or she does not understand the concept of heritability, which quantifies the percentage of variance in a trait, not percentage of a trait. If he or she said that 50% of the variation in personality is attributable to variation in genes, then he or she is not a genetic determinist, because 50% is different from 100%. No one who understands the concept of heritability and is familiar with the past hundred years of data in behavioral genetics is a genetic determinist when it comes to psychological traits. (When it comes to certain diseases, such as Huntington’s, genetic determinism is correct.)
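Pinker’s correction turns on the formal definition, which can be stated in one line. In standard quantitative-genetics notation (the notation is mine, not from the interview), heritability is a ratio of variances across a population:

```latex
\[
h^2 \;=\; \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}
\]
% Var(P): total phenotypic variance in the trait across the population
% Var(G): the portion of that variance attributable to genetic variation
```

So h² = 0.5 says that half of the population’s variation in a trait tracks genetic variation; it assigns no percentage of any individual’s trait to genes, and the number is itself relative to a particular population in a particular range of environments.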
My own view, and I suspect that of most scientists, is that ideology is a nuisance that good science is designed to work around. It’s like personal ego, financial conflicts of interest, methodological sloppiness, cognitive limitations, and other barriers to discovering the truth—obstacles that make doing science nontrivial, and that call for countermeasures such as peer review, open criticism, academic freedom, testability and falsifiability, and others. The fact that science has undeniably progressed, both intellectually and practically, is evidence that those countermeasures have been successful, though of course they are not perfect and cannot act instantaneously.
I also find that many specific examples alleging that scientific theories are products of culture and ideology are pretty flimsy—after-the-fact just-so stories that aren’t rigorously tested against alternative explanations. Worst of all, I’ve never heard of anyone admitting that his own claims are culturally biased; it’s always “relativism for the other guy.”
MD: Yes, science always intends to work around ideology, the weasel word here being “intends.” Peer review, open criticism, academic freedom, testability and falsifiability are all intended as guarantors of good science, and as a true son of the Enlightenment, I place much faith in them—but not all my faith, since peer review is only as good as the peers involved, and entire cultures can suffer from ideological blind spots, prejudices and presumptions that are the birthright of anyone born at that time, in that place, in that socioeconomic class or gender or whatever. Peer review, open criticism, academic freedom, testability and falsifiability were—more or less—alive and well in the ’20s, weren’t they? And yet the scientific establishment’s best minds toasted the bright future promised by eugenics, and state legislatures followed suit, mandating the forced sterilization of felons whose subnormal criminal minds, as any good scientist knew, were hereditary.
Regarding your assertion that “many specific examples alleging that scientific theories are products of culture and ideology are pretty flimsy—after-the-fact just-so stories that aren’t rigorously tested against alternative explanations”: How, then, to explain the medical practices enumerated in my earlier letter? I’m not arguing that scientific theories are entirely products of culture; nor would I deny that they are less so now, happily, than they were in the past. Rather, I’m arguing something subtly different: that in everyday practice scientific theories have, at various historical junctures, borne the deeply embossed stamp of ideology.
In the past century and a half, we’ve witnessed the dominance of scientific theories that even in their purely theoretical form, let alone everyday practice, were sharply etched with the prejudices and presumptions of the day. I’ve mentioned eugenics; I’ll name a few others: the 1950s vogue, in psychotherapy, for dosing with antidepressants or institutionalizing housewives suffering from what Betty Friedan later called the Problem With No Name (a suicidal dissatisfaction with the role of the happy homemaker). Psychiatry and psychosurgery have been used, throughout their morally checkered history, to bring to heel women who were feminists avant la lettre, as well as political radicals and others who questioned the ideological assumptions of the world they were born into. “Science” decreed them aberrant, and “science” dealt with them, summarily. If pills didn’t work, the electroshock room beckoned, or perhaps the leucotomist’s pick. Another example: What peer-reviewed journals, open debate, and falsifiability claims prevented the pathologization and in many states criminalization of homosexuality, which was only removed from the DSM-II in 1973? Who, in the peer-reviewed medical journals and openly critical medical community, stood up (until very recently) to oppose the routine surgical “reassignment” of intersex (hermaphroditic) babies, at birth, to a single gender? To be sure, hermaphroditism is anomalous in the strictly statistical sense, but the reflexive assumption that the intersexed patient must be “normalized” with the knife—like the presumption that homosexual “deviance” must be psychopharmaceutically treated—is inarguably a cultural bias, soaked through with ideology.
So, too, is the not uncommon tendency, in such cases, to “rationalize” the hermaphroditic male infant with the small but fully functional penis (and male reproductive system) into a female, on the presumption that a small penis is too unendurable a humiliation for any man to bear, in American society. These are only a few examples of science corrupted by cultural bias—ideology, by any other name. If sexism, homophobia, and culturally bounded notions of the normative aren’t to blame for the lamentable chapters in American medicine I’ve just detailed, then what alternate explanation would you posit?
As for your assertion, “I’ve never heard of anyone admitting that his own claims are culturally biased; it’s always ‘relativism for the other guy.’” Well, I’ll happily admit that my oppositional politics incline me to be wary of institutional power, whether it wears a mortarboard, a white smock, or a power suit with the requisite American flag lapel-pin. I try to correct for that bias by honoring reason, logic, empirical evidence, and the scientific method. But I’ll be the first to admit that our blind spots are, by definition, unknown to us. This is precisely why even science needs its watchdogs, and why a few of them must not be peers.
In any event, this fascinating debate was intended to be a preamble to questions about the history of, or scientific validity of, the IQ test. Beyond your own work, and Gould’s, who else would you suggest I read regarding the history of, controversies surrounding, and scientific validity of either the Stanford-Binet or the Wechsler (which I gather has largely if not entirely superseded the Stanford-Binet)? In IQ: A Smart History of a Failed Idea, Stephen Murdoch is quite critical of the Wechsler. I wondered if you were familiar with his critique. (Interestingly, he doesn’t cite Gould!)
SP: Thanks for the explication, but I’m not persuaded by the examples. Even putting aside the fact that eugenics is not a scientific theory but a political policy, I don’t see any evidence that the practices and policies you mentioned are connected to the ideology of the times (as opposed to being examples of stupidity, ego, error, ambition, unanticipated consequences, bad science, and so on). Eugenics was popular among the robber-baron right and among the progressive left (e.g., Emma Goldman, George Bernard Shaw, Harold Laski, John Maynard Keynes, Sidney and Beatrice Webb, Margaret Sanger); it was institutionalized in the American South and in Scandinavia; today it’s practiced in Singapore and often rediscovered by ingenuous undergraduates. I don’t see any ideological common denominator, or identifiable ideological change that correlates with its rise and fall in popularity. (As far as I can tell, the main reason for its current taboo status is the historical experience of Nazism.) The same with the medical practices you describe: most were abandoned when people became aware of the harms they led to, or were replaced by better techniques; in some cases, they are still used when the evidence suggests they can be helpful (e.g., electroshock). Some of these changes were driven by explicit moral argumentation (e.g., in the case of sex reassignment surgery or the classification of homosexuality as a “disorder”), which persuaded people to change their practices. And yes, the mechanisms of open disagreement, review, moral argumentation, examination of hypotheses in the light of evidence, and so on, were effective in changing these practices; that’s why lobotomies were abandoned after a couple of decades, whereas trephination, exorcism, prayer, etc. lasted for millennia.
My point isn’t that science (or more generally, rational and evidence-driven argumentation, of which science is a part) could ever prevent people from trying out stupid and evil things. It’s that I don’t see the evidence that these stupid and evil things are inevitably, or even usually, products of the surrounding social and political ideology. Maybe they are sometimes, but that has to be demonstrated in comparisons with simpler, alternative explanations, by getting independent assessments of the prevailing ideological climate and showing that they predict which kinds of bad ideas arise in a given period. The tendency I see in the science-studies mindset is to treat ideology as the self-evident, only thinkable explanation.
My own books are not about IQ or IQ testing, so I don’t go over the history myself. The Bell Curve, though treated as toxic by many intellectuals, has some clear historical discussion, and the journalist Daniel Seligman (who died a couple of weeks ago) wrote a history of intelligence testing a while back. An interesting article on the original embrace of intelligence testing by progressives was written by Adrian Wooldridge: “Bell Curve liberals,” New Republic, February 27, 1995. Linda Gottfredson has documented the predictive power of IQ tests in her work, and Camilla Benbow and David Lubinski have many papers on the predictive power of high SAT scores in their sample of people who were given the test in early adolescence. J. B. Carroll has written a number of books and edited handbooks on intelligence, and I suspect that one or more contain histories of the field. Ian Deary is probably the most active researcher today on neural correlates of intelligence as measured by IQ tests, and Robert Plomin the most active researcher on its heritability. I also have citations to a number of reviews of Gould’s book that challenge his prosecutorial history of testing:
Blinkhorn, S. 1982. Review of S. J. Gould’s “The mismeasure of man.” Nature, 296, 506.
Jensen, A. R. 1982. The debunking of scientific fossils and straw persons: Review of The Mismeasure of Man. Contemporary Education Review, 1, 121-135.
Rushton, J. P. 1996. Race, intelligence, and the brain: The errors and omissions of the “revised” edition of S. J. Gould’s The mismeasure of man. Personality and Individual Differences, 23, 169-180.
Samelson, F. 1982. Intelligence and some of its testers (Review of S. J. Gould’s “The Mismeasure of Man”). Science, 215, 656-657.
I’m by no means an expert on the particulars of individual IQ tests. The Wechsler strikes me as pretty cheesy, but that’s part of the scientific interest of it: you can measure intelligence with all kinds of disparate, fishy, and amateurish-looking tests, and they all correlate with one another—Wechsler, Stanford-Binet, Raven’s, SAT, Wordsum, Wonderlic, and even a three-item test devised by Shane Frederick at MIT. Basically, any test that anyone devises that depends on what common sense would call “intelligence” is going to pick up the same thing. This is part of the pattern in which the various subtests of any IQ test correlate with one another, and have surprising predictive power in real-world outcomes. All of this tells us that a large part of intelligence consists of some big, robust, unsubtle dimension of individual variation. It’s a bit like height: you don’t need lasers to document that some people are taller than others.
The hypothesis that “while IQ is highly heritable, the environment also has massive effects” is the consensus these days, if by “environment” you mean “not the genes.” It’s a simple restatement of the fact that heritability of IQ is less than 1. The concession that “IQ is highly heritable” is quite a departure from the Kamin-Lewontin-Rose-Gould position: Kamin wrote in 1974, and reiterated in 1984, that “there exist no data which should lead a prudent man to accept the hypothesis that IQ test scores are in any degree heritable.”
The problem is in identifying those “massive environmental effects.” This is a complex question (see, e.g., the “Children” chapter in The Blank Slate). At any one historical period, and holding culture more or less constant (e.g., more-or-less middle class), the 50% or so of the variance that is not genetic (the “environmental” effect) is, as far as I can tell, due to random chance, probably in prenatal or early postnatal development of the brain; it doesn’t correlate with anything in the environment you can measure.
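The bookkeeping behind this claim can be sketched in the standard behavioral-genetics decomposition (again, the notation is mine, not from the interview). Phenotypic variance splits into genetic, shared-environment, and nonshared-environment components:

```latex
\[
\mathrm{Var}(P) \;=\; \mathrm{Var}(G) \;+\; \mathrm{Var}(C) \;+\; \mathrm{Var}(E)
\]
% Var(C): environment siblings share (family, neighborhood, class)
% Var(E): everything else -- nonshared environment, developmental
%         noise, and measurement error
```

Pinker’s point, on this reading, is that for adult IQ within a roughly constant culture, the shared-environment term Var(C) turns out to be small, so the non-genetic half of the variance sits in the residual E term, which is precisely the part that “doesn’t correlate with anything in the environment you can measure.”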
On top of that there has been an environmental factor, probably now petering out, that increased IQ scores across the board about a standard deviation between the 1920s and the recent present (this is the Flynn effect). In addition, low-SES [socioeconomic status] kids may show bigger environmental effects than the middle class, and in addition to that, there is some factor that causes black people to score a standard deviation lower than white people, on average. Flynn used to suggest that whatever caused the Flynn effect (no one really knows) might also cause the B-W difference, but has backed off from that claim, since the B-W difference is in g, and the Flynn effect is in a slightly different constellation of mental abilities. I believe that while IQ variation has a substantial genetic component, there is enough non-genetic variance in addition that the B-W difference could be completely environmental.
MD: Respectfully, I think you’re moving the goalpost—defining as “science” whatever seems most defensibly untainted by ideology. To say that eugenics was political policy, not scientific theory, is to perform a forensic sleight-of-hand, not to mention some tactical revisionism of inconvenient historical truths. I’d argue that eugenics was scientific theory that, in time, influenced political policy. If you’re arguing it wasn’t scientific theory, then how do we account for its defenders among the ranks of scientists, from Francis Galton to Julian Huxley to Charles Davenport to John Watson? The historical record amply evidences the vocal support of prominent scientists; as well, the racial theories undergirding eugenics were widely perceived, by popular audiences, as incontrovertible fact: “Galton said it, I believe it, that settles it.” Eugenics robed itself in the cultural authority of the white lab coat, legitimated its discourse with the languages of population genetics and evolutionary biology, and pleaded its case in the statehouse and the state fair based on the scientific soundness of its social program. It’s a riddle to me how eugenics could have so many scientists among its defenders—claiming the mantle of science, employing the rhetoric of science, and justifying its social program in the name of science—and not believe itself to be scientific theory, nor be popularly received as such. Perhaps you can help me unriddle it.
As for the progressives you count among the company of the guilty, you’ll get no argument from me on that point. But if you’re arguing that the left and right are equal-opportunity eugenics offenders, and that this proves there is no “ideological common denominator,” I can’t agree. Jack London and Jacob Riis were both progressives, yet both bore the stamp of social Darwinism. Are you arguing that a culture, in a given historical moment, isn’t governed by prevailing ideas? Or are you conceding that, yet arguing that science stands outside culture and history, untouched by either? Perhaps the tripwire, here, is the word “ideology.” I use it in the Marxist/post-Marxist sense, meaning: the hegemonic worldview of the power elite (as opposed to the narrower, purely political definition).
In that sense, I believe that eugenics was part of the warp and woof of Western thought, expressing itself in the 19th century as social Darwinism and even earlier in the notion of the Great Chain of Being. The “ideological common denominator,” throughout much of this period, is the white, patriarchal, Eurocentric, Christian worldview. Ideology is the air we breathe—the unconsidered (because self-evidently “true”) prejudices and presumptions that inflect our view of ourselves and the world around us. Hierarchical dualisms (man/woman, self/Other, culture/nature, white/black, Christian/heathen, civilized/savage, empire/colony, etc.) have structured Western epistemology for centuries; only recently have they been called into question by critical theorists and activist intellectuals who’ve exposed and interrogated their philosophical foundations, and the moral and ethical implications of those foundations.
The a priori assumptions that permeated the historical moments in which social Darwinism and eugenics flourished were profoundly ideological in that they manifestly served the interests of power. Ideology is the theology of power. Which is why intellectuals on both the right and left swelled the ranks of the eugenicists. Slice that stat another way, substituting class for political consciousness, and you’ll find that nearly all of its adherents were white, upper-middle-class or upper-class Americans and Europeans—the very flower of the ruling class. Class makes happy bedfellows of political foes, sometimes. Yes, the Third Reich made racial hygiene the untouchable third rail of public discourse, but many of its devotees—including apologists among credentialed scientists—simply found more palatable ways to articulate such ideas, or to promote such programs. Of course, you’ll still find indefatigable adherents among those who consider themselves members of both the cognitive elite and the socioeconomic elite, some of whom are even indiscreet enough to out themselves as eugenics sympathizers in front of a live microphone. I’m thinking of poor, dotty old James Watson. Oddly, it’s only ever white people, most of them men, most of them upper class, who exhibit such sympathies. Women, blacks, and especially black women are curiously underrepresented among their number. Which would argue my point: that this pseudoscientific cant, legitimating the evolutionarily ordained superiority of the white, European male, is the theology of power, a theology that even some scientists ardently espoused, when the world was a little younger.
SP: Actually, the characterization of “science” that differentiates it from ideology in the case of eugenics goes back at least a century. G. E. Moore, in his famous paper introducing “the naturalistic fallacy,” specifically cited eugenics as the paradigm case of illegitimately leaping from an “is” to an “ought.” And that’s the criterion I was appealing to. Eugenics is not a theory of how anything works or how it came to be (“is”); it’s a prescription for how to improve society (“ought”). Yes, I know, I know, postmodernists and Marxists reject the is/ought distinction, but that’s one of the reasons I’m not a postmodernist or Marxist.
I’m not arguing that science, in practice, stands outside of culture and history, just that ideology, class, race, and gender interests are one subset of many sources of human folly and error, together with ego, greed, ignorance, cognitive limitations, illogic, the information available at a given historical period, and so on. I would argue that the institutions of secular reason (including science) are designed to overcome these nuisances, and that over the course of history, they more or less work, haltingly and unevenly. The exposition of the Marxist/postmodernist view that you provided in your email is well put and clear, but I think it’s a dogma; it has swept the humanities, but has never been convincingly demonstrated (as I mentioned in a previous e-mail, the fact that our predecessors made blunders has numerous explanations, of which class/race/gender interests don’t strike me as the most compelling). To take just one example, the intellectual opposition to eugenics, as I understand it, mainly came from the church, another group of privileged white males who subscribed to all those tenets of Western epistemology.
MD: For the record, I’m not a card-carrying Marxist, although I’m convinced that Marx is one of our most prescient, poetically eloquent critics of the cultural corrosions of capitalism. Likewise, I believe postmodernism has much to teach us about life in the Digital Age, a historical period whose salients include information overload, technological hyperacceleration, global capitalism, and appropriation aesthetics (the tendency toward quotation and recombination that characterizes our visual culture, from Hollywood to the Web to gallery art). The assertion, popular in some quarters, that postmodernism entails believing there’s no such thing as an empirical fact is in my opinion a straw man—the merest Sokal-ism.
But I digress.
What, in your opinion, accounts for the Flynn effect? Richard E. Nisbett has speculated that the upspike in IQ scores might have something to do with an increased emphasis, in elementary-school education, on math and related problem-solving skills—the very skills the Wechsler measures.
SP: As I understand it, no one really knows what caused the Flynn Effect. The problem is that the effect itself has chugged along for more than 70 years, but no single putative cause—literacy, schooling, nutrition, technology—has been increasing steadily over that time. Most likely there has been a mixture of small factors, overlapping in time, and all pushing in the same direction. They would include schooling and literacy (more so in the earlier part of the century), electronic gadgets with visual controls, and an increasing emphasis on abstract thinking in schools, businesses, and public life—what Flynn himself calls the spread of “scientific spectacles,” as reasoning styles from the sciences and social sciences have left the academy and permeated everyday discourse. This last factor is vague and hard to quantify, but may account for why the increases seem to be in the most abstract subtests of IQ tests, like similarities, analogies, and pattern extrapolation, rather than in world knowledge, memory, or mathematical calculation.
MD: I’m going to ask you to play cultural critic for a moment. Not your bailiwick, I realize, but since you obviously do not shrink from spirited debate, I thought you might be willing to try your hand at cultural commentary. My question is this: What do you think is America’s relationship to Intelligence? (I capitalize the term to signal that I mean it in a mythic, or iconic, sense.) Intellectuals, especially left-leaning ones, have argued that American culture has historically been hostile to intellectualism, which is metonymic, presumably, of intelligence. Richard Hofstadter’s Anti-Intellectualism in American Life is the classic text for this argument. Hofstadter and others maintain that our puritan roots, and the Christian tradition more generally, have created a cast of mind that is suspicious of skeptical inquiry rather than leaps of faith. As well, the argument goes, our Benthamite utilitarianism and our embrace of laissez-faire capitalism as a national religion ensure a certain hostility toward “impractical” thought—philosophy, the arts—that has no immediate market value. At the same time, the shelves of bookstores groan under the weight of business books that console anxious executives with the knowledge that they, too, can build “word power” and pump up their memory muscles. And the Web is overgrown with pop-up ads for bogus IQ tests. On one hand, we elected that manifestly incurious C-minus student George W. Bush, and a certain segment of the public thrilled to Sarah Palin’s mockery of Obama’s apparently suspicious “eloquence”; on the other, many of us seem to rejoice in the knowledge that we have, at long last, a smart guy in the Oval Office (although Maureen Dowd derides our brainiac-in-chief as too Spock-like in his coolly deliberative intelligence). So which is it? Do we respect intelligence, or revile it? What is our relationship, as a nation, to the notion of smartness?
SP: I hesitate to generalize about America because it is so heterogeneous. At a first cut it comprises two political cultures: a culture of the Enlightenment and a culture of honor, to use the anthropologist’s term (and the title of another of Richard Nisbett’s books). This corresponds roughly to the blue-state/red-state divide. David Hackett Fischer, in Albion’s Seed, cuts the pie into finer slices, and traces them back to the regions in England from which the first waves of colonists originated.
I suspect that many of the contradictions in the embrace of intellectualism come from differences among these subcountries, with the dominant strand at any time coming from the various borderline, neutral, and uncommitted blocs that tip their allegiance one way or another. In a culture of honor, the cardinal virtues are resolve, strength, deterrence, and the pursuit of justice (often in the form of revenge against insults). Thinking too much corrodes these virtues, since a thinker may correctly calculate that retaliation is not in his best interests, which only emboldens his adversaries. Standing firm as a matter of constitution rather than rational choice makes your implicit threats more credible, because you can’t be talked out of them. Your adversaries have more trouble making you an offer you can’t refuse.
MD: You write, “What Flynn himself calls the spread of ‘scientific spectacles,’ as reasoning styles from the sciences and social sciences have left the academy and permeated everyday discourse. This last factor is vague and hard to quantify…” This is especially intriguing, since it belies the received wisdom, at both ends of the political spectrum, that we live in an Age of Unreason, when irrationality owns the cultural battlefield. Liberals like Susan Jacoby (The Age of American Unreason) argue that the anti-science, anti-rationalist cast of the evangelical mind, together with what she sees as the dumbing-down of the culture brought on by post-literate media (no friend of Grand Theft Auto, she), have turned us into a “nation of dunces.” On the right, American Tories like George Will bemoan the flood tide of “graphic entertainments” (the dreaded video game again!), which he contends are hastening the “infantilization” of American culture, drowning us in “stupidity.” Left and right make common cause, it seems, in their contempt for what Mencken called the “booboisie.”
Yet you, a scientist, argue that “reasoning styles from the sciences” are permeating “everyday life.” What’s the channel by which these values are transmitted? More to the point, can you offer any evidence of the percolation of scientific values—the importance of skeptical inquiry; of self-critique and peer review; of falsifiable propositions based on logical, causal reasoning and material evidence—into “everyday discourse”? Our notorious scientific illiteracy has made us the laughingstock of the so-called first world: more of us believe in the literal existence of a horned-and-tailed devil than in the truth of Darwinian theory (62% versus 42%, according to a 2007 poll), and as of 1988 one in five of us was, in effect, living in a pre-Copernican cosmos, based on a National Science Foundation study that found that millions of Americans were convinced the sun revolves around the earth. Thus, I’m curious to know what evidence you’ve found to support the notion that the reasoning styles of the sciences, hard or social, have taken hold in the public mind.
Your use of the “culture of honor” argument makes me think of game theory—Prisoner’s Dilemma, that sort of thing. But doesn’t the tenacious grip of irrationalist epistemology, most notably religion, play a larger role in our attitude toward intelligence? Perhaps I’m confusing intelligence and reason, or rationalism, although by my lights they’re inseparable. (I’m a creature of my culture—the scholastic culture of academe!) But surely the fact that America is more zealously religious, and more biblically fundamentalist, than so many European nations cannot be unrelated to our pervasive hostility to science in everyday life: Darwin in the classroom, stem-cell research, global-warming research, and so forth. And doesn’t it logically follow that an intractable, irrational hostility toward Darwin and others who dethroned the divine would translate, in the popular mind, into a hostility toward skeptical inquiry and reasoned debate—intelligence, by any other name? Or maybe I’m using the term “intelligence”too broadly. But it simply stands to reason, in my mind, that an insistence on blind faith over proven fact correlates to a suspicion and hostility toward the intellect. Is it mere happenstance, I wonder, that so many religious traditions insist that the price of enlightenment is unplugging one’s critical intellect? Whenever I hear the exhortation to turn off my mind, I reach for my Bertrand Russell.
SP: There are several dimensions to scientific thinking, and Flynn’s suggestion did not address the culture of discovery in science (open criticism, testability, etc.) so much as the analytic tools that have escaped from the ivory tower and become part of conventional wisdom, everything from “zero-sum,” “exponential growth,” “triangulate,” “circular reasoning,” “fuzzy math,” and “market forces” to the very idea of taxonomic categories. An uneducated person in the 1920s who was asked “What do a fish and a crow have in common?” (one of the kinds of IQ-test questions that has shown a Flynn effect) might have replied “absolutely nothing.” Today we would expect even a child to note that they are both exemplars of the category “animal,” a fact that would have been considered too banal and irrelevant for the 1920s respondent to mention. But the scientific tool of putting things in valid categories so as to generalize from them (e.g., you can deduce that a crow breathes even if you have no direct experience of that fact) has become part of our accustomed ways of thinking. This use of analytical tools from science might be independent of other aspects of scientific reasoning such as skepticism and falsifiability, not to mention an explicit embrace of intellectualism and articulateness.
I don’t know of any direct evidence for Flynn’s hypothesis (as I mentioned, I don’t think anyone has a well-validated theory of the Flynn effect, perhaps because it was caused by many factors). You may want to look at Flynn’s recent book on intelligence, or at The Rising Curve, a collection of essays (now probably out of date) that grapple with the phenomenon. Recent evidence suggests that the Flynn effect is petering out in the Western democracies.
I suspect that the American reluctance to accept evolution is not so much scientific ignorance (most people who believe in evolution don’t really understand it, and probably couldn’t tell you the evidence for it) as using a belief as an identity badge for moral worth and community identity (a common human vice, and another source of folly that good science tries to circumvent, not always successfully). In much of the country, to say that you doubt evolution is to affirm that you’re a decent person with loyalty to family, community, and morality, opposed to the godless and amoral forces that foster pornography, urban violence, illegitimacy, and so on. This particular badge is probably a historical legacy of the fact that in much of the anarchic American West and South, religion was a civilizing force: the brawling, hard-drinking cowboys and miners only settled down with families after the women and preachers arrived (see David Courtwright’s Violent Land). I think this in part explains why “family values”and “faith”are such talismans in red-state America. In Europe (and the parts of the US that are extensions of the European mindset), law and order has been maintained by government for centuries, and religion is more of a joke.
MD: For my last question, I’d like you to put yourself on the psychobiographical couch. Self-analysis, in public, is odious to the scientist in you, I suspect, but perhaps this question will go down easier if you think of yourself as a lab rat whose cortex you’re dissecting in the lab.
Question: I’ve interviewed several subjects for this essay, all experts on intelligence, and all have had revealing stories to tell about what drew them to the subject. I’m not asking what intellectual passions led you to the subject of intelligence, but rather what formative events in your childhood struck the catalytic spark of your interest in intelligence. What was your relationship to intelligence, as a kid? You were, I suspect, the smartest kid in the room, in many situations, and knew it. How did you react to that realization, psychologically? How central to your sense of your own self-worth is your sense of yourself as smart? Do you find yourself unconsciously ranking others according to intelligence? On what do you base that estimation, in social or professional encounters? How did your immersion in the academy, where smart people congregate, change your relationship to the idea of intelligence? Do schoolyard anxieties regarding who’s the smartest kid in the room still prevail, at conferences, beneath the veneer of scholarly civility? To what extent does the desire to prove one’s intelligence—let’s call it the Alpha Geek syndrome—and a gnawing anxiety about who’s the brightest bulb in the field—Cortex Envy, I call it—shape the psychologies of academics, even in the sciences?
SP: Funny you should ask about formative childhood experiences: see my recent article (PDF) in the New York Times Magazine, where I hardly shy away from self-revelation. I’m not particularly interested in intelligence as a research topic, and in fact said some rude things about it in the final pages of my first trade book, The Language Instinct: “To a scientist interested in how complex biological systems work, differences between individuals are so boring! Imagine what a dreary science of language we would have if instead of trying to figure out how people put words together to express their thoughts, researchers had begun by developing a Language Quotient (LQ) scale, and busied themselves by measuring thousands of people’s relative language skills. It would be like asking how lungs work, and being told that some people have better lungs than others, or asking how compact disks reproduce sound, and being given a consumer magazine that ranked them instead of an explanation of digital sampling and lasers.” Some individual-difference researchers gave me a hard time about this passage.
I only began to mention intelligence when I detected that academics’ hypocrisy about the topic was a symptom of the doctrine of the Blank Slate. As I wrote in my book with that title, “I find it truly surreal to read academics denying the existence of intelligence. Academics are obsessed with intelligence. They discuss it endlessly in considering student admissions, in hiring faculty and staff, and especially in their gossip about one another.”
Ideological denials aside, academics probably have a similar relationship to intelligence that professional athletes have to innate athletic talent. It factors into the status hierarchy, to be sure, and into all the anxieties that that triggers, but is modulated by a number of realizations. One is that there will always be colleagues and competitors who are smarter than you, so being the smartest person in the room is not an option. Another is that intelligence is a necessary but not sufficient condition for productive work: we all have colleagues and students who are brimming with raw intelligence but never amount to much, for any of a number of reasons: they’re head cases; they are so critical of everyone’s work, including their own, that they never take chances or publish anything; they’re lazy; they’re uncreative; and so on. Conversely, there are contributors who are not off the chart in raw intelligence, but who have the minimum, and who achieve stardom by being creative, indefatigable, or insightful, by having a “feeling for the organism,” or by being just plain lucky.
Copyright Mark Dery. This interview may be reproduced by anyone, anywhere, for nonprofit use; written permission required (markdery at verizon dot net) in all other instances.
Photo of Steven Pinker by Henry Leutwyler