(originally posted in 2018)
First, let’s clear up some misconceptions about what intelligence is and isn’t. Intelligence is a collection of mental abilities—pattern recognition, abstract reasoning, learning capacity, general knowledge, and environmental adaptation—that mutually reinforce one another in most people. This mutual reinforcement is the reason why researchers believe that a general factor of intelligence, or g, exists¹. For neurotypical people of average intelligence—roughly half the population—the idea of general intelligence usually works. This set of traits is traditionally measured using IQ tests, which include a number of tasks thought to be related to the construct of general intelligence. Intelligence is a descriptor of how people’s brains learn, adapt to the world around them, recognise patterns, and interpret the information they receive from their environment. It is not an indicator of human value. Everyone has the right to exist, regardless of their learning style.
This general description of intelligence holds true for the majority of the population. Of course, the reality isn’t so simple for some people. There are many whose mental abilities do not reinforce one another to the same degree as they would for most people; their skills are more atomised, lacking the positive feedback loops associated with the typical model of general intelligence. For example, somebody can score very high on the verbal portions of one of the Wechsler intelligence tests and fare far worse on a section that requires a strong working memory, excellent fine-motor skills or visual-spatial ability. These requirements seem to penalise some disabled people, as well as those who are simply more methodical than others. Some disabled people may score well enough on IQ tests but have difficulty generalising their abilities outside the testing environment. The existence of savant syndrome gives the lie to the idea that extreme mental capabilities exist consistently in people. Many people with savant syndrome score low on IQ tests but have strong skills in one or two areas, like calendrical calculation, word decoding, musical ability, or drawing from life. Also, people who experience poverty, trauma, or other difficulties early in life may not be able to develop their abilities as fully as people who grew up in well-off, intellectually nourishing environments².
Any thoughtful analysis of how intelligence works must be conscious of these exceptions. In a talk she gave a few years ago, Linda Silverman, a psychologist who specialises in advanced learning ability, emphasised that IQ tests are a diagnostic tool that should be combined with clinical judgement, not an absolute determiner of a person’s intellectual abilities that can be divorced from the context in which they live, grow and develop. The current incarnations of IQ tests are designed to be used as clinical tools to identify people’s relative strengths and weaknesses. They’re less accurate when they’re used to determine the cognitive skills of very quick or slow learners. Some quick-and-dirty tests designed for people with acquired cognitive conditions like Alzheimer’s and traumatic brain injuries can’t even give people very high or low scores. Moreover, like other clinical tests, intelligence tests can produce false negatives, or type II errors, especially in intelligent neurodivergent people whose abilities are more uneven and who may have an overall score that appears average despite their intellectual, social, and emotional differences from typically developing people. The history of IQ testing and the value judgements people place on intelligence tend to cause a lot of anxiety around IQ scores, though. Far too often I see descriptions of high intelligence that rely solely on IQ scores and do not acknowledge the existence of false negatives in testing. While these exceptions may be statistically rare, rarity is not the same thing as nonexistence. People who describe the traits of highly intelligent people should be aware of these exceptions; since they are describing outliers, they should recognise that even these outliers have outliers. I fear that treating the most common representations as universal will cause people to feel as though their experiences cannot possibly be real.
The late Mel Baggs wrote eloquently about the problems with IQ testing in neurodivergent people several years ago. I agree with hir to an extent; I think that IQ tests do not always capture the abilities or struggles of neurodivergent or disabled people. For some people, the tests are downright useless; some autistic people in particular have received “gifted,” “average,” and “intellectually disabled” scores in their lives depending on the testing conditions, their emotional state and their ability to access their skills.
A more holistic analysis, one that takes people’s practical skills into account, is required to give a person a diagnosis of intellectual disability; clinicians should apply the same principle when determining whether somebody qualifies for gifted education, too. Mechanistically interpreting scores and believing the numbers uncritically without considering people’s backgrounds, subtest discrepancies, interactions with the test administrator, and potential disabilities is not “intelligent testing.” I actually believe that systematic qualitative measures of people’s intellectual abilities, based on people’s developmental trajectory; abilities in childhood, adolescence, and adulthood; interactions with the interviewer; and answers to abstract questions should be developed and tested for use in the field. These measures would be especially useful for people whose traditional IQ scores don’t seem to match up with their abilities or presentation.
When talking about intelligence, it is important to avoid being prejudiced against marginalised people. Unfortunately, the history of intelligence testing is fraught with racism, disablism, classism, and misogyny. IQ tests like the Stanford-Binet scales, the British eleven-plus and the US Army intelligence tests were used to devalue the intelligence of women and racially marginalised people, consign poor and working-class people to subpar educations, institutionalise disabled people and people erroneously thought to be disabled, and create Great Chains of Being in which more intelligent people were ranked above people of average or below-average intellectual ability. Some people cling to these abhorrent notions and use IQ scores as a means to rank people. In fact, some IQ tests, like the Wechsler intelligence tests, still use the category ‘superior’ to refer to people of significantly above-average intelligence, a relic of the days in which IQ tests were used to rank people’s eugenic qualities. They may not be calling people imbeciles and idiots anymore, but the old prejudices still remain. Also, there are some researchers and journalists in the intelligence field who have expressed toxic views about people of colour and disabled people, including Richard Lynn, Satoshi Kanazawa, Tatu Vanhanen, Steve Sailer, Philippe Rushton, Arthur Jensen, Hans Eysenck and Charles Murray. Linda Gottfredson’s research often falls into this category too. Moreover, IQ tests should not be used to determine people’s “mental age.” Mental age is a pernicious construct that is demeaning to people with intellectual disabilities, and gifted advocates need to abandon it. The mental-age argument can be used to infantilise and devalue people with intellectual disabilities—or to take advantage of bright children and teenagers who are not emotionally prepared for things like sexual or romantic relationships. A five-year-old who can read Shakespeare is still a five-year-old.
A fifty-year-old who struggles with reading and needs support to understand paperwork is still a fifty-year-old.
Intelligence, like other aspects of human cognition, is a complex and multilayered subject. It is disingenuous to say that it does not exist at all, but it is equally wrong to claim that it is easily quantifiable in all people or that it is a determiner of human worth.
Further reading
- James Whitman’s Hitler’s American Model is a good overview of how Nazi Germany drew inspiration from American policies promoting eugenics and racial segregation.
- Stuart Ritchie’s Intelligence: All That Matters is a brief introduction to concepts related to intelligence and the history of its assessment.
- ‘Intelligence: New Findings and Theoretical Developments’ (Nisbett, R. et al., 2012), an article from American Psychologist, is a good academic overview of the current state of intelligence research.
- Linda Silverman’s Giftedness 101 is a good resource for psychologists and curious laypeople to find out about assessing, working with and teaching students who need more complexity and intellectual challenge than the traditional curriculum provides.
- Alan Kaufman’s IQ Testing 101 is a slightly more exhaustive introduction to IQ testing and its current uses, and emphasises an ‘intelligent testing’ approach that ultimately relies on clinical judgement rather than just spitting out a score and using that to determine a person’s intellectual ability.
- The last few chapters of The Myth of Race, by Robert Wald Sussman, describe historical and current uses of intelligence tests to marginalise black and Latino people in the United States.