The claim about babies was startling: A test administered to infants as young as 6 months could predict their score on an intelligence test years later, when they started school.
“Why not test infants and find out which of them could take more in terms of stimulation?” Joseph F. Fagan III, the psychologist at Case Western Reserve University in Cleveland who developed the test, was quoted as saying in an article by Gina Kolata on April 4, 1989. “It’s not going to hurt anybody, that’s for sure.”
In the test, the infant looks at a series of photographs — first a pair of identical faces, then the same face paired with one the baby hasn’t seen. The researchers measure how long the baby looks at the new face.
“On the surface, it tests novelty preference,” said Douglas K. Detterman, a colleague of Dr. Fagan’s at Case Western.
For reasons not quite understood, babies of below-average intelligence do not exhibit the same attraction to novelty.
Dr. Fagan suggested that the test could be used to identify children with above-average intelligence in poorer families so they could be exposed to enrichment programs more readily available to wealthier families.
But his primary motivation, said Cynthia R. Holland, his wife and longtime collaborator, was to look for babies at the other end of the intelligence curve, those who would fall behind as they grew up.
“His hope always was to identify early on, in the first year of life, kids who were at risk, cognitively, so we could focus our resources on them and help them out,” said Dr. Holland, a professor of psychology at Cuyahoga Community College.
25 YEARS LATER
For the most part, the validity of the Fagan test holds up. Indeed, Dr. Fagan (who died last August) and Dr. Holland revisited infants they had tested in the 1980s, and found that the Fagan scores were predictive of I.Q. and academic achievement two decades later, when these babies turned 21.
“It’s really good science,” said Scott Barry Kaufman, scientific director of the Imagination Institute at the University of Pennsylvania and the author of “Ungifted: Intelligence Redefined.”
But Dr. Fagan’s hope for widespread screening of infants has not come to pass. “There are some centers that have it,” Dr. Holland said. “It never came to be the kind of thing where it’s widely available.”
The trend is perhaps in the other direction, away from dividing young children by I.Q. and its surrogates out of concern that the labels become self-fulfilling prophecies. Private schools in New York City, for example, have agreed to abandon intelligence tests for 4- and 5-year-old applicants.
Dr. Kaufman said that because the Fagan test was only “moderately predictive” of later academic success, it was not accurate enough to forecast the intellectual trajectory of a particular child. “From a practical standpoint, it’s not valid,” he said. “It’s not random, but it’s not enough for individual prediction.”
But the numbers become more reliable in aggregate, and the test is widely used in the academic world to quantify the effects of, for example, toxic chemicals on young children.
For the last decade of his life, Dr. Fagan was unexpectedly drawn into the “genes versus environment” debate over intelligence after he found that babies from widely different cultural backgrounds performed equally well on his test. That, he argued, undercut the argument for a biological basis for the stark “achievement gap” between white and black children, or rich and poor.
In a chapter in “The Cambridge Handbook of Intelligence,” co-edited by Dr. Kaufman and published in 2011, Dr. Fagan wrote, “A parsimonious explanation for the findings is that later differences in I.Q. between different racial-ethnic groups may spring from differences in cultural exposure to information past infancy, not from group differences in the basic ability to process information.”