General Intelligence
Definition of Intelligence
The concept of intelligence
can be found in the great texts of the Hindus and ancient Greeks. This is not
very surprising since in almost any activity we can see things being done more
or less intelligently. Most adults can correctly define “intelligence” in the
lexical sense. It is defined in Merriam-Webster’s Dictionary (2017) as follows:
(1): the ability to learn or
understand or to deal with new or trying situations: REASON; also: the skilled
use of reason
(2): the ability to apply
knowledge to manipulate one's environment or to think abstractly as measured by
objective criteria (as tests)
Intelligence has been
defined by prominent researchers in the field as:
Binet and Simon (1905): the
ability to judge well, to understand well, to reason well.
Terman (1916): the capacity
to form concepts and to grasp their significance.
Wechsler (1939): the
aggregate or global capacity of the individual to act purposefully, to think
rationally, and to deal effectively with the environment.
Cattell (1940): the order of
complexity of the relations that an individual is capable of handling.
Detailed definition of intelligence:
Life is essentially a
relationship between a living organism and its environment, but it is a
permanently threatened and unstable equilibrium. As long as the equilibrium
between the organism and its environment is maintained, no further adaptation
is required and the living process remains automatic. But when an obstacle, a
hesitation or a choice occurs, this blind activity becomes insufficient and
consciousness appears. Consciousness is not yet synonymous with intelligence;
it is at first a feeling or a need, not yet a thought-out relationship or the
conscious awareness of one.
To be intelligent is to
understand, and to understand means to be aware of relationships. Judgment is
what makes us aware of relationships. To be intelligent is also to be able to
solve new problems or to deal with open-ended situations. In other words, it is
about discovering relationships or being capable of invention. Thus, all
intelligent action is characterized by the comprehension of relationships
between the given elements and by finding out what has to be done, given
those relationships, to create new relationships, solve a difficulty, or reach a
desired goal. To study intelligence is therefore to study judgment and
invention.
Logicians have defined
judgment as the assertion of a relationship between two ideas. To say:
"Dog is a mammal" is to establish a relationship between the idea of
"dog" and the idea of "mammal". But this definition
faultily assumes that those two ideas have already been distinguished one from
another. In fact, the two ideas are not given right away as distinct. Given the
judgment: "Milk is white," it is obvious that a child does not
perceive milk on one side and the whiteness on the other. He grasps
holistically the "white-milk" or the "milky whiteness". The
discernment of the relationship implies, on the contrary, the discernment of
the two terms between which dissociation must be established. Analytical
thinking, which distinguishes, dissociates, is the first condition of judgment.
But for the relationship to be perceived, the two terms must be consciously put
together, united by the mind. This implies a synthetic activity of the brain
capable of maintaining the two terms simultaneously present in consciousness.
The same two functions of analysis
and synthesis are at work in the discovery of relationships (invention),
only more so. Our mind is usually a prisoner of old relationships, which have a
tendency to repeat themselves in thought. We have an inclination to use the
same expressions, gestures or combinations of ideas. But the inventive mind is
precisely opposed to such laziness. The discovery of new relationships implies
that the mind is first capable of breaking old systems and of freeing itself
from pre-established connections. This dissolving power of the brain is a
necessary beginning to escape from obsessive routines and habits. The
analytical part of our mind is always at the core of any invention or
discovery. Yet, obviously, to demolish is not enough; to build is also required.
For de Broglie: "Invention is the ability to discover relationships that
are more or less hidden among ideas or difficulties." Indeed, in the
ability to bring together seemingly remote elements lies a power of synthesis
far beyond what is required for judgment.
Thus, the more mental
schemes an individual possesses and can handle, and the more differentiated and
integrated the structure of these schemes, the more intelligent the individual.
Mental Growth and the IQ
In 1904, Alfred Binet and
Théophile Simon were asked by the French Ministry of Education to create a
practical and accurate method of assessing the children who could not profit
from regular instruction. Binet chose to use a battery of tests, which made no
pretense of measuring precisely any single faculty. Rather, it was aimed at
evaluating the child's general mental development with a heterogeneous group of
tasks. Binet had noticed that children who had difficulties at school were very
often late in other fields easily mastered by most pupils of the same age. It
was their general development that was slow. Indeed, it was not so much that
they learned slowly as that they lagged behind in developmental readiness to
grasp the concepts that were within easy reach of the majority of their age
mates. Such children would eventually grasp these basic subjects fairly easily,
but about a year or two later than their age mates. They were better thought of
as “slow developers” than as “slow learners.”
The 30 tests on the 1905
scale ranged from very simple sensory tasks to complex verbal abstractions. The
items were arranged by approximate level of difficulty instead of content. A
rough standardization had been done with 50 normal children ranging from three
to eleven years of age and several subnormal children as well. The notion of
mental age originated directly from Binet's observation that, as they grow up,
children can learn increasingly difficult concepts and ideas and do
increasingly difficult things. This allowed Binet and Simon to order their
tests according to the age level at which they were typically passed.
Wilhelm Stern soon proposed expressing mental development as a ratio: the
mental age (obtained from Binet's tests) divided by the chronological age of the child.
He obtained a number he called the MQ for Mental Quotient. Lewis Terman
suggested multiplying the Mental Quotient by 100 to remove decimals and he
created the Intelligence Quotient or IQ, which has survived to our days:
IQ = (Mental Age / Chronological Age) × 100
Psychologists at the
beginning of the 20th century discovered two things about the IQ that really
impressed them: first, the IQs of children of the same age were normally
distributed or bell shaped, like height for instance or most other physical
measures. Second, mental growth was almost linear and therefore the IQ was
quite constant: a child who was three years late at the age of 8 (IQ = 5/8 ×
100 = 62.5) turned out to be six years late at the age of 16.
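To make the formula concrete, here is a minimal Python sketch (the function name is mine, purely for illustration):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's mental quotient times 100, as suggested by Terman."""
    return mental_age / chronological_age * 100

# The child from the example above: three years late at age 8...
print(ratio_iq(5, 8))    # 62.5
# ...and, mental growth being almost linear, six years late at 16:
print(ratio_iq(10, 16))  # 62.5 again, hence the constancy of the IQ
```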
Because mental growth
plateaus at around 20 years of age, it was obvious that adults could not be
assessed by the same formula; so psychologists decided to measure intelligence
for the adult population as a whole after roughly age 20. They simply kept the
questions used for the oldest teens and slightly changed them for adults.
(Psychologists also kept the name of IQ to avoid artificially distinguishing
the intelligence of children from that of adults.) Similarly, the distribution
of adult scores was quite bell shaped, as had already been observed for all
children of a given age.
Today, the Stanford-Binet
Fifth Edition (SB-5, 2003) is perhaps the most prestigious of all individual
intelligence tests (along with the Wechsler scales). A major goal of the fifth
edition was to tap the extremes in intelligence – the major historical strength
of the Binet that had been fundamentally lost in the fourth edition. The
standardization sample consisted of 4800 subjects ranging from age 2 to 85+.
The SB-5 consists of 10 subtests designed to assess ability in different
fields: the examinee's IQ is his/her standing on the composite performance on
the different cognitive skills sampled. By sampling broadly from a wide range
of cognitive tasks, factors specific to any particular subtest diminish in
importance.
As Binet used to say:
"The tests do not really matter provided they are numerous."
Spearman and the g factor
Charles Spearman was puzzled
by his discovery: "Mental abilities of nearly all kinds are positively
linked in the sense that if we are good at one thing, we are also likely to be
good at others." If a person has a good vocabulary, there is a better than
even chance that he has a good memory and that arithmetic is not a problem.
Similarly, if a person is good at arithmetic, he probably has a better than
average vocabulary or memory. These associations are not always true, but they
are true on average and it is said that all our abilities are intercorrelated.
Spearman proposed the
simplest possible explanation for this universal fact. Intelligence would
consist of two kinds of factors: a single general factor, the g factor (or g for short) that would explain all the observed correlations, and
numerous specific factors, s1, s2... that would account for the differences
across tests. Suddenly, it became clear why the IQ was such a good measure of
mental growth and of general mental functioning: the IQ is an average where the
specific factors are unrelated across tasks – they cancel each other out. The g factor (or general intelligence) can
thus stand out. Spearman was fascinated: would some specific tests be more
"g loaded" than others, and if so, what would those tasks be? Only insight
into the nature of such tasks
would let him know whether he had found something trivial or worthwhile.
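To see why the specific factors cancel out, here is a minimal simulation sketch of Spearman's two-factor model; the loadings, sample sizes, and variable names below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 10

# Spearman's two-factor model: each score = g loading * g + specific factor.
g = rng.normal(size=n_people)                   # one general factor per person
s = rng.normal(size=(n_people, n_tests))        # independent specific factors
load = rng.uniform(0.4, 0.8, size=n_tests)      # how strongly each test taps g
scores = load * g[:, None] + np.sqrt(1 - load**2) * s

# Every pairwise correlation comes out positive (the "positive manifold")...
print(np.corrcoef(scores.T).round(2))
# ...and averaging across tests cancels the specific factors, so the
# composite recovers g (correlation typically above .9):
print(np.corrcoef(scores.mean(axis=1), g)[0, 1])
```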
To understand better what
Spearman did and what the g factor
is, the following analogy with school may be useful. We are all familiar with
schools/universities and with the concept of the “bright” student vs. the “not
so bright” student. The GPA (Grade Point Average or overall mean) is calculated
everywhere in the world to evaluate students on a single dimension: scholastic
success. Schools and universities can be harsh or lenient but school subjects
are always intercorrelated and a general factor of school success – the GPA –
does exist. This general factor is correlated with absolutely all school
subjects. Once he discovered the g factor, Spearman was like a school principal
curious about which subjects would best summarize his entire curriculum (that
is, which would correlate most with the GPA). The only difference was that
Spearman's "curriculum" was the entire range of human skills and abilities and
his "GPA" was the IQ.
To comprehend the nature of g was not easy: the tests or items with
a high g loading (highly correlated with the IQ) were not similar at all at
first glance, and neither were those with a low g loading (low correlation with
the IQ). "All sorts of vehicles could carry g." Spearman's great discovery,
which has been refined ever since, was that tests or items are loaded on g in
direct proportion to the level of mental complexity involved in solving them!
The best measures of g are still those where one must
compare and choose, analyze and synthesize, induce and deduce purposefully or
discover structures and infer properly. In other words, judgment and invention
(see detailed definition of intelligence above) are the two exact synonyms of
the g factor.
Let's summarize: Binet invented the notion of mental age to describe
the global level of children's cognitive development. This approach led to the
notion of IQ, a kind of super-average of all our mental aptitudes. Spearman
extracted the quintessence of the IQ, the g factor, by identifying the items
that load on it most heavily. The g factor turned out to be qualitatively more
than the mere sum of the different elements involved in the
original IQ tests: g was the ability
to manage complexity, the essence of intelligence itself.
Intelligence at War and in the Workplace
Given American pragmatism and efficiency, intelligence tests quickly crossed
the ocean, and the slow pace of development in testing picked up dramatically
when the US entered
World War One. Colonel Yerkes (originally from Harvard) helped design two group
tests whose influence would be difficult to overestimate: the Army Alpha and
the Army Beta. The Army Alpha was a mostly verbal group test of intelligence
for average and superior recruits, whereas the Army Beta was a nonverbal group
test of intelligence for illiterates and non-English-speaking recruits.
Shortly after the Army Alpha
and Beta were released for general use, C.C. Brigham, a disciple of Colonel
Yerkes, became the secretary of the College Entrance Examination Board (CEEB),
which became the College Board and the Educational Testing Service (ETS) in
1948. The CEEB had been established in 1899 to avoid duplication in the testing
of applicants to American colleges. In 1926, C.C. Brigham created the SAT
(Scholastic Aptitude Test), which is now the most widely used
intelligence test in history (it is a college admissions test taken by millions
of students each year). Arthur Kroll, an ETS official who has worked on the
SAT, estimates that the correlation between the SAT verbal score and the full
IQ is .60 to .80. Thus, in spite of several recent name changes, the SAT
remains mostly what it was 90 years ago, a strongly g-loaded test of intelligence akin to the Army Alpha. For the
record, the SAT was massively used in 1945 and 1946 to help millions of US
soldiers return to civilian life, and countless jobs were granted based on SAT
scores.
It is probably not true that
mental testing contributed much to the outcome of WWI but what cannot be denied
is the paramount contribution of mental testing to the successful drafting of
millions of men and women into the US and British armed forces during World War
II. Without mental tests, quick and efficient allocation of human resources
would have been impossible. Second World War testing showed that over 90% of
recruits with an IQ of 140 or more became commissioned officers, while less
than half of those with an IQ of 110 succeeded. If intelligence testing is the
major achievement of psychology in the 20th century, both world wars made it
what it is today.
The scores on intelligence
tests predict who will succeed and who will fail in the professions or in the
army and such tests are quick, inexpensive and easy to interpret. According to
Gregory:
"An ongoing debate
within Industrial/Organizational psychology is whether employment testing is
best accomplished with highly specific ability tests or with measures of
general cognitive ability. The weight of the evidence supports the conclusion
that a general factor of intelligence (the so-called g factor) is usually a better predictor of training and job success
than are scores on specific cognitive measures – even when several specific
cognitive measures are used in combination. Of course, this conclusion runs
counter to common sense and anecdotal evidence… The reason that g usually works better than specific
cognitive factors in predicting job performance is that most jobs are
factorially complex in their requirements, stereotypes notwithstanding. For
example, the successful engineer must explain his or her ideas to others and so
needs verbal ability as well as spatial and numerical skills. Since measures of
general cognitive ability tap many specific cognitive skills, a general test
often predicts performance in complex jobs better than measures of specific
skill."
| Selection method                 | Validity | Cost     |
|----------------------------------|----------|----------|
| Intelligence test                | High     | Low      |
| Biographical questionnaires      | High     | Low      |
| Integrity tests                  | High     | Low      |
| Comprehensive test batteries     | High     | High     |
| Assessment centers               | High     | High     |
| Aptitude and ability tests       | Moderate | Low      |
| Personality and interests tests  | Moderate | Moderate |
| Panel interviews                 | Moderate | High     |
| CV ratings                       | Low      | Low      |
| Educational records              | Low      | Low      |
| Reference checks                 | Low      | Moderate |
| One-to-one interviews            | Low      | Moderate |
A measuring device cannot be
valid unless it is reliable, but the opposite does not hold true. Reliability
is a necessary but insufficient condition for validity. Reliability has to do
with the consistency or reproducibility of what is measured. Being highly
valid, intelligence tests are necessarily also highly reliable (i.e., they
provide consistent and stable results).
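This asymmetry is easy to demonstrate numerically. A minimal sketch, with an invented trait and criterion (nothing here comes from real test data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
trait = rng.normal(size=n)                          # the attribute we want to measure
criterion = 0.8 * trait + 0.6 * rng.normal(size=n)  # an outcome driven by the trait

def reliability_and_validity(true_score, noise_sd):
    """Reliability = correlation of two administrations of the same test;
    validity = correlation of the test with the external criterion."""
    test1 = true_score + rng.normal(scale=noise_sd, size=n)
    test2 = true_score + rng.normal(scale=noise_sd, size=n)
    return np.corrcoef(test1, test2)[0, 1], np.corrcoef(test1, criterion)[0, 1]

print(reliability_and_validity(trait, 0.2))               # reliable and valid
print(reliability_and_validity(trait, 2.0))               # unreliable, hence not valid
print(reliability_and_validity(rng.normal(size=n), 0.2))  # reliable but NOT valid:
# consistency alone does not guarantee that the right thing is being measured.
```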
g-tests
provide the best basis for personnel selection in most occupations. Rivaled
perhaps only by the work sample, g-tests
have a validity coefficient of .55 averaged over many tests and many samples.
In sum, the g
factor is a better predictor of adult occupational status and income than any
other known combination of variables (it is also a strong predictor of life
expectancy).
The Biological Basis of Intelligence
The first point to make is
that intelligence tests are merely samples of what people know and can do. Tests
are never samples of innate intelligence or culture-free knowledge. The meaning
of a test may differ among cultural groups, which will affect the validity of
comparisons. Today, few would dispute the claim that human intelligence – in
its most common scientific and lay definitions – reflects a biological property
of the brain.
The brain waves that
scientists can detect follow a common pattern when an individual reacts to a
sudden stimulus such as a flash of light or a loud noise. Most effective has been
a type of EEG study known as the averaged evoked potential (AEP), which records
what happens in the cortex during transmission of a message.
Eysenck (1994) did some work
on the phenomenon and discovered that during the transmission of a message
through the cortex (from the dendrites of one cell through the intermediary of
the synapse to the axons of other cells) errors might occur that alter the EEG
picture (the more errors, the lower the IQ). Thus he defined human intelligence
as "error-free transmission through the cortex".
The AEP curves (following auditory stimulation) of high-IQ subjects are more
detailed (less smooth) than those of low-IQ subjects because their individual
evoked potentials are more alike (less error-prone) before being averaged, so
less detail is cancelled out in the averaging.
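A minimal numpy sketch shows why consistent single trials yield a more detailed average; the waveform and the latency-jitter model are invented stand-ins, not real EEG data:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
template = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)  # stand-in evoked response

def averaged_evoked_potential(jitter_sd, n_trials=100):
    """Average single-trial responses whose latencies jitter across trials."""
    shifts = rng.normal(scale=jitter_sd, size=n_trials).astype(int)
    return np.mean([np.roll(template, k) for k in shifts], axis=0)

# Alike trials (small jitter) preserve the waveform's sharp detail;
# error-prone trials (large jitter) smear it into a smoother, flatter curve.
for jitter_sd in (2, 40):
    avg = averaged_evoked_potential(jitter_sd)
    print(f"jitter {jitter_sd}: peak-to-peak amplitude {np.ptp(avg):.2f}")
```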
Today, the most promising
biological approach to general intelligence is to consider what is known about
the structure of the human brain. Indeed, the most striking feature of our
brain is its unmatched proportion of association areas, not the quality of the
neural impulse.
The parieto-frontal
integration theory (P-FIT) by Jung and Haier (2007) considers intelligence to
relate to how well the parietal and the frontal brain regions integrate to form
complex behaviors. The P-FIT was based on a review of 37 neuroimaging studies
with a total of 1,557 participants – the review included the very best
neuroimaging techniques with high spatial resolution to examine the structural
and functional correlates of intelligence.
The P-FIT is of paramount
importance because it shows the links between the parietal cortex and the
frontal/prefrontal cortex which are the most associative of all the association
areas of the brain: the parietal cortex is for sensory integration and
abstraction, whereas frontal areas are for reasoning and problem solving. Hence
smarter brains have a stronger power of association and communication – they
are capable of managing more complex information.
The P-FIT was recently reinforced by Aron Barbey (2012) who investigated the neural substrates of the general factor of intelligence (g) and executive function in 182 patients with focal brain damage using voxel-based lesion–symptom mapping. “The observed findings support an integrative framework for understanding the architecture of general intelligence and executive function, supporting their reliance upon a shared fronto-parietal network for the integration and control of cognitive representations.”
In brief, the more developed the association areas of an
individual and the more connected they are, the smarter the individual (it is
worth noting that Einstein had exceptionally large parietal lobes as well as
a fourth ridge in his mid-frontal lobes).
Challenges to General Intelligence
A very weak challenge to general
intelligence is Gardner's theory of "multiple intelligences". First
of all, it must be said that Gardner's theory is not new. Psychometricians have
long established that intelligence consists of a lot of aptitudes, and Gardner
is in fact developing a theory of the "multiple aptitudes". Guilford
once proposed a theory with 120 aptitudes, and then he suggested 150. There is
virtually no limit to the number of aptitudes we can come up with: chewing gum
can be considered an aptitude, or biking, or watching TV, or… Indeed, Gardner's
approach can be highly confusing. Consider the following syllogism: "My
computer can beat me at chess. Playing chess is a form of intelligence.
Therefore my computer is smarter than me…"
Intelligence is not a
collection of various aptitudes but the integration of various aptitudes into a
coherent whole. Humans are smarter than computers because they can switch from
chess to music and see the connections between those fields, something that
computers are completely unable to do. Idiot-savants may have very high
aptitudes in one field but have very low IQs; this is why they are considered
"idiots". Intelligence is at least as much in the links between our
various aptitudes as in these aptitudes themselves and it is a serious mistake
to reduce intelligence to the aptitudes that support it.
Average raw scores on
various IQ tests have been rising regularly and substantially in many
populations all over the world. This secular increase in IQ is known as the
Flynn Effect. Using the IQ values of 1997, Neisser estimated that the average
IQ of the United States in 1932, according to the first Stanford–Binet
Intelligence Scales standardization sample, was 80. He states that "Hardly
any of them would have scored 'very superior', but nearly one-quarter would
have appeared to be 'deficient.'"
The graph below indicates the point gains on different tests over 50 years:
[Figure: IQ point gains over 50 years. Green = Raven Progressive Matrices;
blue = Wechsler Similarities; red = Wechsler Total IQ; mauve = Wechsler
Comprehension; dark grey = Wechsler Information, Vocabulary, and Arithmetic.]
Trahan et al. (2014) found
that the effect was about 2.93 points per decade, based on both Stanford–Binet
and Wechsler tests; they also found no evidence the effect was diminishing. Thus,
the theory that IQ is a kind of inborn “ceiling” to our possible development
doesn’t seem to agree with the facts. Intelligence is not something fixed and
stable. Yes, thinking can be taught and improved. Intellectual development is
not a predetermined process in which the basic assimilative faculties are more
or less fixed at birth. Because the elementary laws of intellectual development are not
well understood, IQ tracking or segregation cannot be justified. Recent
pedagogical advances make it more and more possible to facilitate the
development of intellectual skills in everyone and at any time.
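As a back-of-the-envelope check, the Trahan et al. rate squares well with Neisser's estimate; the sketch below uses only figures quoted above:

```python
# Flynn effect: roughly 2.93 IQ points gained per decade (Trahan et al., 2014).
RATE_PER_DECADE = 2.93

def cumulative_gain(years):
    return years / 10 * RATE_PER_DECADE

gain = cumulative_gain(1997 - 1932)   # 65 years between the two norm dates
print(round(gain, 1))                 # ~19 points
print(round(100 - gain, 1))           # ~81, close to Neisser's estimate of 80
                                      # for the 1932 mean on 1997 norms
```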
Many people believe that
they were not born smart. This myth is popular because it shifts responsibility
for failure away from an individual and onto the genetic blueprint from which
the brain was constructed. Students who get poor grades can excuse their marks
by thinking: "I wasn't born smart enough to cope with studying!"
Incompetent teachers can justify poor grades by complaining: "How can we
teach kids who were born to fail?"
If the Flynn effect proves
anything, it is that intelligence is something we acquire from experience rather than an
inborn ability. This does not mean that inheritance has no role to play in
establishing levels of intellectual ability. Few would argue against the fact
that the upper limits of human intellectual capacity are to a great extent
determined by the physical structure of the brain (as explained in the previous
section). It is also fair to assume that these upper limits will vary from one
individual to the next.
My contention is, however,
that these genetically determined upper limits are of no practical significance
to the person seeking to increase mental ability substantially. There are many
reasons for supposing that we never come anywhere near these limits and that
man's ability to reason, to remember, and to learn expands as the need arises.
Our brains are not significantly different, in biological terms, from the
brains of our Bronze Age forebears, yet they are capable of exploring space,
constructing computers, and comprehending complex abstract scientific concepts.
The harder your brain is obliged to work, the greater will be its capacity for
work. The more efficiently you allow your brain to function, the greater will
be its ability to function with speed, accuracy, and confidence, no matter what
the intellectual challenge.
Of key importance in
imposing restrictions on this functioning, however, is the manner in which an
individual comes to view his or her intellect. This is why the myth of inborn
mental inferiority is so damaging. Believe it, and you place your brain behind
bars to serve out a life sentence of inadequacy. Change those beliefs, and you
can free it to work better than you might ever have considered possible.
Although I believe that many
people unnecessarily restrict their own horizons, and those of others, and I encourage you
to expand yours, I do not profess that you can accomplish anything you set your
mind on, so long as you believe you can. All of us have our restrictions and
limitations. Taking charge of your cognitive development does not imply that
you can achieve anything imaginable. It implies, instead, taking advantage of
the opportunities you have.
Look at it this way. Suppose
that the total amount of terrain over which you might travel at will equals a
million acres. Due to various constraints you cannot overcome – no-trespassing
areas, as it were – the possibilities are reduced to 10,000 acres. You are
certainly blocked off from a lot of terrain. That's a pity, but why complain,
when at present you only utilize 100 acres, or 1 percent of your possibilities?
Stop bemoaning the limitations, and start exploring the possibilities.