
It is uncontroversial to say that the cognitive sciences do not deal exclusively with directly observable phenomena, yet they nonetheless aim to study the physical causes of behavior and cognition scientifically. "Grasping," for instance, is directly observable, but cognitive science also studies "intelligence," which is not directly observable, at least not under all conceptualizations of intelligence.

When cognitive science studies such an unobserved quantity, the difference between the general concept, the unobserved construct, and the direct measure is sometimes rendered implicit. More colloquially: go far enough down the methodological rabbit hole for an unobserved construct, and you will likely eventually find a patient-zero paper that defines a broad folk concept, then makes a spherical-cow or just-so claim that a construct maps onto the concept.

Example: Working memory

For an illustration of a just-so definition, take Baddeley and Hitch's definition of working memory in their seminal 1974 work, which they proposed as a replacement for the classical short-term/long-term memory model:

a system that provides temporary storage and manipulation of information

(The relationship rendered implicit here is ["memory" -> STM/LTM -> WM/LTM].)

To be fair to Baddeley & Hitch, and so as not to single them out: Baddeley (2007) later defended the model against this criticism on the grounds of its productivity and its value as a guide to discovery. I find this quite reasonable, and I do not mean to impugn their excellent research or the model's productivity in any way, only to use the example as an illustration for the question.

Problem of implicit relations

Implicit relations are antithetical to the practice of science because they engender unstated assumptions and hidden theoretical incompatibilities. They are also antithetical to science communication and scientific literacy, because important words like "intelligence" and "memory" become effectively polyvalent, taking on subtly different meanings that are not apparent without considerable study. The relationship between IQ and intelligence is the prototypical example.

Question

What is the relation between measures, constructs and concepts as actually used in the cognitive sciences? (Contrast intelligence or memory with grasping or walking, if concrete examples are helpful.)

Please read my comments to the question and current answer before asking for additional information.

References

  • Baddeley, A. (2007). Working memory, thought, and action. Oxford University Press.
  • Baddeley, A. D., & Hitch, G. (1974). Working memory. Psychology of Learning and Motivation, 8, 47-89.
  • Isn't this question dependent on the subject of study? I mean, do concepts not lead to constructs in order to find measures to study them, and isn't their interrelation defined by the study topic? Also, intelligence is measured by IQ – debated, but accepted. I find this question quite philosophical in nature, and difficult to reconcile with its practical side, as exemplified by the Methodology and Measurement tags. What exactly are you after, if I may ask? +1 for the undoubtedly great amount of work that went into this question. My comment asks for clarification; it is not critique.
    – AliceD
    Commented Mar 24, 2015 at 11:46
  • @AliceD I think you are overinterpreting it somewhat. The question is not whether IQ IS an accepted measure of intelligence, or HOW a perfect cognitive scientist might define/justify such an acceptance, but rather how real cognitive science defines/justifies such an acceptance. For example, I would accept an answer which showed sufficiently (e.g., with a few sourced examples) that different fields of study actually define/justify this mapping differently. The answer involves philosophy in some sense, but the question is more empirical or practical than philosophical.
    Commented Mar 24, 2015 at 11:54
  • Right... Quite frankly, a question over my head then :) Good luck!
    – AliceD
    Commented Mar 24, 2015 at 11:57
  • @ChristianHummeluhr, I would like to answer this question, but after reading it over several times, I'm still not sure what you're looking for when you say "relation between ...". Are you asking how to determine whether or not a particular test actually measures what it purports to measure? Are you asking how scientists can justify the introduction of a cognitive construct that is based on a folk concept? Are you asking how different interpretations of the same construct may be reconciled? What is the process of determining a suitable measure for a given construct? A little bit of each?
    – Arnon Weinberg
    Commented Mar 27, 2015 at 2:24
  • @ArnonWeinberg I think the question may be confusing because most of us tend to think of methodology in normative terms. This is an empirical question about how cognitive scientists in fact do this, in line with recent research on how they in fact understand, say, confidence intervals (spoiler: poorly, as I recall). I considered using "ontology" instead of "relation," but I surmised it would only cause more confusion, as most people would read it in the philosophical sense. If you want/need more detail, I'm available in chat.
    Commented Mar 27, 2015 at 7:22

2 Answers


In speaking to constructs vs. measures, I believe the difference is clear and implied in your background: constructs are things that cannot be directly measured (but that we assume exist), whereas measures are directly measurable attributes that we assume relate to the construct. The process you seem to be questioning is that of the operational definition: defining that which cannot be measured (the construct) in terms of that which can (the measure). Your concern that these are not equivalent is quite right, and many papers are written for the sole purpose of explaining why a certain measure matches a theoretical construct (the process of measurement validation and establishing construct validity).

As to why operational definitions are necessary, the answer is simple: communication. Since constructs are (in many cases) not measurable, some type of operationalization must be used. If multiple claims are made about a construct, each using a different operational definition, the results may be contradictory purely because of the differences in the operational definitions assigned by separate researchers! (Education Letter, 2013)

Using operational definitions instead of simply naming the construct can help researchers better determine which results should be comparable. Remember, each of these definitions may have been validated, yet differences in measurement will naturally yield differences in the data that may not be relevant to the construct. This is the heart of construct invalidity (Groves et al., 2009).
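
To make this concrete, here is a minimal sketch (Python, with simulated and entirely hypothetical data; the noise levels are arbitrary illustrative choices): a single latent construct is tapped by two different operationalizations, each of which correlates well with the construct, yet the two measures agree with each other noticeably less than either agrees with the construct, leaving room for studies built on different definitions to disagree about the same people.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

construct = rng.normal(0, 1, n)                # the unobserved construct
measure_a = construct + rng.normal(0, 0.8, n)  # operationalization A, with its own method noise
measure_b = construct + rng.normal(0, 0.8, n)  # operationalization B, different method noise

def corr(x, y):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

# Each measure is individually "valid" (r ≈ 0.78 with the construct) ...
print(f"A vs construct: r = {corr(measure_a, construct):.2f}")
print(f"B vs construct: r = {corr(measure_b, construct):.2f}")
# ... but the two measures agree less with each other (r ≈ 0.61),
# so contradictory claims about the same construct are possible.
print(f"A vs B:         r = {corr(measure_a, measure_b):.2f}")
```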

I've always enjoyed seminal papers, and one you may be interested in is the work that defined construct validity (and distinguished it from other types of validity), by Lee Cronbach and Paul Meehl:

"Construct validity is ordinarily studied when the tester has no definite criterion measure of the quality with which he is concerned, and must use indirect measures. Here the trait or quality underlying the test is of central importance, rather than either the test behavior or the scores on the criteria" (Cronbach & Meehl, 1955, p. 282).

It sounds like you may take issue with failures to apply proper validity checks, and to apply the correct types of validity to the correct situations.

I hope this helps to clarify the nuances of a very involved question! Perhaps the heart of the question/solution is this: though constructs and operational definitions of constructs get used interchangeably, they shouldn't be.

References

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302.

Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology (Vol. 2). John Wiley & Sons.

General science; new findings reported from Carnegie Mellon University describe advances in general science. (2013). Education Letter, 90.

  • Thank you. I think this is a helpful clarification, but as you note, most of the information here was already given in the question. You start touching on the core of the question when you get into construct validity, but I would amend your statement of the core question: I have a hunch that many lines of research flat out do not support a valid connection between concept and construct beyond making a face-validity argument, and that this is hidden behind what I've called implicit relations in the question.
    Commented Mar 30, 2015 at 10:42
  • And, of course, similarly for supporting a connection between measures and constructs.
    Commented Mar 30, 2015 at 10:50

Apologies in advance for the long answer. I tried to narrow down the scope by focusing on only a single construct, and only a single aspect of validity, and it still turned out like an essay...

Let's take intelligence research as an example. This work started with an intelligence concept – a fairly vague and ambiguous idea about a personality trait describing a person's cognitive abilities. From this, a construct was hypothesized: a physical – but unobserved – mechanism that implements intelligence, referred to as faculties (or the g-factor). The next step was to figure out how such faculties might be measured. There was a desire for simple tools to do this, and so a variety of IQ tests – mostly written, some not – were developed.
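
To make that inferential step concrete, here is a minimal sketch (simulated data; the loadings and noise levels are arbitrary illustrative choices) in which a single hypothetical latent factor generates a six-test battery, and the first principal component of the battery recovers a g-like score. Note the built-in circularity: the analysis recovers a general factor because the simulation put one there, which is precisely the assumption at issue.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_tests = 500, 6

g = rng.normal(0, 1, n_people)                   # the hypothesized latent factor
loadings = rng.uniform(0.5, 0.9, n_tests)        # how strongly each test taps g
noise = rng.normal(0, 0.6, (n_people, n_tests))  # test-specific variance
scores = g[:, None] * loadings + noise           # the observed test battery

# First principal component of the standardized battery
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
pc1 = z @ eigvecs[:, -1]                         # component with the largest eigenvalue

# PC1 recovers the latent factor it was built to find (its sign is arbitrary)
print(f"|r(PC1, g)| = {abs(np.corrcoef(pc1, g)[0, 1]):.2f}")
print(f"share of variance on first component: {eigvals[-1] / eigvals.sum():.2f}")
```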

It is not clear in any sense that intelligence as a (folk) concept has any ontological basis. Presumably the construct of cognitive faculties should have a physical basis, but in practice it doesn't appear to. And IQ tests do not have as their primary objective the measurement of such faculties anyway (although they may be useful indicators). IQ tests were originally developed to predict academic performance, and test validity continues to be measured against that, which is typical of psychometrics in general. So in what sense (if any) are these terms related to one another?

This is a question of validity – more specifically, construct validity. The relationship between concept, construct, and measurement is a dynamic and bi-directional one. An often-cited example in intelligence research is the case of the Brazilian street vendors. In a classic 1988 study by Geoffrey Saxe, Brazilian street children who often work as candy vendors were compared with rural non-seller counterparts on mathematical abilities. Though typically scoring lower in IQ, the street vendors matched, and in some cases outperformed, their rural counterparts on a variety of practical applications involving arithmetic.

Why the discrepancy? It turns out that children with a formal education in math are good at solving mathematical problems framed in a formal manner. Lacking a school education, street kids do not acquire these skills, and so do poorly on tests requiring them to identify and work with symbols such as numbers and operators. However, working as street vendors, these children are very proficient at arithmetic involving currency, such as calculating costs and making change – without any aids. So when presented with math problems framed as currency operations – how much does it cost to buy 3 of this and 2 of that, or how much change do I get back from this bill – they perform better than educated children of the same age.

Similar results were found by Jean Lave (1988): in her study, Berkeley housewives performed significantly better on mathematics framed as grocery-shopping tasks – calculating discounts and coupons, for example – than on the same problems framed as typical classroom math. In another study, by Ceci and Liker (1986), gambling experts were capable of the complex mathematics involved in calculating handicaps in horse racing, yet did not differ in IQ from non-experts.

What can we learn from this? Many different definitions of intelligence have been considered. One interpretation of these results is that intelligence is not just a general trait, as implied by the single-number IQ result, but is composed of a number of independent domain-specific intelligences, some of which are missed by the test. A different interpretation is that an unbiased measure of general intelligence requires framing questions in a context familiar to each subject. One thing seems clear: a general factor of intelligence that carries over to any domain appears less ontologically tenable.
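
The first interpretation can be sketched in the same simulated style as above (again, entirely hypothetical data): if scores are driven by two independent domain-specific abilities rather than one general factor, the variance of the battery splits across two comparable components instead of concentrating in a single dominant one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

formal = rng.normal(0, 1, n)  # hypothetical "formal schooling" math ability
street = rng.normal(0, 1, n)  # hypothetical "street vending" math ability (independent)

# Three tests tapping each domain, each with its own noise
tests = np.column_stack(
    [formal + rng.normal(0, 0.5, n) for _ in range(3)]
    + [street + rng.normal(0, 0.5, n) for _ in range(3)]
)

z = (tests - tests.mean(axis=0)) / tests.std(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]  # descending order
print("share of variance per component:", np.round(eigvals / eigvals.sum(), 2))
# Two comparable leading components (~0.43 each) emerge, rather than one
# dominant "g" component; a single summary score would hide this structure.
```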

Most evaluations of construct validity are domain-specific – they depend on what the theory being validated purports to describe, and as indicated, that too may change based on empirical findings. Of course, changes may not be quick to materialize: the concept of intelligence and intelligence testing have certainly co-evolved over the years, but the rate of change has not been commensurate with findings such as the above, and is often skewed by political agendas. So in the meantime, it is not unusual for researchers to pursue red herrings – cognitive concepts that eventually turn out to be ontologically untenable. But this is hardly unique to cognitive science, and it is a natural part of the scientific process of learning.

  • Thanks, Arnon. I think this is a very useful representation of the orthodox cognitive ontology, at least as I learned it. Still, as I indicated in chat, I think not committing explicitly to an ontology is actually fairly unique to cognitive science. We don't have to look much farther than clinical and experimental cognitive psychology (ever seen a clinician debate a methodologist?) to see clear implicit differences in what constitutes something ontologically tenable, yet it's rare to see this clarified. I think you're reaching a bit in saying that intelligence LACKS a basis, though. ;)
    Commented Apr 1, 2015 at 8:49
  • Well, I certainly agree that lack of ontological basis is a common issue in this field, far more so than in pretty much any other field of science, though I wouldn't say it's unique (what is the ontological basis for measuring "running" speed or endurance in biology, or constellations in early physics? It's just part of the learning process). But that's reflective of the complexity of the organ being studied. Anyway, I tried to focus on empirical evidence as you requested, rather than on the philosophical issues, which could easily take up a book.
    – Arnon Weinberg
    Commented Apr 1, 2015 at 15:35
  • I just wanted to note that I un-accepted your answer only because I've come to think questions with accepted answers are less likely to receive additional answers, and others could still have information to contribute.
    Commented May 12, 2015 at 16:33
  • The fallacy of assuming that concepts are naturally ontological is apparently called "reification".
    – Arnon Weinberg
    Commented Mar 29, 2021 at 6:04
  • @ChristianHummeluhr: Given that it's been several years since you asked the question, you may want to re-accept this answer (or whichever one you found most useful/best answered your question).
    – V2Blast
    Commented Sep 7, 2021 at 22:10
