37
$\begingroup$

I read recently on this site that the growth mindset seems not to be real. I did not know that (I admit that I don't follow research into learning as closely as I would like). Can I turn that experience around and ask: which results useful to a working educator seem to be solid, in that they replicate and perhaps in that there are a number of studies that have reinforced each other over time?

Ideal would be a pointer to a reasonably recent book that would help an educator who is not a psychologist but who is interested in building on what they know.

$\endgroup$

5 Answers

63
$\begingroup$

There's a highly upvoted answer here claiming that practically no cognitive psychology findings hold up in replication. I don't think that's true at all. Sure, many findings don't hold up, but also, many findings do.

For instance: we know that actively solving problems produces more learning than passively watching a video/lecture or re-reading notes. This sort of thing has been tested scientifically, numerous times, and it is completely replicable. It might as well be a law of physics at this point. In fact, a highly-cited meta-analysis states, verbatim:

"...[C]alls to increase the number of students receiving STEM degrees could be answered, at least in part, by abandoning traditional lecturing in favor of active learning. … Given our results, it is reasonable to raise concerns about the continued use of traditional lecturing as a control in future experiments."

So there you go, that's one cognitive psychology finding that holds up: active learning beats passive learning.

(To be clear: active learning doesn't mean that students never watch and listen. It just means that students are actively solving problems as soon as possible following a minimum effective dose of initial explanation, and they spend the vast majority of their time actively solving problems -- and by "vast majority" I mean, like, 90%, not 60%.)

Another finding: if you don't review information, you forget it. You can actually model this precisely, mathematically, using a forgetting curve. I'm not exaggerating when I refer to these things as laws of physics -- the only real difference is that we've gone up several levels of scale and are dealing with noisier stochastic processes (that also have noisier underlying variables).
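As a toy illustration (the exact functional form and parameter values vary across studies; exponential decay with a "stability" parameter is one common model, and the numbers below are assumptions, not measurements), a forgetting curve can be sketched as:

```python
import math

def retention(t_days, stability):
    """Probability of recall after t_days under a simple exponential
    forgetting-curve model, R = exp(-t / S). The stability S grows
    with each successful review, flattening the curve."""
    return math.exp(-t_days / stability)

# With stability S = 5 days, recall starts at 100% and decays:
print(round(retention(0, 5), 2))   # 1.0
print(round(retention(5, 5), 2))   # 0.37 -- about a third remains after S days
```

The point isn't the specific constants, it's that forgetting follows a predictable, quantifiable decay, which is exactly what makes it possible to schedule reviews systematically.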

Okay, but aren't these obvious? Yes, but...

  • Yes, but in education, obvious strategies often aren't put into practice. For instance, plenty of classes still run on a pure lecture format and don't review previously learned material unless it's the day before a test.

  • Yes, but there are plenty of other findings that replicate just as well but are not so obvious.

Here are some less obvious findings.

  • The spacing effect: more long-term retention occurs when you space out your practice, even if it's the same amount of total practice. As researcher Doug Rohrer states:

"...[T]he spacing effect is arguably one of the largest and most robust findings in learning research, and it appears to have few constraints."

  • Note: There are tons of more detailed scientific references/quotes I want to include, but I'm going to skip them so as not to blow up the length of this already-gigantic answer any further. If you want to see them, here's a draft I'm working on that covers all these findings (and more) with over 300 references and relevant quotes pulled out of those references.

  • A profound consequence of the spacing effect is that the more reviews are completed (with appropriate spacing), the longer the memory will be retained, and the longer one can wait until the next review is needed. This observation gives rise to a systematic method for reviewing previously-learned material called spaced repetition (or distributed practice). A "repetition" is a successful review at the appropriate time.

  • To maximize the amount by which your memory is extended when solving review problems, it's necessary to avoid looking back at reference material unless you are totally stuck and cannot remember how to proceed. This is called the testing effect, also known as the retrieval practice effect: the best way to review material is to test yourself on it, that is, practice retrieving it from memory, unassisted.

  • The testing effect can be combined with spaced repetition to produce an even more potent learning technique known as spaced retrieval practice.

  • During review, it's also best to spread minimal effective doses of practice across various skills. This is known as mixed practice or interleaving -- it's the opposite of "blocked" practice, which involves extensive consecutive repetition of a single skill. Blocked practice can give a false sense of mastery and fluency because it allows students to settle into a robotic rhythm of mindlessly applying one type of solution to one type of problem. Mixed practice, on the other hand, creates a "desirable difficulty" that promotes vastly superior retention and generalization, making it a more effective review strategy.

  • To free up mental processing power, it's critical to practice low-level skills enough that they can be carried out without requiring conscious effort. This is known as automaticity. Think of a basketball player who is running, dribbling, and strategizing all at the same time -- if they had to consciously manage every bounce and every stride, they'd be too overwhelmed to look around and strategize. The same is true in math. I wrote more about the importance of automaticity in a recent answer here.

  • The most effective type of active learning is deliberate practice, which consists of individualized training activities specially chosen to improve specific aspects of a student's performance through repetition (effortful repetition, not mindless repetition) and successive refinement. However, because deliberate practice focuses intense effort on areas beyond one's current repertoire, it tends to be less enjoyable, and people tend to avoid it, instead opting to practice ineffectively within their comfort zone (which is never a form of deliberate practice, no matter what activities are performed).

  • Instructional techniques that promote the most learning in experts, promote the least learning in beginners, and vice versa. This is known as the expertise reversal effect. An important consequence is that effective methods of practice for students typically should not emulate what experts do in the professional workplace (e.g., working in groups to solve open-ended problems). Beginners (i.e. students) learn most effectively through direct instruction.
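Several of the review-scheduling findings above (spacing, and expanding the gap after each successful retrieval) can be sketched with a toy scheduler. To be clear, this is a minimal illustration of the idea, not any particular published algorithm; the growth factor and the reset-on-failure rule are assumptions:

```python
def next_interval(current_interval_days, recalled, growth=2.0):
    """Toy spaced-repetition rule: a successful unassisted retrieval
    expands the gap until the next review; a failed retrieval shrinks
    it back to a short interval so the material is relearned soon."""
    if recalled:
        return current_interval_days * growth
    return 1.0

# A run of successful reviews: 1 -> 2 -> 4 -> 8 days between reviews.
interval = 1.0
for _ in range(3):
    interval = next_interval(interval, recalled=True)
print(interval)  # 8.0
```

Real systems tune the growth factor per item and per learner, but the qualitative behavior is the same: each successful, appropriately-timed review buys a longer period before the next one is needed.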

Why haven't these findings transformed education?

In Daniel R. Collins' answer, he states "if there was some magic solution, it would have been implemented large-scale very quickly."

That raises the question: if cognitive psychology has found many effective learning strategies (like mastery learning, spaced repetition, the testing effect, and mixed practice), then why haven't these learning strategies been implemented large-scale?

Here are a handful of reasons that I'm aware of.

1. Leveraging them (at all) requires additional effort from both teachers and students.

In some way or another, each strategy increases the intensity of effort required from students and/or instructors, and the extra effort is then converted into an outsized gain in learning.

This theme is so well-documented in the literature that it even has a catchy name: a practice condition that makes the task harder, slowing down the learning process yet improving recall and transfer, is known as a desirable difficulty.

Desirable difficulties make practice more representative of true assessment conditions. Consequently, it is easy for students (and their teachers) to vastly overestimate their knowledge if they do not leverage desirable difficulties during practice, a phenomenon known as the illusion of comprehension.

However, the typical teacher is incentivized to maximize the immediate performance and/or happiness of their students, which biases them against introducing desirable difficulties and incentivizes them to promote illusions of comprehension.

Using desirable difficulties exposes the reality that students didn't actually learn as much as they (and their teachers) "felt" they did under less effortful conditions. This reality is inconvenient to students and teachers alike; therefore, it is common to simply believe the illusion of learning and avoid activities that might present evidence to the contrary.

2. Leveraging cognitive learning strategies to their fullest extent requires an inhuman amount of effort from teachers.

Let's imagine a classroom where these strategies are being used to their fullest extent.

  • Every individual student is fully engaged in productive problem-solving, with immediate feedback (including remedial support when necessary), on the specific types of problems, and in the specific types of settings (e.g., with vs without reference material, blocked vs interleaved, timed vs untimed), that will move the needle the most for their personal learning progress at that specific moment in time.

  • This is happening throughout the entirety of class time, the only exceptions being those brief moments when a student is introduced to a new topic and observes a worked example before jumping into active problem-solving.

Why is this an inhuman amount of work?

  • First of all, it's at best extremely difficult, and at worst (and most commonly) impossible, to find a type of problem that is productive for all students in the class. Even if a teacher chooses a type of problem that is appropriate for what they perceive to be the "class average" knowledge profile, it will typically be too hard for many students and too easy for many others (an unproductive use of time for those students either way).

  • Additionally, to even know the specific problem types that each student needs to work on, the teacher has to separately track each student's progress on each problem type, manage a spaced repetition schedule of when each student needs to review each topic, and continually update each schedule based on the student's performance (which can be incredibly complicated given that each time a student learns or reviews an advanced topic, they're implicitly reviewing many simpler topics, all of whose repetition schedules need to be adjusted as a result, depending on how the student performed). This is an inhuman amount of bookkeeping and computation.

  • Furthermore, even on the rare occasion that a teacher manages to find a type of problem that is productive for all students in the class, different students will require different amounts of practice to master the solution technique. Some students will catch on quickly and be ready to move on to more difficult problems after solving just a couple problems of the given type, while other students will require many more attempts before they are able to solve problems of the given type successfully on their own. Additionally, some students will solve problems quickly while others will require more time.

In the absence of the proper technology, it is impossible for a single human teacher to deliver an optimal learning experience to a classroom of many students with heterogeneous knowledge profiles, who all need to work on different types of problems and receive immediate feedback on each attempt.
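To give a feel for the scale of the bookkeeping described above (the structure and update rule here are illustrative assumptions, not any real system's schema), even a heavily simplified per-student, per-topic tracker looks like this:

```python
from dataclasses import dataclass, field
import datetime

@dataclass
class TopicState:
    """Review state that must be tracked for ONE student on ONE topic."""
    interval_days: float = 1.0
    due: datetime.date = field(default_factory=datetime.date.today)

def record_review(state, recalled, today):
    """Update one topic's schedule after a review. In a real classroom,
    this would run for every (student, topic) pair -- and reviewing an
    advanced topic would also implicitly update its prerequisites."""
    state.interval_days = state.interval_days * 2 if recalled else 1.0
    state.due = today + datetime.timedelta(days=round(state.interval_days))
    return state

# 30 students x 50 topics = 1500 schedules to maintain by hand.
gradebook = {(s, t): TopicState() for s in range(30) for t in range(50)}
print(len(gradebook))  # 1500
```

And this omits the hard parts: implicit review of prerequisites, per-student growth factors, and choosing which due topic to serve next. That's the computation a teacher without software would have to do in their head.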

3. Most edtech systems do not actually leverage the above findings.

If you pick any edtech system off the shelf and check whether it leverages each of the cognitive learning strategies I've described above, you'll probably be surprised at how few it actually uses. For instance:

  • Tons of systems don't scaffold their content into bite-sized pieces.

  • Tons of systems allow students to move on to more material despite not demonstrating knowledge of prerequisite material.

  • Tons of systems don't do spaced review. (Moreover, tons of systems don't do any review.)

Sometimes a system will appear to leverage some finding, but if you look more closely it turns out that this is actually an illusion that is made possible by cutting corners somewhere less obvious. For instance:

  • Tons of systems offer bite-sized pieces of content, but they accomplish this by watering down the content, cherry-picking the simplest cases of each problem type, and skipping lots of content that would reasonably be covered in a standard textbook.

  • Tons of systems make students do prerequisite lessons before moving on to more advanced lessons, but they don't actually measure tangible mastery on prerequisite lessons. Simply watching a video and/or attempting some problems is not mastery. The student has to actually be getting problems right, and those problems have to be representative of the content covered in the lesson.

  • Tons of systems claim to help students when they're struggling, but the way they do this is by lowering the bar for success on the learning task (e.g., by giving away hints). Really, what the system needs to do is take actions that are most likely to strengthen a student's area of weakness and empower them to clear the bar fully and independently on their next attempt.

Now, I'm not saying that these issues apply to all edtech systems. I do think edtech is the way forward here -- optimal teaching is an inhuman amount of work, and technology is needed. Heck, I personally developed all the quantitative software behind one system that properly handles the above challenges. All I'm saying is that you can't just take these things at face value. Many edtech systems don't really work from a learning standpoint, just as many psychology findings don't hold up in replication -- but at the same time, some edtech systems do work, shockingly well, just as some cognitive psychology findings do hold up and can be leveraged to massively increase student learning.

4. Even if you leverage the above findings, you still have to hold students accountable for learning.

Suppose you have the Platonic ideal of an edtech system that leverages all the above cognitive learning strategies to their fullest extent.

Can you just put a student on it and expect them to learn? Heck no! That would only work for exceptionally motivated students.

Most students are not motivated to learn the subject material. They need a responsible adult -- such as a parent or a teacher -- to incentivize them and hold them accountable for their behavior.

I can't tell you how many times I've seen the following situation play out:

  • Adult puts a student on an edtech system.

  • Student goofs off doing other things instead (e.g., watching YouTube).

  • Adult checks in, realizes the student is not accomplishing anything, and asks the student what's going on.

  • Student says that the system is too hard or otherwise doesn't work.

  • Adult might take the student's word at face value. Or, if the adult notices that the student hasn't actually attempted any work and calls them out on it, the scenario repeats with the student putting forth as little effort as possible -- enough to convince the adult that they're trying, but not enough to really make progress.

In these situations, here's what needs to happen:

  • The adult needs to sit down next to the student and force them to actually put forth the effort required to use the system properly.

  • Once it's established that the student is able to make progress by putting forth sufficient effort, the adult needs to continue holding the student accountable for their daily progress. If the student ever stops making progress, the adult needs to sit down next to the student again and get them back on the rails.

  • To keep the student on the rails without having to sit down next to them all the time, the adult needs to set up an incentive structure. Even little things go a long way, like "if you complete all your work this week then we'll go get ice cream on the weekend," or "no video games tonight until you complete your work." The incentive has to be centered around something that the student actually cares about, whether that be dessert, gaming, movies, books, etc.

Even if an adult puts a student on an edtech system that is truly optimal, if the adult clocks out and stops holding the student accountable for completing their work every day, then of course the overall learning outcome is going to be worse.

Connecting to mechanics within the brain

Before ending this answer, I want to drive home the point that the cognitive learning strategies discussed here really do connect all the way down to the mechanics of what's going on in the brain.

The goal of mathematical instruction is to increase the quantity, depth, retrievability, and generalizability of mathematical concepts and skills in the student's long-term memory (LTM).

At a physical level, that amounts to creating strategic connections between neurons so that the brain can more easily, quickly, accurately, and reliably activate more intricate patterns of neurons. This process is known as consolidation.

Now, here's the catch: before information can be consolidated into LTM, it has to pass through working memory (WM), which has severely limited capacity. The brain's working memory capacity (WMC) represents the amount of effort that it can devote to activating neural patterns and persistently maintaining their simultaneous activation, a process known as rehearsal.

Most people can only hold about 7 digits (or, more generally, 4 chunks of coherently grouped items) simultaneously, and only for about 20 seconds -- and that assumes they don't need to perform any mental manipulation of those items; if they do, then fewer items can be held due to competition for limited processing resources.

Limited capacity makes WMC a bottleneck in the transfer of information into LTM. When the cognitive load of a learning task exceeds a student's WMC, the student experiences cognitive overload and is not able to complete the task. Even if a student does not experience full overload, a heavy load will decrease their performance and slow down their learning in a way that is NOT a desirable difficulty.

Additionally, different students have different WMC, and those with higher WMC are typically going to find it easier to "see the forest for the trees" by learning underlying rules as opposed to memorizing example-specific details. (This is unsurprising given that understanding large-scale patterns requires balancing many concepts simultaneously in WM.)

It's expected that higher-WMC students will more quickly improve their performance on a learning task over the course of exposure, instruction, and practice on the task. However, once a student learns a task to a sufficient level of performance, the impact of WMC on task performance is diminished because the information processing that's required to perform the task has been transferred into long-term memory, where it can be recalled by WM without increasing the actual load placed on WM.

So, for each concept or skill you want to teach:

  • it needs to be introduced after the prerequisites have been learned (so that the prerequisite knowledge can be pulled from long-term memory without taxing WM)

  • it needs to be broken down into bite-sized pieces small enough that no piece overloads any student's WM

  • each student needs to be given enough practice to achieve mastery on each piece – and that amount of practice may vary depending on the particular student and the particular learning task.

But also, even if you do all the above perfectly, you still have to deal with forgetting. The representations in LTM gradually, over time, decay and become harder to retrieve if they are not used, resulting in forgetting.

The solution to forgetting is review -- and not just passively re-ingesting information, but actively retrieving it, unassisted, from LTM. Each time you successfully actively retrieve fuzzy information from LTM, you physically refresh and deepen the corresponding neural representation in your brain. But that doesn't happen if you just passively re-ingest the information through your senses instead of actively retrieving it from LTM.

Further Reading

I've written extensively on this. See the working draft here for more info and hundreds of scientific citations to back it up.

The citations are from a wide variety of researchers, but there's one researcher in particular who has published a TON of papers relevant to this question/answer in particular, has all (or at least most) of those papers freely available on his personal site, and has a really engaging and "to the point" writing style, so I want to give him a shout-out. His name is Doug Rohrer. You can read his papers here: drohrer.myweb.usf.edu/pubs.htm

Similarly, there are amazing practical guides on retrievalpractice.org that not only describe these learning strategies but also talk about how to leverage them in the classroom. They're easy reading yet also incredibly informative. Here are some of my favorites:

As far as books, check out the following:

In the comments, OpalE suggests another resource that I agree is worth checking out: learningscientists.org

$\endgroup$
13
  • 8
    $\begingroup$ My frustration is that my goal as a math professor isn't for students to learn actual mathematics facts or procedures, but rather become better at picking up and digesting new (at least to them) math, solving problems unlike any they have seen before, and (eventually) inventing their own (correct!) math. For me, the content we teach has no purpose in itself - it's just a route for developing cognitive and intellectual skills. Am I just doomed when encountering a student with low WMC? $\endgroup$ Commented May 11 at 16:28
  • 3
    $\begingroup$ @alexanderwoo I wouldn't necessarily say you're "doomed," but I would say that in practice, the most effective way to maximize your tangible positive impact on students is to equip them with content knowledge. Developing solid understanding and technical skills within foundational mathematics is attainable for vastly more students than is "thinking like a mathematician," and it still opens the door to all sorts of life opportunities. ... $\endgroup$ Commented May 11 at 17:51
  • 3
    $\begingroup$ This is an answer which mirrors exactly my thoughts on the subject. For extra resources, I have found learningscientists.org to be dedicated to actual replicated cognitive psychology on learning, and it has digestible resources for students and teachers. For extra commentary on why some of these are not implemented in classrooms, I note that when I was a secondary teacher we were discouraged from having deadlines, direct instruction, or rote practice, when struggling students often need these approaches. $\endgroup$
    – Opal E
    Commented May 13 at 14:31
  • 3
    $\begingroup$ In particular, it is extremely difficult to promote spaced practice or varied practice without spaced deadlines, and it is extremely difficult to reach automaticity without some amount of rote practice. As these are essential to long-term retention, the deletion of any sense of deadlines or "doing things on time" in schools for the sake of "grading students on what they've learned, not how they behave" may appear to get them to the same level in the moment, but in my opinion is severely handicapping them long-term due to the lack of spacing & repetition effects. $\endgroup$
    – Opal E
    Commented May 13 at 14:42
  • 2
    $\begingroup$ Can I just say thanks for such a comprehensive answer? This will be of significant use to people looking for references well into the future. $\endgroup$
    – kcrisman
    Commented May 21 at 13:54
26
$\begingroup$

Having pursued the same question as the OP, with the same motivation, for over a decade (as an undergraduate classroom instructor, and a facilitator of a faculty interest group in STEM education strategies at my college's center for teaching and learning), my takeaway is -- practically nothing.

Unfortunately, I've come to the position where I basically don't trust anything coming out of education departments. Anything that happens in a classroom is too subjective, too dependent on a legion of variables, too ripe for researcher bias, and the tidbits suggested by small published experiments almost never make a difference in large-scale replications in actual classrooms.

I mean, everyone's desperate to find more successful classroom teaching strategies, so if there was some magic solution, it would have been implemented large-scale very quickly. Instead, in the last decade or two, student scores and IQs have been steadily dropping. For example, the 2020 paper by Bjorklund-Young & Plasman found that none of 144 studied middle schools could close the math achievement gap in the mid-2010s, regardless of what strategies they used.

I've tried most everything suggested in the prior answers posted to date. A few comments:

  • I do recommend Willingham's Why Don't Students Like School? (I got to chat with D.W. a bit about it, and got my copy signed by him.) In particular, I think Ch. 7 busting the "learning styles" myth is useful. On the other hand, Ch. 8 heavily hypes up its heir, Dweck's growth mindset theory, written at a time before all the big replication failures came in.

  • I tried to use the What Works Clearinghouse site in the past and ultimately came away with nothing I could make use of in practice. Note the disclaimer at the top of their home page: "What Works in Math? There’s no single answer to that broad question. Instead, what works varies by grade, subject, and even delivery model."

So it's probably good that the OP cares enough to ask and pursue the question (much as I did), but just so they're fully informed -- the short story is there's no silver bullet.

To be clear, I'm not discounting all of cognitive science, but I am directly answering the question in the title, which I read as, "Which cognitive psychology findings are solid, that I can use to help my students [as a teacher in a standard classroom]?"

You should also read Justin Skycak's answer which outlines a number of effects he considers to be well-founded. Note that his conclusion is that we effectively shouldn't have standard classrooms, but that we should ideally assign students individually to self-paced technology platforms, similar to the one that he personally develops. Generally this kind of platform has been found to work well for exceptionally motivated students. But contrast with attempts at delivering developmental college math courses by online technology platforms, which have been found to be so disastrous that trials have been suspended halfway through (San Jose State via Udacity), and one college's director of institutional research recalled, "The failure rates were so high that it seemed almost unethical to offer the option" (Community College of Beaver County).

$\endgroup$
17
  • 2
    $\begingroup$ Yep, agree wholeheartedly. "But the power of instruction is seldom of much efficacy, except in those happy dispositions where it is almost superfluous"...Edward Gibbon $\endgroup$
    – user52817
    Commented May 11 at 17:31
  • 8
    $\begingroup$ Re "too dependent": A colleague had designed a learning activity she recommended to me, and it fell flat for me. But I'd see her students' reaction — enthusiasm, perseverance, diligence — and I'd ask her, How do you do it? She'd explain, but the activity always fell flat. There was something she understood that she couldn't or didn't put into words for me. It's clear that the understanding, belief and enthusiasm of the instructor is a factor that success depends on. Someone implementing another's recipe might lack some of the attributes needed to replicate success. $\endgroup$
    – user1815
    Commented May 11 at 17:32
  • 2
    $\begingroup$ @user1815 I'm sorry, I have no idea what you're asking. It sounds like you might not be familiar with the term "retrieval practice." This is what "retrieval practice" refers to: en.wikipedia.org/wiki/Testing_effect $\endgroup$ Commented May 11 at 23:31
  • 2
    $\begingroup$ @user1815 retrieval practice is better preparation for both of the instances you've described. The whole point is that retrieval practice deepens the long-term memory encoding of the information being recalled. It's not just about how the learning is measured. At a physical level in the brain, learning amounts to a positive change in long-term memory, and successfully retrieving fuzzy information is what creates that positive change. Re-reading notes does not. This has nothing to do with the mode of assessment. $\endgroup$ Commented May 12 at 0:19
  • 5
    $\begingroup$ @user1815 What your colleague couldn't communicate about the lesson plan were the associated soft skills she has that support the plan. Teaching is a two part equation. It has an instructor and a student. A lot of these educational materials and cognitive psychology focuses on the student, and teaching method, but not the teacher. Part of the problem of the educational materials lack of effectiveness is the society-wide lack of understanding and emphasis on soft skills. Your colleague has skills not found in the plan that enable her to generate enthusiasm in her students. $\endgroup$
    – David S
    Commented May 13 at 15:05
5
$\begingroup$

I highly recommend Daniel T. Willingham's "Why Don't Students Like School?" (from 2009, but there seems to be a new edition from 2020). It covers a wide variety of known results from cognitive science chosen to:

  • Be robust enough to generalize from the laboratory to the classroom
  • Be supported by a large amount of research
  • Have a big impact on learning
  • Be things teachers don't already know

Including the principles behind the growth mindset.

$\endgroup$
1
  • 1
    $\begingroup$ Here is a link to the articles Daniel T. Willingham has written for the American Federation of Teachers on his website. $\endgroup$
    – user 85795
    Commented May 19 at 8:44
3
$\begingroup$

"The What Works Clearinghouse (WWC) is an investment of the Institute of Education Sciences (IES) within the U.S. Department of Education that was established in 2002." They have looked at the research literature and they have summarized what they found. See https://ies.ed.gov/ncee/wwc/Math/

$\endgroup$
1
  • 1
    $\begingroup$ Thank you. I was unaware of this and I appreciate that they rank documents by how well supported they are. I happen to teach undergraduates and they are less strong in that area, but no doubt there is some carryover from secondary work. $\endgroup$ Commented May 12 at 15:55
0
$\begingroup$

I don't know how well-replicated the findings are, but I understand that having students work in groups, writing on vertical (why?!), non-permanent surfaces helps substantially.

I saw that effect in a linear algebra course I taught in a classroom with whiteboards on all the walls. It was so great to see them working together so enthusiastically.

$\endgroup$
2
  • 6
    $\begingroup$ This is from Liljedahl's Building Thinking Classrooms in Mathematics. I recommend this critical response: "The Evidence for Building Thinking Classrooms is Weak" (Pershan). At best the claims for vertical whiteboards are that they increase interaction and enjoyment (not actual learning, proficiency, or test scores). $\endgroup$ Commented May 11 at 20:32
  • 1
    $\begingroup$ I am pretty sure that the section I'm remembering, that enjoyed significantly increased participation, also showed increased learning as measured by test scores. (But yeah, I know this is anecdotal, and doesn't count as data.) $\endgroup$
    – Sue VanHattum
    Commented May 11 at 23:11
