Neuromyths and Edu-Ca$h-In: Vetting the "Expert" Claims

How can we tell if neuroeducation products are based on sound research?
Published in Neuroscience
There is a lot of money in education—which means marketing. And marketing means that we should be intellectually skeptical of what we read and of what people claim (even this post). The clinical, white-lab-coat field of neuroscience is no different. There are matters of credibility, diversion, and deceit to be aware of. As you become familiar with how misleading tactics are used to persuade consumers, you'll save money and time, but more importantly you'll grow as a teacher-leader able to share this knowledge with others.

Just because a product claims to be based on neuroscience doesn't mean that it actually is. So what are some warning signs that you might look for?

The warning signs of shady education marketing

1. Who did the research?

The research support of the product may list "research" articles, but when you look at the titles of the studies and the journals in which they are published, you find smoke and mirrors. Are the journals respected in the field of science and peer reviewed by experts, or are they merely commercials masquerading as scientific research?

2. Does the research relate to the product?

Are the research articles listed specific to the product, or do they merely sound important while having no relevance to the product claims? Consider this fictional title: "EEG Abnormalities Found In Parietal Lobes of Some Children With Post-Traumatic Visual Dyslexia." Let's assume this article is listed as research supporting a program of biofeedback and rhyming practice to cure all forms of dyslexia. Here are four invalid or irrelevant leaps from that article to the product's claims:

  • The product promises to treat all dyslexias, yet the research was on a very specific and small group of subjects who developed visual dyslexia from traumatic injury.
  • The product claims that the use of rhyming is an effective strategy, but the article actually reported a finding of an electroencephalographic (EEG) change, not any result of treatment.
  • The study used EEG measurements in its evaluation of post-traumatic visual dyslexia, but that does not validate a conclusion that the EEG findings in the research subjects were the cause of their dyslexia. There is no evidence that "treating EEG findings" treats the problem!
  • Jumping even farther from the valid research is the conclusion that if one uses biofeedback to make the EEG recordings "normal," the dyslexia will be cured. That's like saying that dipping a thermometer in ice water to lower the temperature reading will cure the underlying infection causing a fever.

Invalid and irrelevant claims such as these are misleading, irresponsible, and dishonest. What other critical questions can you consider to spot this kind of marketing-speak?

3. Does the research rely on statistics?

Is the research from a laboratory, or is it focused on misleading statistics about outcome? If the research section of a product fills the space with "statistical analysis," beware! It's even easier to mislead with statistics than with claims of neuroscientific research "proof."

For example, be wary of statistics focused on percentage changes without the actual numbers. A study for a computer program that promises to enhance working memory might claim that its makers evaluated 50 children and found a 100-percent improvement after five 30-minute practice sessions. That sounds pretty good, but here is a scenario that could be the inadequate basis for that misleading claim:

  • Of the 50 children who initially joined the study, 48 were eliminated after the first screening test because they already scored perfectly on the two questions that constituted subject testing. How valid is a screening test with only two questions, especially if all but two of the 50 subjects get both correct? And how valid is a study with only two subjects?
  • Of the remaining two subjects who qualified by getting both questions wrong, how do we know if the computer activity program that they practiced was the reason for their improved test results?
  • Perhaps the information that they needed to answer those two questions was embedded in the game. It would then be no surprise that, when given these same two questions at the end of their "treatment," the subjects answered them correctly. It would surely not be conclusive evidence that "all children who used the program achieved a 100-percent improvement in post-treatment testing compared to their score before using the program."

In short, look at the details of how the study was done, how many completed the study, and if the testing really measured an outcome directly caused by the intervention.
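To see how hollow that "100-percent improvement" headline is, the scenario above can be sketched in a few lines of arithmetic. The numbers here are the hypothetical ones from the example (50 enrolled, 48 screened out, a two-question test), not data from any real study:

```python
# Hypothetical numbers from the scenario above: 50 children enrolled,
# 48 screened out for already acing the two-question pre-test.
enrolled = 50
screened_out = 48
completed = enrolled - screened_out  # only 2 subjects actually finish

# Pre- and post-test scores (out of 2 questions) for the 2 remaining subjects.
pre_scores = [0, 0]    # both questions wrong at screening
post_scores = [2, 2]   # both questions right after the "treatment"

# The marketing claim: average gain as a fraction of the maximum possible score.
max_points = completed * 2
improvement = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / max_points
print(f"Claimed improvement: {improvement:.0%}")

# The numbers the claim hides: sample size and attrition.
print(f"Completed the study: {completed} of {enrolled} ({completed / enrolled:.0%})")
```

The headline figure is technically true for the two children who finished, but 96 percent of the original sample never made it past screening—exactly the detail that "percentage change" language is designed to bury.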

4. Testimonials can be meaningless

Testimonials can be an equally misleading way of promoting a product that does not have adequate science research or true expert validation. Slick advertisers know that consumers respond to testimonials read from scripts by celebrities or convincing actors. These actors often play the part of teachers describing similar problems with their students and great results with the advertised product.

5. Credibility matters

Who is the expert or guru making the product claims? Are they a recognized authority in the field, with the specific background knowledge relevant to the product? As a physician, I'd be within legal boundaries writing a support blurb for a revolutionary post-partum exercise regime. However, my recommendation would be worthless, since my specialty is neurology, not obstetrics or physical medicine.

6. Titles and initials are cheap

Is an MDD degree better than an MD degree? One can claim a self-designated, important-sounding title that is completely meaningless, such as Phineas T. Bluster, MDD, PhDS, President of the International Society for Bringing the Best in Education to Children. The initials can be anything that Mr. Bluster designs as long as they are not the property of any other registered group. So MDD could stand for "Me Delight in Dollars," PhDS could be "Phineas Deceives Schools," and since there is no registered International Society for Bringing the Best in Education to Children, he can claim the name and presidency.

Try before you buy

What is the best proof of an effective product? Make sure that it does what you want it to do rather than purchasing based on claims. If it offers testing and validity checks to see whether it fits your specific needs, try it before you buy it. If this isn't possible, talk to someone you know and trust, ideally someone with whom you've worked and who can answer your questions as you experiment on your own. A Google search can come in handy, but unless you find a community, review, or testimonial that you're certain is legitimate, it can mislead you as well.

If the product does not offer a free trial period, contact the seller and explain your interest in using it for your students. If the company doesn't have enough confidence in their product to provide a pre-purchase trial, think twice about that product's value.
