Story time, say these education journalists
No sooner had I written about "story time" than the LA Times journalists on the education beat announced "Story time!"
An article published recently on using test scores to rate individual teachers has stirred the education community. It attracted Andrew Gelman's attention, and there is a lively discussion on his blog, which is where I picked up the piece. (For discussion of the statistics, please go there and check out the comments.)
In reading such articles, we must look out for the moment(s) when the reporters announce story time. Much of the article is great propaganda for the statistics lobby, describing an attempt to use observational data to address a practical question, sort of a Freakonomics-style application.
We have no problems when they say things like: "There is a substantial gap at year's end between students whose teachers were in the top 10% in effectiveness and the bottom 10%. The fortunate students ranked 17 percentile points higher in English and 25 points higher in math."
Or this: "On average, Smith's students slide under his instruction, losing 14 percentile points in math during the school year relative to their peers districtwide, The Times found. Overall, he ranked among the least effective of the district's elementary school teachers."
Midway through the article (right before the section called "Study in contrasts"), we arrive at these two paragraphs (my italics):
On visits to the classrooms of more than 50 elementary school teachers in Los Angeles, Times reporters found that the most effective instructors differed widely in style and personality. Perhaps not surprisingly, they shared a tendency to be strict, maintain high standards and encourage critical thinking.
But the surest sign of a teacher's effectiveness was the engagement of his or her students — something that often was obvious from the expressions on their faces.
At the very moment they tell readers that engaging students makes teachers more effective, they announce "Story time!" With barely a fuss, they move from an evidence-based analysis of test scores to speculation about cause and effect. Their story is no more credible than anybody else's story, unless they also provide data to support such a causal link. Visiting classrooms and making observations do not substitute for factual evidence.
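To see why the leap matters, here is a minimal simulation of my own (the variable names and numbers are illustrative assumptions, not anything from the Times analysis). In it, classroom engagement has no causal effect on test-score gains at all, yet the two are strongly correlated because a third factor drives both:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical classrooms

# Hypothetical confounder: say, the mix of students assigned to a class.
# In this toy model it drives BOTH how engaged the class looks AND how
# much the test scores move.
student_mix = rng.normal(size=n)

engagement = 0.8 * student_mix + rng.normal(scale=0.6, size=n)
score_gain = 0.8 * student_mix + rng.normal(scale=0.6, size=n)

# Engagement was given no causal effect on score gains above, yet the
# two are strongly correlated through the shared confounder.
print(np.corrcoef(engagement, score_gain)[0, 1])  # roughly 0.6
```

A reporter walking into these simulated classrooms would see exactly the pattern the Times describes: the "most effective" teachers have the most engaged students. But in this model, raising engagement would do nothing for the scores. That is the gap between an observed association and a causal claim, and it is the gap that story time papers over.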
This type of reporting happens a lot. Just open any business section. The articles all start with some fact, oil prices went up, Google stock went down, etc., and then it's open mike for story time. None of the subsequent stories is supported by any data; the opening fact lends an air of data-driven authority, but it has nothing to do with the hypotheses that follow. So be careful!