Dec. 29, 2011 – The press release that appeared in the online Wall Street Journal "Market Watch" makes seriously misleading claims that are not supported by the report "Measures of Text Difficulty." The press release's claim that ETS's SourceRater outperformed the Pearson® Reading Maturity Metric is especially misleading, because the correlations produced by the two metrics had overlapping confidence intervals on every measure. (A common interpretation of overlapping confidence intervals is that the two correlations are not "significantly" different.) Across the measurement comparisons spanning six different samples of texts and student performances, there was not a single case in which the confidence interval around SourceRater's correlation failed to overlap with that of Reading Maturity. This is made clear in the Report's conclusion on page 40: "…SourceRater and Reading Maturity were comparable in their correlations with reference measures."
The claim that the Lexile Framework, the ATOS readability formula, and the Degrees of Reading Power (DRP) produced lower correlations on all six comparisons is similarly misleading. For the measures that took into account student performance on texts, these three metrics produced correlations comparable to (although slightly smaller than) those produced by SourceRater and the Reading Maturity metric. For student performance at the younger grades, these measures actually produced higher correlations than SourceRater did. Student performance measures went completely unremarked in the ETS press release, but they were in fact of central interest in this study.
Even if one insisted that it is appropriate to compare raw correlations regardless of statistical significance, the overall pattern of correlations does not support special recognition for SourceRater. Seventeen separate comparisons of the metrics were made across various reference measures, some aggregated across grade level and text type and others separated by grade level and text type. In these comparisons SourceRater did well, but it failed to produce the highest correlation in 8 of the 17.
The press release is also misleading when it says, "…the SourceRater service performed very well on low grade-level texts." In fact, on every comparison of the metrics at grades 5 and below, SourceRater produced lower correlations than at least two of the other metrics.
The press release quote referring to SourceRater service's "recognition by Student Achievement Partners" implies that the Report awarded a unique recognition to SourceRater. The Report did not do this.
Less serious but still misleading is the press release's statement that "gold standard text complexity measures were defined for each of seven different passage collections". The Report made it clear that there is no gold standard of text complexity and also pointed out that one "cannot privilege either expert ratings or the text difficulty measures."
The press release is misleading in the ways we have illustrated above and thus does a disservice to the other participants in the project and to the Report's conclusions.
This rebuttal has been issued jointly by the authors of the Report on Measures of Text Difficulty and by Student Achievement Partners: Jessica Nelson, Chuck Perfetti, David Liben, and Meredith Liben.
Pearson, the world’s leading learning company, has global reach and market-leading businesses in education, business information and consumer publishing (NYSE: PSO). For more information about the Assessment & Information Group of Pearson, visit www.pearsonassessments.com.
For more information, press only:
Adam Gaber, Pearson, (800) 745-8489 / firstname.lastname@example.org / @Apgaber (twitter)