Late in the third quarter of yesterday’s Super Bowl, I said to my friends that the Falcons had out-coached and out-played the Patriots. For almost three quarters the Falcons had the mojo and the Patriots had the no-show. When … Continued
One of the great gifts of standardized testing was the premise that giving the same questions to many students would allow for a more reliable measure of student achievement. Reliability is gained through standardization of the test, while validity is gained through the relevance of the questions to the test’s stated purpose. In the 1920s, college admissions officers leaped onto this bandwagon in order to compare students from New Hampshire and Ohio, and thus was born our accountability system tied to test scores.
However, there are some problems with this logic. What if the students had not been prepared in the same way? What if there were cultural reasons why answers might differ across state lines? What if smart teachers or test-coaching companies could study the test and coach students accordingly? And what if test companies manipulated the pass/fail line, commonly called the cut score, for political reasons? In a recent article in Education Next, Michael J. Petrilli discusses the illusion of proficiency and the resulting honesty gap:
In the struggle over which kinds of data matter, not just big data and little data but real data that deserve our attention in this busy world, my favorite is data that move, improve, and educate teachers. In NYC, a school called the School for Global Leaders is using micro-data that might be described as very small and almost not worth collecting. And yet this micro-data, for example where students sit, how much time students spend in certain instructional groups, or how much learning is attained in lecture formats, is the best type of assessment data to collect: