Conference at Teachers College: “Testing Then and Now”

I recently had the good fortune to be at Teachers College for the “Testing Then and Now” conference held on December 9, 2013 in New York. The three sessions of speakers focused on the history of testing contributions by faculty at Teachers College, the backlash against testing today, and The Gordon Commission on the future of assessment in education. What a gift to be brought together with all of these perspectives in one room! My thanks go to Madhabi Chatterji, director of AERI, to partner organizations including IUMEESPA, and to all the participants for an illuminating look into the diverse world of testing. Several views on testing were highlighted, including…

…Bob Brennan’s Milestones talk, in which he shared some common tensions in testing; Madhabi Chatterji’s talk on validity issues; and Ed Gordon’s work promoting the future of assessment in education. Dr. Robert Brennan is the E. F. Lindquist Chair in Measurement and Testing and Director of the Center for Advanced Studies in Measurement and Assessment (CASMA) in the College of Education at The University of Iowa. In his talk he pointed out several enduring tensions in testing:

  1. Ability vs. Achievement Testing,
  2. Norm Referenced vs. Criterion Referenced interpretations,
  3. Formative vs. Summative evaluation,
  4. Measuring status vs. growth.

I point these tensions out because they seem to persist over the years, welcoming new researchers to their dynamic starting points and perhaps frustrating senior researchers with their resistance to resolution. One very powerful way to understand these tensions is to treat the use of any individual test as particular to its construction and purpose, and not to allow it to be used for other purposes for which it was not designed. The second point I want to draw attention to is the role of validity. Validity is established when a test measures what it was intended to measure, whereas reliability is established when a test yields consistent scores across test takers and administrations. As Madhabi Chatterji tells us, there have been advances in evidentiary reasoning in measurement:

  1. Types of understandings instead of rankings
  2. Multiple aspects of proficiency rather than single scores
  3. Change and growth over time
  4. Diagnostic indices
  5. Group differences in cognitive processes & strategies elicited by tasks
  6. Families of models adaptable to broad range of uses.

Dr. Edmund Gordon is the John M. Musser Professor of Psychology, Emeritus, at Yale University; the Richard March Hoe Professor, Emeritus, of Psychology and Education at Teachers College, Columbia University; and Director Emeritus of the Institute for Urban and Minority Education at Teachers College, Columbia University—credentials that lend the Commission bearing his name its considerable respect. The Gordon Commission argues that the future of assessment should include a broader conceptualization of assessment to foster student learning and better teaching, highlighting a vision for technology-based assessment models. I went to the Gordon Commission website and found mission statements, papers, and a newsletter, from which I quote the following:

What could this mean for the future of testing and measurement instruments more generally? In our view, the above recommendations would not mean a diminishment, but rather, an enrichment and expansion of such services. With respect to testing, there is good reason to move toward the following:

  • Making standardized tests available (as opposed to mandatory) to all educational institutions. Whether and how local school systems or districts employ test scores in their deliberations should be locally determined.
  • Radically expanding the kinds of tests available to schools for evaluating students. For example, depending on locale, schools might wish to have tests that would enable them to benchmark students in terms of computer literacy, career fluency, civic and political participation, bilingual capacity, dialogic skills, environmental knowledge, musical aptitude, physical competence, health, and so on.
  • Expanding the range of tests available to schools and school districts for evaluating their own development. For example, schools might varyingly wish to benchmark themselves in terms of parental participation, excellence as a learning community, internal collaboration, civic contribution, relationships with business and government, and the like.
  • Offering educational services enabling local schools to generate effective practices of participatory evaluation.

In conclusion, if we take into account the increasing development of communication technologies and the resulting shifts in demands and opportunities, it is imperative to explore new ways of practicing evaluation. Along with Nussbaum (2011), we argue here for evaluation in the service of creating capabilities as opposed to judging them (Volume 2, Number 5, Dec. 2012; “II. Social Epistemology and the Pragmatics of Assessment,” Kenneth J. Gergen, Swarthmore College, and Ezekiel J. Dixon-Román, University of Pennsylvania).

These three speakers and the writers of the Gordon Commission newsletter helped encapsulate some very important testing issues—as with life, there are enduring tensions in testing! Progress is being made on issues such as how we support evidentiary reasoning in measurement, how we define the purposes of testing, and how we ensure the reliability of scores across different contexts. I hope the vision for the future of testing will bring out the best in test makers, and that accountability for the results will benefit all students!

Dr. Robert A. Southworth, Jr.
