Principle 8: Optimal Assessment Practices

Assessment practices must support optimal progress for all students by collecting and studying evidence of learning in its many forms and using that evidence to make decisions about future learning needs. Moreover, assessments must be designed in ways that assist students in their pursuit of divergent interests.

  • The assessment process must focus on understanding what and how students are learning, diagnosing learning needs, making decisions about curriculum and instruction, designing appropriate professional development, using evidence to illustrate trends in learning, and directing resource allocations.
  • Teachers should participate in frequent conversations about student work and evidence of student understanding.
  • Testing that is detrimental to students and/or the system of public education itself (e.g. high stakes standardized testing; excessive test preparation; inappropriate value-added measurement) should not be permitted.

CAPE Ohio Position Paper

Changing the Face of Assessment

If you are a parent or a citizen concerned about public education, it may be difficult to understand the changes that have occurred in public schools in the past few years and, most importantly, why teachers worry about what excessive testing is doing to the quality of education their students receive and why they feel they can no longer do their jobs very well. After all, tests have always been part of schooling. Sometimes even professionals in classrooms have barely had time to analyze why teaching has become so difficult and, in the words of some, “oppressive.” What is different about what students and teachers are experiencing in the classroom now? Changes in assessment practices, specifically testing and preparing for tests, have contributed significantly to the deterioration of classroom learning experiences. To chart a different and more humane path for assessment, I have combined my own teaching experiences, including research on student portfolios, with what I have learned more recently from researchers (e.g., Peter Johnston, Tom Newkirk, Katie Wood Ray, and Ilana Seidel Horn) about why assessment is primarily about a student’s “identity” and about the power of narrative in helping students develop identities that make them want to keep learning. This paper explores:

  • Why and how assessment has evolved into its current set of distorted practices;
  • The basics of human development and learning theory that operate in all classrooms and which current assessment practices typically ignore;
  • A new theory about assessment and identity and a practical description of what assessment should be about;
  • The potential benefits of shifting the focus of assessment;
  • What assessment focused on student voice and ownership might look like;
  • How a professional assessment culture using Andy Hargreaves’s concept of “collective autonomy” can be developed, what an assessment network could accomplish, and how parents can contribute to better forms of assessment.

1. Curriculum Alignment: Good Idea Gone Bad

Much of what has happened to assessment, particularly classroom assessment, stems from an idea that was developed before the current deployment of high-stakes standardized testing based on the 2010 Common Core Standards. There was a time when standardized test scores were used to evaluate programs and give rough estimates of student achievement, not to rank schools and districts or to evaluate teachers. In the 1980s Fenwick English promoted the idea of aligning curriculum with the standardized tests students would eventually be given. He reasoned that if students were to be tested, they should receive instruction targeted on what they were going to be asked about. This idea worked if the instruction was focused on objectives or outcomes rather than practicing answers to specific test questions. It was a way for teachers to design their curriculum, “beginning with the end in mind.”

When this curriculum alignment concept morphed into practicing answers for specific test questions, however, developers of the norm-referenced tests used at the time argued that this compromised test security and ruined the norms the test companies had constructed. These norms, so the test developers argued, depended on questions selected to represent a large domain of understanding. Students who were coached on how to answer specific test questions, however, might not have a good understanding of the entire subject. James Popham (“Teaching to the Test,” Educational Leadership, 2001) described this as “item teaching,” in contrast to alignment efforts that focused on a body of content knowledge.

Researcher Bill Schmidt also made recommendations about curriculum alignment after analyzing TIMSS (international tests of mathematics and science) results and comparing the curriculums and grade-level placement of mathematics content from educational systems around the world. U.S. students, he noted, often never had the “opportunity to learn” the more advanced content many students in other countries were exposed to in their curriculums. Schmidt’s and others’ research led to revisions of mathematics curriculum that tried to eliminate the repetition of topics from grade level to grade level, creating more time for the advanced topics other countries were teaching at earlier grade levels. This shift in curriculum through alignment research was generally viewed as a positive development and was not focused on coaching students on how to answer specific test questions. Although this was a sophisticated analysis and set of recommendations, its impact on international test results or on measures like the National Assessment of Educational Progress (NAEP) is open to debate. Perhaps the lack of dramatic improvement in mathematics achievement is a reflection of how Schmidt’s (and others’) recommendations were translated into the various mathematics textbook series that have been widely used.

There is some evidence that curriculum alignment efforts worked to raise scores on various state tests. In Ohio, for example, where annual accountability testing was based on a set of state standards, the vast majority of school districts improved their “performance index” each year and received increasingly higher “district report card grades.” Serious questions were raised, however, as the improved performance seen by most states on their own annual testing seemed to have little impact on National Assessment of Educational Progress (NAEP) results or on our standing on international assessments like TIMSS or PISA, which tested roughly the same content with each administration. The question became whether improved performance on state assessments was really just the result of more practice on the questions the states were using in their annual testing.

The current crisis, with high-stakes standardized testing hijacking much of classroom learning time, has been building steadily for years. In the 1980s many states, at the insistence of business leaders and legislators, enacted minimum competency tests as graduation requirements. Each iteration of standardized tests never seemed to quell the fear—unfounded according to examinations such as Berliner and Biddle’s 1996 work The Manufactured Crisis—that America’s students weren’t learning enough. More recently, the decade of increased high-stakes testing required under No Child Left Behind, accompanied by energetic curriculum alignment efforts in a majority of school districts as well as sanctions such as “reconstitution” for “failing” schools, showed little impact on U.S. standing on international tests. The modest progress in improving overall achievement and narrowing achievement gaps between advantaged and disadvantaged students continued to be unacceptable to legislators, policymakers, and corporate interests. This dissatisfaction led to yet another plan to push both students and teachers to “produce” more: the Common Core Standards in mathematics and English language arts. This time the idea was that all states would use the same learning standards and similar tests (PARCC and Smarter Balanced assessments), which would represent a rigorous, college-preparatory level of difficulty. It was hoped that this new system would guarantee progress through business-like data collection; school, district, and state comparisons; and analyses of performance made fair through statistical wizardry like value-added measures.

As the Common Core Standards have been implemented in the past few years, many have come to see them as a good idea ruined by a set of high-stakes standardized tests that diluted the lofty aspirations of the standards’ authors. In the rush to roll out the Common Core and to administer tests that could be used to issue rankings and evaluations, students, parents, and teachers experienced confusion and anxiety. State officials and school district administrators began a well-intentioned but ultimately disastrous push toward precise alignment with Common Core grade-level standards and the sample test questions released to “help” teachers. Everyone struggled with what many of the standards meant, particularly standards at specific grade levels. There was a dearth of quality “aligned” materials available. Missteps occurred in locally and commercially designed standards-based curriculum. The flash point in terms of parent and community anger emerged with the quickly constructed and ultimately flawed high-stakes tests paired with equally flawed teacher-quality metrics like value-added measurement (VAM). This latest and largest curriculum alignment effort has stressed public education, its students, parents, and employees in unprecedented ways and, most unfortunately, distorted assessment in classrooms, schools, and districts.

As testing expert Lorrie Shepard notes, in many schools and school districts, everyday assessment, arguably the most important and influential dimension of any assessment system, shifted dramatically toward ongoing, yearlong test preparation:

Data systems that routinize use of standardized assessments throughout the year, using formats that look just like the end-of-year accountability test, narrow the curriculum just as end-of-year tests have done and limit any claims that can be made about achievement gains. They harm students’ understandings of what successful learning really looks like, and they must not be thought of as the first steps in implementing an assessment-for-learning vision.

–Lorrie Shepard, “Assessing with Integrity in the Face of High-Stakes Testing,” 60th Yearbook of the Literacy Research Association, 2011

Standardized test results have become so important to school and district ratings and for teacher evaluations that in many classrooms test question preparation rather than the standards themselves determine what teachers teach and how they will teach it. The new purpose for classroom assessment is predicting how students will fare on high-stakes tests. Decisions that teachers used to be able to make about how to make lessons as interesting as possible are now often dictated by scripts, pacing guides, and practice tests that are mandatory. Teachers find themselves drilling students on the specific vocabulary and phrases that are used in standardized test questions.  Curriculum alignment in many cases has become test question alignment in an ultimate and futile embodiment of the alignment concept. Even as individual states revise the Common Core Standards to improve them and rebrand them as “state standards,” educators on the revision committees can be heard demanding changes that “will tell teachers exactly how the standard will be tested.”

Some educators resist the idea that their primary work, what they view as both an art and a craft, is to align their teaching with test items developed by “experts”—seeing the majority of these experts as having an incomplete understanding of how children learn. Only the most confident of educators, however, can buck this system in order to focus their teaching on what used to be their priorities:  big picture thinking, hands-on experiences, projects, learning to love mathematics, science and the humanities, creating new products and ideas, and cementing conceptual understandings.

2. Getting the Foundation Right: A Knowledge Base for Assessment

 

How People Learn: Zones of Opportunity, Daily Encounters with Learning

 

Designs for assessment systems can no longer ignore research about learning. Human infants and how they learn offer a key insight for developing useful assessment principles.  All human infants are tiny learning machines incorporating and organizing new information and experiences into the conceptual and physical structures of their brains. Infants learn in unscripted and ongoing interactions with parents, siblings, caregivers and various aspects of the environments they are in. As babies are learning to talk or to perform a new task it is easy to see how this zone of new learning works. They just keep moving forward in their development by interacting and taking in what their brains allow them to process. Parents or caregivers also assist the baby by doing a little bit of her thinking for her. They might hand the baby a toy she is gesturing toward. They might repeat a word or phrase the baby is trying to say. They might help a young child with a puzzle by handing him the next piece. Soon the baby picks up the toy herself or says the word and makes a new behavior or competency her own.

The infant brain from the moment of birth operates in a “zone” of what it is possible to learn at a given moment in time. Assessment should use this important principle of biological development to find the entry point where students can be successful taking on new information and advance in their thinking or skill level. It helps to think of this zone as a band ranging from where a learner does something easily on his own to an upper point where the learner needs quite a bit of help to perform successfully. The zone delineates where new learning is likely to take hold.  It is a range that is developmentally constrained by what holds a learner’s interest and what she can make sense of at a given moment in time. This is a spot where an anchor for new learning can be established. Skillful teachers observe, envision and, over time, map out more precisely the learning zones in which their students are currently operating.

The learning zone (psychologist Lev Vygotsky called it “the zone of proximal development”) is full of mental activity as an individual makes changes (adaptations) in his or her thinking. Teaching is intentionally provoking and supporting these changes, which in turn changes the neural structures in the brain. Performing this type of assessment is a prerequisite for designing instruction that reaches all students. Understanding how the “zone” works makes instruction accessible—i.e., the learner can grasp it—because a learner’s zone is the sweet spot of engagement and potential new understanding.

In school settings teachers are always trying to produce these desired changes in thinking. They do this by working at the outer edge of students’ learning zones and “lending” them some of their own thought processes. They also encourage students to try out the thought processes and strategies of peers and others in the learning environment. Teachers teach concepts by highlighting and encouraging the thinking they want students to take up as their own. Jerome Bruner called this process a “loan of consciousness.”

Much schoolwork focuses on concepts and ideas. Teachers succeed when they enable learners to see things in new ways. Learners’ thinking shifts to a new level. It is, in effect, “pulled up.” So in the seemingly straightforward task of teaching math “facts” to young children, the teacher’s job is not finished when students can easily recall number combinations. She also needs to “pull up” her learners mentally so that they are able to use these combinations to think mathematically. She has to engage learners in new ways of thinking with multiple examples, provide accessible practice opportunities, support learners in their tentative attempts, and celebrate the new thinking as it begins to consolidate.

Assessment is intimately connected to the shifts that occur in this learning zone. What is the learner paying attention to and how does that attention represent a new understanding? Assessment – observations, measurements, and documentation – should capture what is going on in this zone, reveal whether attempts to shift thinking have been successful, and offer a well-informed map of what learners know and what they need to learn next. Assessment that uses the concept of the “zone” can capture learners at their most competent and use this information as a launching point for next steps in learning.

What research about the “zone” tells us is that learners do not advance in their understanding without some type of mental (and sometimes physical) activity. There is no foolproof protocol, curriculum or set of standards for creating and managing this dynamic interaction.  Merely presenting information has a slim chance of advancing understanding in any kind of permanent way. Some researchers have described this interaction as a kind of tension that is posed by just the right amount of challenge for the learner. Learners are stretched just a little and positioned to move forward. It is in these interactions that we are most likely to capture a learner’s optimal performance for assessment purposes. For teaching purposes, this is also the opportunity to stabilize a learner’s best performance allowing him to function on a new level of thinking and competence. This is how the concept of development works.

Viewing assessment as a dynamic process smack in the middle of learning itself is a big change from the static world of high-stakes standardized testing. Standardized testing is like an electrical zap that gives very little indication of what is going on in the “zone.” A major failure of standardized testing as it is currently practiced is that it gives teachers no idea of what a learner’s learning zone (range) is or an entry point to help the learner get to the next level of learning. Assessment that holds the most promise for accelerating development and supporting new learning is performed by humans who understand the complexities and interactions present in the learning environment. The professionals who conduct these assessments are in the best position to design the lessons and experiences that build the bridges between what students know and what they need to learn next.

Assessing Development to Accelerate Learning

 

Understanding and being able to assess development means that once you figure out where children are in a developmental sense, you can use the knowledge of how the learning zone works to help them get to a new level of understanding and competence. Using high-stakes tests based on grade-level standards ignores everything we learn by observing infants, virtually all of whom learn and accomplish monumental tasks on timetables that vary considerably. Constellations of mental and physical developmental changes come together and children – with the assistance of other humans at strategic moments – learn to sit up, stand, walk, feed themselves, talk and, of course, eventually use the big potty, brush their own teeth, read, write, and use numbers to think and solve problems. These advancements are all accomplished routinely and nearly universally within the push-pull mechanisms that characterize the zone of proximal development.

Grade-level standards, as popular as they are with legislators and policymakers, have harmed many children. Experts like P. David Pearson (“Quality Reading Instruction in the Age of Common Core Standards,” 2013) have concluded that such grade-level statements aren’t derived from research and are, therefore, arbitrary. Because they don’t have a basis in research or development, they work against the practical use of developmental trajectories or progressions in assessment. Moreover, assessments based on grade-level standards communicate a sink-or-swim message: “Here’s where you need to be. If you miss this target, you’ve failed.” Developmental trajectories, on the other hand, encourage teachers to say, “Here’s where you are now. Here’s where we need to go next.” Everyone can have a next step in learning followed by another and another. Grade-level standards say to many learners, “You don’t have it.” Developmental trajectories say, “You don’t have it yet.”

Development and experience, critical factors in student achievement, are often difficult to separate. Experiences drive development forward. Some learners appear more advanced in their development because they have had extensive, enriching experiences; less advanced learners often lack these experiences. Young learners who have been immersed in a print culture from birth typically arrive at school knowing letters, sounds, story structures, how to make predictions, how to find print on the page of a book, and how to turn the pages of a book.  They have already learned many reading “skills” that are interpreted as natural ability or advanced intelligence. These learners are already clearly on the developmental path to becoming literate. Learners with less experience with print are also on the same developmental path, but they are at a different point on the trajectory to becoming literate.

Viewed from either a developmental or experiential lens, assessment sets the stage for accelerating learning when teachers can pinpoint where students are on a developmental/experiential continuum and give the learner what he needs to learn next. When teachers understand behaviors and patterns along developmental pathways and know how to back up or move ahead to appropriate points for instruction, they possess a powerful tool for helping all learners thrive.  Personalized learning even for high school learners has a developmental aspect. This is a “glass half full” perspective that looks up and down a developmental trajectory at what learners typically do in an area of study, gauges the current accomplishments of an individual and designs learning experiences to support the next phase of understanding or performance.

3. Formulating a New Theory of Assessment

 

A Long Overdue Shift of Perspective: Thinking about Identities and Narratives

 

Professionals need to reclaim assessment and return it to a humane purpose of helping students learn more. “Fair” assessment builds students up rather than tearing them down.

In undergraduate and graduate courses teachers are taught that the purpose of assessment is to gain an understanding of what, how, and the extent to which students are learning. Teachers also learn there is a difference between “formative” assessments of learning performed along the way and “summative” assessments, which are usually written tests administered and scored uniformly. These are time-honored ideas about assessment, but they may not be powerful or radical enough to stop the big engines of standardized testing, labeling, and judging.

The current generation of high-stakes standardized tests has trumped most other assessments in terms of importance in the lives of both students and teachers. High-stakes standardized tests required in every state have become the only summative assessments that count. In many school districts what teachers have traditionally learned and believed about assessment has been undermined by the practicalities of getting students to score well on these high-stakes tests. As Lorrie Shepard has noted, this is an instance of “hijacking”—using reasonable ideas (e.g., summative assessment) for unintended purposes such as ranking, rating, hiring, and firing. And really, formative assessment fared no better. In the aftermath of the adoption of the Common Core, in the name of “formative assessment,” a scramble ensued to break the new, more complex standards into component “skills” with checklists and I-Can statements. Such lists, once called “discrete skills,” returned with a vengeance. Or maybe they never left the educational scene at all. Such parsing of standards, so that students could be pretested and assigned to skills groups or so “mastery” levels could be tracked in a learning management system, ran the risk of trivializing the complexity of what students actually need to know and be able to do. It made rigorous learning less accessible since some students stayed at the lower levels of the skills hierarchies permanently.

We need a new perspective on assessment, one that is both radical and at the same time meets the test of good common sense. Assessment needs to use a different vocabulary from that used by measurement experts and to begin talking in terms of “identities” and “narratives.” To understand the importance of identities and narratives in assessment, it is helpful to see the educational system and its processes as a complex adaptive system. People who study systems, both biological and social, see them as arising and benefitting from a changing canvas of identities, relationships, and information. This view helps us understand in a new way why assessment should be focused on enabling students to develop the identities, both individual and collective, that will help them thrive in the future. The success of this process relies on building identities that consolidate students’ accomplishments rather than cataloging their deficiencies.

Tom Newkirk, in his book Minds Made for Stories: How We Really Read and Write Informational and Persuasive Texts (2014), makes the case that human brains are set up to make sense of experiences through narrative structures. An illustration he uses is that if you give someone two words, “banana” and “vomit,” the human brain instantaneously composes a narrative of someone eating a banana and then throwing up. Even factual, informational text requires a narrative mindset in order to be understood and recalled. He cites psychologists, writers, literary theorists, and philosophers to make his case. In his analysis our brains are cause-and-effect machines. And cause and effect is the conceptual infrastructure of story/narrative. Newkirk is talking about literacy—reading and writing—but it is easy to see that assessment has a narrative quality as well. The potential in the narrative nature of assessment to empower as well as damage is something we rarely think about or discuss.

For all human beings, mental narratives are closely linked with their identities. In school, experiences and interactions with others develop identities in learners. As their identities develop, learners are engaged in stories or narratives about themselves. These identities and narratives are the reality of how learners see themselves. Their narratives are the stories they internalize about their successes and challenges in learning both in and outside of school. Identity and narrative run like undercurrents through the entire course of a learner’s schooling.

Understanding learning environments (schools, classrooms, work groups) as complex systems reveals how ongoing assessment in real time is woven into the fabric of classroom life whether or not we can see it. We have talked about “classroom communities” to the point where the phrase is almost meaningless. But any classroom, any learning environment, is not just a community; it is an ecosystem. Any time we are teaching, we are in a complex adaptive system whose key features come from its identities, relationships, and information, which work in concert to construct our experiences, i.e., what we perceive as reality. Not only are identity, relationships, and information features of a complex environment such as a classroom, they also appear as constituent elements within individuals. For children and young adults, the process of learning and of living itself always contains an undercurrent of identity—of who they understand themselves to be and what they believe they are capable of. Much of this understanding comes through the feedback—formal and informal assessments—they receive from adults as well as peers who communicate “how they are doing.” “How am I doing?” is an unspoken prompt that keeps the learner’s story moving forward.

With this perspective, it is easy to imagine how assessment contributes significantly to the internal narrative each student constructs. As cognitive psychologist and literary critic Mark Turner says, “Narrative imagining—story—is the fundamental instrument of thought. Rational capacities depend on it. It is our chief means of looking into the future, of predicting, of planning, of explaining.” (quoted in Newkirk, 2014, p. 19) This narrative is a student’s “story” buttressed by both evidence they internalize about working successfully and evidence they can see concretely in projects, artifacts, and studies they have participated in. Narratives constructed in school detail in large measure who students are currently and who they might become in the future. The future that all students are moving into requires self-initiated learning, increasing autonomy, understanding of self and others, and a mindset that growth is possible through practice, experimentation, experiences, and reflection.

All Assessment Is Formative

 

Here is the critical shift we need to make: when assessment is understood as a driver of the narratives that make much of early learning and later achievement possible, then all assessment must be viewed as formative. Formative assessment gives us information along the way about progress being made and points to what needs to come next. This narrative is “co-constructed,” meaning that both teachers and learners receive and interpret assessment results and in doing so determine what their next actions will be. Assessment results and how they are interpreted make a major difference in whether learners are motivated to continue their engagement with learning, whether they will take on new challenges, and whether they will develop autonomy in learning new things.

If we view assessment’s true purpose as a means of identifying and cultivating potential, then we must also be aware that test results, especially in the current educational context, play a central role in the narratives that students construct about themselves and their capabilities. Test scores, which have taken on an exaggerated importance in recent years, are a particular and in many ways narrow stream in these narratives. Few educators have considered this perspective explicitly, but many understand it intuitively. They wince as they look at a printout showing “failing” scores for students they know are learning. They sense the hurt these young learners experience when they receive official-looking feedback that they are not good enough. If the ESSA is to be taken as state-of-the-art thinking in assessing learning progress, none of the current generation of policymakers has a glimmer of this understanding. If they did, they would never take the chance of enabling a system that gives so many students consistent messages that they are not capable and that they don’t fit in, contrasting them with students who perform well on tests that have no track record of predicting success later in life. This new perspective makes clear that we cannot allow assessment systems to completely undermine our worthy and often-stated public policy goal of having all children achieve at high levels that will be the envy of the world.

4. The Potential of Redesigned Assessment Systems

 

Forming Resilient Identities

 

“Children are not first and foremost learners; they are first and foremost people living the complexities of their day-to-day lives.” Ann Dyson (1993), quoted in a speech by Rob Tierney, LRA Yearbook (p. 36)

 

Language, then, is not merely representational (though it is that), it is also constitutive. It actually creates realities and invites identities…. Just as we actively seek sensory information to inform our construction of reality, we actively seek new information to inform the narrative we are building about who we are and to ensure its genuineness.

Peter Johnston, Choice Words: How Our Language Affects Children’s Learning. 2004. pp. 9-10

 

Children who doubt their competence set low goals and choose easy tasks, and they plan poorly. When they face difficulties, they become confused, lose concentration, and start telling themselves stories about their own incompetence.

Peter Johnston, Choice Words: How Our Language Affects Children’s Learning. 2004. p. 40

 

The Golden Rule for assessment must be, “First do no harm.” If we ignore the complexity of learners, learning processes, and the environments in which they operate, harmful assessment is not only possible, it is likely. School, typically the classroom, is an environment brimming with stories about learners and learning. How do the stories get constructed? What is their impact? And, most important, how can we focus their impact on creating resilient identities for all learners?

A child’s identity, combined with a running narrative or “their story,” is an engine of potential that schooling can greatly influence. Identities are formed through interactions and experiences with others and with the environment. Peter Johnston has written extensively about this in his two books, Choice Words: How Language Affects Children’s Learning (2004) and Opening Minds: Using Language to Change Lives (2012). Johnston’s work shows how the language that we surround children with, and encourage them to use, can shift how they think about themselves, that is, their identities. This is not just individual identity but an identity built through relationships. Johnston describes this as a “collective agency,” which “offers the possibility of developing an identity through affiliation” (p. 41). Assessment language is in some ways a subset of this larger set of classroom interactions, but it is pervasive in the midst of the everyday relationships that often contain explicit or tacit information about competence. Assessment in school settings, in fact, is often a constant stream of information coming at students through both formal and informal channels, from both teachers and peers. It is information about who students are (identity), how well they are doing (a narrative), and how they fit in (relationships).

In learning environments, it makes sense for professionals to imagine how a child’s identity is constantly being shaped, to think about the impact of this information stream, to see its effects, and to structure assessments, feedback, and conversations in ways that create potential rather than diminish or destroy it.

Shaping Mindsets

 

In fact, every word and action can send a message. It tells children—or students, or athletes—how to think about themselves. It can be a fixed-mindset message that says: You have permanent traits and I’m judging them. Or it can be a growth-mindset message that says: You are a developing person and I am interested in your development.

Carol Dweck, Mindset: The New Psychology of Success. 2006. p. 173

 

Moser’s study, showing that individuals with a growth mindset have more brain activity when they make a mistake than those with a fixed mindset, tells us something else very important. It tells us that the ideas we hold about ourselves—in particular, whether we believe in ourselves or not—change the workings of our brains.

Jo Boaler, Mathematical Mindsets: Unleashing Students’ Potential Through Creative Math, Inspiring Messages and Innovative Teaching. 2016. p. 13

 

Carol Dweck’s research on growth mindsets provides guidance for thinking about how assessment can support students in their development of identity. Mindset, as Dweck describes it, is part of an individual’s identity. A fixed mindset operates from the notion that everyone has potential and abilities that may be unknown but are fixed; being smart or capable is something innate that you have. A growth mindset suggests that it is very difficult even to estimate potential. Individuals with growth mindsets are open to challenges, to risks, to experiments, to failure, and to making repeated attempts to succeed. They have the capacity to improve through effort and to surprise us with what they can do over time. Assessment that communicates the ongoing, even temporary, nature of its judgment, reflects the effort and thinking of the learner, and says, when necessary, “not yet, but with work you will be able to…” is the key to building an identity that allows a student to move forward with confidence.

Summative assessments, especially those of a high-stakes nature, say to a student: “Well, here it is. Here’s the amount of ‘intelligence’ our ‘scientific’ measurement (e.g., a standardized test) tells us you have. So now you can either worry about keeping what you have or accept that you will never have enough of whatever this ‘scientific’ measurement says is important for you to have.” That is the danger of high-stakes tests: they look more important and more permanent in their judgments than they can possibly be. Such judgments kill the growth mindset.

Creating the Narratives for Current and Future Learning

 

“The universe is made of stories, not of atoms.” Muriel Rukeyser, poet

 

“We organize our experience and our memory of human happenings mainly in the form of narrative—stories, excuses, myths, reasons for doing and not doing, and so on.” Jerome Bruner, “The Narrative Construction of Reality,” Critical Inquiry, Autumn 1991

 

“Narrative is a form or mode of discourse that can be used for multiple purposes—we use it to inform, to persuade, to entertain, to express. ‘It is the mother of all modes,’ a powerful and innate form of understanding.” Tom Newkirk, Minds Made for Stories: How We Really Read and Write Informational and Persuasive Texts. 2014. p. 6

 

Not only is assessment involved in the process of identity formation, but it is also woven into the narratives that all humans construct about their lives. The effects of assessment can figure in a story that a child may carry around for the rest of his or her life; parents may carry it, too. Jerome Kagan, in his book The Nature of the Child, put it this way:

My own image of a life is that of a traveler whose knapsack is slowly filled with doubts, dogma, and desires during the first dozen years. Each traveler spends the adult years trying to empty the heavy load in the knapsack until he or she can confront the opportunities that are present in each fresh day. Some adults approach this state; most carry their collections of uncertainties, prejudices, and frustrated wishes into middle and old age, trying to prove what must remain uncertain while raging wildly at ghosts. (p. 280)

 

Assessment in school often produces the ghosts in an adult’s life. Educators, if they are thoughtful about assessment, have the opportunity to place many benign spirits in a child’s evolving knapsack.

The idea that assessment should be a descriptive narrative that advances the direction for learning is not new. The educators who developed the Reggio Emilia program in Italy, and those who have been inspired by it, have used these concepts for many years. Reggio Emilia is a community-based approach to educating young children that began in Italy after World War II in a village nearly destroyed in the conflict. Reggio educators caution against attempting to replicate the Italian program but suggest that thinking and reflection about how they educate their young children can “inspire” great ideas for other educational programs. One of their most inspirational ideas is “pedagogical documentation,” which is the Reggio version of inquiry about student learning. A Reggio practitioner describes this type of documentation (or narrative assessment) in this way:

Pedagogical documentation is treated here simultaneously as teacher research into children’s thoughts and feelings and as a design process for invention of curriculum in a specific context. Pedagogical documentation is the teacher’s story of the movement of children’s understanding. The concept of learning in motion helps teachers, families, and policy makers grasp the idea that learning is provisional and dynamic; it may appear to expand and contract, rise, and even disappear. If we were to think of learning as being like a river, capable of flooding and of drying up, or like clouds, massing up and dispersing, we might have an apt metaphor for the ways our minds and bodies work.

 

Pedagogical documentation is a research story, built upon a question or inquiry “owned by” the teachers, children, or others, about the learning of children. It reflects a disposition of not presuming to know, and of asking how the learning occurs, rather than assuming—as in transmission models of learning—that learning occurred because teaching occurred. With standardized curriculum, once teaching has occurred, there is a tendency to assume that learning may be tested. Thus pedagogical documentation is a counterfoil to the positioning of the teacher as all-knowing judge of learning.

 

I avoid the terms assess and assessment here because they imply a range of meanings that I hope to distance from pedagogical documentation—accountability and the judgment of learning. To judge is to remove oneself from participation. If the teacher is removed from relationship to and responsibility for the learning, it becomes solely the learner’s responsibility. The learner who has not learned is then considered to be in jeopardy and a failure. To view the child learner as a failure is, in my view, unethical, violating the rights of children to have a safe learning environment.

 

Conceptualizing pedagogical documentation as teacher research calls upon the teacher not to know with certainty but instead to wonder, to inquire with grace into some temporary state of mind and feeling in children…. In pedagogical documentation, teachers imagine or theorize understanding, present evidence of what they think they see, and check it against others’ analysis and interpretation, all of which can inform their decisions about what to offer children, thus influencing the design of curriculum.

Carol Anne Wien (York University), with Victoria Guyevskey & Noula Berdoussis, “Learning to Document in Reggio-inspired Education,” Early Childhood Research and Practice, Volume 13, Number 2, 2011

 

Mining the Benefits of Mistakes

 

“Every time a student makes a mistake in math, they grow a synapse.”

Carol Dweck, in Jo Boaler, Mathematical Mindsets: Unleashing Students’ Potential Through Creative Math, Inspiring Messages and Innovative Teaching. 2016. p. 11

 

When I have told teachers that mistakes cause your brain to spark and grow, they have said, “Surely this happens only if students correct their mistakes and go on to solve the problem.” But this is not the case. In fact, Moser’s study shows us that we don’t even have to be aware we have made a mistake for brain sparks to occur. When teachers ask me how this can be possible, I tell them that the best thinking we have on this now is that the brain sparks and grows when we make a mistake, even if we are not aware of it, because it is a time of struggle; the brain is challenged, and this is the time when the brain grows the most.

Jo Boaler, Mathematical Mindsets: Unleashing Students’ Potential Through Creative Math, Inspiring Messages and Innovative Teaching. 2016. pp. 11-12

 

Jason Moser and his colleagues, in a study of college students (research Jo Boaler describes), concluded that when performing simple tasks, mistakes cause electrical activity in the brain that is not observed when students give correct answers. Moreover, this electrical activity is even greater in students who can be described as having a growth mindset; that is, students who are open to making mistakes, experimenting, and doing things over, and who believe effort results in improved performance. This insight from brain research provides food for thought about assessment and what it might look like. In China, Boaler observed a classroom in which the children selected to share their work are those who have made mistakes. This is an intentional aspect of that classroom’s culture. It is an approach that seems counterintuitive to those of us who have experienced American classrooms, where assessment often focuses on giving tests and quizzes on which students make as few errors as possible. When we call on students, we often wait until we get the correct answer, moving past incorrect responses. Our entire current accountability system (as well as the many classroom hours devoted to practicing for tests) is based on avoiding mistakes and incorrect answers.

A portion of assessment, particularly assessment that involves classroom discussions, should focus on how interesting and revealing mistakes can be. Science educators for years have studied children’s misconceptions about everyday phenomena and suggested such misconceptions when uncovered are the place to begin instruction. These are explorations that can reveal a variety of thought processes and pathways to solutions and insights. Children need to be drawn into a process that allows them multiple attempts to think about and work through mistakes and misconceptions. It also helps to build routines into assessment that allow for discussion and exploration of mistakes. Mistakes and paths to new ways of thinking (strategies) should be celebrated and documented.

The nature of what we ask students to do in assessment systems needs to change so that students are supported in the steps and stages of learning where mistakes tend to occur, whether with explicit outcomes and skills or in projects of longer duration and complexity. Teachers need to think aloud, demonstrating how they work through mistakes, and to encourage students to do the same, creating a culture of reflection about how mistakes can strengthen the brain. Assessment approaches need to change dramatically so that they not only allow for mistakes but also encourage risk-taking, experimentation, possible failure, and the development of projects or study over time.

5. Student Voice and Ownership in Assessment: Rubrics and Portfolios

 

An important goal of any assessment system is that students themselves internalize standards for what constitutes quality work and understand the process of working independently or on a team to accomplish a goal over time. Planning for student voice in, and ownership of, assessment criteria and standards helps students to develop their own sense of what constitutes quality work, to monitor their own progress, and to internalize the processes associated with quality work.

An even more important goal in assessment is recognizing that students need ultimately to control and shape their own assessment narratives. Their involvement in assessment—what constitutes quality for them, what processes work for them, identifying their most significant learning experiences and projects—supports them in telling their own stories.

Portfolios as Narratives

 

“… An individual aware of himself or herself as a narrator in the individual’s life is more prepared to act deliberately on his or her behalf to alter the arc of his or her life story.” A. Sedun and M. Skillen, “The Breath of Life: The Power of Narrative,” English Journal 104.5 (2015): 102

Since their introduction as tools for collecting and analyzing student work in the 1990s, portfolios have followed two tracks: one that resembles a “work folder” containing examples of what someone (typically a district accountability system) has deemed a student’s “best work,” and a more organic version that tells a story of learning and might include, in addition to finished work, annotations, reflections, process notes, multiple drafts, false starts, frustrations, and plans for future work. These narrative portfolios frame and make sense of the past and speculate on what the future might hold for the learner. Most importantly, such portfolios carry a learner’s “voice.”

Such story-telling or narrative portfolios go beyond “writing folders” and touch all areas of study. They are rich with written work, photos, videos, annotations, blog entries, and other electronic artifacts that provide a documentary record of student projects, thought processes, and reflections over time. When learners are interviewed about these portfolios, there is evidence that they tell at least two kinds of stories (Fenner 1995). One story is about the work a learner has completed. This portfolio is retrospective. It captures what the artifacts show about what the learner has accomplished in the past. Learners will go through their artifacts and describe them as “postholes” (Tierney 1994) showing the new things they were able to do. Another type of story learners tell with their portfolios is prospective. The artifacts have been selected and organized to show what the learner hopes to be able to do in the future; the artifacts are markers for aspirations. A set of drawings showing many design revisions is offered as proof that the learner can be a designer in the future. Learners using a multiple intelligences framework in compiling portfolios also reveal similar variations in the stories they tell. Some use the artifacts to document what their strongest intelligences are. Other learners use their portfolios to “audit” which intelligences they feel are well developed and which intelligences they would like to develop further. With all these examples of portfolios, those who participate in conversations about them will hear the development and consolidation of identity as well as the learners’ growing sense of agency. Viewing portfolios as a narrative vehicle allows learners to frame, analyze, and make sense of major and minor events in their lives.

 

Redeeming Rubrics from the Rubric Generator Abyss

 

Let us start by banning all “rubric generators” and prefab rubric downloads. Rubrics are another good idea that has gone rogue. As originally created by Writing Project assessment veterans like Miles Myers (circa 1980), rubrics were descriptive. Teachers in a collaborative setting would read samples of student writing across a range of grades from an entire school district. Following a reading and sorting process, teachers developed descriptions of various types of writing at different levels of skill and sophistication. The descriptions showed a developmental trajectory and pointed the way to the kinds of instruction students needed to become more skillful writers at any developmental level. The publication of this type of analysis always included the rubric, i.e., the descriptions teachers had written for each level and type of writing, and samples or exemplars that illustrated each description. Although there were always commonalities across the descriptions as they were developed each year, each analysis was a unique description of the writing that had been submitted.

Originators of the rubric concept like Miles Myers and Ed White always believed that rubrics were descriptive (rather than prescriptive) and were derived from actual samples of work. Generating rubrics from samples of work rather than borrowing someone else’s checklist from the internet is a practical way to conduct classroom assessment of more complex work. Even more powerful is the idea of involving learners in the creation of rubrics (i.e., standards) for their own work. This process invites learners to consider the qualities of the good work they have seen or are envisioning. In their conversations, learners investigate the processes that enable good work, capture their own language to describe quality, and open the door to revisiting and revising the work and projects that they care most about. This is another place to see complexity at work, with identities (“We create quality work”), relationships (“We support one another in achieving quality work”), and information (“Here are the key aspects of quality work that we have discovered together”). These features come together in a system that improves itself as the learners themselves mature in their relationships and understanding. This is a “zone” of shared growth in which learners use what they are mastering to pull one another “up.”

Using Technology to Tell Learners’ Stories

 

Digital cameras, computers, and smartphones have become commonplace in many classrooms. Some schools even have “bring your own device” days. Technology obviously introduces many new sources of information and avenues of communication into the classroom. In assessment, however, there have been many instances where technology has not inspired learning and teaching in personalized and empowering ways. In the name of “personalization” or “accountability,” electronic gradebooks and learning management systems have sometimes encouraged teachers to use lessons that amount to electronic workbooks. Digital rubric generators have destroyed the concept of what rubrics are supposed to capture by making them generic and frequently unanchored to samples and exemplars connected to the learners and products the rubrics are supposed to be assessing. Commercial apps, often free to teachers, have made it easier to create multiple-choice assessments that can be graded and recorded electronically in a seamless process. These are all examples of using the right tools to do the wrong things.

Technology can also be an invaluable tool in assessment. This is true particularly in portfolio work, where students establish their identities as competent individuals and collaborators by constructing narratives that capture reflections on their challenges, successes, and aspirations—who they are currently and who they hope to become in the future. Technology in assessment, especially in the context of portfolios and formulating a body of work, can give learners a sense of agency. As teacher and blogger Katherine Hale observed at a 2016 ISTE conference presentation, a learner can use technology to “change his or her story.”

In the world of computer applications—apps—there have been some exciting breakthroughs that help teachers and students themselves tell their learning stories. One such tool, Seesaw, sets up digital portfolios that capture artifacts, video episodes, and other examples of learning, as well as student or teacher commentary. The Seesaw app allows for collection of evidence of learning around key learning outcomes set by a school district, teacher, or the students themselves. Parents and caregivers have access to their child’s ongoing profile of learning and can see and hear examples of the thinking and problem solving their child engages in. The app is ideal for parent conferences that can be either student- or teacher-led. The system works with children as young as age 5, who can use it to collect evidence of their own learning and reflect on it. Seesaw was recognized by the American Association of School Librarians (AASL) in its “Best Apps for Teaching and Learning 2015.” The AASL reviewer noted that the Seesaw app can “Give students ownership of their own space to create & record what they learn. Students can add text and voice recordings to journal items to reflect, explain, and develop their academic voice. A great teacher resources section is available on the Seesaw site.” Seesaw offers a free version and an enhanced version available to individuals or districts on a yearly subscription basis.

 

6. Establishing a Professional Culture for Assessment

 

What if, instead of marginalizing human judgment in the assessment of learning, we honor it? What if we admit that a test, no matter how valid, reliable, and aligned, is simply not up to the task because all it can do is measure, and what we need requires something more? What if we build a system around human judgment that minimizes its vagaries while bolstering its strengths?

James Nehring, “We Must Teach for ‘Range’ and ‘Depth,’” Education Week, August 25, 2015

 

There are many reasons that we have lost our way in assessment. Some are political. Some are commercial. Some have to do with how we think of ourselves as professionals and what our roles and responsibilities are. James Nehring suggests that assessment systems need to reconnect with what professionals in other fields do: they rely on human judgment to make important decisions about the lives of others. Medicine, for example, deploys many measures to assess physical health (blood tests, CT scans, x-rays), but one still has to sit down with a doctor to have her explain what the results might mean and what to do about them. The “value-added” component the patient receives in this situation is the human judgment of the medical professional and her skill in selecting and carrying out treatment. This human judgment is guided by what medical professionals have determined is the “standard of practice.” As researchers James Stigler and James Hiebert observed in The Teaching Gap (1999), educators need to think more about, and collaborate more on, educational “standards of practice”; this is especially true for the assessment of learning.

Education, like public health, is a large complex adaptive system. Complex systems arise and are maintained through identities, relationships, and information.  Assessment contributes to a good portion of the information circulating in this complex educational system. The relationship of education professionals to this information has become increasingly distant as the human dimension of such information gathering and analysis has been ceded to tests.  These tests are scored in remote locations, use poorly understood algorithms, follow arbitrary cut scores for proficiency, offer scant detail about actual performance, and adhere to unworkable timeframes. This sort of quantitative analysis fits well in the world of finance and investment where information is tracked in nanoseconds and computers make decisions without human intervention based on codes written by so-called “quants.”  However, such formulas and decision charts have little value in the much messier world of gauging the learning–literally the creation of new synaptic networks–in children and young adults. In human-centered professions, well-trained practitioners agree upon standards of practice and make judgments about progress, success, failure, and what constitutes quality work.

Well-trained educational practitioners must regain control of the assessment systems used with students. At the same time that they take responsibility for identifying and making judgments that are most important for students’ learning, they must also make the criteria for these assessments and observations transparent and recognizable to external audiences through standards, descriptions, explanations, exemplars and community events.

Ideas and designs have existed for some time for professionalizing assessment, making it human- rather than machine-centered and focusing on the kinds of learning that can protect and advance our democracy and quality of life in the future. It is not a coincidence that two of the most successful research-based programs, Reading Recovery and the National Writing Project, use assessments rooted in professional judgment and, in the case of the National Writing Project, the power of external audiences. In the 1980s, Miles Myers, a past president of the National Council of Teachers of English (NCTE) and of the California Federation of Teachers and a driving force behind the National Writing Project—in short, a master teacher—led efforts to help teachers form assessment communities to make reliable judgments about the quality of student writing. Later he worked with a group of educators and the NCTE to publish a standards and exemplars series showing how to assess and categorize learners’ work that was complex, authentic, and thoughtful. Myers urged schools to address the need for accountability by creating galleries of their learners’ work that had been assessed by professionals so that standards for learning could be accessible and visible to parents and the larger community. Myers viewed questions of assessment, standards, and accountability as part of a larger responsibility that educators, specifically classroom teachers, have to engage in ongoing inquiry as part of their teaching practice. Here is what he said in a commentary on school culture:

I … believe that the best school reform would be the institutionalization of teacher research and inquiry at each K-12 school site. What does institutionalizing inquiry mean? For one thing, it means that teachers would not leave accountability and assessment to professional evaluators. Teachers themselves could develop assessments—otherwise known as instruments for data collection—for reporting to the public what their school sites are accomplishing. I remain perplexed by those who are willing to grade students but who are not willing to develop better ways to report to the public what is happening in K-12 schools.

 

Institutionalizing inquiry also means that we are willing to collect data about the content of what we teach and to have a collective discussion about whether the content we teach represents our best collective thinking about schooling. Because students are mandated to attend particular schools, public bodies are asking for descriptions of the curriculum content teachers teach. This discussion at the national, state, and school-site level is called the standards discussion. There are teachers who wish to leave this discussion to policymakers, fearing that a discussion of this kind at any level will lead to mandates beyond those affecting students. Of course, a little institutional inquiry will reveal very clearly the mandates that already exist—requirements for graduation, for instance. I am puzzled by those who wish to do teacher research and, at the same time, wish not to have a standards investigation. These pervasive attitudes toward assessment and standards represent serious obstacles to any effort to institutionalize inquiry in K-12 schools and to enhance the moral and intellectual authority of K-12 classroom teachers. Teachers who have established their moral and intellectual authority can free themselves from brain-numbing routine and demand the time to do professional work. [Emphasis added]

 

What is striking about Myers’s commentary is its view of how the teaching profession might mature. This same professional maturity is reflected in the Reggio Emilia approach to “pedagogical documentation.” The current crisis in public education is very much marked by the loss of teachers’ “moral and intellectual authority.” Myers’s commentary was written a number of years ago, and, sadly, Miles Myers is no longer with us. But his ideas are timely and important. We are facing a crisis in assessment. The hijacking of the professional rights and responsibilities of teachers to make informed judgments about learning demands that we keep Myers’s ideas alive.

It should be noted that, despite the tendency for staff development to consist of “data analysis” sessions centered unapologetically on how to get students to answer more standardized test questions correctly, many professionals in classrooms and schools still have good ideas about how to make assessment humane, reasonable, and responsive for learners and teachers. Good teachers continue to know what their students are learning before a formal assessment is administered and to value those professional experiences that help them be more effective with the most important aspects of learning. These are the true leaders of “school reform.” Kathie Marshall, a 34-year veteran of classroom teaching and peer coaching, illustrates this in a description of her coaching experience:

Only when a team of our teachers looked at data to develop an action-research question, try out strategies, and discuss the results, did I see real teacher buy-in and pedagogical growth. One year we created a unit for research-based instruction in writing revision. The next year we examined student discussions as an effective strategy for second-language learners. I also helped lead an after-school teacher group where we investigated effective questioning strategies. In all this work, the teachers valued the opportunity for collaborative learning. Instructional expertise grows exponentially when teachers are provided time to learn together. District-mandated formative assessments, state-mandated summative assessments, or other externally directed data-gathering initiatives just don’t pack the wallop that collaboratively developed instruction and assessment do.

Kathie Marshall, “What Data-Driven Instruction Should Really Look Like,” Teacher Magazine, June 3, 2009

 

 

Collective Responsibility in Designing Assessment Systems

 

Effective professional autonomy in today’s schools is not individual, but collective. It is not just autonomy that protects us from unwanted interference, but autonomy to do important things together [emphasis added]. Collective autonomy is about constant communication and circulation of ideas in a coherent system where there is collective responsibility to achieve a common vision of student learning, development and success.

Andy Hargreaves, “Autonomy and Transparency: Two Good Ideas Gone Bad,” in Flip the System: Changing Education from the Ground Up, eds. J. Evers and R. Kneyber, 2016

 

            It is past time to wrest assessment from the statistical models that are cloaked in false certainty about “value-added” and instead work on creating assessment systems that build student capacity rather than destroy it. That a teacher’s actions in a human-centered assessment process can shift development, change mindsets, and help create new neural connections for learners is cause for great optimism about public education’s capacity to help learners regardless of how many or how few advantages they have had outside of school. Assessment at this level of sophistication develops most fully if there is an intense, clinical focus on the science and craft of assessment within the professional culture of the school.

 

A new foundation for student assessment, one that rests on the competence and collaboration of professionals with various levels and types of expertise, must be created.  Collectively and collegially these professionals can assist and support one another to transform assessment by:

 

  • Understanding learning from multiple perspectives including biology, psychology, neuroscience, and anthropology;
  • Knowing the characteristics and phases of learners’ development and understanding how to accelerate learning;
  • Observing learners’ work and performance keenly;
  • Understanding how to involve learners in assessing their own work;
  • Developing assessment practices from a growth mindset perspective;
  • Providing feedback in the course of assessment that fosters reflection and improvement both for individuals and groups;
  • Using narrative principles to highlight the growth and accomplishments of individuals and groups, documenting performance and reflecting upon it often;
  • Constructing and internalizing recognizable standards of performance for various groups of learners based on their ages and interests;
  • Collaborating on developing learners’ own assessments of and reflections about their work systematically collected over time;
  • Communicating and interacting with learners, parents, and the community regarding performance standards; and
  • Using sound principles of evidence gathering to produce information that can be shared in profiles of group, school and/or district learning trends and accomplishments.

 

The Potential Power of Assessment Networks

 

One way to circulate ideas, evidence and insights freely throughout the profession is through professional networks . . . [The network] moves knowledge and practices around by itself through units of work on writing, observational tools for looking at student engagement and so on . . . the key cultural characteristic of autonomous professionals whose work hangs together coherently is that of incessant communication about values, priorities, practice, problems and results.

Andy Hargreaves, “Autonomy and Transparency: Two Good Ideas Gone Bad,” in Flip the System: Changing Education from the Ground Up, eds. J. Evers and R. Kneyber, 2016

 

The point of the network is for ideas and practices to circulate in ways that improve the practice of teachers who are otherwise isolated from one another by vast distances. Its purpose is not to filter initiatives down from the top.

Andy Hargreaves, “Autonomy and Transparency: Two Good Ideas Gone Bad,” in Flip the System: Changing Education from the Ground Up, eds. J. Evers and R. Kneyber, 2016

 

What would it look like if the professional stance that Miles Myers envisioned were distributed across a network of professionals connected electronically by a common vision rather than through the accident of school assignment? An assessment network could be powerful in demonstrating how to make judgments and give feedback on complex projects in a systematic way. Not only could teachers collaborate in setting standards for quality work, but students could contribute as well. On-site juries and interview teams are already used in places like High Tech High. A network of experienced teachers could be instrumental in demonstrating to a wide audience of stakeholders and interested others how valid, authentic assessment tasks are constructed and how reliable judgments about the quality of complex work are made.

 

How Parents Can Lead the Way

 

With the Opt-Out movement gaining momentum, this may be the ideal time to help parents and the general public see that there are better ways to learn about what our students are learning. But the professionals, teachers and administrators in school buildings and districts, also need to step up and seize the opportunity, using their experience, craft knowledge, and research findings, to recapture assessment from corporate and political influences. Corporate and political entities should be providing the time and tools that professionals need, not dictating what they will assess, how they will do it, and what the results mean.

Parents, individually or in groups, need to approach school district officials, in particular board members and superintendents, and explain that they are interested in results, but in results regarding the kinds of things they understand will be important for their children in the future. Here is the futuristic list of skills that venture capitalist and innovator Ted Dintersmith came up with. Many parents might prefer to see assessment results for these skills rather than test score printouts or a difficult-to-decipher school accountability “report card”:

  • Teach students cognitive and social skills
  • Teach students to think
  • Build character and soul
  • Help students in a process of self-discovery
  • Prepare students to be responsible, contributing citizens
  • Inspire students through the study of humanity’s great works
  • Prepare students for productive careers

Many teachers are already focused on developing these skills. What is lacking is the official imperative to give up all the standards checklists, entries into electronic gradebooks, and test alignment prep, and to focus instead on gathering and documenting this kind of information and experience. Parents who are part of the Opt-Out movement are ideally positioned to start this effort. They have already determined that they are not getting the information they care about through standards-based testing, school rankings, and the like. They should have the right to get assessment information about the things they care about and value in their children’s education. This is a whole new and much-needed avenue for discussions about “school choice.”