Featured

Synergizing Assessment with Learning Science to Support Accelerated Learning Recovery: Principled Assessment Design

A key aspect of formative assessment is that teachers collect and interpret samples of student work or analyze items to diagnose where students are in their learning. Busy teachers face two choices. They can count the responses answered correctly by topic and move on, or they can stop, grab a cup of steaming hot coffee, and spend time analyzing the features of items and student work that are associated with different stages of student cognition in the content area. This latter approach takes a lot of time (and a lot of coffee).

If a state's theory of learning regarding how students grow to proficiency is not available or widely publicized, it is difficult for a teacher to align assessment tasks to reveal where a student is located on the continuum to proficiency and beyond. The teacher cannot predict where a student currently sits along that continuum, so instructional adaptations and judgments about whether a student has mastered standards become less certain.

How likely is it that two teachers who use the same curriculum materials and standards in different areas of a state will create classroom assessments that investigate a continuum of proficiency in the same way? Even when teachers carefully use state standards and the same curriculum to guide instruction and assessment, differences in judgments about levels of mastery can and do occur. Standards and curricula are necessary and essential foundations for student learning, but they are insufficient to support equitable learning opportunities. Learning opportunities are often determined on the ground by the teachers who know students best, often based on their classroom assessment results. However, my 19 years of working with teachers, curriculum specialists, and professional item writers have taught me that different stakeholders have different interpretations of what the journey to proficiency looks like. These different perspectives are likely to drive different instructional decisions. If we want to accelerate student learning, we need a common, evidence-based framework for identifying where students are in their learning throughout the year, one that is used in all parts of the state's educational ecosystem.


In my first blog of this series, I argued that our development and use of assessments across the educational ecosystem need to synergize practices with the learning sciences. In this blog, I discuss specifically how we can create instructionally sensitive and useful assessments. This design framework requires the use of evidence, both in the design of the assessments and in the analysis of item-level results against the design, to support the claim that the learning theory being described is reasonably true. This evidence-centered design branch is called principled assessment design using Range ALDs (RALDs).

Principled Assessment Design

Principled assessment design using RALDs has been proposed as an assessment development framework that centers a state's stakeholders in first defining its theory of how student knowledge and skills grow within a year. Learning science research is used where evidence exists, along with the judgments of teachers, researchers, and item writers. During this process, stakeholders define the contexts, situations, and item types that best foster student learning and allow students to show their thinking as they grow in expertise. They develop companion documents that provide guidelines for how to assess. When items are field tested, additional analyses beyond what is standard in the measurement industry are conducted to check the alignment of items to the learning theory, and the state collects diverse teacher feedback on the utility of the score interpretations and companion documents for classroom use, all before the assessment ecosystem is used operationally. The vision for this framework is to align interpretations of growth in learning across the assessment ecosystem, using initial evidence from large-scale assessments and teacher feedback to support or revise materials as needed to ensure the system is fully aligned for all students.

Range ALDs

RALDs describe the ranges of content- and thinking-skill difficulty that represent theories of how students grow in their development as they integrate standards to solve increasingly sophisticated problems or tasks in the content area. They are essentially cognition-based learning progressions. Why? States neither prescribe curriculum nor the order in which to sequence and teach content. RALDs are intended to frame how students grow in their cognition of problem solving within the content area while remaining agnostic to these areas of local educational control. They are critical for communicating how a task can align to a standard yet elicit evidence of a stage of learning in which a student's thinking does not yet represent proficiency. Thus, the student's cognition still needs to grow, as shown in Figure 1.

The image shows a learning progression inside four cells demarcated as Levels 1-4. Above the progression, the text reads: "Within a single standard can be ranges of cognition that can be defined from content features, depth of knowledge, and the context of the items." Under the first two cells of the progression is an arrow moving from Level 1 to Level 2. Under the arrow, a text box says: "Students scoring in Level 1 can answer explicit, easier content in grade-level text (often explicit detail questions). To grow, they need to learn how to make low-level inferences, which often involves paraphrasing details in their own words to make their own connections."
Fig. 1. Example of an RALD and interpretive guidance.

Range ALDs hold the potential to reduce the time it takes teachers to analyze the features of tasks that only require students to click a response, tasks which have become pervasive in our instructional and assessment ecosystem. Teachers can quickly match the features of tasks to the cognitive progression of skills to identify where along the continuum of cognition an item is intended to show student thinking. Teachers can then identify where students are in their thinking and analysis skills by matching the items a student answers correctly to the cognition stage. Even better, district-created or teacher-created common assessments could pre-match items to the continuum to save valuable time. Such an approach allows teachers, districts, and item writers to use the same stages of learning when creating measurement opportunities centered in how thinking in the content area becomes more complex. This supports personalizing opportunities to measure students at different stages of learning in the same class. For example, students in Stage 1 in text analysis need more opportunities to practice making inferences rather than retrieving details from the text.
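
To make the pre-matching idea concrete, here is a minimal sketch in Python of how a district tool might match item features to progression stages. The stage codings, feature names, and items are hypothetical illustrations of the approach, not an actual state's RALDs:

```python
# A hypothetical coding of RALD stages by the item features they permit.
# Stage definitions and skill labels are illustrative only.
RALD_STAGES = {
    1: {"dok": {1}, "skills": {"explicit detail"}},
    2: {"dok": {2}, "skills": {"low-level inference", "paraphrase"}},
    3: {"dok": {3}, "skills": {"inference across paragraphs"}},
    4: {"dok": {4}, "skills": {"synthesis across texts"}},
}

def match_item_to_stage(item):
    """Return the first RALD stage whose feature coding matches the item."""
    for stage, features in RALD_STAGES.items():
        if item["dok"] in features["dok"] and item["skill"] in features["skills"]:
            return stage
    return None  # no match: flag the item for review against the progression

items = [
    {"id": "A1", "dok": 1, "skill": "explicit detail"},
    {"id": "A2", "dok": 2, "skill": "low-level inference"},
]
for item in items:
    print(item["id"], "-> Stage", match_item_to_stage(item))
```

An item that matches no stage is itself useful information: it either measures something outside the progression or exposes a gap in the stage definitions.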

The notion of adapting measurement and learning opportunities to the needs of a student is a principle of learning science. RALDs are intended to help teachers estimate the complexity of the tasks they are offering students and compare them with the range of complexity students will see at the end of the year on a summative assessment. Interestingly, such an approach aligns with the Office of Educational Technology's definition of learning technology effectiveness: "Students who are self-regulating take on tasks at appropriate levels of challenges, practice to proficiency, develop deep understandings, and use their study time wisely" [23, p. 7]. While the teacher can target tasks to the students, the teacher can also empower students to practice making inferences as they read texts independently and together in the classroom. Curriculum and instruction set the learning opportunities; the tasks are used to locate a child on their personal journey to proficiency.

How and when does a state collect validity evidence to support such an approach?

Scott et al. wrote that assessments support the collection of evidence to help construct and validate cognition-based learning progressions. Under the model I describe, the state is responsible for collecting evidence to support the claims that the progressions are reasonably true. The most efficient place to do this is through the large-scale assessments it designs, buys, and/or administers. This approach requires an evidence-based argument that the RALDs are the test score interpretations and that the items aligned or written to the progressions increase in difficulty along the test scale as the progressions suggest. Essentially, to improve our system, we are using the assessment design process to

  • define the criteria for success for creating progressions at the beginning of the development process,
  • use learning science evidence where available, and
  • collect procedural and measurement evidence to empirically support or refine the progressions.

The first source of validity evidence is documenting the learning science evidence used to create the progressions in the companion document for the assessments that teachers, districts, and item developers use. Such evidence sources often not only describe what makes features of problem solving difficult but also suggest conceptual supports to help students learn and grow. This type of documentation is important to support content validity claims centered in an assessment that does more than measure state standards. This is an assessment system designed to support teaching AND learning.

Item difficulty modeling (IDM) is a second way to collect empirical evidence. When conducting IDM studies, researchers identify task features that are expected to predict item difficulty along a test scale, and these are optimally intertwined in transparent ways during item development as shown in Figure 2. It is critically important to specify item types for progression stages because of research that suggests item types are important not only in modeling item difficulty but also in supporting student learning.

The image shows a learning progression inside four cells demarcated as Levels 1-4. Below the Level 1 cell is a box denoting the use of DOK 1 items and multiple-choice (MC) items. Below the Level 2 cell is a box denoting the use of DOK 2 items and evidence-based selected-response (EBSR) items. Below the Level 3 cell is a box denoting the use of DOK 3 items and EBSR items. Below the Level 4 cell is a box denoting the use of DOK 4 constructed-response (CR) items (multiple pieces of evidence).
Fig. 2. Example of an RALD integrated with item specifications to show what types of items and cognitive demand intersect with evidence statements, from Schneider, Chen, and Nichols (2021).
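
For readers who want to see the mechanics of IDM, here is a minimal sketch using ordinary least squares. The coded features, weights, and difficulty values are invented for illustration; an operational study would use calibrated difficulties from field testing and a much richer, theory-driven feature set:

```python
# A minimal item difficulty modeling (IDM) sketch: regress calibrated item
# difficulties on coded task features to test whether the features named in
# the RALDs actually predict where items land on the scale.
# All values below are illustrative, not real field-test data.
import numpy as np

# Feature columns: intercept, DOK level, number of integrated standards,
# evidence span (0 = within a paragraph, 1 = across sections of text).
X = np.array([
    [1, 1, 1, 0],
    [1, 2, 1, 0],
    [1, 2, 2, 1],
    [1, 3, 2, 1],
    [1, 4, 3, 1],
], dtype=float)
b = np.array([-1.2, -0.4, 0.3, 0.9, 1.6])  # calibrated difficulties (logits)

coef, *_ = np.linalg.lstsq(X, b, rcond=None)
predicted = X @ coef
r2 = 1 - np.sum((b - predicted) ** 2) / np.sum((b - b.mean()) ** 2)
print("feature weights:", np.round(coef, 2))
print("R^2:", round(r2, 2))  # how much of the difficulty the features explain
```

A high R-squared is evidence that the features the RALDs name are the features actually driving difficulty; a low one sends designers back to the progression.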

A third approach to validating progressions is to identify items along the test scale that do and do not match the learning theory and then decide whether to remove or edit the non-conforming items or to edit the non-conforming progressions. The process I am describing is iterative. It can be done during the development and deployment of a large-scale assessment simply by rearranging many of the traditional assessment development processes into an order that follows the scientific process. I believe this process is less difficult than we think. We simply need to get in a room together with our cups of coffee and outline the process and evidence collection plan before beginning the adventure!

The conscious choice to unite our assessment ecosystem around a common learning theory framework, with transparent specifications and learning science evidence, is what makes assessment development principled (p. 62). That is, task features, such as constructed-response items, are strategically included by design in certain regions of the progression. Being transparent about such decisions and sharing the learning science evidence upon which decisions are based allows teachers to use assessment opportunities in the classroom as opportunities to support transfer of learning. Transfer of learning to new contexts and scenarios within a student's culture is critical for supporting student growth across time. This, in turn, ensures that teachers and students are using the same compass and framing their interpretations of what proficiency represents in similar ways, which promotes equity. This also allows large-scale assessments to contribute to teaching and learning rather than being relegated solely to a program evaluation purpose. It is incumbent upon all of us to ensure that students get equal access to the degree of challenge the state desires for its students.
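
Returning to that third validation approach, here is a rough sketch of flagging non-conforming items. The flagging rule (an item empirically harder than the typical item a stage above it) and all of the data values are illustrative assumptions, not a prescribed statistical criterion:

```python
# Flag items whose empirical difficulty is out of order with their
# intended RALD stage. Data values are illustrative only.
from statistics import median

items = [
    {"id": "R01", "stage": 1, "difficulty": -1.1},
    {"id": "R02", "stage": 1, "difficulty": 0.6},   # suspiciously hard
    {"id": "R03", "stage": 2, "difficulty": -0.3},
    {"id": "R04", "stage": 2, "difficulty": 0.1},
    {"id": "R05", "stage": 3, "difficulty": 0.8},
]

# Median difficulty per stage defines the expected ordering on the scale.
stage_medians = {
    s: median(i["difficulty"] for i in items if i["stage"] == s)
    for s in {i["stage"] for i in items}
}

for item in items:
    higher = [m for s, m in stage_medians.items() if s > item["stage"]]
    if higher and item["difficulty"] > min(higher):
        print(f"review {item['id']}: a stage-{item['stage']} item is harder "
              f"than typical items a stage above it")
```

Each flagged item prompts the iterative decision described above: edit or remove the item, or revisit the progression itself.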

The synergy between assessments and the learning sciences begins with the notion that assessments can be designed to support teaching and learning. We must have the goal of showing validity and efficacy evidence that such assessments are designed to, and actually do, support teachers. To produce such assessments, we use principled approaches to assessment design and work to improve the score interpretations. We collect sources of evidence, growing in use, to evaluate whether more nuanced score interpretations can be supported. We provide professional development for pre-service and in-service teachers on using the materials, and critically, we collect information regarding whether, if used accurately, RALDs help teachers analyze student thinking in the classroom. In the final blog of this series, I am going to explore embedding this framework into the Understanding by Design curriculum process. I have my coffee pot and creamer ready to go!


Why We Should use SLOs to Support Student Learning Recovery

Student learning objectives (SLOs) are intended to be a teacher-centered reflection process about supporting student learning over the course of a year or throughout the duration of a course. This process is particularly important as we work to recover from and persist through continued learning disruptions resulting from the pandemic. Many teachers are beginning this year still exhausted from the last, and SLOs can feel like a bureaucratic tool rather than what they can be: a self-created formative framework that powerfully supports teaching and learning. How can we reframe this thinking?


SLOs should naturally be a part of what teachers already do. Many schools collect baseline information about what students know and can do at the beginning of their learning for the year. Baseline information arrives in the form of large-scale assessment results, interim assessments, pre-tests targeting the SLO learning goal, or, optimally, multiple measures that give a holistic view of student achievement as well as of contextual features of the student as a learner (e.g., IEP, LEP, or 504 considerations). The purpose of the baseline data is to help a teacher discover what a student can do at the beginning of the year. Building a profile of where the student currently is becomes the starting point for next instructional steps optimized for the student.

SLOs are about personalizing instruction to students

Investigate this second-grade teacher's interim reading data, shown in Figure 1.

Figure 1: Reading Scores from a Grade 2 Classroom  

The image shows students placed along a test scale ranging from a scale score of 150 to 200.

While we seldom think about score interpretations in this way, and assessments do not often give us information in this context, the reality is that students come to us at very different stages of mastery of the standards. Some students need to master critical standards from earlier grades. Other students are likely ready to learn standards from the next adjacent grade. The grade in which a student is enrolled, at times, tells us more about what a teacher is expected to teach than about what students are ready to learn. State standards are a framework for what students should master each year to be on track to be college and career ready. However, as state assessment results demonstrate, not all students exit each grade in the proficient and advanced achievement levels. Therefore, we can expect that students enter our classrooms at different stages of learning, and as a result, they have different needs. Students represented in red, with labels denoting they are likely functioning in first-grade standards, need more intensive opportunities to master precursor standards from first grade in addition to mastering second-grade standards. Students in green likely need curriculum compacting. A single, unitary pace of instruction will not serve these students equally well if our goal is to grow each and every student in our classroom. SLOs can help us slow down and think about a plan for differentiation.
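
To make the profile idea concrete, here is a minimal sketch of binning scale scores into the red, typical, and green groups described above. The cut scores and labels are hypothetical, not values from any actual state or interim scale:

```python
# Hypothetical cut scores for a Grade 2 reading scale running 150-200.
CUTS = [
    (160, "red: likely working in prior-grade standards"),
    (185, "on track in grade-level standards"),
    (float("inf"), "green: likely ready for next-grade standards"),
]

def classify(score):
    """Return the first band whose upper cut the score falls below."""
    for cut, label in CUTS:
        if score < cut:
            return label

scores = {"Student A": 152, "Student B": 171, "Student C": 194}
for name, score in scores.items():
    print(f"{name} ({score}): {classify(score)}")
```

The point is not the code but the habit: translating a score distribution into named instructional groups is the first step of the differentiation plan an SLO asks for.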

Visualizing Differentiation at a Mile High Level

As we reflect upon what it means to increase student knowledge and skills by differentiating to support students, the teacher must visualize what differentiation will look like. Students who are reading like a typical beginning Grade 1 student may need instruction that focuses on sound-symbol relationships, decoding, and reading fluency to support reading comprehension. At this stage of reading development, we focus reading comprehension probes on the student's ability to identify "textually explicit information" in the form of "definitions, facts, or details" (the quotes show where I am pulling from and adapting the NAEP 2019 Reading Framework). These types of probes align to the Common Core standard "Ask and answer such questions as who, what, where, when, why, and how to demonstrate understanding of key details in a text." Students who are reading like an advanced Grade 3 student need instructional tasks that ask them to make "complex inferences." In the early grades, complex inferences are inferences that require using evidence from multiple sections throughout a single text. For these advanced readers, it is imperative that they be moved beyond responding to literal, surface-level comprehension questions that ask them to identify textually explicit information, make simple inferences based on supporting details within a single paragraph, or recount a story. These students already demonstrate this ability with grade-appropriate text.

What does this mean for the four students in the middle of the classroom distribution? These are the students for whom a typical instructional pace supports growth. Instruction geared to the average expectations will meet these students' needs. But for the other nine students, the typical instructional pace will be too difficult or too easy, and their learning will not be optimized unless time is set aside to personally support each child in their current stage of reading development. Thus, the teacher may create the following SLO learning goal: Grade 2 students will read grade-appropriate texts with fluency and accuracy, and they will demonstrate comprehension by describing the main character's perspective regarding the story problem using details from across different sections of the text.

SLO Learning Goals are Big Ideas

Almost all SLO models use baseline data as an input into the development of the SLO Learning Goal, which is called the SLO Objective Statement in South Carolina. The National Center for the Improvement of Educational Assessment (NCIEA, 2013) defined the SLO learning goal as "a description of the enduring understandings that students will have at the end of the course or the grade based upon the intended standards and curriculum." Marion and Buckley (2016) posited that the SLO Learning Goal should be based upon high-leverage knowledge and skills, often referred to as a "big idea" of the discipline, and that this big idea should integrate several key content standards. Riccomini et al. (2009) wrote that big ideas should form the conceptual foundation for instruction. They are teacher-prioritized concepts students should understand because they form the point of departure for students to connect current and future learning with previous learning. For example, Riccomini et al. noted that fractions are precursor skills to ratio and proportional reasoning, but these concepts are often taught discretely, so students do not see and use these connections in their reasoning as they solve real-world multistep problems. The big ideas that we frame for an SLO learning goal can also be considered a single grade-level competency under a competency-based framework.

SLO Learning Goal Criteria

As you create your SLO Learning Goal, compare it to the following criteria. The SLO Learning Goal

  • is measurable; it includes explicit, action verbs.
  • requires that students engage in deep demonstrations of their thinking in the content area.
  • contains the key content competencies a student should demonstrate by the end of the year.
  • connects and integrates multiple critical standards that are central to the discipline. You have documented evidence to support this claim.
  • will elicit student reasoning at or above the cognitive demand levels denoted by single state standards in isolation.
  • has a grain size appropriate to the amount of instructional time you have with the students.

Analyzing the SLO Learning Goal

Content Competencies

The SLO Learning Goal can be broken down further into measurable content competencies. Reading with fluency and accuracy is typically assessed by measuring how many words a student reads per minute and what percentage of those words were read accurately. However, a critical component of accuracy is whether the student has the foundational skills to decode unfamiliar words. While fluency needs to be at a rate at which a student can remember and restate what was read, the accuracy component likely needs to be the more heavily weighted instructional decision maker of the two, with careful attention to the student's decoding ability. The next content competency denoted in the SLO is that students will demonstrate comprehension by describing a main character's perspective regarding the story problem.
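
For readers unfamiliar with how these two measures are computed, here is a small sketch; the passage values are illustrative:

```python
# Compute the two fluency measures named above: reading rate (words per
# minute) and percent accuracy. Example values are illustrative only.
def fluency_measures(words_read, errors, seconds):
    minutes = seconds / 60
    rate = words_read / minutes                          # words per minute
    accuracy = (words_read - errors) / words_read * 100  # percent accurate
    return round(rate), round(accuracy, 1)

rate, accuracy = fluency_measures(words_read=112, errors=6, seconds=60)
print(f"{rate} words per minute, {accuracy}% accurate")
```

As argued above, a strong rate paired with weak accuracy should weigh less in instructional decisions than the accuracy and decoding signal.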

Measurable Verbs and Cognitive Complexity

The verb describe is measurable because students can provide the information verbally to the teacher or through writing. Because writing skills are slower to develop than reading and because the measurement goal is comprehension of what was read, the aim is to allow either mode of showing comprehension. That is, if the child is not yet able to demonstrate the complexity of his thoughts in writing, equal credit is afforded to sharing that complexity verbally.

With either mode of demonstration (verbal or written), the SLO goal is intended to elicit evidence that the student can support her answer using details from the text. In a cognitive framework developed by Norm Webb, this would be a Depth of Knowledge 3 or 4 goal (depending upon the text type), as it engages the student not only in drawing a conclusion but also supporting the conclusion with multiple pieces of evidence from the text.

Context

The SLO Learning Goal also provides context that helps us visualize what a full culmination of the learning goal looks like. The context provided is that the student is expected to use details (evidence) from across different sections of the text. This is context because it describes the conditions under which a child can provide a character’s point of view. This denotes the most advanced state of demonstrating the learning goal. An easier and arguably proficient presentation for a Grade 2 student of this same skill would be for a child to provide a character’s perspective on how to solve a problem using a detail or details from within a single section of text.

The SLO Learning Goal requires that the students engage in a deep, authentic demonstration of their thinking. However, the context of when the student can show this skill allows you to see where the child is in their development of reading comprehension.  As a teacher considers how to teach and measure this SLO Learning Goal, there must be a consideration of which texts elicit the necessary target evidence both for instruction and assessment.  What Grade 2 appropriate texts show a character’s thinking or point of view on how to solve a problem across the text? The Magic Tree House series immediately comes to mind.  The SLO Learning Goal connects and integrates multiple critical standards that are central to the discipline, will elicit student reasoning at or above the cognitive demand levels denoted by single state standards in isolation, and represents enduring understandings that will support students in the next grade.

There are multiple paths forward to developing SLO Learning Goals based on the same baseline data. The purpose of the SLO is to help teachers plan ahead by recognizing the need for, and developing a general plan for, differentiating instruction for students at different stages of learning. This is especially critical in supporting students through ongoing learning disruptions. This blog should not be taken to mean that a teacher can or should never teach all students at the same time. What we are doing in the SLO process is carefully creating a scaffolding plan of the kind explicitly described in this Grade 2 Common Core standard: "By the end of the year, [students will] read and comprehend literature, including stories and poetry, in the grades 2-3 text complexity band proficiently, with scaffolding as needed at the high end of the range."

Let's stop thinking of SLOs as a bureaucratic tool. Let's think of SLOs as they are intended to be: a formative framework to support teaching and learning.

If you would like an SLO Learning Goal Planner, click here.

If you would like to see a different SLO Learning Goal for Grade 2 students, a fifteen-minute overview with teacher-created examples is shown here: https://vimeo.com/151169470.

Synergizing Assessment with Learning Science to Support Accelerated Learning Recovery: Understanding by Design

In this third and culminating blog on the topic of synergizing assessment with learning science, I advocate that we unify our educational ecosystem through a common theory of learning to ensure we accelerate, recover, and personalize learning opportunities for each student. To accomplish this vision of what public education can and should look like, we can work at the intersections of design-based research, principled assessment design, and Understanding by Design, as shown in Figure 1, with teams of experts in accessibility, assessment, curriculum, diversity, instruction, and learning science. As interim assessment providers predicted, we are likely to see additional evidence that proficiency in mathematics was more strongly affected by the pandemic than proficiency in reading. We need a plan for the years ahead.

Figure 1: Synergizing Instruction and Assessment with Learning Science

In the first blog of this series, I argued that our development and use of assessments across the educational ecosystem needs to synergize practices with the learning sciences. We need to create assessments that help teachers understand the stage of cognition within the content area in which the student is presently functioning. The synergy between assessments and the learning sciences begins with the premise that large-scale summative assessments can be designed to support teaching and learning.

In my second blog of this series, I described the general design framework that makes use of evidence, both in the design of the assessments and in the analysis of item-level results against the design (i.e., the score interpretations), to support the claim that the learning theory being described is reasonably true. This evidence-centered design branch is called principled assessment design using Range ALDs.

When we have validity and efficacy evidence that the state summative assessments are designed to and do support teachers by providing reasonably accurate score interpretations, we are ready to begin the next stage of the process, which is the focus of this blog.

We need to support school districts in embedding the learning theory and corresponding evidence statements into their curriculums through Understanding by Design. This third step is critically important. Why? Because assessments alone do not change an educational system.

The Planning Stages of Understanding by Design

Understanding by Design (UBD) is a principled approach to curriculum planning. Curriculum is ideally designed to ask students to produce increasingly sophisticated outputs upon which learning opportunities are based. Both curriculum and assessments are based on the desired outcomes of what students should know and be able to do. We want to use the same evidence statements and theories of learning for curriculum and assessment development if we want to create a coherent educational ecosystem that focuses on equity and growing students to proficiency and beyond. UBD is at its core a three-stage planning framework to help curriculum designers think through curriculum and assessment design. These stages are shown in Figure 2.

Figure 2: The Stages of Backward Design used in UBD

When a state makes a commitment to develop its statewide assessments using the processes described in principled assessment design based on Range ALDs,  Stages 1 and 2 of the UBD framework are essentially complete. The state has shared the desired outcomes for students and validated that the evidence collection framework is a reasonably true representation of how students are likely to increase in sophistication along the proficiency continuum. Thus, districts and teachers have access to the same validated evidence framework as test designers to support them in identifying where students are in their learning throughout the year. This is a critical step in creating an equitable educational system. Such an endeavor also allows district stakeholders and teachers to spend their precious time

  • planning for effective and engaging learning activities,
  • evaluating instructional materials against evidence statements in the Range ALDs to investigate students’ opportunities to learn at levels of cognition that represent proficient and advanced stages, and most critically,
  • creating connected instructional and assessment tasks based on the state’s theory of learning.

That is, district curriculum specialists and teachers can focus their time on Stage 3 of the UBD framework.

Stage 3: Planning Learning Experiences and Instruction

A growing chorus of measurement and learning progression experts argue that high-quality assessment tasks are interchangeable with high-quality instructional tasks: they are two sides of the same coin. Both can be used to support learning and transfer. Instructional tasks give students an opportunity to learn, and assessment tasks show whether students can transfer what was learned to a new scenario independently.

When a student succeeds on a task independently, teachers should be encouraged to provide the student with a more sophisticated task within the progression of learning within the unit that is the teacher's focus. We cannot give each student in a classroom the exact same performance task if we want to accelerate student learning. Students come to us at different stages of learning and with differing needs in terms of the depth and length of opportunities they need to master a particular stage of cognition within the content area. Therefore, we want teachers to provide each student with tasks aligned to the stage of the learning target progression that the individual student needs to grow. The focus for the teacher is to facilitate learning by providing feedback to help the student close the gap between their present level of performance and the next stage of sophistication. It is for this reason we want to encourage each student to revise their work. The focus for districts, and perhaps the state, is to provide the authentic tasks aligned to the progressions.

Under such an adaptive classroom model, instructional and assessment tasks can have formative or summative uses, depending on the student and teacher actions and on what the child is able to do independently versus with support. Because learning targets have explicit progressions, connections across tasks based on evidence statements in those progressions will intentionally support student growth in achievement by offering multiple opportunities to learn across time and across progressions. When a student successfully responds to a task associated with a particular stage of the progression, the student is ready to move to the task associated with the next stage. Moreover, students with 504 plans and IEPs are naturally included in the process because the support they need to show what they know is built in, for example, by allowing them access to text-to-speech or additional scaffolding, which is considered context in the Range ALD development framework.
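
A minimal sketch of that movement through stages might look like the following. The stage count and the all-or-nothing decision rule are simplifications; in practice the teacher's judgment, feedback, and the student's revisions drive the decision:

```python
# Offer the next-stage task after independent success; otherwise stay at
# the current stage and revise with feedback. Stage count is hypothetical.
MAX_STAGE = 4

def next_task_stage(current_stage, succeeded_independently):
    if succeeded_independently:
        return min(current_stage + 1, MAX_STAGE)
    return current_stage  # revise with teacher feedback at the same stage

student_stage = 2
student_stage = next_task_stage(student_stage, succeeded_independently=True)
print("offer a task from stage", student_stage)  # -> 3
```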

It is the intentional planning and creation of additional difficulty across tasks (purposefully increasing content difficulty and cognitive complexity, integrating additional standards, and perhaps moving from single to multiple stimuli) that is the hallmark of proficiency in many states. Wiggins and McTighe discuss the need to organize and sequence student learning experiences for maximum engagement and efficacy (p. 220) in the UBD process. Validating the Range ALDs using learning science processes of iteration allows them to meet their intended interpretation claim: Range ALDs describe the ranges of content- and thinking-skill difficulty that represent theories of how students grow in their development as they integrate standards to solve increasingly sophisticated problems or tasks in the content area. They are essentially cognition-based learning progressions.

It's About Supporting Learning in the Classroom

Creating a state-level educational ecosystem grounded in cognition-based learning progressions helps teachers better understand where students are in their learning and suggests likely pathways along which students need to be guided to help them develop the ability to engage in far transfer. If we take the time to ensure we have evidence to support score interpretations on the large-scale assessment, such claims become useful to teachers in the classroom because Range ALDs provide an informative tool to support curriculum and learning activities. Teachers can

  • align tasks they administer to students to the cognition-based progression stages and
  • match authentic student responses to the cognition-based progression stages.

It is the score interpretations that are critical for defensibility, not just scale scores. Creating such a system allows us to provide professional development for teachers in using evidence to understand learning. This process requires training teachers on how to align assessment tasks to Range ALDs and on the idea that assessment tasks can be different but interchangeable if the evidence elicited by the different tasks is the same. This allows tasks to be personalized to student interests, culture, accessibility needs, and ability, supporting increased student engagement.

We must challenge ourselves to create more efficient, equitable educational ecosystems that allow teachers to focus on analyzing where the student is and what the student needs next following a common and validated theory of learning such as is shown in Figure 3. We can and should allow large-scale summative assessments to contribute to teaching and learning rather than use them simply to evaluate teachers, schools, and districts without providing substantive information to help inform next steps.

Figure 3. Theory of Action centered in a Common Theory of Learning

Synergizing Assessment with Learning Science to Support Accelerated Learning Recovery: Preamble

Across the country, we see evidence that students are learning at slower rates than in years past, particularly in mathematics. For example, Curriculum Associates' researchers found an additional 6% of students were not ready to access on-grade instruction in mathematics in Fall 2020 compared to historical trends. My former colleagues at NWEA found student achievement in mathematics in Fall 2020 was about 5 to 10 percentile points lower than in Fall 2019. In South Carolina, where 62% of students were eligible for the National School Lunch Program in 2019, projections of the percentage of students whose learning is on track for proficiency by the end of this year are notably lower than in years past. Most worrisome: we know we are missing students. To paraphrase Robert Fulghum, who wrote "All I Really Need to Know I Learned in Kindergarten," to optimize learning recovery we need to hold hands, stick together, and work to accelerate learning.

Supporting learning recovery requires that we (1) optimize learning tasks to the student’s stage of development to target learning experiences just where the student needs support, (2)  facilitate student growth to the next stage of sophistication by fostering and rewarding self-regulation, and (3) treat all assessments formatively.

To accomplish these three goals, we must rethink assessment development and use. To be effective tools for accelerating student learning, assessments must be developed and used in ways that synergize with findings and processes from the multidisciplinary learning sciences field. This is important if we want both classroom and large-scale assessments to serve teaching and learning, not just accountability. Why? Because together we must center our focus on understanding and cultivating the cognition students need to show increasingly advanced knowledge and skills in the content area, the kind that represents college and career readiness.

In this three-part blog series, I argue that our development and use of assessments across the educational ecosystem need to synergize practices with the learning sciences. I am going to introduce the learning sciences, talk about design-based research, and show connections to assessments developed, using findings from the learning sciences field, to understand how students grow in cognition in the content areas. In my next blog, I will show examples of synergizing classroom and large-scale assessments using principled assessment design, which is similar to the learning sciences' design-based research approach. Finally, I will connect this work to curriculum design, most especially to Understanding by Design.

Learning Sciences

The field of learning sciences is multidisciplinary. It often focuses on understanding and exploring how learning occurs in real-world contexts, which are increasingly examined through technology. Sommerhoff et al. (2018) defined learning sciences in this way:

[The] learning sciences target the analysis and facilitation of real-world learning in formal and informal contexts. Learning activities and processes are considered crucial and are typically captured through the analysis of cognition, metacognition, and dialog. To facilitate learning activities, the design of learning environments is seen as fundamental. Technology is key for supporting and scaffolding individuals and groups to engage in productive learning activities (p. 346).

Learning scientists often distinguish between recall of facts and deeper conceptual knowledge. They also focus on the contexts and situations in which students learn and show their thinking as they grow in expertise. Cognition theories are important. Situating learning in personally relevant contexts (including the student's culture) with sufficiently complex, authentic, and interesting tasks that facilitate learning is also a focal point. While the learning sciences field is large and encompasses researchers from areas such as cognitive psychology, computer science, and the content areas (science, mathematics, and reading, among others), psychometrics, the field in which I work, is ironically not often included or discussed.

O'Leary et al. (2017) noted that psychometricians and test developers, as practitioners and theorists, focus on the technical evidence in support of the score interpretations and test score uses that they develop. This does not mean, however, that the test developers' interpretations are useful to teachers, or that test developers always validate the actual score interpretations. We seldom collect evidence, in advance of creating tests, that teachers find proposed test score interpretations instructionally useful or that the interpretations describe student behaviors in ways that are helpful to instruction. There is, however, a growing group of psychometricians who, like me, recognize that the way we develop assessments needs to evolve to provide better information for teachers and parents. These evolving practices are similar in strategy to the design-based research practices discussed in the learning science literature at points in the assessment development cycle. The synergy between instructionally useful assessments and the learning sciences begins with the notion that assessments should have validity and efficacy evidence showing they are designed to, and actually do, help teachers understand and predict how learning occurs to support instructional actions.

Design-Based Research

Design-based researchers often do two things at the same time. They put forth a theory of learning and collect evidence to determine whether the theory can be supported or uncovered through iteration. The goal of such research is to develop evidence-based claims that describe and support how people learn. To investigate such theories, learning scientists carefully engineer the context and evidence collection in ways that support an intended positive change in learning. Sommerhoff et al. (2018) show in their network analysis of the learning sciences that what we want to understand are areas of student cognition, learning, and motivation (among others); these are the outcomes of import. These areas are what we want to make inferences about as we observe and teach students. Learning scientists use design-based research, assessment, and statistics (among other techniques) as methods of investigating these outcomes.

The merger of what we want to understand to support students and how we use assessment and design-based research to collect evidence for such inferences is exemplified by Scott et al. (2019). They describe the following design process.

  1. Researchers use qualitative methods and observations to identify the various ways students reason about the topic of interest as they develop mastery, including vague, incomplete, or incorrect conceptions.
  2. The findings are ordered by increasing levels of sophistication that represent cognitive shifts in learning that begin with the entry level conceptualization (lower anchor) and culminate with the desired target state of reasoning (the upper anchor).
  3. Middle levels describe the other likely paths students may follow.
  4. When possible, the reasoning patterns described in the intervening levels draw from research in the cognitive and learning sciences on how students construct knowledge.
  5. Assessment instruments are the tools that researchers use to collect student ideas to construct and support the learning framework.
  6. The tasks students are asked to engage in on the assessment elicit the targeted levels of sophistication that represent the concepts of the hypothesized learning progression.
  7. Evidence is found to support, disconfirm, or revise the progression.

Shepard wrote that assessment developers and psychometricians need to know the theory of action underlying learning, and she noted, "a certain amount of validation research is built into the recursive process of developing learning progressions." Design-based research has some overlap with a newer design-based methodology for creating large-scale educational assessments called principled assessment design (PAD). This approach can also be used for classroom assessments. Examples of a PAD approach will be the focus of my next blog. In the meantime, here is a graphic foreshadowing where we are headed: unifying our educational ecosystem to ensure we accelerate and recover learning opportunities for students efficiently, together. We all can contribute to creating systems that support better understanding of where students are in their learning and what they likely need next. Let's hold hands, stick together, and do this!

Connecting Design Based Research, Principled Assessment Design, and Understanding By Design.

Creating Engaging Relationships with Students Online

Elise Ince

University of South Carolina

E-teaching requires learning new online skills: how to administer an exam from a distance, how to share documents, hold office hours, use Zoom, and poll students. In these matters, you can count on your administration to provide a plethora of guidance, how-to videos, and other "important information" documents. However, one basic aspect of e-teaching is often left aside: how do you develop a relationship with students and keep them engaged during e-learning?

When face-to-face, a teacher can easily connect with students by looking straight into their eyes. This is not possible on camera, so no student will feel I am talking to him or her personally. Zoom etiquette documents are readily available and provide the basis for a civilized class (find a clean, quiet space, be on time, don't walk around or start conversations with household members, etc.). Yet following this etiquette dutifully can't ensure engagement with an on-screen, 2D, stamp-sized teacher.

Consciously Connecting with Students

What can I do, as a teacher, not to appear too distant and detached from my students? Connecting with them is essential for learning to occur. Connecting with students requires skill and constant effort on my part when they are in a classroom, let alone through distance learning. I decided it was fundamental to start creating ties with students before classes even began.

Creating a Video

I created a short introductory video using the free platform Animoto.com describing who I am, where I come from, where I studied, and where I have worked. I shared the video with the students ahead of the first class. By sharing my background, I hoped to appear less of a unidimensional figure. I did not want students to feel intimidated. Although I am the least intimidating person you can imagine, unless you have seen me move or enter or leave a room, you can't understand or guess my personality. I showed a picture of my family and my pets, mentioned my hobbies, and even showed what my office looks like this semester. I was honest. I let my students know this is my first experience teaching online, and that with COVID nothing is what it used to be. But I wanted them to know I was ready to start this new adventure with them!

Setting up Moments of Interaction

By making it explicit that e-learning is new for all of us, I hoped to decrease the power distance. My first goal was to appear accessible, and my second was for us to get to know each other. To that end, I asked students to create a short video on the free platform Flipgrid (https://info.flipgrid.com/)  before the first class meeting. I asked students to introduce themselves briefly, then play “Two Truths and a Lie.” Each student was asked to present three statements. Everyone had to guess which statement was false, and type it in the video comment section. This also helped me monitor that I had engaged each student and connected him or her with others. A lot of fun was had by everyone trying to guess the lie while learning interesting facts about one another. Students then revealed during the first class which statement was false and which ones were true.

Embracing Pets

Part of the challenge with any new situation is to embrace the positive. From day one, I noticed that pets were all over the screen: curious, loving, and wanting to participate. I noticed how whenever we would mention pets, students would welcome the break and connect with each other. I decided to include pets formally in my class by asking students to email me a picture of their pets. I make sure in each lecture to randomly place these pictures and let the owner present his or her pets for a couple of minutes. I believe it provides a welcome break, helps students connect, adds warmth to a format that is painfully dry and cold, and helps spread positive affect to the rest of the lecture. In that same vein, I encourage students to share their personal experiences (e.g., “What is the weirdest food that you have ever eaten?”). I ask them to fill out an information card at the start of class. I take note of interesting facts or experiences (semester abroad, fluency in another language, specific hobby, etc.) and refer to these whenever appropriate. It is important to highlight the human dimension, which we tend to forget when we look at a screen.

Engaging with Short Response Questions

I make sure to provide very detailed slides, more detailed than if I were in a classroom. This way, if a student is momentarily distracted, he or she can easily catch up and will not feel lost or helpless. I also plan regular breaks within the 75-minute class. An easy way to re-engage students is to ask them questions via polls. I have begun using Poll Everywhere (https://www.polleverywhere.com/plans/k-12, a free platform for educators) because it allows for more dynamic visuals. For instance, when I ask students to define a concept in five words, I can use the "word cloud" or "open-ended" option to show students' responses, which appear on the screen in real time and in an organized fashion (e.g., the same words will have a bigger font). Students are often curious to see their answers compared to others'.

Prompting Student Thinking

Throughout the lecture, I often ask questions, limit cold calls (which I find intrusive), and make sure I allow plenty of time for students to answer open-ended questions. I often show short videos, but I always give students a task while watching (e.g., report any details that you think are worth mentioning) to share with me. This way, students are warned that they should watch the video actively, not passively. I show pictures or graphs related to my lesson and ask students to comment on those. I also use popcorn questions, which should be answered with one word only. Finally, I strongly encourage, but do not require, students to have their cameras on.

There is a fine line between preserving student privacy and making sure they stay engaged. I encourage "cameras on" by saying hello individually to students with cameras on as they enter the virtual room. I remind them often to feel free to let me know if they can't have their camera on. Many of my students warn me in advance if they cannot turn on a camera and explain their situation. High expectations combined with compassion and understanding have led to a limited number of black screens. The changes I have made in my courses for e-teaching have also given me a different perspective on my students' lives. Thinking back on these changes, I told my students that I might keep the pet picture idea even when we are back in the classroom. It does me as much good as it does them.

The author

Dr. Elise Chandon Ince is an associate professor in the Department of Marketing at the Darla Moore School of Business at the University of South Carolina. Her research examines how consumers process marketing material and marketing claims in the area of linguistics (language structure, meaning, phonetics and fluency). Her research has been published in the Journal of Consumer Research and the Journal of Marketing Research. She serves as a reviewer for several journals and is on the editorial review board of the Journal of Consumer Psychology.

Leveraging Classroom Assessment to Accelerate Student Learning

Do you measure student growth in learning, or do you measure how much a student learned based on the learning targets from an assessment? I'm asking for a friend. In a year when we are worried about catching students up to pre-pandemic levels of achievement, could we optimize the use of grading practices to accelerate learning? Assessment can be used to grow as well as document student learning at a point in time; we just have to shift how we use it!

Does Grading Help or Hurt?

Researchers have found evidence that suggests that historic grading strategies can have negative effects on student learning. When every assignment is a summative assessment for a student, grades reflect a race to learn at the pace of the teacher and other students in the class. Whether face-to-face or remote, we may not be aware of silent challenges students face. And in a remote environment, students are having to learn new skills beyond content.

Remote Learning: The New Skill Set


Remote learning requires a related, yet different, set of skills than learning face-to-face. The organizational load for both teachers and students is increased. For students who are still developing executive functioning skills (the ability to manage time, prioritize and complete tasks, and adjust to new routines), remote learning presents an increased set of challenges in showing what they know. Gone is the teacher as organizer and reminder in the classroom, with calendars on the walls and upcoming assignments on the board. Which students need these supports? They are likely among the students who are not turning in assignments on time or not showing achievement at the same level remotely as they did in person.

Ryan and Deci (2020) examined research related to self-determination theory which posits that

  • students need to feel like they have some control or choices in their learning;
  • students need to feel competent; and
  • students need to feel connected to learn.

When students miss assignment deadlines, we often impose grade deductions. This practice may not support students' feelings of control, and we send a covert message that they are not competent. We can flip this message with some small changes to assessment and grading policies!

From Penalty to Praise

What if we gave all assignments to students at the beginning of the week (or grading period)? What if we said, "You get a 90 (A) for getting the content correct. If you choose to follow the recommended submission schedule, you get an extra 10 points for turning the assignment in on time!"? Think about how this changes the narrative for the student.

Students who are still developing executive functioning skills can earn an A even if the assignment is late within the grading period. So can the student who is juggling their own learning as well as the learning of a sibling. Such a shift ensures that students feel a sense of competence and success and receive a grade based on what they have learned. The list of assignments in advance, coupled with recommended turn-in dates, also provides a structure that offers students the challenge of being on time, incentivizes it through positive reinforcement, and allows students some wiggle room on when they complete their work. This gives students, who are juggling more than we know, opportunities to practice managing their time and an opportunity to grow. It reinforces the behaviors we want. Allowing remote learners opportunities to connect (Fun Friday breakout rooms) if sufficient numbers of students follow the recommended submission times can also ensure students have an opportunity to get to know their remote-learning classmates, especially at critical school transition points (e.g., the first year of middle or high school).
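
For concreteness, here is a minimal sketch of the grading rule described above, with the 90/10 split taken from the example. Treating content credit as all-or-nothing is a simplification; a real rubric would grade content on a finer scale:

```python
# Content credit stands regardless of lateness within the grading period;
# on-time submission earns a bonus rather than lateness earning a penalty.
def assignment_grade(content_correct, on_time, in_grading_period=True):
    if not in_grading_period:
        return 0  # handling work outside the period is a local policy choice
    grade = 90 if content_correct else 0   # content credit, even if late
    if content_correct and on_time:
        grade += 10                        # positive reinforcement bonus
    return grade

print(assignment_grade(content_correct=True, on_time=False))  # 90, still an A
print(assignment_grade(content_correct=True, on_time=True))   # 100
```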

Shifting Grades from Control to Choice

Retakes

Some grading policies stipulate students may retake a limited number of assignments for partial credit. And while this policy is far better than denying any retake opportunities, does such a policy help your students grow to proficiency?

Kornell and Rhodes (2013) found that most learners evaluate their own learning based on the test they just took. If a student perceives his learning as lacking, his course grade becomes the evaluation of self-as-learner. We want students to recognize that they control their own ability to learn. Therefore, we have to help them take feedback from the assessment, correct misconceptions, and try again. If classroom tests are not treated as a natural part of the instruction, feedback, and assessment loop, they function as mini high-stakes assessments. Allowing students to bank their grade or work to master the learning target through a retake process allows students to engage in formative assessment as partners with the teacher, encourages self-regulation, and has been shown to increase student learning.

Retakes as a Feedback Loop

Rigor

State assessments often release achievement level descriptors to show how student knowledge within a standard increases from more novice to more sophisticated states. These descriptors are also intended to show how standards integrate with other standards. Sequencing items by these states of complexity (called achievement levels, though it is better to use the levels without the labels in the classroom) on pretests or on early unit homework assignments can help students (and you) identify where students are successful by comparing easier or more complex content within the standard.

Students who are successful on more difficult, complex content are likely ready to move on. Allowing students to pick, for example, which six out of ten items to complete for homework can tell you whether students are making accurate judgements of their own abilities. For example, students who choose to answer the difficult, complex items and do so correctly often understand their own learning. Asking their perspective on what they need supports their autonomy.

For students who are less confident in answering on-track or advanced items, which purposefully require increased levels of critical thinking, choosing less or more difficult items allows them to experiment with the concept of desirable difficulty. We want students to challenge themselves with harder items to optimize their long-term learning; at the same time, we want to scaffold more complex items for them in ways that help them regulate their own effort toward a desirable level of rigor over time. That is, students should have not only the opportunity to learn rigorous material but also the time and multiple practice opportunities to do so.

Relevance

Testing supports learning. While we sometimes create pre-test study guides for students, growth in learning and retention of concepts is better supported by practice quizzes than by studying alone. Multiple-choice quizzes are better activators of student learning than studying alone, and short-answer questions and essays function better than multiple choice in helping students learn and retain.

Because how you assess often influences the degree to which students retain information, setting up quizzes where students self-test and get feedback on each question accelerates learning. It is also important to bring back previously learned material (interleaving) on quizzes and tests to support retention. Interleaving allows you to measure growth in learning targets over time because you are providing students multiple opportunities to demonstrate proficiency, which requires using learned information outside of the unit of instruction.
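As one illustration (not a prescription), a quiz blueprint that interleaves prior material might reserve a fixed share of items for earlier units. This is a minimal Python sketch; the item pools, the nine-item quiz length, and the one-third share for prior units are all hypothetical.

```python
import random

def build_quiz(current_items, prior_items, n_items=9, prior_share=1/3):
    """Assemble a quiz in which a fixed share of items revisits previously
    learned material (interleaving), so retention can be observed across
    units rather than only within the current one."""
    n_prior = round(n_items * prior_share)
    quiz = random.sample(prior_items, n_prior) + \
           random.sample(current_items, n_items - n_prior)
    random.shuffle(quiz)  # mix old and new items rather than blocking by unit
    return quiz

# Hypothetical item identifiers
current = [f"unit4_q{i}" for i in range(1, 11)]
prior = [f"unit{u}_q{i}" for u in (1, 2, 3) for i in range(1, 4)]
print(build_quiz(current, prior))
```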

While frequent low-stakes quizzes have been shown to increase learning over business as usual, it is also critical to have students use writing to support and synthesize their learning. Performance tasks that require students to integrate and synthesize their learning are essential for growth, and not just in English Language Arts or Social Studies. For example, a project investigating how algebra is used in real life, coupled with practice using the equations in a real-life scenario that interests a student outside of the classroom, helps her reflect on what she has learned, makes content relevant, and requires her to engage in more complex, critical thinking.

We intend to encourage students to accelerate and be accountable for their own learning. Assessment is a learning opportunity as well as a measure of learning. Assessment and grading practices that include retakes, rigor, and relevance serve to support student growth in addition to documenting what was learned.

Quiz yourself!

Which practices do you use to help students accelerate their learning?

  1. I provide a list of all assignments at the beginning of the semester or week to assist students in judging how many assignments they will have and in managing their time, and I provide a recommended pacing for submission.
  2. When I give an assessment, I code items on my answer document by complexity, based on state achievement level descriptors, so I can investigate the range of skills, from easy to complex, a student answered correctly and monitor and grow the complexity of what students can do across time.
  3. I use frequent tests worth small point values, with feedback, to help students move information and processes into their long-term memory.
  4. I bring back important concepts on tests and performance tasks to ensure students are retaining and growing in the skills that are most essential for the next grade.
  5. I allow students to retake assignments when they have not demonstrated mastery to ensure they have both the opportunity and time to learn.

Assessment can be used to grow as well as document student learning; we just have to shift how we use it.  

@mcschneiderphd

Creating Support Systems for the Use of Learning Progressions

Teachers report they need more sophisticated and nuanced support systems to understand and facilitate student learning. These supports go beyond state standards, the district curriculum and pacing guide, and published textbook materials. How do I support this claim? Evidence!

I have been lucky enough to gather evidence about what teachers want through empirical studies in collaboration with states, through research collected with a nationally representative sample of science educators, through findings of grant funded research, and through conversations with the many brilliant teachers with whom I have worked or who have taught my own children.

Here is what I have learned: Many teachers want

  • prioritized standards that signal what is most important to monitor as children progress throughout the year coupled with a rough sequence of coverage;
  • a learning and evidence management system that helps them track and measure student growth over time, easily;
  • examples of what proficiency looks like, authentically; and
  • standardized exemplar tasks aligned to state standards that they can use, if they want to, to help understand what standards look like in action.

Other sources (e.g., Heritage et al., 2009; Schneider & Andrade, 2013) from formative assessment research also suggest the following would be helpful:

  • supports for monitoring student learning over time in a way that focuses teachers not on the number of correct responses a child is providing, but rather, whether those responses represent more sophisticated reasoning and content acquisition than was observed previously;
  • supports in interpreting student work and using that evidence to take instructional action; and
  • differentiated supports that honor where teachers are in their own development, rather than one-size-fits-all supports.

Some policy makers are largely focused on tracking growth based on the content and curriculum of the student’s grade of record.  This is certainly a fair perspective given how teacher and school accountability systems are set up. But consider this alternative perspective:

  • To engage in formative practice, we need to identify where a child is.  
  • To measure growth, we have to allow teachers to explore what the child is thinking in the content area centered in what has and has not yet been taught.

Visualize a ruler measuring 12 inches. The ruler is consistent across time. It has equal increments on a continuum from 0 to 12. Now consider that 0 to 1 inch could be what we want students to learn in kindergarten; 1 inch to 2 inches could be what we want students to learn in first grade; 2 inches to 3 inches could be what we want students to learn in second grade; and so forth. If a second-grade teacher only measures and teaches from 2 to 3 inches, our educational system misses the student who started the year at 1.5 inches. It misses the student who starts the year at 3.2 inches. We miss understanding where all students are and capture growth for only some of them. Moreover, because even the most advanced students are not perfect, we allow a student at 2.8 to spend the year waiting for the 20% of the curriculum he or she needs, or we miss seeing that such a child, in truth, is already at 3.0.
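The ruler analogy can be made concrete with a small sketch. This minimal Python illustration assumes the inch-per-grade mapping from the paragraph above; the student locations 1.5, 2.8, and 3.2 are the ones discussed there.

```python
def grade_band(location: float) -> str:
    """Map a location on the 0-12 'ruler' to the grade whose content it
    represents (0-1 = kindergarten, 1-2 = grade 1, 2-3 = grade 2, ...)."""
    grade = int(location)  # whole inches mark the grade-band boundaries
    return "kindergarten" if grade == 0 else f"grade {grade}"

def visible(location: float, lo: float = 2.0, hi: float = 3.0) -> bool:
    """A teacher who measures only the second-grade band (2 to 3 inches)."""
    return lo <= location <= hi

for student in (1.5, 2.8, 3.2):
    status = "seen" if visible(student) else "missed"
    print(f"{student}: working in {grade_band(student)} content -> {status}")
```

Only the student at 2.8 is visible to a measure restricted to the 2-to-3-inch band; the students at 1.5 and 3.2 are missed entirely.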

Can we allow a teacher to use the entire ruler so they can focus on what a child needs? I think we could. But we also have to build support systems around teachers that remove the ambiguity of what the targets look like so teachers can move agilely to diagnose and meet student needs.

Learning progressions are a foundational underpinning of such a support system, as shown in Figure 1.

Figure 1. Intended Use of a Progression-Based System, adapted from Schneider and Johnson (2019)

Learning Progressions

Learning progressions can be a support for instructional actions because they offer likely instructional pathways of what students need to master across time. They provide the big picture of what should be learned across a year, support a teacher’s instructional planning, and act as a touchstone for formative assessment (Heritage, 2008). Smith, Wiser, Anderson, and Krajcik (2006) defined a learning progression as the description of the increasingly more sophisticated ways of reasoning in a content domain that follow one another as a student learns. Clements and Sarama (2004) noted that learning progressions (frequently referred to as learning trajectories in their work) describe levels of student thinking. They also noted that instructional activities that help students progress along the continuum are a component of a learning progression. Learning progressions can be the foundation that supports teachers and students but, even coupled with standards, curriculum, and pacing guides, they are not enough.

Furtak, Morrison, and Kroog (2014) advised that “tools alone will not help teachers realize shifts in practice.” Rather, tools (the progression and the tasks) are a starting point. Learning progressions must, in their view, not only describe how students learn (this is learning science); they must also be an interpretive aid in analyzing that information (this is formative practice) and a support for using the information to take action. We want the action a teacher takes to be the right action for a student. To accomplish this goal, districts and states need to plan policy supports, professional development, and exemplar tools.

  1. Tools to Help Teachers Collect Accurate Information About Student Learning

We need to provide additional methods of communicating the intent of the standards to teachers. Teachers need to see exemplar tasks tied to learning progressions so that they have supports for recognizing assignments that elicit the thinking of students in a particular stage of development. Sequences of sample tasks contextualized along a learning progression can help teachers visualize what developing, approaching, on-track, and advanced assignments in the standards look like. If we want teachers to differentiate for students, we need to provide instructional and assessment tasks that are technically sound, so that they elicit the right evidence teachers can use to locate students in the correct progression stage. Using tasks that are too easy, too difficult, or technically flawed means teachers will not be able to make the right instructional decisions for each student. Any decision we make on where to locate a student in instruction is high stakes for the student.

Tools to help teachers collect accurate information about student learning are best created when experts in learning science, assessment development, accommodations and accessibility, content, and curriculum come together. A team that includes teachers ensures we are eliciting the right evidence for instructional decision making. A team that includes accommodations and accessibility experts ensures we are not unintentionally creating barriers to accessing content or to students showing what they know.

What does a learning progression and task system look like? Here is one example.

2. Tools to Help Teachers Make Accurate Inferences About a Student’s Present Level of Performance

In addition to collecting evidence of student learning from purposefully created assignments, teachers need to make the right decision about where to place the student. They have to analyze and interpret the information collected. Existing research evidence suggests that because this is a time-consuming, complex task, teachers rarely analyze student work at the individual student level. Teachers have reported that they tend to analyze student learning at a holistic, classroom level, using average test scores as the primary data point (Hoover & Abrams, 2013; Schneider & Meyer, 2012), or they interpret a percent correct out of 100 on an assignment.

Information about the average child does not help a teacher diagnose gaps, confusions, or beauty in thinking for a single student (Schneider & Andrade, 2013). Such an approach has been shown to cause decreases in achievement over time (Schneider & Meyer, 2012). In addition, a percent out of 100 on an assignment does not tell you whether the student responded to an easy or a complex task, which is what differentiates a novice from an advanced student. Task characteristics are central to understanding where students are located along a continuum.

States or district leaders will likely want to organize the collection of student work exemplars from these tasks. They will want to provide short professional development training videos that can be accessed on the fly. Short videos can showcase how student responses are matched to a progression. Providing exemplar work does three things.

  • Exemplar student work aligned to learning progressions shows teachers authentically what student growth looks like.
  • Exemplar student work illuminates what it looks like when a child reaches the state or district’s definition of proficiency.
  • Exemplar student work helps teachers identify students in more novice or sophisticated states of development.

Teachers have to center their analysis on identifying what the child can do. This helps answer the question, “Where is the student currently?” To move students forward in their learning efficiently, you also have to know, “What does the progression of knowledge and skills look like for a child to reach the expectations for students by the end of the year?” It is not that teachers cannot do this. Many do. But couldn’t we make their job easier and faster?

Progression descriptors describe where students are in their learning and how students likely learn. Progression-based tasks show teachers what assignments look like that elicit student thinking representative of a stage of development. Exemplar student work of each stage shows us the evidence of what students can do when they are in a particular stage. Student work helps teachers recognize when one student is more advanced in their thinking than another. Together these tools can support a likely pathway to inform explicit instructional actions. Short professional development videos targeting next steps aligned to each stage would also likely be supportive. Why does each teacher need to determine this independently? There is power in a team of experts also supporting suggested next steps that teachers can use or not, as they determine what is best for their own students.

3. Tools for Triangulation of Evidence

Creating direct and explicit connections among learning progressions, progression-based tasks, and authentic examples of student work from each stage of development supports teachers in quickly analyzing student work. They match student work to a stage of development. A match is not a stopping point for decision making. It is a call to administer tasks from the next stage to discover whether another match can be made. This continues until a match to a stage of the progression cannot be made; that stage is where the student will need to begin instruction. The combination of the matches that represent more sophisticated levels of development, along with data from summative or interim assessments, provides the triangulation of evidence and, importantly, the validation to support understanding where a child is in their learning and what the child needs next.
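The matching routine described here is, in effect, a sequential search along the progression. Here is a minimal sketch, assuming a hypothetical `matches_stage` judgment that stands in for the teacher’s comparison of student work to a stage’s exemplars.

```python
def locate_student(stages, matches_stage):
    """Walk the progression from the earliest stage upward and return the
    first stage where the student's work cannot be matched to exemplar
    work; that stage is where instruction should begin."""
    for stage in stages:
        if not matches_stage(stage):
            return stage
    return None  # the student matched every stage of this progression

# Hypothetical example: a four-stage progression in which the student's
# work matches the first two stages, so instruction begins at "on-track".
stages = ["developing", "approaching", "on-track", "advanced"]
matched = {"developing", "approaching"}
print(locate_student(stages, lambda stage: stage in matched))  # on-track
```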

Interim data (or “summative” data, because data can be formative or summative based on how you use it) can suggest at the beginning of the year where the teacher might want to start checking along a progression. For example, for an advanced student, teachers might want to start checking skills in the middle-to-higher area of the progression, moving forward or backward to locate the child as needed. During the year, such data points can confirm or disconfirm the progress the teacher sees in the classroom. Triangulation should spur inquiry when different sources of evidence of student learning do not converge. For example, students can sometimes demonstrate more advanced skills before easier skills. This can be especially true for students with particular types of learning differences. It is important to foster the advanced thinking in such situations rather than gravitate to the novice stages.

4. Evidence-Management Systems Centered in Policy Decisions

Teachers ideally need a learning and evidence management system so that authentic student work evidence can be stored digitally at the teacher’s fingertips and used as a reference to analyze student work against the same progression across time. Ideally, such a system would also allow teachers to access examples from other grades quickly and easily. If a student cannot access the grade-level progressions and triangulation of evidence suggests the student is in an early novice stage of content understanding, should the teacher be able to find learning opportunities and progressions from the next lower adjacent grade? This is an important policy decision because it likely influences how a learning and evidence management system is configured, and it influences what the teacher feels he or she has the latitude to do.

States and districts might consider messaging that not all students may need instruction in just the grade-level standards, and not all students may need to be measured on just what has been taught. While learning progression tasks can be administered sequentially and embedded into the curriculum, their potential efficacy is diminished with such an approach. Teachers need to be able to diagnose where students are outside of a pacing guide if we want to ensure equity of both the opportunity to learn and the opportunity to grow. We want all students to have what they need.

  • Many teachers want prioritized standards that signal what is most important to monitor as children progress throughout the year coupled with a rough sequence of coverage.
  • Many teachers want a learning and evidence management system that helps them track and measure student growth over time, easily.
  • Many teachers want to know what proficiency looks like, authentically.
  • Many teachers want standardized exemplar tasks aligned to state standards that they can use, if they want to, to help understand what standards look like in action.

Can we make their job a touch easier?

References

Clements, D. H., & Sarama, J. (2004). Learning trajectories in mathematics education. Mathematical Thinking and Learning, 6(2), 81–89.

Furtak, E. M., Morrison, D., & Kroog, H. (2014). Investigating the link between learning progressions and classroom assessment. Science Education, 98(4), 640–673.

Heritage, M. (2008). Learning progressions: Supporting instruction and formative assessment. Washington, DC: CCSSO.

Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative classroom assessment? Educational Measurement: Issues and Practice, 28(3), 24–31.

Hoover, N. R., & Abrams, L. M. (2013). Teachers’ instructional use of summative student assessment data. Applied Measurement in Education, 26(3), 219–231.

Schneider, M. C., & Andrade, H. (2013). Teachers’ and administrators’ use of evidence of student learning to take action. Applied Measurement in Education, 26(3), 159–162.

Schneider, M. C., & Meyer, J. P. (2012). Investigating the efficacy of a professional development program in formative classroom assessment in middle school English language arts and mathematics. Journal of Multidisciplinary Evaluation, 8(17), 1–24.

Smith, C. L., Wiser, M., Anderson, C. W. & Krajcik, J. (2006). Implications of research on children’s learning for standards and assessment: A proposed learning progression for matter and the atomic molecular theory. Measurement: Interdisciplinary Research and Perspectives, 4(1&2), 1–98.

Mastery Progressions and Competency Determinations During COVID-19: A Local Lens

On March 30, 2020, the South Carolina Department of Education released its memorandum, “COVID-19 Grade Reporting Guidance.” The guidance, from my perspective, encourages districts and teachers to implement mastery learning by allowing students opportunities to revise and resubmit work to increase their grades.

Here is the key phrase I believe is central to the department’s implied expectation of formative practice.

  • Students should have an opportunity to demonstrate mastery to improve a course grade.

Here is the key phrase that I believe makes it important to think in progressions.

  • The semester grade should be composed of all third quarter grades, as well as those grades deemed appropriate by the district to assure competency or provide remediation.

In other words, districts and teachers are being asked to consider and have conversations centered in determining the most important, high-leverage skills students will need to be successful in the next grade. Setting up a structure to support these conversations includes

  1. a mindset that grades can encourage learning when they are changeable by a student who closes the learning gap from where they are now to where we want them to be (I write about that mindset here);
  2. deep conversations between adjacent grade-level teams, within and across buildings (Does the competency requirement mean the student should have sufficient knowledge, skills, and abilities to be ready for success in the next grade, or that the student should have sufficient knowledge, skills, and abilities to access the expectations of the next grade?);
  3. a process to determine which grades are appropriate to assure the policy definition of competency; and
  4. a shared, strong understanding of mastery learning concepts.

This structure and these conversations are not only critical in this moment. They will be essential to helping teachers and students be successful when students return to learning communities forever changed by this moment in history.

Opportunities to learn right now are staggeringly different depending on life circumstances. The opportunity to learn depends on access to the internet, devices, and tools that give students access to instruction. The opportunity to learn depends on having significant supports at home. Many children’s parents are working on the front lines of the health care industry or restocking our grocery stores, each with potentially limited time to support their child’s learning at home. Students will come back to their learning community (brick and mortar or virtual) next year with dramatic differences in readiness to learn their grade-level content. I believe thinking about the structure for mastery and competency for this year and next year is essential to help us move forward. We need a plan for how to move forward as a learning community that supports all students. We need to recognize how hard teachers are working and how much they care. We need to work together.

Digging into the Work Ahead

Defining progressions of mastery and a determination of competency across a set of high-leverage big ideas in the grade is critical. Now and next year, we must prioritize what is being taught to give kids the best chance to get back on track in their learning. We need to sequence not all standards but the most important, high-leverage standards. We need to focus on the standards most related to proficiency. Here is an example of a teacher already engaged in progressions and showing what demonstrations of learning look like through student work.

Mastery is the successful demonstration of a learning target, that is, mastering a single stage of a progression. Thus, in the example progression of student work, a student has mastered stage one when you are able to match the evidence in the student work to the progression example. Proficiency is the stage of the progression at which the student has integrated multiple, important standards; in this example, proficiency would be stage 4. Competency on the major work of the grade is a policy determination regarding whether the student has shown sufficient, developmentally appropriate evidence of learning across multiple progressions (e.g., reading and writing) to be ready for the next grade. Note the critical importance of policy and content in this discussion. The student may not be proficient in all progressions, or in any progression, but he or she may still be ready to move to the next grade with sufficient skills to access the content and catch up. Thus, competency in this context has a meaning specific to our current situation. The definition has to be set through informed educational policy.

How do we begin?

The following example is an adaptation of text from Chapter 2 of my book with Robert Johnson. I will also provide links to other examples, developed with support from NWEA and my colleagues there, that help show the work to be done. To begin, teachers might optimally come together and identify standards related to a high-leverage learning goal called a big idea. While this can be done by individual teachers, it is better when teachers collaborate and confirm their thinking with each other in grade-level groups and with teachers in the next higher adjacent grade. In our current situation, I suggest having lead teachers work with district staff as a team, or in grade-level teams within a school, to develop the big ideas and progressions. Then administrators can roll out a consistent process to guide everyone in applying and syncing their locally delivered assignments and assessments to the progressions. These progressions mean teachers still have control of their assignments and where they are in instruction. They also mean we have a common definition applied equally across a district or school.

Here is an example big idea: Grade 3 students will use digital sources and multiple texts to gather information and write their own informational text on a personal choice topic.

Notice when looking at the bulleted standards below that gathering information encompasses clusters of related standards.

The use of listening and reading standards is purposeful. It is also critical when students may not have access to all their needed accommodations or adult supports in our new virtual learning environment. Listening to content on an informational topic before reading text about that topic can build readiness for students to decode more advanced vocabulary. It also allows students who are in earlier stages of reading development to acquire sufficient content knowledge to engage in the coming writing task, which is the ultimate focus of this big idea. (I am working from the assumption that, as a lesson from COVID-19, we have to get devices and internet access to all kids somehow.)

Students use the content information when writing. Teachers use the student’s ability to write main ideas with supporting details, and the student’s organization of ideas, as evidence to draw a conclusion that the student is able to gather information and write an informational text. Thus, if we want to think about a progression, we have to sequence standards that build to our proficiency goal for the big idea. Here is the example sequence for the Grade 3 big idea.

  • “Determine the main ideas and supporting details of a text read aloud or information presented in diverse media and formats, including visually, quantitatively, and orally.”
  • “Determine the main idea of a text; recount the key details and explain how they support the main idea.”
  • “Use information gained from illustrations (e.g., maps, photographs) and the words in a text to demonstrate understanding of the text (e.g., where, when, why, and how key events occur).”
  • “Compare and contrast the most important points and key details presented in two texts on the same topic.”
  • “Describe the relationship between a series of historical events, scientific ideas or concepts, or steps in technical procedures in a text, using language that pertains to time, sequence, and cause/effect”
  • “Write informative/explanatory texts to examine a topic and convey ideas and information clearly.”
    • Introduce a topic and group related information together; include illustrations when useful to aiding comprehension.
    • Develop the topic with facts, definitions, and details.
    • Use linking words and phrases (e.g., also, another, and, more, but) to connect ideas within categories of information.
    • Provide a concluding statement or section.

What does this help us see?

Notice that when we sequence and connect the standards in this way, we also illuminate key differences in expectations of student thinking in earlier and more advanced stages of learning. You can begin to see a trajectory of instructional tasks that move students forward. You can see that students could choose topics of interest so the work is meaningful. You can see that some standards are precursors to others.

In our example, the first three standards become precursor skills to comparing and contrasting. These comparison skills are central to understanding and analyzing sequence and cause/effect in texts or about topics, which in turn influences the child’s ability to describe (and, more importantly, infer) relationships in a series of historical events, scientific ideas, or concepts. Finally, you can see a path that could include student choice on topics related to what students need to learn in science and social studies.

As students near the proficient stage of a progression, you can begin to see that they also have the readiness to integrate across content areas. As teachers, we can begin to see how we might become more efficient in compacting the curriculum that is the major work of the grade if we build strategic instructional and assessment tasks. The creation of progressions (also called pathways or trajectories by some) and associated rubrics that move students forward and allow students opportunities to revise and resubmit is critical. But it also takes SIGNIFICANT time and planning. We need to work together, help each other, and figure out a reasonable, fair process for this year and next.

When I do this work with teachers, I see teachers being brilliant in different, but equally effective, ways. Some groups of teachers like to copy and paste standards together electronically (I suggest working together in Google Docs, which makes this easy to do in a virtual conference call). Some like to literally print out and cut up state standards and put them together like a jigsaw puzzle.

  • What is critical is the organizing and documenting of the sequence that helps you think about how skills develop and accumulate across a year.
  • What is important is thinking about which standards (or sets of standards) may represent interpretations of proficiency. In each content area, identifying three to four big ideas and developing associated progression stages for them can cover many of the standards. Prioritize. We won’t hit everything.
  • What is essential is looking at assignments and student work from assignments, and matching them to the progression stages.  Why?

This is a way to determine which assignment grades are optimal and appropriate to assure competency or suggest remediation is needed.

The competency determination

The competency determination has two parts:

  1. First, decide which stage of each progression students should reach to be ready for the next grade, based on the district’s definition of competency.
  2. Second, look across the progressions and determine what is reasonable.

If a compensatory decision model is created, the stage at which a student is located on each progression could be combined in such a way that mastery of a higher stage on one progression balances end-of-year mastery of a lower stage on a different progression. Such a decision model acknowledges that students may master progressions at different rates or focus on one progression more than another in the current learning environment. We need to consider that some students will be working independently with little adult support. I hope the process I have described can be useful to you in refining and prioritizing what you really need students to know by the end of the year. You might also be thinking that a worksheet outlining the process would be useful; if so, I am happy to provide some electronic documents. This takes time, but school does not end until June. This structure and process can be refined over the summer to facilitate and expedite helping students catch up in the fall.
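To make the compensatory idea concrete, here is a minimal sketch. The progression names, stage numbers, and cut value are all hypothetical; the real definition of competency is the policy decision described above.

```python
def competent(stage_by_progression: dict, required_average: float = 2.0) -> bool:
    """Compensatory competency rule: each progression is scored by the
    highest stage the student has mastered, and a higher stage on one
    progression can offset a lower stage on another, because only the
    average across progressions must clear the cut."""
    stages = list(stage_by_progression.values())
    return sum(stages) / len(stages) >= required_average

# Hypothetical stages and cut: stage 3 in reading offsets stage 1 in writing.
print(competent({"reading": 3, "writing": 1}))  # True (average 2.0)
print(competent({"reading": 2, "writing": 1}))  # False (average 1.5)
```

A conjunctive alternative (requiring a minimum stage on every progression) would be stricter; the choice between the two is itself a policy decision.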

And on behalf of myself and my family, thank you educators for all you are doing.

Note: While I was employed from 2002-06 by the SCDOE, I do not represent the department. These thoughts are my personal interpretations of the guidance and my personal, professional suggestions.

We All Need Grace: The Case Against Zeros

If you ever want a lively educational debate, ask a teacher or administrator about his or her grading system for missing or late work. What is your policy? Based on survey data, teachers and principals frequently believe it is ethical and appropriate to give zeros for missing or late work (Green, Johnson, Kim, & Pope, 2007; Johnson, Green, Kim, & Pope, 2008). Often we believe that, by applying such a penalty, we are teaching students to be responsible citizens.

I understand; there are times that deadlines REALLY matter. In my world, applying for a conference presentation, applying for grant funding, and getting test scores back to teachers on time are all examples of non-negotiable deadlines! But what if applying a penalty in the classroom creates more problems than it solves?

The Penalty Problem for the Teacher

Johnson, Green, Kim, and Pope (2008) argued that educators who modify grades based on late or missing work do not accurately measure or communicate the student’s level of mastery of learning targets. In such a situation, we conflate a behavior (or an executive function deficit) with achievement. Giving the student a zero for late or missing work makes it impossible for the teacher to determine what the student independently knows, because there is no evidence of student performance on the learning target. When a zero is averaged into the class mean, or into the student’s mean, to make instructional adaptation decisions, the data we use for decision making is flawed. Flawed data hinders robust instructional decision making. Flawed data does not tell us how well the student is learning the state standards.
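Consider a hypothetical student with scores of 85, 90, and 88 on three completed assignments and a zero for one missing assignment. The average becomes (85 + 90 + 88 + 0) / 4, or about 66, a near-failing grade, even though every piece of actual evidence of learning sits near 88. The zero is not a measure of learning; it is a placeholder for a missing data point.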

While we should not center our focus solely on our own teacher effectiveness measures, we might consider that student achievement measures are a component of Student Learning Objective (SLO) systems. SLOs are centered in teachers providing opportunities for growth in student achievement. Allowing students to miss an assignment and take a zero means we are allowing them to miss mastering foundational precursor content and skills that need to be leveraged in upcoming units. If this is not true, then why require the work? Allowing students to miss an assignment does not help students grow.

The Penalty Problem for the Student

In talking with a group of fifth-grade teachers recently, I asked them why they had moved away from giving students zeros for missing assignments. Here is what they said:

“We want students to learn.”

“Some kids care about their grades and others don’t. They need to do the assignments.”

Research evidence supports both these teachers’ observations and their practice. High-performing students tend to focus on grades because they don’t want to fail. Often their grade confirms their own self-image of being “smart.” However, grades can have the opposite effect for students who struggle.

The use of penalty grading (which includes zeros for late or missing work) may contribute to students falling behind. Assessment is a learning opportunity as well as a measure of learning. Assessment helps students move information into their long-term memory for transfer. Students in earlier stages of learning, who are already frustrated and struggling, have almost no way to recover from a zero. It makes it easier to give up. These students (and all students) need models of perseverance. When we set high expectations and insist that students submit their assignments and learn, we show that all students are valued. So, what should we do with missing or late work? Here are some thoughts:

  1. Have a documented policy on the treatment of missing and late work to support clarity of expectations.
  2. Separate the behavior from the assessment of learning. Place a zero in the gradebook for the missing work, initially. Accept and grade the assignment until the non-negotiable deadline (e.g., grades are due) has been reached. Find a behavior modification tool that works for you and the student. For example, if your school uses lunch detention, consider giving lunch detention for missing your deadline. If the student has a parent who will withhold the “all powerful video game console,” send an email.
  3. Develop a checklist of all assignments to be graded at the beginning of the nine weeks. Make sure students have it in their binders. Hold a learning conference with students as needed to review what is missing and to support the student in getting it turned in.
  4. Allow students to resubmit assignments during a make-up window. Ask that each resubmission also include a reflection on what mistakes were made originally and how those mistakes have been corrected. If you feel students are abusing the make-up policy, you can always constrain resubmission to students who do not yet have a B or higher (because Bs or higher are essential to moving on).

It is Not Always Just Kids

No one is more focused on grades than teachers and administrators in their own graduate school courses, especially in a classroom assessment class! And guess what? Teachers and administrators turn in assignments late too. At the end of the day, we all need grace. You, me, and our students.

References

Green, S., Johnson, R., Kim, D., & Pope, N. (2007). Ethics in classroom assessment practices: Issues and attitudes. Teaching and Teacher Education, 23(7), 999–1011.

Johnson, R., Green, S., Kim, D., & Pope, N. (2008). Educational leaders’ perceptions about ethical assessment practices. The American Journal of Evaluation, 29(4), 520–530.

Rubric Descriptors that Support Student Growth: Focus on Feedback and Action

Rubrics, a common classroom and high-stakes tool to measure student learning, tend to describe desired performance qualities of student work on the right and deficits on the left. Oftentimes, rubric developers attempt to quantify student errors by counting or by using descriptors such as “numerous,” “frequent,” or “many.” Such descriptors often target the lowest-scoring students on the rubric. We often think about rubrics describing the low performers on the left-hand side and the high achievers on the right-hand side. What if we flipped our perspective?

Students whom we often classify as low achieving are typically in earlier stages of content understanding than their peers. The tools these students use to respond to grade-level content are often less sophisticated because they have not yet mastered earlier essential skills. If we recognize that there are common performance characteristics of students in earlier versus more advanced stages of learning on a topic, we have an opportunity to recognize what students are doing correctly, support their growth through additional practice opportunities that give them credit for rework, and activate them as learners rather than implicitly labeling them as lacking.

The potential for adverse impact

My proposition is that rubric language centered in what students cannot do harms both students and teachers. It harms students in more novice learning stages by messaging, “You can do almost nothing that is of value to me (the teacher).” This language likely, if unintentionally, shames and shuts down students in their learning. Often students in earlier stages of learning need support in persevering so they can master the content. They need more guided practice, and they need more success. Moving from “many” errors to “some” errors does not provide the student feedback on what to do next, other than to fail less. What if we described what succeeding more looks like?

Rubric language centered in a deficit model harms teachers. It misdirects our own cognitive task. When we look at student work, our first question always needs to be, “What does this work show the student CAN do correctly?” We need to compare the contexts in which a child is successful in the skill versus not. For example, when does the child use a period and when does she not? Often students in third grade are still in early stages of writing development. They may place periods after simple sentences, but when they write a compound or complex sentence, their sentence boundaries become undefined. This often happens as students begin to share more complex ideas. If we don’t look at when students are successful, we miss the why. We miss the next instructional steps: (a) encourage the complexity and (b) handle the sentence boundaries and other such issues at a later revision step.

Building an Actionable Pathway to Mastery

Rubrics should be an actionable feedback pathway from the teacher to the student towards a clearly defined learning goal. Such an approach describes where a student is in their learning and likely next actions the student should take to improve. If rubrics are used as an actionable pathway to mastery, they can become powerful tools of instruction. Describing a pathway to mastery centered in CAN, with an expectation of revision, creates opportunities for students to be successful in moving towards the learning target. We honor where the student is in his learning. We honor that students learn at different rates. We foster a student’s agency in her education by allowing her an opportunity to revise and advance.

Most students (novice and advanced alike) need opportunities to practice the habits and characteristics of work at the mastery level and beyond in order to internalize and automate those desired skills. Too often we frame and evaluate opportunities to learn from a classroom summative context, forgetting that practicing those harder skills is what fosters growth. Allowing students to revise for mastery removes shame from unsuccessful attempts, inspires perseverance, and fosters a culture in which making mistakes is part of learning. It also allows learning not to be a race at a pace set by the teacher. When we create a rubric based on a feedback model of what the student can do and likely next steps to move up the pathway, we do several things. First, the process of using such rubrics and examples of student work allows us to model the success criteria. Second, we help students become experts in their own learning. They may self-assess and observe the successive learning targets they must meet to get to mastery. Finally, we demonstrate for students how to be successful. Not only is this good formative assessment practice, it is also a hallmark of high-quality instruction.

Creation Steps

Consider this target: “Grade 6 band students will improvise an 8-measure duple rhythm pattern in the same tempo as the prompt on their instrument.” This is often done in a call and response context. What evidence do you need to hear to draw a conclusion that a student can do this?

1. Document the specific qualities of performance you want to hear. Avoid words such as “bad” or “good” to describe student work, and avoid counting-based descriptors such as “many,” “frequent,” and “sometimes.” In my example, the goal is to elicit a student-improvised, eight-beat rhythm pattern that is different from the prompt while maintaining the prompt context: the same tempo, meter, and types of rhythm patterns. I expect to hear students respond to the prompt on their instrument using any combination of sixteenth notes, eighth notes, quarter notes, half notes, and corresponding rests, because this has been the focus of instruction. I expect students to begin their response on the downbeat following the 8-measure prompt, and I expect the response to be a complete musical thought with a beginning, middle, and end.

2. Brainstorm the types of performances you expect to hear from students in earlier stages of learning.

When developing my pathway, I consider what students in earlier stages of rhythm development might do. They might:

• repeat the teacher-delivered phrase, some with consistent tempo and some with inconsistent tempo.

• perform a pattern with a tempo and meter that change so rapidly that I cannot establish a context for the performance and give back a response.

• not feel the phrase as having a beginning, middle, and end, and therefore play fewer or more than eight beats, which does not lead to a sense of finality.

• perform a phrase with a beginning, middle, and end and an established tempo at the start, but rush near the end.

3. Brainstorm the types of performances you expect to hear from students in more advanced stages of learning.

A more advanced student might perform a different eight-beat pattern at the same consistent tempo and meter as the prompt, have a clear beginning, middle, and end, and incorporate rhythms that are traditionally more difficult such as a dotted eighth-sixteenth note.

4. Sequence what you have brainstormed regarding what students CAN do from earlier stages of learning to more advanced.

As I sort and sequence my brainstorming, I often notice gaps in my descriptions. I develop other levels of performance, I edit, and I iterate. I also investigate theories of how students learn the particular concept and the corresponding research evidence. This practice informs how to sequence features of student growth. You want your pathway to mastery to include important waypoints along the continuum (scale) of learning, and you want to write the rubric to the students.

5. Informally try out your pathway to mastery rubric with your students during instruction, using a different instructional prompt that has characteristics similar to your assessment prompt. With music, you want to record student improvisations so your class can co-develop and refine the rubric with you. In writing or in mathematics, for example, responses can be captured digitally or on paper. Sharing student work (without students knowing whose) during this rubric refinement time allows students to analyze what they did correctly. It allows you to provide feedback to the class for each stage. For example, if some students are rushing the last few beats, would working with a metronome support them? Do they rush or slow down on sixteenth notes because they are working to coordinate their tonguing with what they hear in their heads? Would practicing tonguing sixteenth-note and eighth-note patterns support them? Should more advanced students be introduced to a more advanced rhythm? As you listen to performances together, explicitly match the stage of learning to your pathway. Brainstorm with students what the next action might be. This reinforces the notion that everyone in the class has more to learn. Ensure you add the feedback to your rubric.

Here is an example descriptor based on the stage where I identified that the student might not feel the phrase as having a beginning, middle, and end and therefore plays fewer or more than eight beats, which does not lead to a sense of finality.

Descriptor: You performed a different pattern than the prompt using a combination of sixteenth notes, eighth notes, quarter notes, and half notes. Your pattern had a beginning and a middle, but because you used fewer or more than eight beats, you want to listen to your recording and count how many beats your improvisation had. Practice having the rhythmic motion of your response lead to a sense of closure at the end of the eight beats. Listen to the call and response sample performances.

The connection to the literature on feedback

The pathway to mastery rubric that I am describing commingles the purposes of a rubric to measure student learning with practices of effective feedback. This approach provides students information on how to move towards mastering the learning target (Hattie and Timperley, 2007). It reinforces the criteria for success by documenting what students did correctly and provides suggested follow-up actions.

When you are ready to administer your assessment prompt, record the student response. Match the performance to the pathway. Return both the pathway to mastery rubric and the recording to the student. Students can connect the criteria and the processes to their own work, work to refine it, and return to try again. You, in the meantime, have the opportunity to move on in instruction if you would like. In my own classroom, I allowed additional attempts before school, after school, and on preset Fridays during the month. Such an approach fosters a culture that privileges self-regulation. Students can choose to bank their grade or work to master the learning target. The key is to be open to changing students’ grades as they move up the pathway to mastery rubric (and with an improvisation, you want their rhythm pattern to be different and improved upon follow-up).

Flipping our perspective and our creation of rubrics from a deficit model of assessment to a formative model of assessment helps us recognize students who are in earlier stages of content understanding. This in turn helps us understand why students need more opportunities to revise and what students likely need to learn next. If we want students to grow in their knowledge and skills, we have to honor what they know and can do. We have to give them multiple opportunities to meet the success criteria and recognize there is a continuum of readiness for tasks in our class. That continuum affects how long it may take a student to master the learning target. We may assess less under this model by focusing on big ideas, and we may instruct more because we are flipping assessment tasks into instructional opportunities. This process also means we need high-quality, engaging tasks for students. We also need to reconceptualize ourselves as coaches. This is a transition from more traditional grading practices (which I will address in my next blog) that allow students to be left behind to more modern ones that move students forward. I know. This is easier said than done, but isn’t the outcome worth it?