Featured

Synergizing Assessment with Learning Science to Support Accelerated Learning Recovery: Principled Assessment Design

A key aspect of formative assessment is that teachers collect and interpret samples of student work or analyze items to diagnose where students are in their learning. Busy teachers are faced with two choices. They can count the responses answered correctly by topic and move on, or they can stop, grab a cup of steaming hot coffee, and spend time analyzing the features of items and student work that are associated with different stages of student cognition in the content area. This latter approach takes a lot of time (and a lot of coffee).

If a state’s theory of learning regarding how students grow toward proficiency is not available or widely publicized, it is difficult for the teacher to align assessment tasks to reveal where a student is located on a continuum of proficiency and beyond. The teacher cannot predict where a student currently falls along that continuum. Therefore, instructional adaptations and judgments regarding whether a student has mastered standards become less certain.

How likely is it that two teachers who use the same curriculum materials and standards in different areas of a state create classroom assessments that investigate a continuum of proficiency in the same way? Even when teachers carefully use state standards and the same curriculum to guide teaching, instruction, and assessment, differences in levels of mastery judgments can and do occur. Standards and curricula are necessary and essential foundations to support student learning, but they are insufficient to support equitable learning opportunities. Learning opportunities are often determined on the ground by teachers who know students best, often based on their classroom assessment results. However, my 19 years of working with teachers, curriculum specialists, and professional item writers have taught me that different stakeholders have different interpretations of what the journey to proficiency looks like. These different perspectives are likely to drive different instructional decisions. If we want to accelerate student learning, we need a common evidence-based framework, used in all parts of the state’s educational ecosystem, to help us identify where students are in their learning throughout the year.


In my first blog of this series, I argued that our development and use of assessments across the educational ecosystem need to synergize practice with the learning sciences. In this blog, I discuss specifically how we can create instructionally sensitive and useful assessments. This design framework requires the use of evidence, both in the design of the assessments and in the analysis of item-level results against the design, to support the claim that the learning theory being described is reasonably true. This evidence-centered design branch is called principled assessment design using Range ALDs (RALDs).

Principled Assessment Design

Principled assessment design using RALDs has been proposed as an assessment development framework that centers a state’s stakeholders in first defining its theory of how student knowledge and skills grow within a year. Learning science is used where research evidence exists, along with the judgments of teachers, researchers, and item writers. During this process, stakeholders define the contexts, situations, and item types that best foster student learning and allow students to show their thinking as they grow in expertise. They develop companion documents that provide guidelines for how to assess. When items are field tested, additional analyses beyond what is standard in the measurement industry are conducted to check the alignment of items to the learning theory, and the state collects diverse teacher feedback on the utility of the score interpretations and companion documents for classroom use, all before the assessment ecosystem is used operationally. The vision for this framework is to align interpretations of growth in learning across the assessment ecosystem, using initial evidence from large-scale assessments and teacher feedback to support or revise materials as needed to ensure the system is fully aligned for all students.

Range ALDs

RALDs describe ranges of content- and thinking-skill difficulty; they are theories of how students grow in their development as they integrate standards to solve increasingly sophisticated problems or tasks in the content area. They are essentially cognition-based learning progressions. Why? States neither prescribe curriculum nor the order in which to sequence and teach content. RALDs are intended to frame how students grow in their cognition of problem solving within the content area while remaining agnostic to the areas of local educational control. They are critical for communicating how tasks can align to a standard and elicit evidence of a student’s stage of learning even when the thinking a student is showing does not yet represent proficiency. Thus, the student’s cognition still needs to grow, as shown in Figure 1.

Figure description: a learning progression shown in four cells labeled Levels 1-4. Above the progression, text reads: “Within a single standard can be ranges of cognition that can be defined from content features, depth of knowledge, and the context of the items.” Under the first two cells, an arrow moves from Level 1 to Level 2 with the note: “Students scoring in Level 1 can answer explicit, easier content in grade-level text (often explicit detail questions). To grow, they need to learn how to make low-level inferences, which often involves paraphrasing details in their own words to make their own connection.”
Fig. 1. Example of an RALD and interpretive guidance.

Range ALDs hold the potential to reduce the time it takes teachers to analyze the features of tasks that only require students to click a response, which have become pervasive in our instructional and assessment task ecosystem. Teachers can quickly match the features of tasks to the cognitive progression of skills to identify where in the continuum of cognition an item is intended to show student thinking. Teachers can then identify where students are in their thinking and analysis skills by matching the items a student answers correctly to the cognition stage. Even better, district-created or teacher-created common assessments could pre-match items to the continuum to save valuable time. Such an approach allows teachers, districts, and item writers to use the same stages of learning when creating measurement opportunities centered in how thinking in the content area becomes more complex. This supports personalizing opportunities to measure students at different stages of learning in the same class. For example, students in Stage 1 in text analysis need more opportunities to engage in inferencing rather than retrieving details from the text.
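To make the pre-matching idea concrete, here is a minimal sketch under invented assumptions: the item tags, the student responses, and the mastery threshold are all hypothetical illustrations, not from any state system.

```python
# Hypothetical sketch: locate a student's stage on a cognition continuum
# by matching scored responses to items pre-tagged with RALD levels.
# Item tags, responses, and the threshold are invented for illustration.

items = {  # item_id -> RALD level the item is written to elicit
    "q1": 1, "q2": 1, "q3": 2, "q4": 2, "q5": 3, "q6": 3, "q7": 4,
}

responses = {"q1": 1, "q2": 1, "q3": 1, "q4": 1, "q5": 1, "q6": 0, "q7": 0}

MASTERY = 0.67  # illustrative proportion-correct threshold per level

def estimated_stage(items, responses, threshold=MASTERY):
    """Highest consecutive RALD level at which the student meets threshold."""
    stage = 0
    for level in sorted(set(items.values())):
        ids = [i for i, lvl in items.items() if lvl == level]
        answered = [responses.get(i, 0) for i in ids]
        if sum(answered) / len(answered) >= threshold:
            stage = level
        else:
            break  # stop at the first level the student has not yet mastered
    return stage

print(estimated_stage(items, responses))  # -> 2: working toward Level 3 thinking
```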

The notion of adapting measurement and learning opportunities to the needs of a student is a principle of learning science. RALDs are intended to help teachers estimate the complexity of the tasks they are offering students and compare them with the range of complexity students will see at the end of the year on a summative assessment. Interestingly, such an approach aligns to the Office of Educational Technology’s definition of learning technology effectiveness: “Students who are self-regulating take on tasks at appropriate levels of challenges, practice to proficiency, develop deep understandings, and use their study time wisely” [23, p. 7]. While the teacher can target the tasks to the students, the teacher can also empower students to practice making inferences as they read texts independently and together in the classroom. Curriculum and instruction set the learning opportunities; the tasks are used to locate a child on their personal journey to proficiency.

How and when does a state collect validity evidence to support such an approach?

Scott et al. wrote that assessments support the collection of evidence to help construct and validate cognition-based learning progressions. Under the model I describe, the state is responsible for collecting evidence to support the claims that the progressions are reasonably true. The most efficient place to do this is through the large-scale assessments it designs, buys, and/or administers. This approach requires an evidence-based argument that the RALDs are the test score interpretations and that the items aligned or written to the progressions increase in difficulty along the test scale as the progressions suggest. Essentially, to improve our system we are using the assessment design process to

  • define the criteria for success for creating progressions at the beginning of the development process,
  • use learning science evidence where available, and
  • collect procedural and measurement evidence to empirically support or refine the progressions.

The first source of validity evidence is documenting, in the companion document that teachers, districts, and item developers use, the learning science evidence used to create the progressions. Such evidence sources often describe not only what makes features of problem solving difficult but also suggest conceptual supports to help students learn and grow. This type of documentation is important to support content validity claims centered in an assessment that does more than measure state standards. This is an assessment system designed to support teaching AND learning.

Item difficulty modeling (IDM) is a second way to collect empirical evidence. When conducting IDM studies, researchers identify task features that are expected to predict item difficulty along a test scale, and these are optimally intertwined in transparent ways during item development as shown in Figure 2. It is critically important to specify item types for progression stages because of research that suggests item types are important not only in modeling item difficulty but also in supporting student learning.
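As a rough illustration of what an IDM analysis can look like, here is a sketch with invented feature codings and difficulty values; the features, the items, and the numbers are hypothetical, not from any operational program.

```python
# Hypothetical IDM sketch: regress empirical item difficulties (e.g., IRT
# b-parameters) on coded task features to test whether the features the
# RALDs name actually predict difficulty. All data are invented.
import numpy as np

# Columns: DOK level, evidence dispersed across paragraphs (0/1),
# constructed response (0/1). One row per field-tested item.
features = np.array([
    [1, 0, 0],
    [2, 0, 0],
    [2, 1, 0],
    [3, 1, 0],
    [3, 1, 1],
    [4, 1, 1],
], dtype=float)

b_params = np.array([-1.6, -0.7, -0.2, 0.5, 1.1, 1.9])  # invented difficulties

X = np.column_stack([np.ones(len(features)), features])  # add intercept
coefs, *_ = np.linalg.lstsq(X, b_params, rcond=None)
predicted = X @ coefs
r_squared = 1 - np.sum((b_params - predicted) ** 2) / np.sum(
    (b_params - b_params.mean()) ** 2
)
print(f"feature weights: {coefs[1:].round(2)}, R^2 = {r_squared:.2f}")
# A high R^2 with weights in the hypothesized direction supports the claim
# that the named task features drive difficulty along the scale.
```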

Figure description: the same four-level learning progression with item specifications beneath each cell. Level 1 pairs with DOK 1 multiple-choice (MC) items; Level 2 with DOK 2 evidence-based selected response (EBSR) items; Level 3 with DOK 3 EBSR items; and Level 4 with DOK 4 constructed response (CR) items requiring multiple pieces of evidence.
Fig. 2. Example of an RALD integrated with item specifications to show which item types and levels of cognitive demand intersect with evidence statements, from Schneider, Chen, and Nichols (2021).

A third approach to validating progressions is to identify items along the test scale that do and do not match the learning theory, and then decide whether to remove or edit the non-conforming items or to edit the non-conforming progressions. The process I am describing is iterative. It can be done during the development and deployment of a large-scale assessment simply by rearranging many of the traditional assessment development processes into an order that follows the scientific process. I believe this process is less difficult than we think. We simply need to get in a room together with our cups of coffee and outline the process and evidence collection plan before beginning the adventure!

The conscious choice to unite our assessment ecosystem around a common learning theory framework with transparent specifications and learning science evidence is what makes assessment development principled (page 62). That is, task features, such as constructed response items, are strategically included by design in certain regions of the progression. Being transparent about such decisions and sharing the learning science evidence upon which decisions are based allows teachers to use assessment opportunities in the classroom as opportunities to support transfer of learning. Transfer of learning to new contexts and scenarios within a student’s culture is critical for supporting student growth across time. This, in turn, ensures that teachers and students are using the same compass and framing their interpretations of what proficiency represents in similar ways, which promotes equity. It also allows large-scale assessments to contribute to teaching and learning rather than being relegated solely to a program evaluation purpose. It is incumbent upon all of us to ensure that students get equal access to the degree of challenge the state desires for its students.
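To make the item-flagging step concrete, here is a hypothetical sketch with invented RALD level assignments and field-test difficulties. The rule encoded is deliberately simple: an item written to a higher level should not be easier than the hardest item of the level beneath it.

```python
# Hypothetical sketch of the third validity check: items written to higher
# RALD levels should land higher on the test scale. Flag items whose
# field-test difficulty falls below the hardest item of the level beneath
# them (an ordering violation). All values are invented for illustration.

item_level = {"a": 1, "b": 1, "c": 2, "d": 2, "e": 3, "f": 3, "g": 3}
item_difficulty = {"a": -1.4, "b": -1.1, "c": -0.2, "d": 0.1,
                   "e": 0.9, "f": 1.2, "g": -0.9}  # "g" looks too easy for Level 3

flags = []
levels = sorted(set(item_level.values()))
for prev, curr in zip(levels, levels[1:]):
    prev_max = max(d for i, d in item_difficulty.items() if item_level[i] == prev)
    flags += [i for i, lvl in item_level.items()
              if lvl == curr and item_difficulty[i] < prev_max]

print(flags)  # ['g'] -- review the item, or revise the progression it was written to
```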

The synergy between assessments and the learning sciences begins with the notion that assessments can be designed to support teaching and learning. We must have the goal of showing validity and efficacy evidence that such assessments are designed to, and actually do, support teachers. To produce such assessments, we use principled approaches to assessment design and work to improve the score interpretations. We collect the sources of evidence, now growing in use, that evaluate whether more nuanced score interpretations can be supported. We provide professional development for pre-service and in-service teachers on using the materials, and critically, we collect information regarding whether, if used accurately, RALDs help teachers analyze student thinking in the classroom. In the final blog of this series, I am going to explore embedding this framework into the Understanding by Design curriculum process. I have my coffee pot and creamer ready to go!


K-12 Accommodation Policies in a Digital Age: Is it Time for A Change?

In the space of two years, historical educational accommodation policies for K-12 students set at the state and federal level have become increasingly difficult to implement in practice. Why might the guidance in many states, which holds that the accommodations a student uses on a statewide assessment should be the same ones the student uses during typical classroom instruction and assessment, need to be amended? The answer is simple on the surface and complex underneath.


Since 2020, our nation has seen a sudden increase in the use of digital technology for district assessments and for classroom instruction and assessment. Digital technologies for classroom instruction and classroom assessment frequently are not required to provide the same accommodations, designated supports, or universal tools as many high-stakes testing programs. This places IEP and 504 teams in the tricky position of either (a) breaking guidance in order to remove barriers to the student’s ability to show what he, she, or they know and can do, or (b) strictly following guidance, which likely makes the student’s educational experience one of frustration. The latter approach can also provide inaccurate information about the student’s true abilities. Given that it is the beginning of the year, it is a great time to prepare for productive IEP/504 meetings!

Accommodations, Designated Supports, and Universal Tools

The Americans with Disabilities Act (ADA, 1990) defines an accommodation as, “…changes to the regular testing environment and auxiliary aids and services that allow individuals with disabilities to demonstrate their true aptitude or achievement level on standardized exams or other high-stakes tests.” While accommodations are often considered through a testing lens, students who use accommodations for testing also need those same auxiliary aids and services during instruction as they read, write, and listen and engage in classroom assignments to learn.

Designated supports, as defined by the National Center on Educational Outcomes (NCEO), are features that can be used by any student as determined by an educator or school-based team. Universal tools are digital supports that may be available to all students, such as expandable text and items, highlighters, zoom, and breaks. In many digital provider systems, there is often no difference between a universal tool and a designated support; therefore, tools such as a highlighter, strikethrough, closed captioning, or magnification are provided as standard features that all students can choose to use. However, providers often take inconsistent approaches to the accessibility options they offer, which makes standardizing accommodations and designated supports challenging in today’s digital classroom.

The User Experience

Consider a student with a print-based disability who needs text enlarged to access the content accurately and more efficiently. Prior to COVID, this student would typically request that the teacher enlarge the print on the worksheets and passages used during an assessment or instruction. The student’s large-scale assessment, administered on computer or in print, used 14-point font as a standard practice, and thus the student historically selected the printed option. District benchmark assessments administered on paper were enlarged. The 504 team could generally follow the guidance of a consistent approach for classroom instruction and classroom, district, and state assessments.

Here is what the student reports life is like two years later, coupled with measurement observations.

Student Reported Access Challenges and Workarounds

| Technology Used | What the Student Reports | What Is Being Measured | What the Student Says Works |
| --- | --- | --- | --- |
| Large print books or e-books | Large print books or e-books allow font to be increased and are primarily used at home. For books read in school, a second copy is purchased privately to meet these needs. | Text comprehension and text analysis | The student also downloads and listens to audiobooks for pleasure and prefers this approach. His mother frequently retypes assignments in Word, where the font can be enlarged, so the student can access the assignment for school projects. |
| Instructional website | Passages do not zoom. | Text analysis | The student copy-pastes passages into a Word document to enlarge and print, or into text-to-speech software, depending on how much time is available; additional time. |
| Teacher-created assignments | The teacher often uses 8-to-10-point font with a two-column format to present classroom assignments. | Mastery of standards while reading visually crowded material | The student has a friend read the assignment or delays work until the weekend, when his mother will retype it in Word; additional time. |
| District-purchased assessment software for classroom and district use | The zoom functionality is difficult to use. The text expands off the screen, and the automated answer document covers a large portion of the text. The student must scroll back and forth from sentence to sentence and loses his place. | Mastery of state standards, eye tracking | Print tests and enlarge; additional time. |
| District-purchased norm-referenced assessment | The zoom functionality is difficult to use. The text expands off the screen, and the student must scroll back and forth from sentence to sentence using a line reader to keep track of where he is. The student takes triple the time of the typical student to complete the assessment. No print option is available. | Content area understanding in core classes, eye tracking, and sustained attention | Invokes text-to-speech for mathematics and takes the reading test without accommodation. |
| State assessment | Computer administered with text that enlarges and expands along with the items. As the text enlarges it wraps, so the student reports accessing it efficiently. | Mastery of state standards | Additional time. |

In the scenario above, not only can historical accommodation guidance not be followed, but additional elements are also sometimes added to the construct for a student with a disability that are not present for a typical student. Without careful user testing with students, a universal support can create a new problem for a student rather than solving the problem the digital provider was seeking to fix. While this scenario describes experiences from a single student interview, students who are English language learners, who have hearing impairments, who are blind, and/or who have severe cognitive disabilities may all face similar or more daunting challenges.


What is an IEP/504 Team to Do?

Come to the IEP/504 meeting with the accommodation sections of test administration manuals bookmarked, log-ins to educational technology systems, and the student, when possible. Investigate each digital provider’s accessibility options with the student and then determine which accessibility option works best based on what is provided and the student’s self-reported needs. If the student is not able to communicate her needs, her teachers and parents must make the best decisions based on observations of what does and does not work. This is not always doable in a 30-minute meeting, so prework may be necessary. Parents, be an advocate for your child but also be sensitive to teacher time. The teacher likely has multiple students with IEPs and 504s. Note the use of text-to-speech in the scenario above. Text-to-speech is technically a change in what is being measured if the purpose of the task is to read. It should be used with care.
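For the prework, even a simple comparison of the supports a student needs against each provider’s feature list can surface gaps before the meeting. A minimal sketch, with invented feature lists; the real lists come from each provider’s manual or settings screens:

```python
# Hypothetical sketch for IEP/504 prework: compare the supports a student
# needs against what each provider actually offers. Feature lists below
# are invented; pull the real ones from each test administration manual.
student_needs = {"text enlargement", "text wrapping on zoom", "extra time"}

providers = {
    "instructional site": {"highlighter", "extra time"},
    "district benchmark": {"text enlargement", "highlighter", "extra time"},
    "state assessment": {"text enlargement", "text wrapping on zoom",
                         "extra time", "text-to-speech"},
}

for name, features in providers.items():
    gaps = student_needs - features  # supports the student needs but lacks here
    status = "OK" if not gaps else f"gaps: {sorted(gaps)}"
    print(f"{name}: {status}")
```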

What is a State and District to Do?

Review guidance from the NCEO, which documents the needs of students with disabilities and provides policy guidance to districts and states. Such policy guidance can apply equally to instructional providers and assessment providers. States and districts might consider conducting focus groups with students with disabilities before or after purchasing software to gain perspectives on what features work well for students. Students can tell you which systems have which accessibility tools, which tools they need that are not present, and what works and does not work. Specialists in working with these students can then teach students and teachers how to work around different tools. For example, if a digital tool has long passages, magnification tools that do not function well for the student, and no text-to-speech tool, sometimes it is just easier for the student to copy and paste a passage into a freely available text-to-speech reader. Is this ideal? No. But it is also not reasonable to expect a student to work in a format that requires him to take triple the amount of time to complete an assignment as his peers. In research interviews, college-educated adults with print-based disabilities described such time tradeoffs as a central tenet of the compensation strategies that led to their success.

What is a Digital Provider to Do?

Digital providers must balance enhancement requests against their ongoing roadmap and make tradeoffs. Designing products with accessibility in mind from the beginning is often less expensive than retrofitting. Investigate universal design. Products used for instruction often do not have the same legal requirements as a high-stakes test. Interview students, watch them user-test products, and have them rank what is most important. Reach out to adults in your workforce who have similar profiles for feedback, even if you need to use anonymous surveys. This can help product managers make decisions about tradeoffs. It can also help us remember that both teachers and students are the users of educational technology.

Final Thoughts

Because we live in a world that is moving rapidly and staying digital, we have the opportunity, over time, to be more accessible than ever. Recorded lectures with closed captioning that can be accessed as needed have the potential to support students when they are ready to learn and working to learn. Lectures should not be thought of as one and done. It is an exciting time! Is it time for our accessibility policies at the federal, state, district, and school level to become more nuanced? I think so.

Focus IEP/504 teams on how tools work for the child and perhaps soften the language of standardized accommodations across instruction and assessment. Provide teachers professional development on accessibility tools, what they can do to help, and accommodation profiles that are associated with different disabilities. Foster students as partners on their IEP/504 team so they are ready to self-advocate in college and their career. This blog represents the experience of a single interviewed student. I used text-to-speech to assist me in proof “reading.” I encourage advocates and adults with disabilities to share your experiences of what works and what does not in positive, proactive ways. Please feel free to reply to the blog! It is only through sharing that understanding grows and policies change. And if you are a parent of such a child, good luck with that IEP/504 team meeting. I hope this helps.

From Multiple Choice to the Beyond!

Case Studies in English Language Arts: Part 2

In the first blog of this series, I argued that when teachers use multiple-choice items as the predominant way of measuring student learning, it is difficult (to near impossible) to uncover what students are thinking. Many students enter and exit their grade in roughly the same relative stage of proficiency in the state standards. This is one reason that many accountability systems now privilege growth (gains made along a test scale) in addition to status (proficiency).

If we want to accelerate student learning, we need to provide students the opportunities they need to learn in order to grow. To do so, we must uncover the types of evidence these students use to draw conclusions. This helps us identify where students are in their development. We must then support them in practicing the thinking skills present in the next stage of development. We must think in learning progressions, use learning science, and provide the opportunities each student needs to master the desired learning target. In this blog, I argue that we need to engage students more frequently in writing to support their growth. I share textual evidence characteristics that are signals of where students are in their learning. By looking for these signals, you can determine what thinking skills and abilities students likely need to master next to become proficient in textual analysis.


Performance tasks help students grow

Performance tasks can be product or process based. They help students integrate knowledge, skills, and abilities across standards. In the classroom, they are often tools to ensure students are engaging in more complex levels of thinking and learning. When they are used consistently and increase in complexity, they give students multiple opportunities to learn and show you whether students are growing in their thinking skills across time. Studies with university students have shown that short-answer tests produced greater gains in students learning course material than multiple-choice tests. In an important study titled “Authentic Intellectual Work and Standardized Tests: Conflict or Coexistence,” researchers found that Grade 3, 6, and 8 students who had consistent opportunities to engage with more complex tasks—i.e., tasks that required writing, connected to the real world to support relevance and engagement, and focused on higher levels of cognitive complexity—performed higher on large-scale assessments than peers who did not have such opportunities. This suggests that writing frequently, removing recall (i.e., detail) questions from reading assessments and instructional activities once students demonstrate they can do this skill, and focusing on the location of the evidence students use to answer a question correctly all help you use your time and students’ time wisely.

Performance task considerations

Context is the condition under which a child can demonstrate a particular skill, such as making an inference. Context is the content-based nuance or scaffolding that affects the difficulty of the task. One research-based finding is that the location and clustering of the evidence in the text a student must use to draw a conclusion affect the difficulty of the inference. Thus, when you are creating a performance task to foster students’ abilities to make an inference, you may want to consider borrowing or adapting a stimulus, at least initially. Analyze where the evidence is in the text for each possible inference you want a student to make. Inferences range from simple to complex. What type of inferences can a student make, and using which types of evidence?

The good news is that rich texts often allow us to ask different inference questions that target students in different stages of development. Thus, you can create tasks for students that align with their specific needs, often from the same passage.

Here is an example of a text analysis progression for Grade 6 students synthesized from research. By looking at a passage, a question/task, and the evidence the student must use to answer the question correctly, you can align the task to a student’s current stage of development. That is, you are looking at the text-task interaction. To discover which student needs the text-task interaction (hereafter referred to as the task) you have identified, find the student who is in the next earlier stage of the progression.

| Student’s Current Stage of Development | Evidence Characteristics |
| --- | --- |
| The student working to access on-grade text analysis | Identifies explicit textual evidence to answer literal comprehension questions in which the evidence used is in a single sentence or two adjacent sentences. |
| The student working at the beginning stages of on-grade text analysis | Makes simple inferences that restate evidence the text states explicitly, often found within a paragraph. |
| The student approaching on-grade text analysis | Makes simple inferences that restate evidence the text states explicitly and can often cite a single piece of evidence within a paragraph in support. |
| The student engaging in early on-grade text analysis | Makes inferences that restate evidence from the text and can often cite more than one piece of evidence from adjacent paragraphs in support. Provides interpretations of the author’s purpose or of character traits and motives that are more obvious. |
| The student engaging in year-end on-grade text analysis | Makes sophisticated inferences by locating, citing, synthesizing, and interpreting relevant pieces of evidence that are not adjacent in the text to respond to tasks such as the development of a theme, changes in character traits, effects of setting on plot, and multiple meanings. |
| The student engaging in advanced on-grade text analysis | Makes complex inferences by locating, synthesizing, and interpreting multiple pieces of relevant evidence that are not adjacent in the text, and uses these skills to analyze how theme development, changes in character traits, setting and setting changes, and multiple meanings affect plot development or conflict. Uses complex inferencing skills to make comparisons or connections across texts of the same or a different genre. |

Text Analysis Learning Progression
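Where the evidence-location rules above are stable, they can even be encoded to pre-sort tasks. Here is a hypothetical sketch, my own illustration rather than a published rubric, that bands a text-task interaction by the spread of its supporting evidence:

```python
# Hypothetical sketch: band a text-task interaction by where its supporting
# evidence sits in the text, loosely following the progression above.
def inference_band(sentence_ids, paragraph_of):
    """Classify a task by the spread of its evidence.
    sentence_ids: indices of the sentences the answer rests on.
    paragraph_of: map from sentence index to paragraph index.
    Band names paraphrase the Grade 6 progression; cutoffs are illustrative."""
    paragraphs = sorted({paragraph_of[s] for s in sentence_ids})
    if max(sentence_ids) - min(sentence_ids) <= 1:
        return "explicit evidence: literal comprehension"
    if len(paragraphs) == 1:
        return "simple inference within a paragraph"
    if paragraphs[-1] - paragraphs[0] == 1:
        return "inference across adjacent paragraphs"
    return "sophisticated inference: separated evidence"

# Example: evidence in sentences 2 and 3 of paragraph 1 -> literal band
print(inference_band([2, 3], {2: 1, 3: 1}))
```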

Adapting Assignments

Samples of state assessments with released items can be a nice place to borrow passages for a performance task to probe student thinking in your classroom, especially for older elementary and middle school students. You can also make the same adaptations with short stories in the literature textbook you use. The passages will typically align to the state’s text complexity expectations, and the passages will be sufficiently rich to address and target multiple standards at different levels of difficulty. This released Grade 6 paired passage (found on pages 2–5) is an excellent text source for understanding where students are in their thinking. In this context, we are using the passages only.

We want to give students time to complete such a task. In earlier parts of the year, having students develop an essay over a week as homework is a nice option. Students need to engage in essay writing monthly or more across years to become fluent in the writing and analysis process. Students might be asked to annotate the text first. This provides the teacher an opportunity to quickly discover what each student independently notices. For students who are in earlier stages of reading comprehension, make the task easier. Ask them to annotate the first passage and then write a paragraph regarding what they infer to be the theme, along with the corresponding evidence they used to develop that inference. Make the task a bit harder for other students. Ask them to write about how the author develops the theme in the first text. In this way, all students are getting access to on-grade text; however, the task each student is given is targeted to what the student needs to do next to grow. By looking at the evidence they use in their writing, you can tell if students are moving forward.

Another approach is to sequence the tasks. Optimally you provide feedback about what the student needs to do next. If there are two pieces of evidence to support an answer, can the student find one or the other? Once the student has successfully responded to the first task of identifying the theme and written a paragraph with sufficient evidence, the student can then be given the task of tracing the author’s development of the theme along with that corresponding body of evidence. The student should be encouraged to revise as needed to meet your expectations.

The more complex version of such a task sequence is below. In this situation, the teacher purposefully chose to ask the students to focus on annotating elements of narrative text because the measurement goal was to elicit inferencing and analysis skills rather than lower-level skills such as restating explicit information. The teacher wanted to check what the students annotated before they began writing. If a student was not able to pick up the key elements of narrative text the author used to set up the conflict and resolution in each text, the student would not be able to engage in the writing task sufficiently to make it meaningful. Thus, the teacher is evaluating whether the student’s analysis skills are sufficient before increasing the cognitive load by asking the student to share evidence, inferences, and analysis while organizing thoughts in writing.

Here is the task that was given to students.

The writers of “The Cave” and “The Climb” have shared themes in their texts.

Step 1: Perform a close read of each passage. As you identify an element of narrative text that we have studied, highlight the element. Each element you find should be highlighted in a different color and labeled along with your inference. Share your annotations with me before moving to the next step.

Step 2: Write an essay analyzing how the authors each develop their theme. Compare similarities or differences in how the theme is developed. Use evidence from the texts and your learning progression to develop your essay.

Here is a sample student response from this assignment. Compare what the student wrote to the original multiple-choice questions (pp. 6–11). Do you think this student would have answered most of the multiple-choice questions correctly? Why? Where does the student fall along the text analysis learning progression? What additional information does the teacher have now to guide instruction that he would not have had using just multiple-choice questions? Notice that at no point have we discussed grading the students. This is an opportunity to learn. Students who are challenged at the right level of complexity, so they can see their own development, are more likely to engage in and focus on their own learning. The goal is to help each child find success, and we are focused on the formative process.

Based on this student’s response, you might ask the student and her peers in a similar stage to develop an essay in which they compare how each author develops the parent perspective found in “Mother to Son” by Langston Hughes and “Roll of Thunder, Hear My Cry” by Mildred Taylor, using evidence from both texts. Such a task integrates the three standards below (in addition to writing standards) and increases in complexity. Perhaps the group of students compare their perspectives and writing. Can the students, as a group, find the key pieces of evidence that you, the teacher, would use? Often writing helps students think and make connections they would not otherwise. After comparing their work with one another, they likely have new information to add to their own analysis. This book is selected with the assumption that it is the focus of a novel study around the same time, and the poem is a “cold” read.

CCSS.ELA-LITERACY.RL.6.9
Compare and contrast texts in different forms or genres (e.g., stories and poems; historical novels and fantasy stories) in terms of their approaches to similar themes and topics.

CCSS.ELA-LITERACY.RL.6.6
Explain how an author develops the point of view of the narrator or speaker in a text.

CCSS.ELA-LITERACY.RL.6.4
Determine the meaning of words and phrases as they are used in a text, including figurative and connotative meanings; analyze the impact of a specific word choice on meaning and tone.

Given that this book and poem are frequently studied in middle school classrooms, how might you take these two texts and adapt tasks to support students in different stages of reading development? Engaging the student with a task from a novel that is studied is a precursor to asking students to perform a parallel task in a novel of their choice as a project.

Who are the students needing to grow?

When you see students increase scores along the district’s interim assessment scale while their year-end predicted achievement level does not change, when their scores decrease, or when their achievement stays “flat,” we want to pause and reflect. Take the time to identify what evidence students are using to support their thinking about what they read.

Students across all levels of achievement can find themselves stuck and not growing. Perhaps in your probing you find that students are able to retrieve explicit details from the text, but they are unable to make a low-level or simple inference. These students need the next stage of text-task interaction opportunities, in which we invite them to make a simple inference that uses evidence found within a single sentence or single paragraph. Perhaps you have students who are able to make simple inferences; for them to grow, you need to locate text-task interaction opportunities in which you invite them to make an inference that relies upon drawing evidence across two or more adjacent paragraphs, and you ask them to articulate (cite) how they know. Perhaps you have students for whom you need to locate text-task interaction opportunities in which you invite them to make inferences that rely upon drawing evidence across separated paragraphs, and you ask them to write a paragraph or an essay describing how they know.

Writing takes students more time because it is more cognitively demanding. It also helps students transfer learning to new situations. (For students who need support getting their ideas on paper, using speech-to-text dictation software as a beginning step can be helpful.) By focusing students on the types of tasks they need for growth, we accomplish more. This allows you and your students to use time effectively. Students in more novice stages of learning are not left perpetually drowning. Students in more advanced stages of learning are not left perpetually with busy work that does not encourage them to analyze and critique.

Need to liven things up? Allow students to choose an essay, presentation, or a podcast when you are ready to formally assess them, but use the same rubric. Once students can compare and contrast evidence from different sections within a text, they also have the ability to begin to compare, contrast, and evaluate evidence across multiple texts to engage in more sophisticated, difficult tasks. You might still use multiple-choice items in your classroom on rare occasions because they are fast and efficient for you, but it is my deepest hope you move them from multiple choice to the beyond!

To Multiple Choice and Beyond!

Case Studies in English Language Arts: Part 1

As a teacher, how do you know if your students are on track to meet the expectations for year-end performance in the state standards? During the year, we may see that many students spend large amounts of time in a similar stage of learning whereas other students grow. When you see one of these three scenarios – (a) students increase scores along the district’s interim assessment scale, but their year-end predicted achievement level does not change; (b) students’ scores decrease; or (c) students stay the same across time – it is time to dig in. In such a situation, you have spent three to four months teaching new topics without a corresponding substantive change in student cognition in the content area. Thus, we need to investigate where students are functioning in their thinking.

Students who are growing to become proficient in their state standards are demonstrating sufficient mastery of a state’s academic content standards at particular levels of difficulty, integrated with related standards, at particular levels of cognition (higher-level thinking skills). In reading, where the goal across years is to transition students from learning to read, to literal comprehension, to text analysis, it is possible to take students through new content at lower levels of cognition. When we see test data that essentially suggest students are not growing much, if at all, across time, we have to ask the question, “Why is the child stuck?” Once we have this figured out, we can determine next instructional steps.

How are you having students show their thinking? For students who remain in the same stage of learning in the standards, we need to move quickly and shake things up to support their growth. One fast approach is to use multiple-choice items less and use questioning, constructed response, and extended response items more! It is often expedient to ask students, “How do you know?” when you are on a fact-finding mission. Allow me to elaborate to demonstrate this idea more concretely.

Text Complexity

Perhaps you have run across the book The Emerald Atlas by John Stephens. This text is coded at a Lexile level of 720. States often use the ubiquitous Lexile Framework® for Reading offered by MetaMetrics, or other metrics, to help teachers track whether they are using grade-appropriate texts. They also use it to help ensure ELA assessments differentiate text complexity across grade levels. This novel falls in the moderate text complexity range for Grade 3. A partner tool for the quantitative evaluation of text complexity is shown here. The qualitative evaluation of texts helps teachers, curriculum designers, and assessment developers think about the characteristics of text sections that make interpreting text more or less complex for students.
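Lexile measures are proprietary, but if you want a quick, freely available proxy for quantitative text complexity, readability formulas can be computed with the textstat package. This is only a rough sketch; a Flesch-Kincaid grade is not interchangeable with a Lexile measure, only directionally similar.

```python
# Rough proxy for quantitative text complexity (not a Lexile measure).
# Requires: pip install textstat
import textstat

excerpt = (
    "Gabriel's band had entered through the dark northern end of the city. "
    "Two Screechers standing sentry had been felled by arrows, another by "
    "Gabriel's falchion."
)

# Flesch-Kincaid approximates a U.S. grade level from sentence length
# and syllable counts; longer sentences and words push the estimate up.
print(textstat.flesch_kincaid_grade(excerpt))
```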

Foundational reading skills are needed to support growth in reading comprehension. However, with state assessments, by Grade 3 the expectation is that students have transitioned from learning to read to reading to learn. While it would not be unusual to ask a Grade 3 student to read aloud to check foundational reading skills, it can also be a good idea, regardless of grade, to listen to students who are not growing as expected read out loud (privately as students increase in age). It is a way to investigate decoding skills, and most critically, it allows you an opportunity to probe what students are thinking on the fly as they read.

Cognition

Here, a child who is reading The Emerald Atlas comes to the following two sentences of a paragraph.

“Gabriel’s band had entered through the dark northern end of the city. Two Screechers standing sentry had been felled by arrows, another by Gabriel’s falchion.”

There are two interesting words in these sentences that a teacher could choose to ask the student about to probe the student’s literal understanding of the text: sentry and falchion. Asking students about vocabulary words in context is a component of almost every state’s standards. For example, in the Common Core Standards we see

CCSS.ELA-LITERACY.L.3.4.A
Use sentence-level context as a clue to the meaning of a word or phrase.

CCSS.ELA-LITERACY.L.4.4.A
Use context (e.g., definitions, examples, or restatements in text) as a clue to the meaning of a word or phrase.

Given that we have two word choices, which path should a teacher choose?

Valencia, Wixson, and Pearson posited that reading is found at the intersection of the text and task. They noted in their research that we understand reading comprehension development by focusing on the task the student is asked to perform while at the same time noticing the amount and location of the textual evidence, the degree of abstractness in textual evidence needed to perform the task, and the reader. That is, we cannot look at these elements in isolation.

A student in earlier stages of reading comprehension development might be asked, “Why are the bad guys standing in the dark?” to determine if the student can infer the meaning of sentry. The student who makes the inference is doing so using evidence located in two consecutive sentences within a single paragraph. This type of interaction makes the inference easier. A student who is in a more advanced stage of reading comprehension might be asked, “What is a falchion?” Thus, in the same paragraph and in nearly the same reading moment, we have opportunities that can lead us to different evidence trails within the text that help us understand what and how a student is thinking while reading. This is why Valencia et al. (2014) do not support interpreting the text complexity of the entire passage as the main driver of when a student can demonstrate mastery of a standard. In this real-life example, the latter path was chosen.

As it turns out, the Grade 3 child was not really sure what a falchion was, though there are hints in the entire paragraph. The Screechers were felled (the child must infer killed). The men were armored but moving silently. If the men were moving silently, they likely needed to kill the guards standing sentry (lookout) quietly. What besides arrows could be used to quietly kill guards on lookout? Perhaps the child who can answer the question correctly at this point could do so using other connections. We don’t know how to interpret what a student understands until we investigate the chains of evidence available and those the student actually used to respond to the task correctly.

Two pages later the word appears again!

“There was a soft twang beside her as Gabriel released his arrow…” “There was a volley of rifle fire, the thick swoof of a dozen arrows taking flight, the broken thudding as they found their targets, and all was chaos and shouting. Dropping his bow, Gabriel pulled the falchion off his back, gave a great, bellowing cry, and leapt through the gap in the wall.”

The child looks up and says, “A falchion is a broad sword kind of a thing.”

“How do you know?”  

“He pulls it off his back.”

“Why not a rifle? It could have been on his back too. How did you know?” the teacher probes.

“Well, first the name; it sounds weird. Also, when they [the main characters] were going through the maze [9 chapters earlier] he [Gabriel] unwrapped that bag, and it had that weapon that looked like a big machete.”

In this response, the student is citing evidence from Chapter 10 of the book, and he is connecting it to separated evidence much later in the text, in which the character pulls the weapon off his back. It is how the student came to the correct answer, and where the evidence is located within the text, that shows us the sophistication of the student’s thinking. This aligns to the “Integrate/Interpret” dimension of the NAEP Reading Framework complexity model because the student is making an inference by making connections across situations. It is very hard to glean from a multiple-choice item where the evidence a student used to make an inference is located. For this student, the teacher decided to begin moving the student into the text-dependent analysis process.


Are you over-relying on multiple-choice items?

While multiple-choice questions are often used on large-scale assessments because they are fast and efficient, many states have moved to incorporating a variety of item types, including technology-enhanced items and short constructed response items in addition to multiple-choice items, to better measure more complex thinking skills. Constructed response items have been found to be significantly more difficult than multiple-choice items. Technology-enhanced items have been found to elicit similar levels of cognition as short constructed response items. Large-scale assessments can play an important role in our assessment ecosystem, but remember that as a teacher you have other assessment types in your toolbox.

If you are spending much of your instructional and assessment cycles focused on measuring student learning through a singular format in the classroom, then there is a limit to the flexibility of thinking you elicit from students. Students may not have the opportunities they need to have sufficient practice with higher level thinking skill interactions with the texts that they read. In the classroom it becomes almost impossible to give students these critical growth opportunities without using performance tasks. It also makes it quite difficult for you to understand where students are in their abilities to gather evidence from texts.

In the next installment of this blog, I will show how to adapt assignments and share a sample of student work from such an adaptation, so you can see that it can be easy and fast to collect more robust evidence types that support older students. These are tasks that are meant for formative processes, where the focus is supporting students first in analysis and then in documenting their work in an essay. Just like Buzz Lightyear, students will hit bumps and snags along their learning journey, but with additional supports and scaffolding, we continue their growth!

Why We Should use SLOs to Support Student Learning Recovery

Student learning objectives (SLOs) are intended to be a teacher-centered reflection process about supporting student learning over the course of a year or the duration of a course. This is a particularly important process as we work to recover from and persist through continued learning disruptions resulting from the pandemic. Many teachers are beginning this year still exhausted from the last, and SLOs can feel like a bureaucratic tool rather than a self-created formative framework that powerfully supports teaching and learning. How can we reframe this thinking?


SLOs should naturally be a part of what teachers already do. Many schools collect baseline information about what students know and can do at the beginning of their learning for the year. Baseline information arrives in the form of large-scale assessment results, interim assessments, pre-tests targeting the SLO learning goal, or, optimally, multiple measures that give a holistic view of student achievement as well as contextual features of the student as a learner (i.e., IEP, LEP, or 504 considerations). The purpose of the baseline data is to help a teacher discover what a student can do at the beginning of the year. Building a profile of where the student currently is becomes the starting point for next instructional steps that are optimized for the student.

SLOs are about personalizing instruction to students

Investigate this second grade teacher’s interim reading data.

Figure 1: Reading scores from a Grade 2 classroom, showing students along a test scale from a scale score of 150 to 200.

While we seldom think about score interpretations of data in this way, and assessments do not often give us information in this context, the reality is that students come to us in very different stages of mastery in the standards. Some students need to master critical standards from earlier grades. Other students are likely ready to learn standards from the next adjacent grade. The grade in which a student is enrolled, at times, tells us more about what a teacher is expected to teach than what students are ready to learn. State standards are a framework regarding what students should master each year to be on track to be college and career ready. However, as state assessment results demonstrate, not all students exit each grade in the proficient and advanced achievement level. Therefore, we can expect that students enter our classroom in different stages of learning, and as a result, they have different needs. Students who are represented in red with labels denoting they are likely functioning in first grade standards need more intensive opportunities to master precursor standards in first grade in addition to mastering second grade standards. Students in green likely need curriculum compacting. A single unitary pace of instruction will not serve these students equally well if our goal is to grow each and every student in our classroom.  SLOs can help us slow down and think about a plan for differentiation.
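To make the differentiation triage concrete, here is a minimal sketch under invented assumptions: the scores and the cut scores are made up for illustration, and a real plan would use the assessment’s published norms or the state’s cuts.

```python
# Hypothetical sketch of the triage behind Figure 1: band students by
# baseline scale score so differentiation can be planned. Scores and cut
# scores are invented; use the assessment's published cuts in practice.
scores = [151, 154, 156, 158, 161, 163, 170, 176, 181, 187, 192, 195, 199]

ON_GRADE_LOW, ON_GRADE_HIGH = 165, 190  # illustrative cuts on the 150-200 scale

precursor = [s for s in scores if s < ON_GRADE_LOW]    # needs Grade 1 precursors
on_pace = [s for s in scores if ON_GRADE_LOW <= s <= ON_GRADE_HIGH]
compacting = [s for s in scores if s > ON_GRADE_HIGH]  # ready for compacting

print(len(precursor), len(on_pace), len(compacting))  # 6 4 3
```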

Visualizing Differentiation at a Mile High Level

As we reflect upon what it means to increase student knowledge and skills by differentiating to support students, the teacher must visualize what differentiation will look like. Students who are reading like a typical beginning Grade 1 student may need instruction that focuses on sound-symbol relationships, decoding, and reading fluency to support reading comprehension. At this stage of reading development, we focus reading comprehension probes on the student’s ability to identify “textually explicit information” in the form of “definitions, facts, or details” (the quotes show where I am pulling and adapting from the NAEP 2019 Reading Framework). These types of probes align to the Common Core standard “Ask and answer such questions as who, what, where, when, why, and how to demonstrate understanding of key details in a text.” Students who are reading like an advanced Grade 3 student need instructional tasks that ask them to make “complex inferences.” Complex inferences are inferences that, in the early grades, require using evidence from multiple sections throughout a single text. For these advanced readers, it is imperative that they be moved beyond responding to literal, surface-level comprehension questions that ask them to identify textually explicit information, make simple inferences based on supporting details within a single paragraph, or recount a story. These students already have this ability with grade-appropriate text.

What does this mean for the four students in the middle of the classroom distribution? These are the students for whom a typical instructional pace supports student growth. Instruction geared to the average expectations will meet these students’ needs. But for the other nine students, the typical instructional pace will be too difficult or too easy, and their learning will not be optimized unless time is set aside to personally support each child in their current stage of reading development. Thus, the teacher may create the following SLO learning goal: Grade 2 students will read grade-appropriate texts with fluency and accuracy, and they will demonstrate comprehension by describing the main character’s perspective regarding the story problem using details from across different sections of the text.

SLO Learning Goals are Big Ideas

Almost all SLO models use baseline data as an input into the development of the SLO Learning Goal, which is called the SLO Objective Statement in South Carolina. The National Center for the Improvement of Educational Assessment (NCIEA, 2013) defined the SLO learning goal as “a description of the enduring understandings that students will have at the end of the course or the grade based upon the intended standards and curriculum.” Marion and Buckley (2016) posited that the SLO Learning Goal should be based upon high-leverage knowledge and skills, often referred to as a “big idea” of the discipline, and this big idea should integrate several key content standards. Riccomini et al. (2009) wrote that big ideas should form the conceptual foundation for instruction. They are teacher-prioritized concepts students should understand because they form the point of departure for students to connect current and future learning with previous learning. For example, Riccomini et al. noted that fractions are precursor skills to ratio and proportional reasoning, but these concepts are often taught discretely, so students do not see and use these connections in their reasoning as they solve real-world multistep problems. The big ideas that we frame for an SLO learning goal can also be considered a single grade-level competency under a competency-based framework.

SLO Learning Goal Criteria

As you create your SLO Learning Goal, compare it to the following criteria. The SLO Learning Goal

  • is measurable; it includes explicit, action verbs.
  • requires that students engage in deep demonstrations of their thinking in the content area.
  • contains the key content competencies a student should demonstrate by the end of the year.
  • connects and integrates multiple critical standards that are central to the discipline, and you have documented evidence to support this claim.
  • will elicit student reasoning at or above the cognitive demand levels denoted by single state standards in isolation.
  • is of a grain size appropriate to the amount of instructional time you have with the students.

Analyzing the SLO Learning Goal

Content Competencies

The SLO Learning Goal can be broken down further into measurable content competencies. Reading with fluency and accuracy is typically measured through assessments that capture how many words a student reads per minute and what percent of the words read in a minute were accurate. However, a critical component of accuracy is whether the student has the foundational skills to decode unfamiliar words. While fluency needs to be at a rate at which a student can remember and restate what was read, the accuracy component likely needs to be the more heavily weighted instructional decision maker of the two, with careful attention to the student’s decoding ability. The next content competency denoted in the SLO is that students will demonstrate comprehension by describing a main character’s perspective regarding the story problem.
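
To make these measures concrete, here is a minimal sketch of how words correct per minute (WCPM) and percent accuracy might be computed from a one-minute oral reading record. The function name, the example numbers, and the 95% accuracy threshold are illustrative assumptions, not a prescribed scoring rule.

    # Minimal sketch: oral reading fluency and accuracy from a one-minute
    # reading record. Numbers and threshold are invented for illustration.
    def fluency_and_accuracy(words_attempted: int, errors: int) -> tuple[int, float]:
        """Return words correct per minute (WCPM) and percent accuracy."""
        words_correct = words_attempted - errors
        accuracy = 100 * words_correct / words_attempted
        return words_correct, accuracy

    # Example: a student attempts 62 words in one minute with 4 errors.
    wcpm, accuracy = fluency_and_accuracy(62, 4)
    print(f"WCPM: {wcpm}, accuracy: {accuracy:.1f}%")  # WCPM: 58, accuracy: 93.5%

    # Per the weighting above, low accuracy flags a decoding check even
    # when the rate looks adequate (the threshold is a hypothetical example).
    if accuracy < 95:
        print("Probe decoding of unfamiliar words before focusing on rate.")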

Measurable Verbs and Cognitive Complexity

The verb describe is measurable: students may provide the information to the teacher verbally or in writing. Because writing skills are slower to develop than reading, and because the measurement goal is comprehension of what was read, the goal is to allow either mode of showing comprehension. That is, if a child is not yet able to demonstrate the complexity of his thoughts in writing, equal credit is afforded to sharing that complexity verbally.

With either mode of demonstration (verbal or written), the SLO goal is intended to elicit evidence that the student can support her answer using details from the text. In a cognitive framework developed by Norm Webb, this would be a Depth of Knowledge 3 or 4 goal (depending upon the text type), as it engages the student not only in drawing a conclusion but also supporting the conclusion with multiple pieces of evidence from the text.

Context

The SLO Learning Goal also provides context that helps us visualize what a full culmination of the learning goal looks like. The context provided is that the student is expected to use details (evidence) from across different sections of the text. This is context because it describes the conditions under which a child can provide a character’s point of view, and it denotes the most advanced state of demonstrating the learning goal. An easier, and arguably proficient, demonstration of this same skill for a Grade 2 student would be to provide a character’s perspective on how to solve a problem using a detail or details from within a single section of text.

The SLO Learning Goal requires that students engage in a deep, authentic demonstration of their thinking. However, the context in which a student can show this skill allows you to see where the child is in their development of reading comprehension. As a teacher considers how to teach and measure this SLO Learning Goal, he or she must consider which texts elicit the necessary target evidence, both for instruction and for assessment. What Grade 2 appropriate texts show a character’s thinking or point of view on how to solve a problem across the text? The Magic Tree House series immediately comes to mind. The SLO Learning Goal connects and integrates multiple critical standards that are central to the discipline, will elicit student reasoning at or above the cognitive demand levels denoted by single state standards in isolation, and represents enduring understandings that will support students in the next grade.

There are multiple paths forward to developing SLO Learning Goals based on the same baseline data. The purpose of the SLO is to assist teachers in planning ahead: recognizing the need for differentiation and developing a general plan for differentiating instruction for students in different stages of learning. This is especially critical in supporting students through ongoing learning disruptions. This blog should not be taken to mean that a teacher cannot, or should never, teach all students at the same time. What we are doing in the SLO process is carefully creating a scaffolding plan, which is explicitly described in this Grade 2 Common Core standard: “By the end of the year, [students will] read and comprehend literature, including stories and poetry, in the grades 2-3 text complexity band proficiently, with scaffolding as needed at the high end of the range.”

Let’s stop thinking of SLOs as a bureaucratic tool. Let’s think of SLOs as they are intended to be: a formative framework to support teaching and learning.

If you would like an SLO Learning Goal Planner, click here.

If you would like to see a different SLO Learning Goal for Grade 2 students, a fifteen-minute overview with teacher-created examples is shown here: https://vimeo.com/151169470.

Synergizing Assessment with Learning Science to Support Accelerated Learning Recovery: Understanding by Design

In this third and culminating blog on the topic of synergizing assessment with learning science, I advocate that we unify our educational ecosystem through a common theory of learning to ensure we accelerate, recover, and personalize learning opportunities for each student. To accomplish this vision of what public education can and should look like, we can consider working at the intersections of design-based research, principled assessment design, and Understanding by Design as shown in Figure 1, with teams of experts in accessibility, assessment, curriculum, diversity, instruction, and learning science. As interim assessment providers predicted, we are likely to see additional evidence that proficiency in mathematics was more strongly influenced by the pandemic than reading. We need a plan for the years ahead.

Figure 1: Synergizing Instruction and Assessment with Learning Science

In the first blog of this series, I argued that our development and use of assessments across the educational ecosystem needs to synergize practices with the learning sciences. We need to create assessments that help teachers understand the stage of cognition within the content area in which the student is presently functioning. The synergy between assessments and the learning sciences begins with the premise that large-scale summative assessments can be designed to support teaching and learning.

In my second blog of this series, I described the general design framework that makes use of evidence, both in the design of the assessments and in the analysis of the item-level results against the design (i.e., the score interpretations), to support the claim that the learning theory being described is reasonably true. This evidence-centered design branch is called principled assessment design using Range ALDs.

When we have validity and efficacy evidence that the state summative assessments are designed to and do support teachers by providing reasonably accurate score interpretations, we are ready to begin the next stage of the process, which is the focus of this blog.

We need to support school districts in embedding the learning theory and corresponding evidence statements into their curriculums through Understanding by Design. This third step is critically important. Why? Because assessments alone do not change an educational system.

The Planning Stages of Understanding by Design

Understanding by Design (UBD) is a principled approach to curriculum planning. Curriculum is ideally designed to ask students to produce increasingly sophisticated outputs upon which learning opportunities are based. Both curriculum and assessments are based on the desired outcomes of what students should know and be able to do. We want to use the same evidence statements and theories of learning for curriculum and assessment development if we want to create a coherent educational ecosystem that focuses on equity and growing students to proficiency and beyond. UBD is at its core a three-stage planning framework to help curriculum designers think through curriculum and assessment design. These stages are shown in Figure 2.

Figure 2: The Stages of Backward Design used in UBD

When a state makes a commitment to develop its statewide assessments using the processes described in principled assessment design based on Range ALDs,  Stages 1 and 2 of the UBD framework are essentially complete. The state has shared the desired outcomes for students and validated that the evidence collection framework is a reasonably true representation of how students are likely to increase in sophistication along the proficiency continuum. Thus, districts and teachers have access to the same validated evidence framework as test designers to support them in identifying where students are in their learning throughout the year. This is a critical step in creating an equitable educational system. Such an endeavor also allows district stakeholders and teachers to spend their precious time

  • planning for effective and engaging learning activities,
  • evaluating instructional materials against evidence statements in the Range ALDs to investigate students’ opportunities to learn at levels of cognition that represent proficient and advanced stages, and most critically,
  • creating connected instructional and assessment tasks based on the state’s theory of learning.

That is, district curriculum specialists and teachers can focus their time on Stage 3 of the UBD framework.

Stage 3: Planning Learning Experiences and Instruction

A growing chorus of measurement and learning progression experts argue that high-quality assessment tasks are interchangeable with high-quality instructional tasks: they are two sides of the same coin. Both can be used to support learning and transfer. Instructional tasks give students an opportunity to learn, and assessment tasks show students can transfer what was learned to a new scenario independently.

When a student succeeds on a task independently, teachers should be encouraged to provide the student with a more sophisticated task within the progression of learning that is the focus of the unit. We cannot give each student in a classroom the exact same performance task if we want to accelerate student learning. Students come to us in different stages of learning and with differing needs in terms of the depth and length of opportunities required to master a particular stage of cognition within the content area. Therefore, we want teachers to provide each student with tasks aligned to the stage of the learning target progression that the individual student needs to grow. The focus for the teacher is to facilitate learning by providing feedback that helps the student close the gap between her present level of performance and the next stage of sophistication. It is for this reason we want to encourage each student to revise her work. The focus for districts, and perhaps the state, is to provide authentic tasks aligned to the progressions.

Under such an adaptive classroom model, instructional and assessment tasks can have formative or summative uses, depending on the student and teacher actions and on what the child is able to do independently versus with support. Because learning targets have explicit progressions, connections across tasks based on evidence statements in those progressions will intentionally support student growth in achievement by offering multiple opportunities to learn across time and across progressions. When a student successfully responds to a task associated with a particular stage of the progression, the student is ready to move to the task associated with the next stage. Moreover, students with 504s and IEPs are naturally included in the process because the support they need to show what they know is built in, for example, by allowing them access to text-to-speech or additional scaffolding, which is considered context in the Range ALD development framework.
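
One way to picture this adaptive loop is the small sketch below. The stage labels and the advance-on-independent-success rule are hypothetical placeholders, not the Range ALD framework itself.

    # Sketch of the adaptive loop described above: a student who completes a
    # task independently is offered a task from the next stage; otherwise the
    # student receives feedback and revises. Stage names are placeholders.
    STAGES = ["novice", "developing", "proficient", "advanced"]

    def next_task_stage(current_stage: str, succeeded_independently: bool) -> str:
        """Advance one stage on independent success; otherwise stay and revise."""
        i = STAGES.index(current_stage)
        if succeeded_independently and i + 1 < len(STAGES):
            return STAGES[i + 1]
        return current_stage  # provide feedback and invite revision

    print(next_task_stage("developing", True))   # proficient
    print(next_task_stage("developing", False))  # developing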

It is the intentional planning and creation of additional difficulty across tasks, by purposefully increasing content difficulty and cognitive complexity, integrating additional standards, and perhaps moving from single to multiple stimuli, that is the hallmark of proficiency in many states. Wiggins and McTighe discuss the need to organize and sequence student learning experiences for maximum engagement and efficacy (p. 220) in the UBD process. Validating the Range ALDs using the learning sciences’ iterative processes allows them to meet their intended interpretation claim: Range ALDs describe the ranges of content- and thinking-skill difficulty that are theories of how students grow in their development as they integrate standards to solve increasingly more sophisticated problems or tasks in the content area. They are essentially cognition-based learning progressions.

It’s About Supporting Learning in the Classroom

Creating a state-level educational ecosystem grounded in cognition-based learning progressions helps teachers better understand where students are in their learning and suggests likely pathways along which students can be guided to help them develop the ability to engage in far transfer. If we take the time to ensure we have evidence to support score interpretations on the large-scale assessment, such claims become useful to teachers in the classroom because Range ALDs provide an informative tool to support curriculum and learning activities. Teachers can

  • align tasks they administer to students to the cognition-based progression stages and
  • match authentic student responses to the cognition-based progression stages.

It is the score interpretations that are critical for defensibility, not just scale scores. Creating such a system allows us to provide professional development for teachers in using evidence to understand learning. This process requires training teachers to align assessment tasks to Range ALDs and to recognize that assessment tasks can be different but interchangeable if the evidence elicited by the different tasks is the same. This allows tasks to be personalized to student interests, culture, accessibility needs, and ability, supporting increased student engagement.

We must challenge ourselves to create more efficient, equitable educational ecosystems that allow teachers to focus on analyzing where the student is and what the student needs next following a common and validated theory of learning such as is shown in Figure 3. We can and should allow large-scale summative assessments to contribute to teaching and learning rather than use them simply to evaluate teachers, schools, and districts without providing substantive information to help inform next steps.

Figure 3: Theory of Action Centered in a Common Theory of Learning

Synergizing Assessment with Learning Science to Support Accelerated Learning Recovery: Preamble

Across the country, we see evidence that students are learning at slower rates than in years past, particularly in mathematics. For example, Curriculum Associates’ researchers found an additional 6% of students were not ready to access on-grade instruction in mathematics in Fall 2020 compared to historical trends. My former colleagues at NWEA found student achievement in mathematics in Fall 2020 was about 5 to 10 percentile points lower than in Fall 2019. In South Carolina, where 62% of students were eligible for the National School Lunch Program in 2019, projections of the percentage of students whose learning is on track for proficiency by the end of this year are notably lower than in years past. Most worrisome: we know we are missing students. To paraphrase Robert Fulghum, who wrote “All I Really Need to Know I Learned in Kindergarten,” to optimize learning recovery we need to hold hands, stick together, and work to accelerate learning.

Supporting learning recovery requires that we (1) optimize learning tasks to the student’s stage of development to target learning experiences just where the student needs support, (2)  facilitate student growth to the next stage of sophistication by fostering and rewarding self-regulation, and (3) treat all assessments formatively.

To accomplish these three goals, we must rethink assessment development and use. To be effective tools for accelerating student learning, assessments must be developed and used in synergy with findings and processes from the multidisciplinary learning sciences field. This is important if we want both classroom and large-scale assessments to serve teaching and learning, not just accountability. Why? Because together we must center our focus on understanding and cultivating the cognition students need to demonstrate the more advanced knowledge and skills in the content area that represent College and Career Readiness.

In this three-part blog series, I argue our development and use of assessments across the educational ecosystem needs to synergize practices with the learning sciences. I am going to introduce the learning sciences, talk about design-based research, and show connections from the learning sciences field to assessments developed to understand how students grow in cognition in the content areas. In my next blog, I will show examples of synergizing classroom and large-scale assessments using principled assessment design, which is similar to the learning sciences’ design-based research approach. Finally, I will connect this work to curriculum design, most especially to Understanding by Design.

Learning Sciences

The field of learning sciences is multidisciplinary. It often focuses on understanding and exploring how learning occurs in real-world contexts, which are increasingly examined through technology. Sommerhoff et al. (2018) defined learning sciences in this way:

[The] learning sciences target the analysis and facilitation of real-world learning in formal and informal contexts. Learning activities and processes are considered crucial and are typically captured through the analysis of cognition, metacognition, and dialog. To facilitate learning activities, the design of learning environments is seen as fundamental. Technology is key for supporting and scaffolding individuals and groups to engage in productive learning activities (p. 346).

Learning scientists often distinguish between recall of facts and deeper conceptual knowledge. They also focus on the contexts and situations in which students learn and show their thinking as they grow in expertise. Cognition theories are important. Situating learning in personally relevant contexts (including the student’s culture), with sufficiently complex, authentic, and interesting tasks that facilitate learning, is also a focal point. And while the learning sciences field is large, encompassing researchers from cognitive psychology, computer science, and content areas such as science, mathematics, and reading, psychometrics, the field in which I work, is ironically not often included or discussed.

O’Leary et al. (2017) noted that, as practitioners and theorists, psychometricians and test developers focus on the technical evidence supporting the score interpretations and test score uses they develop. This does not mean, however, that those interpretations are useful to teachers, or that test developers always validate the actual score interpretations. We seldom collect evidence, before creating tests, that teachers find the proposed score interpretations instructionally useful or that the interpretations describe student behaviors in ways that help instruction. There is, however, a growing group of psychometricians who, like me, recognize that the way we develop assessments needs to evolve to provide better information for teachers and parents. These evolving practices are similar in strategy to the design-based research practices discussed in the learning sciences literature at points in the assessment development cycle. The synergy between instructionally useful assessments and the learning sciences begins with the notion that assessments should carry validity and efficacy evidence showing they are designed to, and actually do, help teachers understand and predict how learning occurs in order to support instructional actions.

Design-Based Research

Design-based researchers often do two things at the same time: they put forth a theory of learning, and they collect evidence through iteration to determine whether the theory can be supported or must be revised. The goal of such research is to develop evidence-based claims that describe and support how people learn. To investigate such theories, learning scientists carefully engineer the context and evidence collection in ways that support an intended positive change in learning. Sommerhoff et al. (2018) show in their network analysis of the learning sciences that what we want to understand are areas of student cognition, learning, and motivation (among others); these are the outcomes of import. These areas are what we want to make inferences about as we observe and teach students. Learning scientists use design-based research, assessment, and statistics (among other techniques) as methods of investigating these outcomes.

The merger of what we want to understand to support students and how we use assessment and design-based research to collect evidence for such inferences is exemplified by Scott et al. (2019). They describe the following design process.

  1. Researchers use qualitative methods and observations to identify the various ways students reason about the topic of interest as they develop mastery, including vague, incomplete, or incorrect conceptions.
  2. The findings are ordered by increasing levels of sophistication that represent cognitive shifts in learning that begin with the entry level conceptualization (lower anchor) and culminate with the desired target state of reasoning (the upper anchor).
  3. Middle levels describe the other likely paths students may follow.
  4. When possible, the reasoning patterns described in the intervening levels draw from research in the cognitive and learning sciences on how students construct knowledge.
  5. Assessment instruments are the tools that researchers use to collect student ideas to construct and support the learning framework.
  6. The tasks students are asked to engage in on the assessment elicit the targeted levels of sophistication that represent the concepts of the hypothesized learning progression.
  7. Evidence is found to support, disconfirm, or revise the progression.
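
Steps 6 and 7 amount to an ordering check: items written to higher levels of the hypothesized progression should prove harder in the data. Below is a minimal sketch of that check with invented item data, using a rank correlation as a stand-in for the fuller psychometric analyses a real validation would employ.

    # Sketch of the evidence check in steps 6 and 7: items targeting higher
    # hypothesized levels should show lower proportions correct. Data invented.
    from scipy.stats import spearmanr

    hypothesized_level = [1, 1, 2, 2, 3, 3, 4, 4]  # one entry per item
    proportion_correct = [0.91, 0.88, 0.74, 0.70, 0.52, 0.61, 0.35, 0.28]

    # A strong negative rank correlation supports the hypothesized ordering.
    rho, p_value = spearmanr(hypothesized_level, proportion_correct)
    print(f"Spearman rho = {rho:.2f}")  # values near -1 support the ordering
    if rho > -0.7:  # cutoff is an arbitrary placeholder
        print("Ordering weakly supported; revise the progression (step 7).")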

Shepard wrote that assessment developers and psychometricians need to know the theory of action underlying learning, and she noted, “a certain amount of validation research is built into the recursive process of developing learning progressions.” Design-based research has some overlap with a newer design-based methodology for creating large-scale educational assessments called principled assessment design (PAD), an approach that can also be used for classroom assessments. Examples of the PAD approach will be the focus of my next blog. In the meantime, here is a graphic foreshadowing where we are headed: unifying our educational ecosystem to ensure we accelerate and recover learning opportunities for students efficiently, together. We all can contribute to creating systems that support better understanding of where students are in their learning and what they likely need next. Let’s hold hands, stick together, and do this!

Connecting Design-Based Research, Principled Assessment Design, and Understanding by Design.

Creating Engaging Relationships with Students Online

Elise Ince

University of South Carolina

E-teaching requires learning new online skills: how to administer an exam from a distance, share documents, hold office hours, use Zoom, and poll students. In these matters, you can count on your administration to provide a plethora of guidance, how-to videos, and other “important information” documents. However, one basic aspect of e-teaching is often left aside: how do you develop a relationship with students and keep them engaged during e-learning?

When face-to-face, a teacher can easily connect with students by making eye contact. This is not possible on camera, so no student will feel I am talking to him or her personally. Zoom etiquette documents are readily available and provide the basis for a civilized class (find a clean, quiet space, be on time, don’t walk around or start conversations with household members, etc.). Yet following this etiquette dutifully can’t ensure engagement with an on-screen, 2D, stamp-sized teacher.

Consciously Connecting with Students

What can I do, as a teacher, to avoid appearing distant and detached from my students? Connecting with them is essential for learning to occur, and it requires skill and constant effort on my part even when they are in a classroom, let alone through distance learning. I decided it was fundamental to start creating ties with students before classes even began.

Creating a Video

I created a short introductory video using the free platform Animoto.com, describing who I am, where I come from, where I studied, and where I have worked. I shared the video with the students ahead of the first class. By sharing my background, I hoped to appear less of a unidimensional figure; I did not want students to feel intimidated. Although I am the least intimidating person you can imagine, you can’t understand or guess my personality unless you have seen me move or enter and leave a room. I showed a picture of my family and my pets, mentioned my hobbies, and even showed what my office looks like this semester. I was honest: I let my students know this is my first experience teaching online, and with COVID nothing is what it used to be. But I wanted them to know I was ready to start this new adventure with them!

Setting up Moments of Interaction

By making it explicit that e-learning is new for all of us, I hoped to decrease the power distance. My first goal was to appear accessible, and my second was for us to get to know each other. To that end, I asked students to create a short video on the free platform Flipgrid (https://info.flipgrid.com/) before the first class meeting. I asked students to introduce themselves briefly, then play “Two Truths and a Lie.” Each student was asked to present three statements, and everyone had to guess which statement was false and type their guess in the video comment section. This also helped me monitor that I had engaged each student and connected him or her with others. Everyone had a lot of fun trying to guess the lie while learning interesting facts about one another. Students then revealed during the first class which statement was false and which were true.

Embracing Pets

Part of the challenge with any new situation is to embrace the positive. From day one, I noticed that pets were all over the screen: curious, loving, and wanting to participate. I noticed how whenever we would mention pets, students would welcome the break and connect with each other. I decided to include pets formally in my class by asking students to email me a picture of their pets. I make sure in each lecture to randomly place these pictures and let the owner present his or her pets for a couple of minutes. I believe it provides a welcome break, helps students connect, adds warmth to a format that is painfully dry and cold, and helps spread positive affect to the rest of the lecture. In that same vein, I encourage students to share their personal experiences (e.g., “What is the weirdest food that you have ever eaten?”). I ask them to fill out an information card at the start of class. I take note of interesting facts or experiences (semester abroad, fluency in another language, specific hobby, etc.) and refer to these whenever appropriate. It is important to highlight the human dimension, which we tend to forget when we look at a screen.

Engaging with Short Response Questions

I make sure to provide very detailed slides, more detailed than if I were in a classroom. This way, if a student is momentarily distracted, he or she can easily catch up and will not feel lost or helpless. I also plan regular breaks within the 75-minute class. An easy way to re-engage students is to ask them questions via polls. I have begun using Poll Everywhere (https://www.polleverywhere.com/plans/k-12, a free platform for educators) because it allows for more dynamic visuals. For instance, when I ask students to define a concept in five words, I can use the “word cloud” or “open-ended” option to show students’ responses, which appear on the screen in real time and in an organized fashion (e.g., the same words get a bigger font). Students are often curious to see their answers compared to others’.

Prompting Student Thinking

Throughout the lecture, I often ask questions, limit cold calls (which I find intrusive), and make sure I allow plenty of time for students to answer open-ended questions. I often show short videos, but I always give students a task while watching (e.g., report any details that you think are worth mentioning) to share with me. This way students are warned that they should watch the video actively, not passively. I show pictures or graphs related to my lesson and ask students to comment on those. I also use popcorn questions, which should be answered with one word only. Finally, I strongly encourage, but do not require, students to have their cameras on.

There is a fine line between preserving student privacy and making sure they stay engaged. I encourage “cameras on” by saying hello individually to students with cameras on as they enter the virtual room. I remind them often to feel free to let me know if they can’t have their camera on. Many of my students warn me in advance if they cannot turn on a camera and explain their situation. High expectations combined with compassion and understanding have led to a limited number of black screens. The changes I have made in my courses for e-teaching have also given me a different perspective on my students’ lives. Thinking back on these changes, I told my students that I might keep the pet picture idea even when we are back in the classroom. It does me as much good as it does them.

The author

Dr. Elise Chandon Ince is an associate professor in the Department of Marketing at the Darla Moore School of Business at the University of South Carolina. Her research examines how consumers process marketing material and marketing claims in the area of linguistics (language structure, meaning, phonetics and fluency). Her research has been published in the Journal of Consumer Research and the Journal of Marketing Research. She serves as a reviewer for several journals and is on the editorial review board of the Journal of Consumer Psychology.

Leveraging Classroom Assessment to Accelerate Student Learning

Do you measure student growth in learning, or do you measure how much a student learned based on the learning targets from an assessment? I’m asking for a friend. In a year when we are worried about catching students up to pre-pandemic levels of achievement, could we optimize grading practices to accelerate learning? Assessment can be used to grow as well as document student learning at a point in time; we just have to shift how we use it!

Does Grading Help or Hurt?

Researchers have found evidence suggesting that traditional grading strategies can have negative effects on student learning. When every assignment is a summative assessment for a student, grades reflect a race to learn at the pace of the teacher and other students in the class. Whether face-to-face or remote, we may not be aware of the silent challenges students face. And in a remote environment, students are having to learn new skills beyond content.

Remote Learning: The New Skill Set

“Computer Keyboard” by BigOakFlickr is licensed under CC BY-SA 2.0

Remote learning requires a related yet different set of skills than learning face-to-face. The organizational load for both teachers and students is increased. For students who are still developing executive functioning skills (the ability to manage time, prioritize and complete tasks, and adjust to new routines), remote learning presents an increased set of challenges in showing what they know. Gone is the teacher as organizer and reminder in the classroom, with calendars on the walls or upcoming assignments on the board. Which students need these supports? They are likely among the students who are not turning in assignments on time or not showing achievement at the same level remotely as they did in person.

Ryan and Deci (2020) examined research related to self-determination theory, which posits that

  • students need to feel like they have some control or choices in their learning;
  • students need to feel competent; and
  • students need to feel connected to learn.

When students miss assignment deadlines, we often impose grade deductions. This practice may not support students’ feelings of control, and it sends a covert message that they are not competent. We can flip this message with some small changes to assessment and grading policies!

From Penalty to Praise

What if we gave all assignments to students at the beginning of the week (or grading period)? What if we said, “You get a 90 (A) for getting the content correct. If you choose to follow the recommended submission schedule, you get an extra 10 points for turning the assignment in on time”? Think about how this changes the narrative for the student.

Students who are still developing executive functioning skills can earn an A if the assignment is late within the grading period. So can the student who is juggling their own learning as well as the learning of a sibling. Such a shift ensures that students feel a sense of competence and success and receive a grade based on what they have learned. The list of assignments in advance, coupled with recommended turn-in dates, also provides a structure that offers students the challenge of being on time, incentivizes timeliness through positive reinforcement, and allows students some wiggle room on when they complete their work. This gives students, who are juggling more than we know, opportunities to practice managing their time and an opportunity to grow. It reinforces the behaviors we want. Allowing remote learners opportunities to connect (Fun Friday breakout rooms) when sufficient numbers of students follow the recommended submission times can also ensure students have an opportunity to get to know their remote learning classmates, especially at critical school transition points (e.g., the first year of middle or high school).
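
The arithmetic of this reframing is simple enough to sketch. The 90/10 split comes from the example above; treating content mastery as all-or-nothing is purely for illustration.

    # Sketch of the grading shift described above: correct content earns the
    # base grade, and on-time submission adds points instead of lateness
    # deducting them. The all-or-nothing content check is a simplification.
    def assignment_grade(content_correct: bool, on_time: bool) -> int:
        grade = 90 if content_correct else 0  # base grade for correct content
        if on_time:
            grade += 10  # positive reinforcement rather than a penalty
        return grade

    print(assignment_grade(content_correct=True, on_time=True))   # 100
    print(assignment_grade(content_correct=True, on_time=False))  # 90, still an A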

Shifting Grades from Control to Choice

Retakes

Some grading policies stipulate students may retake a limited number of assignments for partial credit. And while this policy is far better than denying any retake opportunities, does such a policy help your students grow to proficiency?

Kornell and Rhodes (2013) found that most learners evaluate their own learning based on the test they just took. If a student perceives his learning as lacking, his course grade becomes the evaluation of self-as-learner. We want students to recognize they control their own ability to learn. Therefore, we have to help them take feedback from the assessment, correct misconceptions, and try again. If classroom tests are not treated as a natural part of the instruction, feedback, and assessment loop, they function as mini high-stakes assessments. Allowing students to bank their grade or work to master the learning target through a retake process allows students to engage in formative assessment as partners with the teacher, encourages self-regulation, and has been shown to increase student learning.

Retakes as a Feedback Loop

Rigor

State assessments often release achievement level descriptors to show how student knowledge within a standard increases from more novice to sophisticated states. These descriptors are also intended to show how standards integrate with other standards. Sequencing items by these states of complexity (called achievement levels, though it is better to use the levels without the labels in the classroom) on pretests or on early unit homework assignments can help you and your students identify where students are successful by comparing performance on easier versus more complex content within the standard.

Students who are successful on more difficult, complex content are likely ready to move on. Allowing students to pick, for example, which six out of ten items to complete for homework can tell you whether students are making accurate judgements of their own abilities. For example, students who choose to answer the difficult, complex items and do so correctly often understand their own learning. Asking their perspective on what they need supports their autonomy.

For students who are less confident in answering on-track or advanced items, which purposefully require increased levels of critical thinking, choosing less difficult or more difficult items allows them to experiment with the concept of desirable difficulty. We want students to challenge themselves with harder items to optimize their long-term learning; at the same time, we want to scaffold more complex items for them in ways that help them regulate their own effort toward a desirable level of rigor over time. That is, students should have not only the opportunity to learn rigorous material but also the time and multiple practice opportunities to do so.

Relevance

Testing supports learning. While we sometimes create pre-test study guides for students, growth in learning and retaining concepts is better supported by practice quizzes than by studying alone: multiple-choice quizzes activate student learning better than studying alone, and short-answer questions and essays function better still in helping students learn and retain.

Because how you assess often influences the degree to which students retain information, setting up quizzes where students self-test and get feedback on each question accelerates learning. It is also important to bring back previously learned material (interleaving) on quizzes and tests to support retention. Interleaving allows you to measure growth in learning targets over time because you are providing students multiple opportunities to demonstrate proficiency, which requires using learned information outside of the unit of instruction.
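
A quiz builder that mixes a few previously learned items into each new quiz might look like the sketch below; the 70/30 mix and the item pools are invented for illustration.

    # Sketch of composing an interleaved quiz: mostly current-unit items plus
    # a few items from earlier units. Pools and the mix are invented.
    import random

    def build_quiz(current_items, prior_items, size=10, prior_share=0.3):
        n_prior = min(int(size * prior_share), len(prior_items))
        quiz = (random.sample(prior_items, n_prior)
                + random.sample(current_items, size - n_prior))
        random.shuffle(quiz)  # mix review items in rather than blocking them
        return quiz

    unit3 = [f"u3_item{i}" for i in range(20)]
    earlier = [f"u{u}_item{i}" for u in (1, 2) for i in range(10)]
    print(build_quiz(unit3, earlier))  # 7 current-unit + 3 review items, shuffled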

While frequent low-stakes quizzes have been shown to increase learning over business as usual, it is also critical to have students use writing to support and synthesize their learning. Performance tasks that require students to integrate and synthesize their learning are essential for growth, and not just in English Language Arts or Social Studies. For example, a project investigating how algebra is used in real life, coupled with practice using the equations in a real-life scenario of interest to the student outside the classroom, helps her reflect on what she has learned, makes content relevant, and requires her to engage in more complex, critical thinking.

We intend to encourage students to accelerate and be accountable for their own learning. Assessment is a learning opportunity as well as a measure of learning. Assessment and grading practices that include retakes, rigor, and relevance serve to support student growth in addition to documenting what was learned.

Quiz yourself!

Which practices do you use to help students accelerate their learning?

  1. I provide a list of all assignments, at the beginning of the semester or weekly, so students can judge how many assignments they will have for the week and manage their time, and I provide a recommended pacing for submission.
  2. When I give an assessment, I code the items on my answer document by complexity, based on the state achievement level descriptors, so I can investigate the range of skills, from easy to complex, a student answered correctly and monitor and grow the complexity of what students are able to do across time (see the sketch after this list).
  3. I use frequent tests worth small point values, paired with feedback, to help students move information and processes into long-term memory.
  4. I bring back important concepts on tests and performance tasks to ensure students are retaining and growing in the skills that are most essential for the next grade.
  5. I allow students to retake assignments when they have not demonstrated mastery to ensure they have both the opportunity and time to learn.
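
Practice 2 above, coding items by complexity, can be as simple as a tally. In this sketch the level labels and the student’s responses are invented; the point is that a profile across levels says more than a single percent correct.

    # Sketch of practice 2: tally one student's correct responses by the
    # complexity level coded on the answer document. All data invented.
    from collections import Counter

    # (complexity level, answered correctly) for one student's assessment
    responses = [("developing", True), ("developing", True), ("on-track", True),
                 ("on-track", False), ("advanced", False), ("advanced", False)]

    correct = Counter(level for level, ok in responses if ok)
    attempted = Counter(level for level, _ in responses)
    for level in ("developing", "on-track", "advanced"):
        print(f"{level}: {correct[level]}/{attempted[level]} correct")
    # A profile of 2/2, 1/2, 0/2 locates the student between developing and
    # on-track content, which is more informative than 50% overall.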

Assessment can be used to grow as well as document student learning; we just have to shift how we use it.  

@mcschneiderphd

Creating Support Systems for the Use of Learning Progressions

Teachers report they need more sophisticated and nuanced support systems to understand and facilitate student learning. These supports go beyond state standards, the district curriculum and pacing guide, and published textbook materials. How do I support this claim? Evidence!

I have been lucky enough to gather evidence about what teachers want through empirical studies in collaboration with states, through research collected with a nationally representative sample of science educators, through findings of grant-funded research, and through conversations with the many brilliant teachers with whom I have worked or who have taught my own children.

Here is what I have learned: Many teachers want

  • prioritized standards that signal what is most important to monitor as children progress throughout the year coupled with a rough sequence of coverage;
  • a learning and evidence management system that helps them track and measure student growth over time, easily;
  • examples of what proficiency looks like, authentically; and
  • standardized exemplar tasks aligned to state standards that they can use, if they want to, to help understand what standards look like in action.

Other sources from the formative assessment research (e.g., Heritage et al., 2009; Schneider & Andrade, 2013) also suggest the following would be helpful:

  • supports for monitoring student learning over time in a way that focuses teachers not on the number of correct responses a child is providing, but rather on whether those responses represent more sophisticated reasoning and content acquisition than was observed previously;
  • supports in interpreting student work and using that evidence to take instructional action; and
  • differentiated supports that honor where teachers are in their own development, rather than one-size-fits-all supports.

Some policy makers are largely focused on tracking growth based on the content and curriculum of the student’s grade of record.  This is certainly a fair perspective given how teacher and school accountability systems are set up. But consider this alternative perspective:

  • To engage in formative practice, we need to identify where a child is.  
  • To measure growth, we have to allow teachers to explore what the child is thinking in the content area centered in what has and has not yet been taught.

Visualize a ruler measuring 12 inches. The ruler is consistent across time. It has equal increments on a continuum from 0 to 12. Now consider that 0 to 1 inch could be what we want students to learn in kindergarten; 1 inch to 2 inches could be what we want students to learn in first grade; 2 inches to 3 inches could be what we want students to learn in second grade and so forth. If a second-grade teacher only measures and teaches from 2 to 3 inches, our educational system misses the student who started the year at 1.5 inches. Our system misses the student who starts the year at 3.2 inches. We miss understanding where all students are and only capture growth of most students. Moreover, because even the most advanced students are not perfect, we allow a student at 2.8 to spend the year waiting for the 20% of the curriculum he or she needs. Or we miss seeing that such a child, in truth, already is at 3.0.
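
To make the ruler concrete, here is a toy version of the analogy in code; the band boundaries simply restate the one-inch-per-grade assumption above.

    # Toy version of the ruler analogy: each grade owns one inch of a shared
    # scale, and a student's position may fall outside the grade of record.
    def grade_band(position: float) -> str:
        """Map a ruler position in inches to the grade band that owns it."""
        grade = int(position)  # 0-1 is K, 1-2 is grade 1, 2-3 is grade 2, ...
        return "K" if grade == 0 else f"Grade {grade}"

    grade_of_record = "Grade 2"  # the 2-to-3-inch slice of the ruler
    for student, position in [("A", 1.5), ("B", 2.8), ("C", 3.2)]:
        band = grade_band(position)
        note = "" if band == grade_of_record else " (outside the grade-of-record slice)"
        print(f"Student {student} at {position} inches: {band}{note}")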

Can we allow a teacher to use the entire ruler so they can focus on what a child needs? I think we could. But we also have to build support systems around teachers that remove the ambiguity of what the targets look like so teachers can move agilely to diagnose and meet student needs.

Learning progressions are a foundational underpinning of such a support system, as shown in Figure 1.

Figure 1. Intended Use of a Progression-Based System, adapted from Schneider and Johnson (2019)

Learning Progressions

Learning progressions can be a support for instructional actions because they offer likely instructional pathways of what students need to master across time. They provide the big picture of what should be learned across a year, support a teacher’s instructional planning, and act as a touchstone for formative assessment (Heritage, 2008). Smith, Wiser, Anderson, and Krajcik (2006) defined a learning progression as the description of the increasingly more sophisticated ways of reasoning in a content domain that follow one another as a student learns. Clements and Sarama (2004) noted that learning progressions (frequently referred to as learning trajectories in their work) describe levels of student thinking, and that students need instructional activities to help them progress along the continuum as a component of a learning progression. Learning progressions can be the foundation that supports teachers and students; but even coupled with standards, curriculum, and pacing guides, they are not enough.

Furtak, Morrison, and Kroog (2014) advised that “tools alone will not help teachers realize shifts in practice.” Rather, tools, the progression and the tasks, are a starting point. Learning progressions must, in their view, not only describe how students learn (this is learning science); they must also be an interpretive aid in analyzing that information (this is formative practice) and a support for using the information to take action. We want the action a teacher takes to be the right action for the student. To accomplish this goal, districts and states need to plan policy supports, professional development, and exemplar tools.

  1. Tools to Help Teachers Collect Accurate Information about Student Learning

We need to provide additional methods of communicating the intent of the standards to teachers. Teachers need to see exemplar tasks tied to learning progressions so that they have supports for recognizing assignments that elicit the thinking of students in a particular stage of development. Sequences of sample tasks contextualized along a learning progression can help teachers visualize what developing, approaching, on-track, and advanced assignments in the standards look like. If we want teachers to differentiate for students, we need to provide instructional and assessment tasks that support them in doing this. We have to provide instructional and assessment tasks that are technically sound so that they elicit the right evidence that teachers can use to locate students in the correct progression stage. Using tasks that are too easy, too difficult, or technically flawed means teachers will not be able to make the right instructional decisions for each student. Any decision we make on where to locate a student in instruction is high stakes for the student.

Tools to help teachers collect accurate information about student learning are best created when teams of experts in learning science, assessment development, accommodations and accessibility, content, and curriculum come together. A team of experts that includes teachers ensures we are eliciting the right evidence for instructional decision making. A team of experts that includes accommodations and accessibility experts ensures we are not unintentionally creating barriers to accessing content or to students being able to show what they know.

What does a learning progression and task system look like? Here is one example.

2. Tools to Help Teachers Make Accurate Inferences About a Student’s Present Level of Performance

In addition to collecting evidence of student learning from purposefully created assignments, teachers need to make the right decision about where to place the student. They have to analyze and interpret the information collected. Existing research evidence suggests that because this is a time-consuming, complex task, teachers rarely analyze student work at an individual student level. Teachers have reported they tend to analyze student learning at a holistic classroom level, using average test scores on assessments as the primary data point (Schneider & Meyer, 2012; Hoover & Abrams, 2013), or they interpret the percent correct out of 100 on an assignment.

Information about the average child does not help a teacher diagnose gaps, confusions, or beauty in thinking for a single student (Schneider & Andrade, 2013). Such an approach has been shown to cause decreases in achievement over time (Schneider & Meyer, 2012). In addition, percent correct out of 100 on an assignment does not tell you whether the student responded to an easy or a complex task, the kind of difference that differentiates a novice from an advanced student. Task characteristics are central to understanding where students are located along a continuum.

State or district leaders will likely want to organize the collection of student work exemplars from these tasks. They will want to provide short professional development training videos that can be accessed on the fly. Short videos can showcase how student responses are matched to a progression. Providing exemplar work does three things.

  • Exemplar student work aligned to learning progressions shows teachers authentically what student growth looks like.
  • Exemplar student work illuminates what it looks like when a child reaches the state or district’s definition of proficiency.
  • Exemplar student work helps teachers identify students in more novice or sophisticated states of development.

Teachers have to center their analysis on identifying what the child can do. This helps answer the question, “Where is the student currently?” To move students forward in their learning efficiently, you also have to know, “What does the progression of knowledge and skills look like for a child to reach the expectations for students by the end of the year?” It is not that teachers cannot do this. Many do. But couldn’t we make their job easier and faster?

Progression descriptors describe where students are in their learning and how students likely learn. Progression-based tasks show teachers what assignments look like that elicit student thinking representative of a stage of development. Exemplar student work of each stage shows us the evidence of what students can do when they are in a particular stage. Student work helps teachers recognize when one student is more advanced in their thinking than another. Together these tools can support a likely pathway to inform explicit instructional actions. Short professional development videos targeting next steps aligned to each stage would also likely be supportive. Why does each teacher need to determine this independently? There is power in a team of experts also supporting suggested next steps that teachers can use or not, as they determine what is best for their own students.

3. Tools for Triangulation of Evidence

Creating direct and explicit connections among learning progressions, progression-based tasks, and authentic examples of student work from each stage of development supports teachers in quickly analyzing student work. They match student work to a stage of development. A match is not a stopping point for decision making. It is a call to administer tasks from the next stage to discover whether another match can be made. This is done until a match to a stage of the progression cannot be made; this is where the student will need to begin instruction. The combination of the matches that represent more sophisticated levels of development, along with data from summative or interim assessments, provides the triangulation of evidence and, importantly, the validation to support understanding where a child is in their learning and what the child needs next.
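
The matching routine described here is, at heart, a walk up the progression. In the sketch below the stage names and the recorded matches are placeholders; in practice each match is a teacher judgment about student work, not a lookup.

    # Sketch of the placement routine described above: keep matching student
    # work to successive progression stages until a match fails; instruction
    # begins at the first unmatched stage. All names are placeholders.
    def placement(stages, work_matches_stage):
        for stage in stages:
            if not work_matches_stage(stage):
                return stage  # first stage without a match: start instruction here
        return stages[-1]  # matched every stage: student is at the top stage

    stages = ["entry", "developing", "on-track", "advanced"]
    matched = {"entry", "developing"}  # evidence gathered from tasks so far
    print(placement(stages, lambda s: s in matched))  # on-track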

Interim data (or “summative” data, because data can be formative or summative based on how you use it) can suggest at the beginning of the year where the teacher might want to start checking along a progression. For example, for an advanced student, teachers might start checking student skills in the middle-to-high area of the progression, moving forward or backward to locate the child as needed. During the year, such data points intended for triangulation can confirm or disconfirm the progress the teacher sees in the classroom. Triangulation should spur inquiry when different sources of evidence of student learning do not converge. For example, students can sometimes demonstrate more advanced skills before easier skills. This can be especially true for students with particular types of learning differences. It is important to foster the advanced thinking in such situations rather than gravitate to the novice stages.

4. Evidence-Management Systems Centered in Policy Decisions

Teachers ideally need a learning and evidence management system so that authentic student work evidence can be stored digitally at the teacher’s fingertips and used as a reference to analyze student work for the same progression across time. Ideally, such a management system would also allow teachers to access examples from other grades quickly and easily. If a student cannot access the grade-level progressions and triangulation of evidence suggests the student is in an early novice stage of content understanding, should the teacher be able to find learning opportunities and progressions from the adjacent lower grade? This is an important policy decision because it likely influences how a learning and evidence management system is configured. And it influences what the teacher feels he or she has the latitude to do.

States and districts might consider messaging that not all students may need instruction just in the grade level standards. Not all students may need to be measured on just what has been taught. While learning progression tasks can be administered sequentially and embedded into the curriculum, their potential efficacy is diminished with such an approach. Teachers need to be able to diagnose where students are outside of a pacing guide if we want to ensure equity of both having an opportunity to learn and an opportunity to grow. We want all students to have what they need.

  • Many teachers want prioritized standards that signal what is most important to monitor as children progress throughout the year coupled with a rough sequence of coverage.
  • Many teachers want a learning and evidence management system that helps them track and measure student growth over time, easily.
  • Many teachers want to know what proficiency looks like, authentically.
  • Many teachers want standardized exemplar tasks aligned to state standards that they can use, if they want to, to help understand what standards look like in action.

Can we make their job a touch easier?

References

Clements, D. H., & Sarama, J. (2004). Learning trajectories in mathematics education. Mathematical Thinking and Learning, 6(2), 81–89.

Furtak, E. M., Morrison, D., & Kroog, H. (2014). Investigating the link between learning progressions and classroom assessment. Science Education, 98(4), 640–673.

Heritage, M. (2008). Learning progressions: Supporting instruction and formative assessment. Washington, DC: CCSSO.

Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative classroom assessment? Educational Measurement: Issues and Practice, 28(3), 24–31.

Hoover, N. R., & Abrams, L. M. (2013). Teachers’ instructional use of summative student assessment data. Applied Measurement in Education, 26(3), 219–231.

Schneider, M. C., & Andrade, H. (2013). Teachers’ and administrators’ use of evidence of student learning to take action. Applied Measurement in Education, 26(3), 159–162.

Schneider, M. C., & Meyer, J. P. (2012). Investigating the efficacy of a professional development program in formative classroom assessment in middle school English language arts and mathematics. Journal of Multidisciplinary Evaluation, 8(17), 1–24.

Smith, C. L., Wiser, M., Anderson, C. W. & Krajcik, J. (2006). Implications of research on children’s learning for standards and assessment: A proposed learning progression for matter and the atomic molecular theory. Measurement: Interdisciplinary Research and Perspectives, 4(1&2), 1–98.