The following is the fourth of four excerpts from the eBook, How Do I Lead Project Based Learning?, which provides a concrete framework for leading the implementation of project based learning. Although this eBook was written through the lens of project based learning, everything can be applied to all professional learning and instructional shifts, no matter the content. Originally, the eBook’s content was the final chapter of the book, Project Based Learning: Real Questions. Real Answers.
The four drivers of instructional shifts serve as the basis for the eBook: establish relationships and trust, begin with the end in mind, model best practice, evaluate professional learning.
Evaluate Professional Learning
As an elementary school principal, my team and I implemented a new multisensory phonics program in kindergarten through second grade. During its first year of implementation, I can recall an eye-opening conversation I had with one of the kindergarten teachers. The majority of the conversation focused on how happy teachers were with the program, and on the ways in which students were excelling at its various strategies: naming sight words, tapping out words on their desks, writing letters in sand, etc.
Then the conversation shifted to the actual goals of the program and how we could determine whether these goals were being met. Was the goal to get better at the program, or was the goal for students to get better at reading and spelling? And, because the goal was the latter, how could we determine the extent to which the program was moving students in this direction?
In short, we turned our attention to the question we should always ask ourselves whenever we implement a new program or instructional shift: How do we know what we’re doing is working?
In Evaluating Professional Development (2000), Tom Guskey features five increasing levels of sophistication for evaluating professional learning, from lowest to highest: participants’ reaction to professional development, how much participants learned, evaluating organizational support and change, how participants use their new knowledge and skills, improvements in student learning.
The levels in this model for evaluating professional development are hierarchically arranged from simple to more complex. With each succeeding level, the process of gathering information is likely to require increased time and resources. More importantly, each higher level builds on the ones that come before. In other words, success at one level is necessary for success at the levels that follow. (p. 78)
Regarding PBL professional learning (or any other professional learning), the endgame is the impact we have on student learning, which is preceded by changes in teaching. So, we should be able to specify the different forms of evidence, both quantitative and qualitative, that will be used to determine whether teaching and learning are improving or have improved as a result of project based learning. Some of these indicators may include: an analysis of Progress Assessment Tools (during projects and after the fact), formative assessments, summative assessments, final products, student reflections, student participation, student observations, classroom walkthroughs, and teacher observations.
To address Guskey’s lower levels, we turn to three forms of assessment.
In keeping with the theme of adult learning mimicking student learning: just as student learning involves three forms of assessment, so does adult learning:
Assessment of learning – This is summative assessment in which learners are assessed after the learning has taken place. Example: Students take an end-of-unit test, with no opportunities for redos, retakes, and do-overs. All grades are final.
Assessment for learning – These assessments are formative in nature (non-graded), and their results are used to drive instruction. Example: At the end of a lesson, the teacher gauges students’ progress with an exit ticket. Results are used to differentiate the next day’s instruction.
Assessment as learning – These are self-assessments, in which learners determine where they are and then what they need to do to meet their goals. Example: During independent work, students refer to their learning targets and success criteria to determine where they are in relation to where they need to be. They then adjust their work accordingly.
Out of the three, we can use formative assessment and self-assessment to help to drive our professional learning and instructional shifts.
According to James Popham (2008), “Formative assessment is a planned process in which assessment-elicited evidence of students’ status is used by teachers to adjust their ongoing instructional procedures or by students to adjust their current learning tactics” (p. 6). In modifying this definition for professional learning, it can read, “Formative assessment is a planned process in which assessment-elicited evidence of students’ and adults’ statuses is used by leaders to adjust their ongoing instructional procedures or by adults to adjust their current learning tactics.” In other words, we’re continuously gauging where we are and then adjusting our professional learning accordingly. If this adjustment doesn’t take place, it’s not formative assessment.
While participant surveys can help to kickstart the formative assessment process, we can also learn a lot from walking around a school and getting into classrooms the day after teachers engage in professional learning. This practice often leads to countless conversations that help to inform next steps. In addition, I have found ongoing face-to-face conversations (while seeking to understand others) can easily be the best way to gauge what’s working and what’s not working. Even though we may not always like what we hear, we gain valuable information that can help us to move forward. And, by using this information, we’re building relationships and trust.
If we’re leading instructional shifts from central office, we owe it to our principals (and teachers and students) to regularly gauge the pulse of these shifts, as our decisions impact the cultures of schools that aren’t necessarily “our own.” For example, as a K-12 curriculum supervisor, on about a monthly basis, I met with an elementary leadership team. We frequently began each meeting with an honest analysis of how our current shifts were progressing, as those at the building level are typically more in touch with what’s really happening. Then, based on this information, we collectively planned our action steps.
Self-assessment starts with participants having a clear idea of what they’re supposed to learn and how this learning should impact teaching and learning. Once they know what they’re supposed to accomplish, they can start to (1) self-assess where they are on the continuum of making these changes a reality, and (2) determine their next steps.
To help to establish what participants are supposed to accomplish, we don’t have to look any further than earlier in this eBook. We can use (1) the learning targets from professional learning sessions, or (2) the minimum expectations (micro expectations) for what should be taking place across every experience a school or district defines as project based learning. Then, much like we would do with students, we can set aside time for participants to self-assess (and peer assess) to determine next steps – possibly with the use of assessment protocols.
In Evaluating Professional Development, Guskey confirms what I have long suspected: the majority of schools and districts gauge the effectiveness of professional learning with nothing more than teacher surveys. As the author informs the reader, “Sadly, the bulk of professional development today is evaluated only at Level 1 [participant reaction], if at all. Of the rest, the majority stop at Level 2 [participant learning]” (p. 86). While there is a time and place for this form of evaluation, we ultimately need to be examining how our efforts impact student learning. Fixating on surveys and participant reaction to evaluate the effectiveness of professional development is comparable to assessing classroom instruction based on nothing more than student engagement.
We can do better.
In the biography Steve Jobs, Walter Isaacson recounts how Jobs simplified Apple’s product line after returning to the company:

After a few weeks, Jobs finally had enough. “Stop!” he shouted at one big product strategy session. “This is crazy.” He grabbed a magic marker, padded to a whiteboard, and drew a horizontal and vertical line to make a four-quadrant chart. “Here’s what we need,” he continued. Atop the two columns he wrote “Consumer” and “Pro”; he labeled the two rows “Desktop” and “Portable.” Their job, he said, was to make four great products, one for each quadrant. (p. 337)
As we roll out instructional shifts, we can think about this story and the extent to which simplicity can help us to move forward. (This strategy helped to lead to Apple’s resounding success.) We need to take a less-is-more approach to instructional shifts, especially because initiative fatigue is a reality in countless schools and districts, maybe even our own.
Tim Brown makes a related point in Change by Design:

Over time…we learned that planting a cell of design-trained, innovative-minded conspirators inside a large organization is not the most effective way to proceed. Innovation needs to be coded into the DNA of a company if it is to have large-scale, long-term impact. (p. 171)
If we strongly believe in the power of our instructional shifts, then we should want them to stick on a systemic scale – so much so that, if we were to leave our organization, the instructional shifts we’re a part of would still thrive in our absence. To reach this deep level of integration, we need more than innovation cohorts or strategies that move our ambitious people forward while allowing others to potentially fly under the radar. As Brown implies, this approach ends up deepening but not spreading our pockets of innovation – a problem endured by countless schools and districts. Instead, we need to take the stance: “[insert instructional shift] is so important for our students; we’re not going to stop until it becomes a part of what we do.”
If we only learn during professional development days, or when an initiative forces us to, we’re doing it wrong. Cultures of initiatives are not sustainable; we need cultures of continuous learning. What matters most is that everyone is always moving forward, while we respect the fact that not everyone moves at the same pace.
Doug Reeves (2010) tells us, “…effective implementation of any education reform is not dependent upon labels or brands, but upon deep changes in professional practices and leadership decisions” (p. 40). This is the difference between professional training and professional learning. For the most part, we want the latter.
The work is not easy, but when we take shortcuts we create a ceiling for what can happen in our schools, we insult the intelligence of our people, and we shortchange our students. There’s no other way about it: Invest in people, not programs.
More specifically: People over curriculum. Curriculum over programs. #RealPBL