
Why should performance-based #assessment have educators on the edge of their seats?

October 4, 2015

In the September 2015 edition of Phi Delta Kappan, Ruth Chung Wei and her colleagues wrote a thoughtful and informative article entitled Measuring What Really Matters. The authors make the case that performance-based assessments should be used more widely as a tool for getting feedback on student learning. Their use could enhance our understanding of whether students are learning what we expect. They write:

Performance assessments can tap into students’ higher-order thinking skills (such as evaluating the reliability of sources of information, explaining or arguing with evidence, or modeling a real-world phenomenon) to perform, create, or produce something with real-world relevance or meaning.

This stands in contrast to the non-performance-based assessments that most primary and secondary school students are inundated with. Worksheets, multiple-choice tests, and other assessments that test lower-order thinking skills are used as default assessment tools by many teachers.

In their article, they explore what we can learn from past mistakes, especially the technical, practical, and political mistakes that hampered previous attempts by schools to use performance-based assessments as a tool.  Their recommendations are:

  • State assessment and accountability systems should be based on multiple measures of student learning, including locally developed assessments.
  • Assessment systems should be coherent (implying improved teacher professional development geared towards improving practice).
  • Systems of assessment should support shared accountability and whole-system improvement (they write about reciprocal accountability).

By reciprocal accountability they mean:

all levels of the system (state, local, school, teacher, and student) are responsible for and must be actively engaged in building the capacity of educational systems to be responsive to the learning needs of all students.

It is interesting to think of the student as a stakeholder in the assessment system. It seems totally reasonable, because assessments are administered to give information about what a student learns. Shouldn’t the student be in the assessment conversation? Shouldn’t students take ownership of and responsibility for their assessments? It is their learning we are talking about. If the answer is yes to these and other questions, then students need to be active players in assessment practices, not simply passive participants.

In a Center for Teaching post on assessment, I wrote:

Assessment is a powerful tool in the teachers’ toolbox. It has been shown that effective assessment strategies can influence student achievement more than any other tool at the teachers’ disposal.

I also referenced an article, The Quest for Quality, by Stephen Chappuis and colleagues, in which the authors make the case for five elements that go into building a quality assessment program. The five are:

1. clear purpose
2. clear learning targets
3. sound assessment design
4. effective communication of results
5. student involvement in the assessment process

The authors argue for a balanced system of assessment in which the users (teachers and students) are assessment literate. These goals are also embodied in the piece by Wei and her colleagues. They write:

For such local assessments to become a viable and trustworthy component of a multiple-measures assessment system, they require well-designed systems to support technical quality, including design tools (design frameworks, task templates or shells, common rubrics, task specifications, task quality criteria) and an effective system of peer review for validation.

While they argue for this approach at the national and state level, I would argue that these design specifications should be required within every school and every classroom. To achieve that end, we would have to help teachers learn how to become more effective designers of assessments, as Chappuis and colleagues indicate in their article.

Performance-based assessments, commonly used by teachers who are engaged in project-based learning (PBL), are more authentic because they ask students to utilize a wide variety of learned skills, as well as their knowledge.  They often require students to read critically, think aloud, analyze sources, communicate their understanding, and collaborate with peers.  This example from Edutopia shows how a chemistry teacher uses performance-based assessment in his classroom.



There are numerous examples of performance-based assessment as part of PBL instruction at High Tech High and New Tech Network schools, as well as schools like Illinois Math and Science Academy that focus on inquiry-based instruction.

As Wei and her colleagues write in their conclusion:

But clearly, parents, teachers, and other stakeholders are telling us we need a change.

The change we should not be afraid of puts students at the center of the assessment conversation and teachers in charge of building high-quality, performance-based assessments.



Teaching towards EQ is as important as teaching towards IQ!

October 4, 2015

Here are two interesting videos with Daniel Goleman about his work on Emotional Intelligence. The first is an interview. The second is a talk he gave at the Empathy and Compassion in Society conference in 2013.

In the first video, he references the five skills people need to develop to expand their emotional intelligence:

  • self-awareness
  • managing your emotions
  • motivation: working towards your goals
  • empathy
  • social skills: how well you handle relationships

He points out that emotional intelligence is not “one thing.” It is a profile made up of the strengths and weaknesses we exhibit across the five skills listed above.

In his talk at the Empathy and Compassion in Society conference, he references an “invisible” brain-to-brain circuitry that makes emotions contagious. He refers to this brain-to-brain circuitry as being important in our understanding of how teams function and how leaders forge a culture of collaboration. If a team leader is joyful and excited about his or her work, the leader’s emotions can powerfully influence the rest of the team. How we manage our emotions at work is critical! See what you think about Goleman’s thoughts regarding the emotional intelligence needed to be an effective leader.

Why is carving out time for students to reflect on learning important?

September 7, 2015

John Dewey


“We do not learn from experience… we learn from reflecting on experience.”   -John Dewey

The above quote is widely used in the education community.  After going back and rereading some of Dewey’s works, as well as searching on the internet, I haven’t been able to locate the exact source.  However, in his book, Democracy and Education, he does cover a number of topics that illustrate why he makes the connection between reflection and understanding.

In Chapter 11, Experience and Thinking, of Dewey’s book, Democracy and Education, he writes:

To “learn from experience” is to make a backward and forward connection between what we do to things and what we enjoy or suffer from things in consequence. Under such conditions, doing becomes a trying; an experiment with the world to find out what it is like; the undergoing becomes instruction—discovery of the connection of things.

In this chapter he makes the case that students learn by doing.  The instruction, and hence the understanding, come from a student making connections between ideas or experiences in the learning.  Without guiding students towards making connections and experimenting with the ideas, they are less likely to deeply learn what we want them to master.

Again in Chapter 11 he writes:

In schools, those under instruction are too customarily looked upon as acquiring knowledge as theoretical spectators, minds which appropriate knowledge by direct energy of intellect.

I would imagine this quote would resonate with most educators who face the challenge of “covering the content” versus “building skills and exploring processes” involved in learning _______ (science, math, writing, art, history or a new language).  The danger of being overwhelmed with covering content is that we fail to allow sufficient time for students to reflect on what we want them to learn.

In the subsection of Experience and Thinking titled Reflection in Experience, Dewey eloquently explains how understanding is reached as a result of students reflecting on their experiences. Through reflection students are able to more effectively “connect the dots,” linking their experiences to their consequences. He makes the claim that this very act promotes a student’s understanding.

So the question is this: do we carve out sufficient time in students’ learning experiences for them to reflect on what we want them to master? This reflection time can be informal or formal: informal activities in which they write about how they understand what they’re learning, and formal activities in which they respond to guided prompts prepared by the teacher and tied directly to the learning outcomes we expect. In both kinds of activities, our goal should be to help students connect the dots between the ideas and experiences we want them to encounter.

Drivers of #innovation: what works in schools?

August 17, 2015

I am reading an article in the April 2015 McKinsey Quarterly, The Eight Essentials of Innovation, written by Marc de Jong, Nathan Marston, and Erik Roth. While the research and conclusions they draw come from work in the corporate sector, I can’t help but look for ways it applies to innovation in a school setting.

The authors’ research leads them to identify eight essentials for creating an innovation culture (see exhibit #1).

  • Aspire
  • Choose
  • Discover
  • Evolve
  • Accelerate
  • Scale
  • Extend
  • Mobilize

Exhibit #2, testing for innovation, provides an explanation of the eight essentials that an organization needs to develop in order to promote successful innovation.

How might these apply to a school setting?


Aspire

Schools need leaders who are convincing catalysts for purposeful and meaningful change. Without a leader who has a vision and can mobilize his or her team, schools tend to “stay the course,” leaving innovation to chance.


Choose

Schools need to decide on the “right programs” to support, the ones that will lead to continuous improvement in the learning environments for all students.


Discover

“Innovation also requires actionable and differentiated insights.” (page 6) It strikes me that schools need to do a better job of creating partnerships with organizations that know how to innovate and have something to contribute to the process of innovation in a school setting. What we don’t do particularly well in schools is prototype programs, iterate them along the way, test them in the field, and make informed choices about which ones positively impact our cultures. The authors write: “One thing we can add is that discovery is iterative, and the active use of prototypes can help companies continue to learn as they develop, test, validate, and refine their innovations.” (page 7) In schools, we are not great at designing a disciplined approach to “develop, test, validate, and refine.” Is it any wonder why we struggle with innovation?

At Westminster Schools in Atlanta, where the Center for Teaching is located, we took 300+ faculty, staff and administrators in groups to twelve organizations that are leading the way to change some aspect of the Atlanta landscape.  The goal was to spend time talking with their leaders, learning from their approach, and  identifying what these twelve organizations might have in common so that we might apply what we learned to our leadership in the field of education.


Evolve

“Established companies must reinvent their businesses before technology-driven upstarts do.” (page 7) From your own experience in schools, do you think we do a good job of reinventing ourselves so as not to become obsolete? The authors point out that most companies struggle with “risk tampering” until they find themselves under threat, and often it’s too late at that point. For schools, I think we have to re-evaluate our position in the landscape of education so that we remain relevant to what is interesting and meaningful for students to know, understand, and do, while we also stay true to the core knowledge, skills, attitudes, and behaviors that all students must acquire to be successful. One way we can do this is by supporting pilots, experiments, and prototypes outside of the “core curriculum” to discover optimal ways to shape the future of our schools.


Accelerate

How do we get in our own way of innovating in schools? What are the obstacles to accelerated innovation? These are questions we should be asking, and we should be searching for solutions. The authors write: “Are managers with the right knowledge, skills, and experience making the crucial decisions in a timely manner, so that innovation continually moves through an organization in a way that creates and maintains competitive advantage, without exposing a company to unnecessary risk?” (page 8) What about school leadership teams and classroom teachers? Do we have the “right people on the bus” (to use Jim Collins’s terminology) to accelerate innovation in critical areas? If a school is interested in project-based learning, does it have the right leadership team to understand, promote, and support the effective use of the strategy? Does it have the right teachers in place to implement the strategy, assuming they are given the right amount of support? If our answer is no, then we shouldn’t be surprised if the innovation we desire doesn’t take root and grow.


Scale

I struggle with how this essential applies to a school. If an innovation is important to the learning environment of children, then it seems to me there is little to discuss about scaling up. All teachers and all classrooms should consider embracing the innovation. Seeding innovation in schools involves supporting a creative, risk-taking teacher who wants to try something new. If the experiment is successful, shouldn’t other teachers seriously consider adopting it as well? We need to nurture a professional culture where faculty are encouraged to share with each other, learn from each other, and iterate their practice based on the successful experimentation of colleagues. Since the purpose of schooling is to meet the needs of all students, innovation should impact all students.

In Faculty Forum, our opening week at Westminster Schools, we devoted two hours of the third day to faculty-led workshops on a variety of topics, including general technology integration, open-air painting, designing a WordPress blog, 3D printing for beginners, a STEAM workshop, data management using Google and other tools, and instructional strategies for diverse learners. The value of this professional development was that faculty learned from their peers and improved their practice in simple ways. Scaling up using this approach was well received.


Extend

The authors write: “Successful innovators achieve significant multiples for every dollar invested in innovation by accessing the skills and talents of others.” (page 10) In schools, I believe we need to be more open to setting up partnerships with other schools and organizations to tap expertise we might not have ourselves. Innovation can be accelerated if we learn to identify the gaps in our organization and pinpoint the resources we need to fill them. We have to break down the culture of isolation in schools.

High-performing innovators work hard to develop the ecosystems that help deliver these benefits. Indeed, they strive to become partners of choice, increasing the likelihood that the best ideas and people will come their way.  (page 10)


Mobilize

How do leading companies stimulate, encourage, support, and reward innovative behavior and thinking among the right groups of people? The best companies find ways to embed innovation into the fibers of their culture, from the core to the periphery. (page 11)

I think this quote from the authors’ work illustrates the challenge we face in schools.  We have to lead with the goal of building an innovative culture and rewarding faculty who are willing to test the boundaries, looking for ways to improve the learning environment for all students.


@AK12DC Summit 2015: Day 2 learning and sharing

March 29, 2015


Atlanta K12 Design Challenge (@AK12DC) Summit 2015 is underway for Day 2. We started early on a Saturday morning with eleven design teams and 60+ educators ready to build on Day 1.

We started the day reflecting on the work from Day 1, and a number of teams shared what they learned:

  • The day provided productive time to work on their prototypes.
  • They enjoyed the design thinking refresher on “designing the ideal chair,” especially thinking about it from the perspective of different users (Simpsons characters).
  • They learned that different types of chairs were designed depending upon the user they were assigned.
  • They felt positive about the storytelling exercise and thinking about their design challenge as a story that has unfolded.

We then went into a session on defining what it means to build a design thinking mindset. What are the elements (listed below) that go into a team or an individual building a mindset that promotes a culture of design thinking?

  • Focus on Humans
  • Be Obnoxiously Curious
  • Be Mindful of Process
  • Embrace Experimentation
  • Iterate Everything
  • Words Matter
  • Show Don’t Tell
  • Inspire and Be Inspired
  • First and Teach to Fish


The teams were asked to apply their understanding of the mindset elements to a series of hypothetical activities.

The teams ran through a series of three activities. After each activity, we processed what was learned in the conversation. Scott Sanchez (@jscottsanchez), our design facilitator, engaged all teams in the sharing time. One team member from Westminster Schools, Peyten Williams, indicated that the nine elements for a design thinking mindset are an instructive tool to think about how to be a good teacher.

The large group conversation on developing a design thinking mindset was a lively and instructive time for everyone. (see Storify summary of this part of day 2 summit).

Design teams were given time to work on developing their stories for the afternoon session on “storytelling.” Each team will have five minutes to construct a story, sharing its journey with the audience: the milestones along the way, what they have learned, and where they are along the path. The story arc is the template they are using (see below).

story arc

After working on constructing their stories, design teams were paired up to test their stories on another team and receive feedback. Using the feedback, they prepared for the final presentation of their stories after lunch. There was a good deal of energy in the room, as well as some nervousness about getting all the data together into a five-minute story. Quite a task!

The following link to a Storify synopsis of Twitter links during the storytelling time (click here) will give you some sense of what was accomplished. We videotaped the whole day and captured all eleven stories, which will be posted on the website in the coming weeks. Here are a few highlights as well.

It was clear from the storytelling that all eleven teams accomplished a great deal along their journeys. They all recognized that design thinking has been, and will continue to be, a highly successful tool for helping schools address the challenges they face. While it may not be the only process a school uses to address challenges in the 21st century, it is a process that helps a school build empathy with its users, such as parents, students, and faculty, and then apply that understanding to defining, prototyping, iterating, and testing solutions to those challenges (see graphic below). In addition, when the process is used with fidelity, it builds a collaborative culture in which people find common goals and shared values.

Design Thinking Map

One of our primary goals in AK12DC is to build and sustain a design thinking mindset (see above) with members of our design teams. Preliminary data from Summit 2015 suggests we are making great progress towards this goal.

We wrapped up Summit 2015 with 70+ participants forming a circle and being christened by Scott Sanchez as Stanford design thinkers.

What is the richest feedback we can offer teachers in an #evaluation?

March 25, 2015

A national conversation continues to take place about teacher evaluation systems, especially in states that submitted a Race-to-the-Top proposal, an Obama Administration initiative designed to fuel innovation in schools. This competitive grant program required states to submit a plan for how they would retool their current evaluation systems, making greater use of student achievement data. If you follow this extensive body of literature, then you know that value-added models for using student achievement data, the exact percentage that student achievement data would count toward a teacher’s evaluation, and whether a teacher’s cumulative score would be made public have created all types of conversations, arguments, and, potentially, lawsuits.

Kate Taylor, in a recent New York Times article entitled, Cuomo Fights Rating System in Which Few Teachers Are Bad, tells the story of the battle between Governor Cuomo and the state’s teachers’ union.  Cuomo wants an evaluation system that tightly aligns a teacher’s performance rating to his or her students’ test scores.  The teachers’ union doesn’t believe this system can give a fair assessment of a teacher’s performance.  She writes:

Around the state, administrators, teachers and parents have been protesting the governor’s proposals, which would both increase the weight of test scores, to 50 percent of a teacher’s rating, and decrease the role of their principals’ observations.

Cuomo, other governors, educational policy makers, and other political leaders believe that a system that closely ties students’ achievement scores to a teacher’s evaluation rating is a more effective way to commend good teachers, provide growth plans for average teachers, and weed out those who are not effective. Of course, there has been extensive research and commentary on whether value-added models for aligning student achievement data to a teacher’s performance are truly valid, given the plethora of variables that impact student achievement.

One thing is universally true about school districts that have had to retool their evaluation systems under Race-to-the-Top: they have not been very creative in designing them. The new systems all look pretty much the same, with a few minor tweaks; one might count student achievement scores at 25 percent instead of 40 percent. In effect, they are designed off of previous models with slight variations.

Another article that appeared in the New York Times, Grading Teachers by the Test, written by Eduardo Porter, suggests that, according to Goodhart’s Law, an economic principle related to incentive design that sounds a lot like Heisenberg’s uncertainty principle in physics:

A performance metric is only useful as a performance metric as long as it isn’t used as a performance metric.

The idea in education is that if we rate teachers according to their students’ test scores, we run the risk of the data being “fudged” to achieve what we want to achieve.

If we want to study organizations that are innovating their way to an evaluation system that meets the needs of their employees, then we have to go to the business world.  We won’t find it in education.  But I would argue that education has a lot to learn from the way some creative businesses approach giving constructive feedback to their employees.

In Harvard Business Review, Marcus Buckingham and Ashley Goodall write about the changes taking place at Deloitte Services LP in the article, Reinventing Performance Management. They describe how Deloitte is “rethinking peer feedback, and the annual review, and trying to design a system to fuel improvement.”  It strikes me that if we speak with most educational administrators they would say their hope is that their school’s evaluation system would fuel improvement as well.  Of course, the data shows that most teachers don’t believe their school’s evaluation system “fuels their improvement.”  In a study done by Weisberg, Sexton, Mulhern, & Keeling (2009) called the Widget Effect, the authors write:

In districts that use binary evaluation ratings (generally “satisfactory” or “unsatisfactory”), more than 99 percent of teachers receive the satisfactory rating. (page 6)

If 99% of teachers are seen as satisfactory, then great teaching might go unrecognized while poor teaching does not get addressed.  Another piece of data from the study shows that:

In fact, 73 percent of teachers surveyed said their most recent evaluation did not identify any development areas, and only 45 percent of teachers who did have development areas identified said they received useful support to improve. (page 6)

So the bottom line is that most of our evaluation systems do not “fuel improvement.” Not only are there flaws in the design of how we evaluate, but there are also flaws in the way we implement the models. However, there is good data suggesting that faculty believe their principal’s feedback is important, though its value depends on whether principals are well trained, understand the instruments they’re expected to use, understand their role in the process, and have confidence in differentiating for individual teachers’ needs. Yet some school systems, like New York State’s, are trying to deemphasize the principal’s role in the rating system. For that reason, and others, I think the Deloitte study is interesting for us to consider as a prototype for a new way of thinking about giving effective feedback to teachers.

Here is a high-level comparison of their old and new system.

                     Old system    New system
 Objectives          Cascading     Performance and strength oriented
 Annual reviews      Yes           No
 360-degree tools    Yes           No
 Rating system       Yes           No

In moving to their new system they used data from research, an understanding of their organization’s needs, and a commitment to fuel the growth of their employees. The science of rating systems shows that “62% of the variance in the ratings could be accounted for by individual raters’ peculiarities of perception” (page 43). What they concluded from looking at the research is that ratings do not measure the performance of the ratee as much as they reveal the biases of the individual rater. So they moved away from ratings. They also moved away from annual reviews to weekly and quarterly feedback based on team projects, because their focus was on “spending more time helping their people use their strengths and we wanted a quick way to collect reliable and differentiated performance data” (page 44).

What Deloitte realized from a study done by the Gallup Organization on strengths-based leadership, as well as their own research using their high-performance design teams, is that if an evaluation system focuses on strengths, the person being evaluated invests more heavily in the process. Buckingham and Goodall write:

It found at the beginning of the study that almost all the variation between high- and lower-performing teams was explained by a very small group of items. The most powerful one proved to be “At work, I have the opportunity to do what I do best every day.” (page 44)

So if we work to align a person’s job responsibilities to their strengths, then we maximize opportunities for that person to be successful in their work. In the Deloitte study, here are the three items they found had a high correlation with high-performing teams:

  1. Co-workers on the team were committed to doing quality work.
  2. The company’s mission inspired members of the team.
  3. Members of the team had a chance to use their strengths every day.

When they designed their new system they had three objectives to fulfill.  They were:

  1. The new system would allow them to recognize performance, particularly through variable compensation.
  2. The new system had to facilitate ways in which they could CLEARLY SEE each person’s performance.
  3. The new system had to be able to fuel changes in performance.

I found it interesting that to achieve the second objective they redesigned the system and redefined the questions they asked about each person being evaluated. First, they made their system highly relational, encouraging and creating expectations and time for each person to be in conversation with his or her immediate supervisor or team lead. To move away from rater-reliability issues, they asked the team leader to use a set of four questions that focused more on the future relationship of the leader to the person being evaluated. The four questions were (page 46):

  1. Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus.
  2. Given what I know of this person’s performance, I would always want him or her on my team.
  3. This person is at risk for low performance.
  4. This person is ready for promotion today.

“In effect, we are asking our team leaders what they would do with each team member rather than what they think of that individual.”

I find it interesting that they pivoted 180 degrees with their questions. So in education, what if the principal were required to answer the following questions:

  • Would you recommend that your child be taught by this teacher for a full year?
  • Would you pick this person to serve on your leadership team for building an ideal school?
  • Would you pick this person to lead a new initiative in your school that requires an innovative leader?

Finally, in order to shift responsibility from the team leader to the team member being evaluated, they set up a system where the person being evaluated identifies their strengths through a self-assessment tool and then shares those with other team members, the team lead, and the organization. They have found:

that if you want people to talk about how to do their best work in the near future, they need to talk often (page 48).

So their new system facilitates frequent conversation between team member and team lead about personal and professional strengths and progress towards goals. They designed these conversations to be simple, frequent (weekly), quick, and engaging.

As they have developed experience with their new system, there has been a shift in the question that drives their work: from “What is the simplest view of you?” to “What is the richest view of you?”

So unlike the evaluation systems being designed by state departments of education, or for that matter the evaluation systems that exist in almost all public and private schools, we should be designing systems that provide the richest view of our teachers. The richest view will not come from assigning 50% of the rating score to student achievement results. A teacher is a more complex professional than the results his or her students achieve on an imperfect standardized test that measures only a very small snapshot of what the student knows, understands, and can do.

As educators, we have to be bold, creative, and thoughtful as we attempt to co-create the systems that will be used to evaluate our work.  Our voice must be at the table in designing the process if it is going to succeed and fuel our improvement.  Some answers to our questions are right before our eyes in the processes used by other organizations.  Let’s learn from each other.




Fascinating Video on the Properties of Water & Propylene Glycol

March 23, 2015

Check out this fascinating video about the properties of water and propylene glycol and how the two interact in a mixture. As one vaporizes in the mixture, it influences the behavior of the drop and of an adjacent drop. This could be a very interesting inquiry activity for physical science, chemistry, or physics students.
The Physics of a Water Droplet

