Category Archives: Data

The three cycles of great teaching

So you want to be a great teacher? The key is to understand the learning and assessment cycle, and to know the three ways to use it.

Quick test: what’s wrong with this statement?

Teach a topic –> Assess the topic –> Feed back –> Start again.

Bog standard it may be, but it’s also poor practice: it assumes every student is ready to start in the same place. Instead, find out what they actually know first, and plan accordingly. Here’s one version of the learning cycle for a topic that we discuss when I deliver training sessions.

  1. Assess first. Assess the students’ prior understanding, prior attainment, and capacity to learn (e.g. work ethic, habits and attitude).
  2. Teach/Prompt. Provide appropriate instruction/tasks to do one or more of the following:
    1. fill gaps in ‘foundation’ knowledge,
    2. challenge misconceptions,
    3. present new knowledge,
    4. embed new knowledge and link it to other topics,
    5. give students the ability to self-assess,
    6. inspire/stretch students,
    7. improve capacity to learn.
  3. Assess again. Check the resulting level of attainment and check on misconceptions that may have arisen (or been uncovered).
  4. Provide feedback, and suggest the next appropriate task (step 2 again).

That may sound like quite a lot, but this cycle could be summarised as:

Assess –> Teach/prompt –> Assess again –> Feedback –> Start again…

The key to making this great teaching is to run this cycle over three separate timescales.

  • Within the lesson. Every lesson should contain mini cycles that start with assessment, or follow from a previous one. Any good questioning technique will help here. Cycles can encompass small tasks, break up larger ones, or run in conversation with students as they work on something more extended.
  • Between lessons. Use information gathered from marking exercise books, from homeworks and from online assessments to assess learning. Plan larger tasks or series of tasks for the next one to three lessons. Check the outcomes both within the class and also between lessons.
  • Long term. Use prior attainment data to assess learning (and current capacity to learn) when students start a new topic or course. Plan appropriate tasks to address the attainment. Use formal assessments or exams to compare students’ progress to other classes and to agreed standards. Using this information you can evaluate your teaching and locate/share good practice in your department. You can also plan bigger interventions to address low attainment and poor capacity to learn, and you can create extension tasks for high attainers.

This is the key to teacher greatness:

  • constantly evaluating the level of student learning
  • self-evaluating the effectiveness of your own teaching.

Use each learning cycle to adjust and improve your practice and make these adjustments:

  1. in the short term: within each lesson,
  2. in the medium term: between lessons, in lesson planning,
  3. in the long term: between topics/courses.

Of course all of this comes alongside confident behaviour management, strong interpersonal skills, outstanding organisation, deep subject knowledge, etc., but the heart of any lesson is the learning. Crack that, and you’re on your way.

Contact Informed Education if you would like a training session run at your school on using data and assessment for better teaching.

Making sense of predictions and targets

These days, schools are awash with targets, estimates, and predicted grades. Used well, they are a way to embed a common ambitious vision for each child. Used badly, they are a demotivating, self-fulfilling prophecy of underperformance.

It’s really important to understand the difference between these:

  • Target: “I would like you to aim for…” – a reasonably ambitious goal that stretches the student.
  • Prediction: “In my judgement you’re currently heading for…” – a professional opinion, based on evidence of assessment.
  • Estimate: “Similar students to you most commonly achieved…” – a statistically-generated grade based on previous exam results and/or measures of developed ability (similar to an IQ score).

This clarity matters in front of students, too. Start telling a student that you are predicting them a B grade, and some will hear that you don’t think they can achieve an A grade. It’s a veritable minefield, and one where you can easily push students into labelling themselves: i.e. “this grade tells me how clever I am”.

Here’s an example of the sort of language you might use with students (in the English education system).

Teacher: “Sarah, most students who got the same levels as you in their Key Stage 2 SATs went on to get a B in GCSE Maths. Some of them worked harder and got an A, and a few of them worked really hard and even got an A*. However, the ones who gave up easily in lessons got lower grades. You and I don’t know how hard you’ll work yet, but we should set a target to aim for.”

Some schools like to use chances graphs to help them explain this information.

This is a great way of showing students the grades that similar students achieved. It also beautifully illustrates the fact that people with similar starting points went on to achieve a huge variety of results. It is worth having some good discussions with students to get them to think about what factors caused someone ‘like them’ to end up with a U-grade, and what enabled some of the students ‘like them’ to get A-grades.
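Behind any chances graph is just a frequency count over historical outcomes. As a minimal sketch of the arithmetic (the grade distribution below is entirely hypothetical, not real national data), the percentages could be computed like this:

```python
from collections import Counter

# Hypothetical GCSE outcomes for students with a similar KS2 starting point.
outcomes = (["U"] * 2 + ["E"] * 3 + ["D"] * 8 + ["C"] * 20 +
            ["B"] * 38 + ["A"] * 22 + ["A*"] * 7)

def chances_graph(grades, order=("U", "E", "D", "C", "B", "A", "A*")):
    """Return {grade: percentage of students} for a simple text chances graph."""
    counts = Counter(grades)
    total = len(grades)
    return {g: round(100 * counts.get(g, 0) / total) for g in order}

# Print a crude horizontal bar chart of the chances.
for grade, pct in chances_graph(outcomes).items():
    print(f"{grade:>2} | {'#' * (pct // 2)} {pct}%")
```

The spread itself, from U to A*, is the talking point: the same starting point led to very different outcomes.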

This is really empowering language. It builds on Carol Dweck’s excellent work on fixed and growth mindsets, and ensures students stay focused on how they are learning, not just what they are learning.

All this work can very easily be destroyed by reverting to “I predict you will get a B”. It sounds like a done deal, like the teacher is saying this will happen in spite of your efforts. Of course, some students are very resilient and will carry on working regardless, but for others it can be a self-fulfilling prophecy.

There is a danger with targets, however. If you constantly use positive “you can achieve anything” language without referring to current work-rate then students develop unrealistic attitudes – there can be a disconnect between their goals and their current actions.

As with any good process management, students need to check their progress constantly with robust assessment and appraisal, and they need both to learn the tools and to develop the characteristics to deal with the inevitable situations where they underachieve.

Here’s an example of language that uses all three concepts: estimates, predictions, and targets.

“Sarah, we know that most similar students to you end up with B at GCSE [estimate], but some of them got an A, and we agreed earlier this year that you would aim for an A-grade [target].

My worry is that if you carry on working at your current level, based on your last pieces of work, you might currently be on course for a C-grade [prediction]. Why do you think this is, and what do you think you need to do to get yourself back on track for the A-grade you wanted?”

Some people may well wish to avoid the language of grades completely and focus more on specific skills, but the general principle is that the conversation should be:

  • realistic – based on current assessments
  • empowering – focuses on the student’s ability to improve and be in control of their success
  • optimistic – reinforcing the idea that people ‘like her’ have achieved their A-grade targets
  • specific – the discussion will then focus on specific measures to improve the situation, ideally including ways that both student and teacher can use to check improvement is happening

The language is the easy bit, of course, and by itself will achieve nothing. However it can keep the focus on the variously challenging, frustrating, and hopefully ultimately rewarding process of helping students improve.

I should add that, of course, not all classroom teaching and learning should be based around exams and grades – doing this exclusively will inevitably reduce motivation and engagement. However given the inevitable exam focus in most schools then this is quite a good way to approach it.

I’d be really interested to hear ideas of how to improve the above examples of dialogue, and for more ways to keep students pushing themselves.


Prompting discussion about improvement

Effective teaching is the hot topic at the moment, and with fantastic discussions such as those at purpose/ed and the interesting (although controversial) Measures of Effective Teaching project from the Bill & Melinda Gates Foundation, there’s a lot to come.

I’ve been asked to develop my own tool to encourage some really good discussion and collaboration between colleagues, prompting a good hard look at the ways we teach, and what is going on in our classrooms. This is for a pilot project with a teacher training organisation.

So what do you think is a good set of data to prompt that discussion? I need your help! My initial thoughts are:

  • How much students’ test scores have improved (from initial formative assessment to final summative test)
  • Student levels/grades compared to target grades (based on prior attainment)
  • Student enjoyment survey/ratings/opinions
  • Teacher enjoyment survey/ratings/opinions (including assessments of behaviour etc)
  • A small portfolio of linked work that the class is particularly proud of
  • Student self-assessment of how much independent learning went on – how would they rate their ability to improve in this topic without further assistance?

I don’t think this is exhaustive, and I certainly don’t think you’d measure all of these for every topic. However, a selection of these different approaches would prompt some very interesting discussion, and feed back nicely into upgrading schemes of work and resources for the next time it is taught.
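For the first of those measures, raw mark gains are hard to compare across students with different starting points; a normalised gain (the fraction of available headroom actually gained, as used in physics education research) is one common fix. A minimal sketch with made-up scores:

```python
def normalised_gain(pre, post, max_score=100):
    """Normalised gain: share of the available headroom (max_score - pre) gained."""
    if pre >= max_score:
        return 0.0  # no headroom left to gain
    return (post - pre) / (max_score - pre)

# Hypothetical (formative %, summative %) pairs for three students.
scores = [(40, 70), (60, 75), (80, 92)]
print([round(normalised_gain(pre, post), 2) for pre, post in scores])
```

Note that the high attainer’s 12-mark gain (80 to 92) is the largest normalised gain of the three, which a raw-difference measure would hide.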

What do you think?

UK benchmarking

There seem to be so many people talking about school benchmarking in the USA, but not so much in the UK. I have a feeling that it would be a good idea to have some sort of meet-up to start setting the agenda in this area – in order to make sure the focus stays on collaboration, sharing, and mixed data sources, instead of competition, league-tables, and a results-only blinkered view.

I think that a decent system of benchmarking in different areas such as finance, staffing, pupil satisfaction, results, teacher satisfaction, leadership, governance, etc. would start some fantastic conversations between schools, encourage sharing of good practice, and foster a culture of reflection, review and collaboration.

Do you think this would be a good idea? Or does a forum like this already exist?

Resources I’ve found for benchmarking so far:

I’m also aware of some other large organisations looking into this area. What have you heard about?

Help wanted – any statisticians out there?

We know that a great teacher can make a huge difference to a class, and the whole debate today focuses on the way to get more great teachers, and allowing them to make even more impact.

The thing is, the data I’ve been looking at seems to suggest that the variation within any one teacher’s lessons is pretty huge compared to the variation between teachers. This would suggest that many teachers are capable of delivering some great lessons, but that most of them are inconsistent. If this is true, the focus could be much more on collaboration, on helping teachers to deliver more great lessons, and less on blunt “good teacher/bad teacher” labels.
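That claim can be framed as a standard variance decomposition: split the total variation in lesson quality into a between-teacher part and a within-teacher part. A minimal sketch, using invented lesson-observation scores (all names and numbers are purely illustrative):

```python
from statistics import mean

# Hypothetical lesson-quality scores (1-10), several lessons per teacher.
lessons = {
    "teacher_a": [5, 9, 4, 8, 6],
    "teacher_b": [6, 3, 9, 7, 5],
    "teacher_c": [7, 4, 6, 9, 5],
}

def variance_split(groups):
    """One-way ANOVA style split of the total sum of squares into
    between-group and within-group components."""
    grand = mean(x for scores in groups.values() for x in scores)
    between = sum(len(scores) * (mean(scores) - grand) ** 2
                  for scores in groups.values())
    within = sum((x - mean(scores)) ** 2
                 for scores in groups.values() for x in scores)
    return between, within

between, within = variance_split(lessons)
print(f"between-teacher SS: {between:.1f}, within-teacher SS: {within:.1f}")
```

With these made-up numbers the within-teacher sum of squares dwarfs the between-teacher one, which is exactly the pattern being described: each teacher’s lessons vary far more than the teachers differ from each other.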

Trouble is, despite being an A-level maths teacher, I wouldn’t say my statistics is top-research-quality, and I wondered if there was someone out there who might help me?

What happens when government looks at the long-term?

A few weeks ago I blogged that government needs to engage with long-term outcomes in education, similar to the long-term survival statistics that we see in the health sector. Thanks to @LeeDonaghy for pointing out a fascinating piece in the Guardian saying that they are now planning to do just that, by linking two government databases together and finding out the destinations of school-leavers.

It is brilliant that the government is taking up this agenda, and it should be welcomed. I can’t entirely understand why Christine Blower of the NUT said:

“I cannot see what relevance this information would be to government, except to use as yet another measure against which to judge schools”

Surely if the government are going to judge schools against something, then long-term outcomes are something that schools can be more proud of than raw exam results? Personally I’d love to see the extent to which schools are raising the aspirations of their students – i.e. comparing parents’ education and jobs with their children’s. A great school may be in an incredibly deprived area, and still doing very well by its students, who go on to be in better-paid jobs, more highly qualified, with happier lives than their parents.

So, this measure is a great start, but to go further we need to see it taking account of parents and local area. I would also like to see some ‘soft’ statistics such as confidence, health and happiness. I know that these would be controversial, but I for one would love to see government and schools working together to raise aspirations and produce students who are happy, self-confident, fit and healthy.

What do we really want from education?

I had a fascinating conversation this evening with Professor Philip Woods, chair in educational policy, leadership and democracy at the University of Hertfordshire, and Charles Weston, Director of Equity Research at Numis Securities. Philip is an expert in educational entrepreneurialism (amongst other things), and Charles is an analyst in private healthcare. Between us we analysed the emerging trend for private healthcare to move in to areas in which the NHS had a monopoly, and considered the implications for education.

Charles told us how a relatively small healthcare firm, Circle, beat the giant Serco to win the contract to run a poorly-performing NHS trust. Apparently they have already achieved amazing things, with improved throughput of patients, improved patient satisfaction, and improved staff satisfaction. The key? They gave the doctors control over decision-making, gave them 49% of the shares in the trust, and encouraged them to innovate.

In healthcare it is standard practice to measure success using short- and long-term outcomes, as well as intangibles like patient and staff satisfaction. Charles wondered whether such a system could be imported into education.

Philip then explained the extra complexities in defining outcomes in education. The key difficulty is getting people to agree on an answer to “what is good education?” There are so many conflicting ideas of what a great school looks like: everything from selective grammar schools producing students with lots of exam success through to Steiner schools producing extremely rounded individuals. He then gave us some details about Steiner education (having studied it in some depth), and we agreed that it would be difficult to produce measures of success within that system that could also be used with the grammar schools.

This thought-provoking conversation (which moved through a large number of similarly interesting topics) really made me think about what it is that makes some parents choose a Steiner education for their children. Is it because they are taking the long view? Do they essentially trust the school to manage the process so long as their children emerge as rounded, happy, confident and competent adults who can be successful in their lives?

I suspect, in many ways, that is what we are all asking for from education. The trouble is, we have become bogged down with the current measures. Exams were supposed to tell us how successful schools and students were at moving from unformed child to this ideal adult. Somehow, along the way, we now see the exam itself as the outcome, as the measure of success.

So what if we started measuring long-term outcomes? What do you think the results would be if we could devise a measure of happiness and success in adults (and how would we do it)? I think it would be revolutionary.

If the government could produce this measure, and scrap all others, then you could set schools and teachers free to produce the adults that we are all after. Tweaking measures of exams will make barely any difference at all. What we want are long-term measures, like the NHS. Where is education’s version of the 10-year survival rate?

English Baccalaureate

I am an eternal optimist, and as such I believe that there is some value to the new English Baccalaureate. However, this is only true if it is used to recognise achievement rather than highlight the lack of it. Many schools fear it will be used as another measure to brand them as failures, and are angry that the goalposts have shifted. This is just another cycle in the endless story of data and education.

For those who don’t know, the new UK governing coalition has introduced a new yardstick against which to measure school success. This is the proportion of students who, at the age of 16, achieve a grade C or higher in their exams in all of the following:

  • English
  • Mathematics
  • Science
  • A foreign language (modern or classical)
  • A humanity

Previously published measures included the proportion of students gaining 5 or more exams in any subject at grade C or above, and the number gaining 2 or 3 or more grade Cs plus the same standard in English and Maths.

Inherently there is nothing wrong with any of these measures. In fact they were all brought in to reflect various different ideas about what represents a ‘good’ outcome. I similarly believe there is nothing wrong with the new English Baccalaureate measure. Any new piece of data tells you something new, helps suggest areas of particularly good practice, and opens up new questions.

So how should these data be used? Ideally, as a package of measures, publicly available, along with contextual information, and an internal self-evaluation by the school. It is reasonable to be able to compare these measures against other schools, but only if equal weight is given to each piece of data. The culture set by the government should then be to highlight and recognise schools with particularly outstanding achievement and ethos and supportively challenge other schools to share, collaborate, and learn from these schools.

The concept of “awarding” the baccalaureate suggests that it should mark out particularly hard work and excellent achievement:

“If you get five GCSEs in those areas, I think you should be entitled to special recognition,” Gove said.

This was the excellent spirit in which the education secretary announced the new award in September in The Guardian. His aides followed with:

The education secretary was not seeking to tell pupils what exams to take, but the baccalaureate would be a way of rewarding those who took a wider range of subjects.

These are all positive statements. Unfortunately the spirit in which the English Bacc. was launched hasn’t entirely been maintained:

“We are publishing more information which shines a light on the last Government’s failure to give millions of children access to core academic knowledge in other subjects”
(Michael Gove in the Daily Mail, 8th January 2011)

Data should be used to highlight good practice and raise aspirations, but never for a witch-hunt. I would love our government to follow the classroom example and aim for a ratio of at least 5 pieces of praise to each negative statement made publicly. I genuinely believe that a culture of positivity and innovation, coupled with tough challenges and high aspirations, will make an order of magnitude more difference than league tables, criticism, and persecution.

Data is not the enemy of great teachers

There is a backlash against data in teaching. Here in the UK, and even more so in the USA, teachers seem to be reacting against efforts to put ever-greater weight on fairly narrow statistical measures by rejecting those measures completely. Two examples of this poor practice are:

  • Teachers in the USA being ranked on ‘effectiveness’ based on test scores alone.
  • UK schools being judged as failing if fewer than 30% of their students achieve 5 or more ‘good’ GCSE grades (an arbitrary threshold).

For me, great teachers and great schools will reject people arbitrarily taking one or two narrow measurements and then imposing their judgements. This is certainly the correct thing to do – imposing judgements about success or failure deskills, demotivates, and deprofessionalises teachers in the same way that it does with students.

However, a great, reflective teacher or school leader will be actively searching for ways to measure their own effectiveness, and will be searching out colleagues and schools who are demonstrating excellence. One aspect of these measurements will be test scores, and value-added. Other aspects will be student feedback, lesson observation comments, action research carried out, colleague recommendations, and so on.

Testing and analysis is just one tool in the effective educator’s arsenal. Currently it is chronically overemphasised, widely misunderstood, and misinterpreted. The key is to create a climate where everyone shares and supports, and everyone is looking to understand, reflect, and improve constantly.

Don’t throw the valuable “testing” baby out with the dirty “accountability extremism” bathwater.

Measures of effective teaching

The Bill & Melinda Gates Foundation is conducting research into measuring and improving the effectiveness of teachers. They recently released some early findings, which show:

  • First, in every grade and subject we studied, a teacher’s past success in raising student achievement on state tests (that is, his or her value-added) is one of the strongest predictors of his or her ability to do so again.
  • Second, the teachers with the highest value-added scores on state tests also tend to help students understand math concepts or demonstrate reading comprehension through writing.
  • Third, the average student knows effective teaching when he or she experiences it.
  • Fourth, valid feedback need not be limited to test scores alone. By combining different sources of data, it is possible to provide diagnostic, targeted feedback to teachers who are eager to improve.
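The “value-added” in the first finding can be illustrated with the simplest possible model: regress current scores on prior scores, and treat the residual above or below the fitted line as the value added. This is a deliberately naive sketch with invented numbers, not the MET project’s actual methodology:

```python
from statistics import mean

def value_added(prior, current):
    """Residuals from a least-squares fit of current scores on prior scores.
    A positive residual means scoring above what prior attainment predicts."""
    mp, mc = mean(prior), mean(current)
    slope = (sum((p - mp) * (c - mc) for p, c in zip(prior, current))
             / sum((p - mp) ** 2 for p in prior))
    intercept = mc - slope * mp
    return [c - (intercept + slope * p) for p, c in zip(prior, current)]

# Hypothetical prior-attainment and end-of-year scores for five students.
prior = [40, 55, 60, 70, 85]
current = [50, 58, 70, 72, 90]
print([round(r, 1) for r in value_added(prior, current)])
```

Real value-added models add cohort effects, measurement error, and multiple years of data, but the residual-from-expectation idea is the same.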

One of their key findings is that student feedback is critical. They have some very interesting tables showing the difference in responses between students in schools at the 25th percentile and students at the 75th percentile, and the results are compelling.

What are the key points here?

  • Create an environment where teachers are free to innovate and eager to improve, without fear of retribution.
  • Listen to the student voice – sample regularly and analyse the data, both at class level and school-wide.
  • Assessment data is a key element of showing effective teaching. Teachers who produce better achievement tend to score better on all measures of teacher performance.