Target Setting doesn’t have to be hard work….

Those of you who follow my tweets regularly will already know that I exist mainly on a diet of data, more data and the occasional large glass of Brancott Estate (if you haven’t yet had the pleasure of this delicious Marlborough, stop reading and get to Tesco).

As part of my job at SISRA, I am lucky to gain first-hand experience of many school environments, which makes it much easier for me to judge objectively, take snippets of the best practice and pass that information on to those who could benefit. Alongside the wonderful work being done up and down the country, I also see some practice which, frankly, keeps me awake at night. I often wonder how and why this variety exists within schools and, more importantly, how I, as one person, can contribute to driving the standard of data use in the right direction. If you’ve come this far then I don’t need to tell you that if the use of data isn’t contributing to teaching and learning, then we shouldn’t be doing it.

I recently visited an outstanding school to do some work with their middle leaders. I was greeted with trepidation as the staff filtered into the session, but I didn’t take it personally, and as we settled in it emerged that the staff hadn’t been briefed beforehand, which explained their suspicious faces. We began by looking at the most recent exam results, discussing attainment, progress and comparisons with the previous year, which all seemed to go well, and most staff were keen to follow the session, which I measured by the proportion simultaneously checking their emails when they thought I wasn’t looking.

All was well until I mentioned the T word… I was surprised and saddened to find that not one of these talented, passionate middle leaders could tell me how the targets for the pupils in their subjects had been set, let alone engage in open discussion about them. As a result there was a complete disconnect between the staff and their data, along with several other harmful consequences.

So, what did I recommend for this school?

1. Training staff to use estimates

Even as an expert, I sometimes feel overwhelmed by the amount and complexity of the data available to schools, so it’s vital that differentiated, relevant training is provided. At the very minimum, ALL staff should know the difference between an estimate, a prediction and a target (in that order), so in case you didn’t, or need a refresher, I have summarised them below:

  • Estimate – “Data says that pupils with your profile are most likely to achieve a B grade.” “40% of pupils with your profile achieve a B grade and 12% achieve an A.” Source: Key Stage 2 results, FFT Estimates, CATs indicators, MidYIS, internal baseline testing.
  • Prediction – “If your current poor efforts and attitude are maintained, I predict you will achieve a C grade.” “As you’ve been working so hard, you’re currently on course for that A!” Source: estimate + professional judgement (attitude, personal circumstances, personality).
  • Target – “It might be a challenge, but I really think you could aim for an A grade.” Source: prediction + teacher and pupil aspiration.

So, if we assume that a target requires a prediction, and to get that we require an estimate, we can start to look at the different types of estimates schools might want to consider.

  • Key Stage 2 results – personal opinions on the validity of these results aside, schools will be judged on the progress each pupil makes from their respective result, so it’s always a good idea to keep them in mind.
  • FFT Estimates – clue’s in the name, folks: these are NOT targets! FFT Estimates, in their various flavours, are probabilities based on Key Stage 2 data, national performance and historical school results.
  • CATs indicators – give standardised scores in verbal, non-verbal and quantitative tests, with 100 being the national average, and provide a +/- profile where there is a significant difference between performance in any of the three strands. Interesting comparisons can be made between these scores and KS2 results.
  • MidYIS – as with CATs, tests pupils in a variety of areas, then produces probability- and chance-style data.
  • Internal baselines – when done rigorously, internal baselines can provide valuable input for target setting, especially in subjects which are not taught at primary school.
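To make the chain above concrete, here is a deliberately minimal sketch of how an estimate, a prediction and a target relate to one another. It is purely illustrative: the grade scale, function names and adjustment rules are invented for this example, not part of any school system or product mentioned in this post.

```python
# Illustrative sketch only: a hypothetical model of the
# estimate -> prediction -> target chain. Grade scale and
# adjustment values are invented for illustration.

GRADES = ["E", "D", "C", "B", "A", "A*"]

def predict(estimate: str, judgement: int) -> str:
    """Prediction = estimate adjusted by professional judgement
    (e.g. -1 for poor attitude, +1 for sustained hard work)."""
    i = GRADES.index(estimate) + judgement
    return GRADES[max(0, min(len(GRADES) - 1, i))]  # clamp to the scale

def set_target(prediction: str, aspiration: int = 1) -> str:
    """Target = prediction stretched by teacher and pupil aspiration."""
    i = GRADES.index(prediction) + aspiration
    return GRADES[min(len(GRADES) - 1, i)]

# A pupil whose data profile points to a B, who has been working hard
# (+1 judgement), and who agrees a target one grade above the prediction:
estimate = "B"
prediction = predict(estimate, judgement=1)   # -> "A"
target = set_target(prediction)               # -> "A*"
print(estimate, prediction, target)
```

The point of the sketch is the ordering: the data produces the estimate, the teacher turns it into a prediction, and only then do teacher and pupil together turn that into a target.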

2. Making a prediction

To make a prediction, teachers should be able to use estimates, then have the freedom to add in their professional judgement and anything else not quantifiable (attitude, personality, etc.).

I have no objection to a skilled data manager or senior leader using the data in weird and wonderful ways to provide another layer of estimation to the target-setting process; in fact, I think that’s great when done properly. But estimates are not useful if staff cannot understand what they are and where they have come from.

Some schools may opt to measure staff performance against these predictions instead of the more aspirational targets set in conjunction with pupils.

3. Deciding on a target

Assuming that both an estimate and prediction have been made, a target can now be set. I strongly believe that pupils should be involved in this process in order for them to take ownership and to encourage accountability should their progress or attitude slip during the year.

One popular and powerful way to do this is to share chance probabilities with pupils, then discuss where they see themselves. Represented on a simple graph or as a table, looking at this data with subject teachers will enable targets to be truly aspirational and personal. Involving the pupils in this process also secures a verbal commitment from them, encouraging accountability.
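For anyone wondering what such a table might look like in practice, here is a small illustrative sketch that prints one pupil’s chance distribution with a cumulative column and a simple text bar. The percentages are invented for the example; they are not real FFT or CATs data.

```python
# Illustrative only: presenting a pupil's chance probabilities as a
# simple table for a target-setting conversation. The percentages
# are invented, not taken from any real estimates provider.

chances = {"A*": 5, "A": 12, "B": 40, "C": 30, "D": 10, "E": 3}

print("Grade | Chance | This grade or better")
cumulative = 0
for grade, pct in chances.items():
    cumulative += pct                 # running total, top grade downwards
    bar = "#" * (pct // 2)            # crude visual of the chance
    print(f"  {grade:<3} | {pct:>4}%  | {cumulative:>3}%  {bar}")
```

Laid out like this, a pupil can see at a glance that a B is the single most likely outcome but that a meaningful slice of pupils with the same profile achieve an A, which is exactly the conversation that makes a target aspirational rather than imposed.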


Progress Leaders and Heads of Year/House can then monitor progress towards these targets, identifying underperformance and putting interventions in place where necessary.

There are many good ways to set meaningful, challenging targets for pupils. No one method will suit every school, subject or pupil so it’s important to draw from as many places as possible before deciding on what’s right for your school. On the flip side, there are also many ways to disengage teachers, dissociate pupils from their own data, and create a culture of mistrust between teachers and senior leaders.

Targets are an essential ingredient in improving attainment and progress, and I’d like to see more schools devoting the time and energy needed to get them right!


4 thoughts on “Target Setting doesn’t have to be hard work….”

  1. Pingback: Performance Benchmarks – Alps | Teachsense

  2. Dave Chapman

    Hi, Thanks for the helpful article and I can see what you are saying about involving a pupil and teacher in deciding their targets however isn’t it all academic? For example if Katie’s external government provided data says she has to achieve a B to make 3LP from KS2, isn’t it foolhardy to allow this to be dropped down to a C by Katie and the teacher because they know Katie’s work ethic and trying to achieve a B might demotivate her? At the end of the day, we (the school) and her teacher will surely be judged against her ‘real’ targets from the government, not the adjusted ones by the school? Or have I misunderstood?

  3. Charlotte Post author

    Hi Dave, thanks for reading and replying!

    The first rule for me is that if a ‘target’ is set using a laptop (with no involvement from a human), then it is in fact an ‘estimate’ and not a ‘target’ at all. A target can be set for a pupil which has no relation to their estimate but takes into account other things we know about them, such as their circumstances at home, health, aptitude and passions. For every pupil like Katie who might have a target lower than her estimate, there would be another who has a target which is higher than the estimate provided by FFT, 3LOP or other.

    If we started measuring teacher and school performance in relation to a range of estimates, and left targets for pupils, that might be a good way to go about things. What are your thoughts on that kind of system?


  4. Dave Chapman

    Hi Charlotte, sorry for the delay in replying; I was appointed Assistant Head in charge of data shortly after writing my post, so my mind was elsewhere! Now I have to get my head around a lot of things, including target setting.
    After much subsequent reading, I agree with what you are saying. It seems that the best way is to do exactly that – separate ‘estimates’ from ‘targets’ and teach the HoDs the difference. Then we can judge success against the estimates, and compare predictions against estimates as a whole rather than against the targets of individual pupils.
    I’m struggling to get my head around the range of analysis tools available to schools. What is the difference between SISRA and FFT Aspire? We use SISRA at school but also have access to FFT Aspire. Is the main difference that, although both can analyse results, FFT Aspire can also set targets? Why use SISRA over FFT Aspire?
    Many thanks for your help.

