Archive | assessment

Is it worse when bad students do well or when good students do badly?

5 Apr


This is a question that occurred to me in a frenzy of test marking that took place last month…  I’m interested to know what people think!

I’d appreciate your answer before you read on…

Having gone through fifty-odd tests and looked at the scores, there were quite a few surprises in there.  Students who are attentive and hard-working in the classroom scored very poorly, and of course the opposite happened too: students who do the bare minimum, who mess around and who lack focus confounded expectations by doing well.

So what does that tell us?

Mostly it tells me that labelling students as “good” or “bad” is not a particularly helpful activity – and in fact I know this already, having written about it before in “The Myth of the Good Student”.

It also tells me that behaviour does not equal learning, and that just because something looks like learning doesn’t mean it is learning.  I remember a student who, after being exhorted at a parent-teacher conference to try harder, started sitting up and writing stuff down more during lessons, and doing the coursebook exercises promptly and with reasonable efficiency.  Upon slightly closer monitoring, however, it turned out that he was literally just moving his pencil over the page in random squiggly lines and then putting it down and saying “finished” when there were enough other students doing the same to hide in amongst.  I’m not sure what he thought it would achieve, but it is probably a tactic that works in other contexts, where there are thirty-odd students in the class and the teacher doesn’t get beyond the first row very often.

There is a part of me, though, that believes effort, when it is made, should be rewarded.  When I see the “good students” who proactively write things down, try to do the activities and exercises properly, try to practise the language, and give every appearance of being bright, keen and engaged – when I see them fail or score poorly, it gets to me.  I want them to feel like the time and hard work they put in was FOR something.

The flip side of that of course, is that it ever so slightly annoys me when the students who don’t do any of the work and who muck about in the lessons just breeze through the tests without any apparent effort at all.  There are of course any number of reasons why they might behave the way they do, one of which might well be that they know it all and don’t need to make the effort because it is familiar ground to them.  Or they could be swans.  Effortlessly gliding on the surface whilst underneath they are paddling furiously – they might go home to parents who sit with them for an hour a day doing homework or extra reading….  You just don’t know.

So my answer to the question is that it’s probably worse when the good students do badly – but there is a third option which I didn’t put into the poll: when bad students do badly.  I think this is probably worst of all, because if the intention and motivation aren’t there, it’s very difficult to get them back on track again.  But let me know what you think…


5 Highly Effective Teaching Practices

21 Mar

Earlier this year, a piece from the Edutopia website was doing the rounds under the title “5 highly effective teaching practices”.  I automatically question pieces like this, as I somewhat doubt whether the purpose of the piece is actually to raise standards in the profession and develop teachers – or whether it is simply to get a bit more traffic to the website.  But perhaps I am being unnecessarily cynical?

To be fair, the practices the article suggests are generally quite effective:

  1. State the goals of the lesson and model tasks, so that students know what they are going to do and how to do it.
  2. Allow classroom discussion to encourage peer teaching.
  3. Provide a variety of feedback, both on an individual and a group basis.  Allow students to give feedback to the teacher.
  4. Use formative assessments (tests the students can learn from) as well as summative assessments (tests that evaluate student ability, mostly for reporting purposes).
  5. Develop student metacognitive strategies so that students can become more aware of their own learning and how to make it more effective.  (Learner Training)

What surprises me, though, is that these practices are considered novel enough to need putting into a blog post.  CELTA tutors may correct me (please do!) but aren’t these things considered essential on a CELTA?  Aren’t we meant to be doing all these things anyway?  I suppose it speaks to the audience of the Edutopia website, which is not primarily language teaching.  Maybe language teachers are just more responsive and methodologically up-to-date than mainstream education…


Image Credit: Pixabay

The piece is based on a book by John Hattie, who is the director of the Educational Research Institute at the University of Melbourne.  He has made his name in the field of meta-analysis, effectively taking in all the data from studies of educational attainment and examining it to see what patterns emerge.  In 2008 he published “Visible Learning”, which was based on 800 meta-analyses from around the world and which identified processes with positive and negative effects on classroom learning – such as the long summer break, a finding that has since led some schools to adjust their calendars, teaching over the summer and providing shorter terms punctuated by more frequent, shorter holidays.  In 2011 he published a follow-up book, “Visible Learning for Teachers”, which looks at the practical implications for teachers and suggests strategies they can employ to maximise the learning that takes place.  The article is based on this latter book.

I don’t disagree with the suggestions made in the article – but it made me wonder what my five “effective teaching practices” would be.  This is what I came up with.  I can’t claim that I do all these things all the time, but I generally try to do them most of the time!

(1)  Have fun.  Obviously, it doesn’t necessarily follow that if the teacher is having fun the students are too (“Dance, my minions!  Dance!”), but if you’re bored and pissed off then the students probably will be too.  You can’t be happy all the time, and some activities and goals are more serious than others, but fun should always be there, like a background harmonic.

(2) Make it valuable.  Time is precious to everybody, young and old.  We’ve all sat through too many meetings where we walked out at the end thinking “That was three hours of my life I won’t see again” – we should probably make sure our students don’t feel the same way about us!  Again, it’s difficult to achieve with ALL the students ALL the time, but possible with MOST of the students MOST of the time.  And if you have a student who is getting nothing out of it at all, then they’re in the wrong class.  I had a 15-year-old CPE student whose main school teacher used to send him to the library to look up words in the dictionary, because she felt it was a better use of his time than being in her class…

(3) Make them DO stuff.  We’ve been having this discussion in the forums on another post, and again, it isn’t something that you can do for every language point or for every skills focus, but my everlasting bugbear with the majority of materials is that the students aren’t required to be active producers of language, but are mostly treated as passive receptacles.  There is room for both aspects and indeed, both are needed to give the students time for input to become intake – but I don’t believe that is where it should stop.  Include tasks and activities that ask the learners to USE whatever language they have.

(4) Know your students.  This ties in a bit with the first two, because if you don’t know your students well, then you probably won’t have as much fun or make it as valuable as they need.  But essentially, as a teacher, your job is to help the students get from where they are to where they want to go.  If you don’t know them, you won’t know either of these things and won’t be able to help as effectively.  I am a big fan of needs analysis, in particular using Google Forms for this process, and I do this with all my adult and exam classes.  I haven’t yet come up with a decent needs analysis form for young learners, so if you have – please share!

(5) Teaching not testing.  The majority of summative testing or formal assessment is completely pointless.  The majority of tests don’t do what they are actually meant to do, which is measure what the students can do with the language.  Very often in a language-learning context, what they measure is the learner’s ability to manipulate a grammatical structure under controlled and guided conditions – so I do wonder what getting seven out of ten in an exercise where the student has to put the verb into the past continuous really tells us.  Given the choice I wouldn’t bother with it.  Including a test in a course makes a mockery of the idea of needs analysis and student-centred learning, because as soon as you put a test in the course, it becomes test-centred learning.  Alas, I am not in a position to follow through on this idea completely, mostly because other stakeholders in the process want to be able to reduce an abstract concept like “language ability” to an easy-to-read number on a report card.  But there we go.

So – these are my five!  I’d be interested to know what other people think.  It is a largely subjective process (despite all the research that John Hattie put into his ideas), so do feel free to share your thoughts in the comments section below.

Image credit: Pixabay


What is “good speaking”?

26 Jan

We are approaching the end of the first semester in our school and this is typically a time when we review our assessments, give out our grammar and vocabulary tests and write all the reports.  Like many schools, our reports contain the categories: Grammar & Vocabulary, Listening, Reading, Speaking, Writing.  The students do three assessments in reading, listening and writing that are spread out over the semester and then a larger grammar and vocabulary test at the end of the semester.  The marks for each component get converted to a score out of twenty and the scores for all five components are added together to give a percentage, which is the student’s final grade.
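The arithmetic described above can be sketched out in a few lines (a minimal Python sketch; the component names, marking scales and scores are invented for illustration, not real student data):

```python
# Sketch of the report-grade arithmetic: each component is averaged,
# converted to a score out of 20, and the five scores sum to a percentage.

def component_score(marks, max_mark):
    """Average a list of raw marks and scale the result to a score out of 20."""
    return sum(marks) / len(marks) / max_mark * 20

components = {
    "grammar_vocab": component_score([68], 100),     # one end-of-semester test
    "listening": component_score([14, 16, 15], 20),  # three assessments
    "reading": component_score([12, 17, 13], 20),
    "writing": component_score([11, 14, 12], 20),
    "speaking": component_score([15], 20),           # but assessed how, exactly?
}

# Five scores out of twenty add up to a mark out of 100, i.e. a percentage.
final_grade = sum(components.values())
print(round(final_grade, 1))
```

Notice that speaking still has to be reduced to a single number out of twenty for the sum to work – which is exactly where the trouble starts.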

The eagle-eyed amongst you may have spotted the problem here.

Assessing speaking is always difficult.  One of the biggest problems I always find is whether I am actually assessing their speaking or whether I am assessing their spoken production of grammar and vocabulary.  To what extent does personality play a part in this?  Susan Cain’s TED talk on “The Power of Introverts” reminds us that just because people aren’t saying something, doesn’t mean they can’t.

Rob Szabo and Pete Rutherford recently wrote an article arguing for a more nuanced approach.  In “Radar charts and communicative competence”, they argue that, as communicative competence is a composite of many different aspects, no student can simply be described as good or bad at speaking; rather, they have strengths and weaknesses within speaking.

Szabo and Rutherford identify six aspects of communicative competence (from Celce-Murcia) and diagram them as follows:

[Radar chart: the six aspects of communicative competence, plotted for two example students]

This is an enticing idea.  It builds up a much broader picture of speaking ability than the one often taken – a general, global impression of the student.  It also allows both the teacher and the student to focus on particular areas for improvement: in the diagram above, student 1 needs to develop their language system – it isn’t actually “speaking” that they have a problem with.  Equally, student 2 needs to build better coping strategies for when they don’t understand or when someone doesn’t understand them.  These things aren’t necessarily quick fixes, but they do allow for a much clearer focus in class input and feedback than just giving students more discussion practice.
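The profiling idea above can be made concrete in a few lines: score each competence separately (out of ten, as the article appears to do) and surface the weakest areas rather than one global speaking grade.  In this minimal sketch the six aspect labels follow Celce-Murcia’s model as cited in the article, but the scores and the “one point below the student’s own average” threshold are invented purely for illustration:

```python
# Hypothetical competence profile for one student: each of Celce-Murcia's
# six aspects of communicative competence scored separately out of 10.
student_1 = {
    "linguistic": 3,      # weak language system, as in the article's example
    "sociocultural": 7,
    "discourse": 6,
    "formulaic": 6,
    "interactional": 7,
    "strategic": 8,
}

# Flag aspects that fall well below the student's own average score,
# giving a concrete focus for class input and feedback.
average = sum(student_1.values()) / len(student_1)
focus_areas = sorted(a for a, s in student_1.items() if s < average - 1)
print(focus_areas)
```

The point of diagramming (or computing) the profile is the same either way: “work on your speaking” becomes “work on your language system”, which is something a lesson can actually target.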

From a business perspective, which is mostly where Szabo & Rutherford’s interests lie, there is also added value here for the employer or other stakeholders.  One of the points that David Graddol was making at the 2014 IATEFL conference (click here for video of the session) was that language ability rests on so many different dimensions that, in certain areas (Graddol mentioned India as an example), employers may well need someone with C1-level speaking ability, but it doesn’t matter whether they can read or write beyond A2.  Graddol kept his differentiation within the bounds of the CEFR and across skills; Szabo and Rutherford take a more micro-level approach and suggest that their level of analysis may well be useful to employers in assigning tasks and responsibilities.  Quite what the students may feel about that is another matter.

Whilst this idea has been developed in a business English context, it is a useful one that should also make the leap from the specialist to the general, as it has clear applications in a number of areas.  In reviewing the different competences, there are crossovers with the assessment categories used in Cambridge exams, for example – where discourse management, interactive communication, pronunciation and grammar & vocabulary have clear corollaries.  Diagramming pre-exam performance in this way can again help teachers and students build a clearer picture of what needs doing, and can make instruction more effective.

A helpful next step for the authors might be to think about how this idea can translate into practice in the wider world.  Currently, it seems as though a mark out of ten is awarded for each competence and, while this inevitably gets the teacher thinking in more detail about what exactly their student can or can’t do, no definitions are currently provided as to what a “10” or a “3” might mean.