Tag Archives: assessment

What is “good speaking”?

26 Jan

We are approaching the end of the first semester in our school and this is typically a time when we review our assessments, give out our grammar and vocabulary tests and write all the reports.  Like many schools, our reports contain the categories: Grammar & Vocabulary, Listening, Reading, Speaking, Writing.  The students do three assessments in reading, listening and writing that are spread out over the semester and then a larger grammar and vocabulary test at the end of the semester.  The marks for each component get converted to a score out of twenty and the scores for all five components are added together to give a percentage, which is the student’s final grade.
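For anyone curious about the mechanics, the arithmetic described above can be sketched in a few lines of Python. This is purely illustrative: the component names and the helper functions are my own, not part of any real reporting system.

```python
# Sketch of the grade arithmetic: each of the five report components
# is converted to a score out of twenty, and the five scores are
# summed to give a final percentage.

COMPONENTS = ["grammar_vocab", "listening", "reading", "speaking", "writing"]

def to_score_out_of_20(mark, max_mark):
    """Convert a raw component mark to a score out of twenty."""
    return 20 * mark / max_mark

def final_grade(raw_marks):
    """raw_marks maps component name -> (mark, max_mark); returns a percentage."""
    return sum(to_score_out_of_20(mark, max_mark)
               for mark, max_mark in raw_marks.values())

# Example: a student scoring 75/100 in every component finishes on 75%.
marks = {component: (75, 100) for component in COMPONENTS}
print(final_grade(marks))  # → 75.0
```

The point to notice is that each component carries equal weight, twenty percent, regardless of how large or reliable the underlying assessment was.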

The eagle-eyed amongst you may have spotted the problem here.

Assessing speaking is always difficult.  One of the biggest problems I find is whether I am actually assessing their speaking or whether I am assessing their spoken production of grammar and vocabulary.  To what extent does personality play a part in this?  Susan Cain’s TED talk on “The Power of Introverts” reminds us that just because people aren’t saying something doesn’t mean they can’t.

Rob Szabo and Pete Rutherford recently wrote an article arguing for a more nuanced approach.  In “Radar charts and communicative competence”, they argue that as communicative competence is a composite of many different aspects, no student can simply be described as being good or bad at speaking, but that they have strengths and weaknesses within speaking.

Szabo and Rutherford identify six aspects of communicative competence (from Celce-Murcia) and diagram them as follows:

[Radar chart: two students’ profiles plotted across the six competences]

This is an enticing idea.  It builds up a much broader picture of speaking ability than the general, global impression of the student that is often taken.  It also allows both the teacher and the student to focus on particular areas for improvement:  in the diagram above, student 1 needs to develop their language system; it isn’t actually “speaking” that they have a problem with.  Equally, student 2 needs to build better coping strategies for when they don’t understand or when someone doesn’t understand them.  These things aren’t necessarily quick fixes, but they do allow for a much clearer focus in class input and feedback than just giving students more discussion practice.
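One way to see why this profile-based view is useful is to treat it as data rather than a single global mark. The sketch below is hypothetical: the competence labels follow Celce-Murcia’s model as cited in the article, but the scores, threshold and function names are invented for illustration.

```python
# Hypothetical sketch: hold a radar-chart profile as a mapping of
# competence -> score out of ten, and flag the weakest areas as the
# focus for class input and feedback.

CELCE_MURCIA_COMPETENCES = [
    "linguistic", "sociocultural", "discourse",
    "formulaic", "interactional", "strategic",
]

def weakest_areas(profile, threshold=5):
    """Return the competences scoring at or below the threshold."""
    return [comp for comp, score in profile.items() if score <= threshold]

# A profile like "student 2" above: a strong language system,
# but weak coping strategies when communication breaks down.
student_2 = {
    "linguistic": 8, "sociocultural": 7, "discourse": 7,
    "formulaic": 6, "interactional": 5, "strategic": 3,
}
print(weakest_areas(student_2))  # → ['interactional', 'strategic']
```

A single averaged speaking grade for this student would hide exactly the information a teacher needs: the profile makes the strategic gap visible.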

From a business perspective, which is mostly where Szabo & Rutherford’s interests lie, there is also added value here for the employer or other stakeholders.  One of the points that David Graddol made at the 2014 IATEFL conference (click here for video of the session) was that language ability rests on so many different dimensions that in certain areas (Graddol mentioned India as an example) employers may well need someone with C1-level speaking ability, while it doesn’t matter whether they can read or write beyond A2.  Graddol kept his differentiation within the bounds of the CEFR and across abilities; Szabo and Rutherford take a more micro-level approach, suggesting that this level of analysis may well be useful to employers in assigning tasks and responsibilities.  Quite what the students may feel about that is another matter.

Whilst this idea has been developed in a business English context, it is a useful one that should also make the leap from the specialist to the general, as it has clear applications in a number of areas.  In reviewing the different competences, there are crossovers with the assessment categories used in Cambridge exams, for example, where discourse management, interactive communication, pronunciation and grammar & vocabulary have clear counterparts.  Diagramming pre-exam performance in this way can again help teachers and students form a clearer picture of what needs doing and can make instruction more effective.

A helpful next step for the authors might be to think about how this idea can translate into practice in the wider world.  Currently, it seems as though a mark out of ten is awarded for each competence and while this inevitably gets the teacher thinking in more detail about what exactly their student can or can’t do, no definitions are currently provided as to what a “10” or a “3” might mean.


Tests, Tests and more Tests…

19 Feb

February seems to be all about tests.  All my classes have just done their mid-year grammar and vocabulary tests, my exam class students have just done a mock exam and are getting ready for the real thing in March, and already I’m preparing the next set of skills assessments for the continuous assessment programme.  Testing, it seems, is as inevitable as death or taxes.

Over on the Teaching English website, testing is one of the blogging themes this month and there is quite a range of posts on the topic:

My own post is called “To test or not to test – that is the question.”  In it, I look at the influence that tests have on education systems and the learning that students have to do in order to pass the tests, arguing that in many respects, the question of testing comes down to a battle between the system and the individual.

Ceri Jones offers an excellent example of negotiated assessment.  In “Assessment – negotiating exam formats”, she describes the experience of leading her learners to design their own assessment instruments, what would be tested and how, and reflects upon the success of the process.

Larry Ferlazzo looks at how to test your students and argues that test data should be used meaningfully, and that it shouldn’t just be a question of test and forget:  “Assessing English Language Learners”.

NinaMK asks us to think about “Testing and Assessment” from the perspective of the pros and cons of asking students to assess themselves and each other – and gives a stark example of what can happen when it all goes wrong!  Meanwhile, JVL Narasimha Rao offers a personal insight into “Assessment of and for learning”, drawing on his own experiences within the Indian state sector.

Finally, Rachel Boyce argues for informal assessment. In “Testing and assessment – give your students a security blanket”, she suggests that a blend of informal and formal assessment is the best way to keep learners on track and engaged in measuring their progress.

***

What strikes me about the six posts is the sheer range of testing purposes discussed.  To go back to my own post for a moment, and to think about the whys and wherefores of testing, it occurs to me that those on both sides of the testing debate tend to represent very black and white positions.  In testing, it seems, you are either for or against.

However, and as with many things, it seems the reality is infinitely more nuanced than that.  These posts demonstrate that not only are there many different ways to test – there are also very clear philosophies of testing emerging.  But that, perhaps, is another post!

Happy Testing!