In my first teaching job, 16 years ago and a continent away, a prospective student walked into the school to do a placement test. He told a colleague what he wanted: he had to go to the USA to negotiate a business deal and wanted his English to be up to the task. He was leaving in three months; what did she think? She thought he would be best in a pre-intermediate (A2+) group, and that he should hire a translator. I don’t remember whether he stayed to do the classes or not…
This summer, the French footballer Antoine Griezmann, who plays for Spanish club Atletico Madrid, was surrounded by speculation over whether he was going to leave the club. At a press conference he refused to answer any question that was not in French, hoping to head off the Spanish media, only to be met by an enterprising reporter, his smartphone – and Google Translate. Clever, but Griezmann never did answer the question… You can watch the moment here.
Google Translate had a good summer in Russia. There were lots of stories of World Cup fans and journalists using their smartphones to communicate with locals and get by, and it felt like the technology had turned a corner. I have long been pessimistic about the future impact of this technology on language teaching, and I remain so, and yet… I don’t know whether that businessman from 16 years ago would have trusted a negotiation to Google Translate, just as I don’t know whether Griezmann would have answered the question if the journalist had asked it in French himself, rather than pressing play on his smartphone. But the technology has come a long way in a short time.
As is the way of these things, it turns out that my son goes to school with the daughter of a university professor specialising in AI and language. He’s also one of the top brass at a large software company (no, not that one), and we met at my son’s birthday party, managing to have a fascinating chat while the kids devoured crisps and threw brightly-coloured plastic balls at each other’s heads.
What I wanted to know was how AIs are taught language. That seemed to be the most interesting question – there might well be useful parallels with teaching humans, after all, or useful crossovers the other way. It turns out that this is not the case.
What I hadn’t realised is the extent to which machine translation is an exercise in brute force. Essentially, if you want to teach a machine language, you have to code every single word into the program. Words do not carry significance, meaning, or implication in this model – they are treated as data objects with a set of defining characteristics. The program then goes off and finds the word in the target language with the same set of defining characteristics, and the two are defined as a match. One of the reasons machine translation has got so much better is that the data sample the machines work from has grown enormously – they are no longer translating at word level, but at phrase or sentence level. There is so much language out there that all you need to do is feed it in and you can find equivalence in another language. An organisation like Google, which by now probably has access to pretty much anything ever written anywhere (including all those informal, colloquial text messages and chats you send through your Android phone), can give its AI tremendous amounts of data.
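To make that idea concrete, here is a toy sketch in Python of lookup-style, brute-force translation. Everything in it is invented for illustration – the phrase table, the French examples, the greedy matching – and it is emphatically not how Google Translate actually works; it just shows why matching longer chunks of text beats matching single words, and why more data means longer matches.

```python
# A toy sketch of "brute force" translation: look up ever-longer chunks of
# the source text in a table of known equivalences. The table below is
# invented for illustration; a real system would induce millions of entries
# from parallel corpora.

PHRASE_TABLE = {
    ("il", "fait", "froid"): ("it", "is", "cold"),  # phrase-level entry: enough data for the idiom
    ("il",): ("he",),                               # word-level entries: bare literal matches
    ("fait",): ("makes",),
    ("froid",): ("cold",),
    ("chaud",): ("hot",),
}

def translate(tokens):
    """Greedy longest-match lookup: always prefer the longest chunk we have data for."""
    out, i = [], 0
    while i < len(tokens):
        for length in range(len(tokens) - i, 0, -1):  # try the longest chunk first
            chunk = tuple(tokens[i:i + length])
            if chunk in PHRASE_TABLE:
                out.extend(PHRASE_TABLE[chunk])
                i += length
                break
        else:
            out.append(tokens[i])  # no data at all: pass the word through untouched
            i += 1
    return out

print(translate(["il", "fait", "froid"]))  # ['it', 'is', 'cold'] : the phrase-level match wins
print(translate(["il", "fait", "chaud"]))  # ['he', 'makes', 'hot'] : word-for-word failure
```

With a phrase-level entry the idiom survives; without one, “il fait chaud” (“it is hot”) collapses into word-by-word nonsense. Scaling up the table is exactly the kind of improvement that more data buys.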
But what this means, in short, is that machines and AI translators do not KNOW language. They know ABOUT language, and possibly not even that. I wonder whether the AI tags a structure like “was cooking” in “I was just cooking lunch, can I call you back?” as the past continuous. Is it helpful for the AI to know that? (Is it helpful for anyone to know that?) To judge by some of the student essays I see, Google Translate does not know much about false friends – a recent example from a student claimed that both Hitler and Mussolini “defended the totalitarian regime”: “defend” is a cognate of the Portuguese “defender”, but here “advocated” would have been the better translation. Such tools also fail to pick up on implicature or tone of voice. Think of the sentence “I’m so happy to see you” and imagine it being said in a number of different emotional registers – anger, joy, love, frustration. Not to mention things like sentence stress and its impact on meaning.
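You can see the “knows about” level in action with a part-of-speech tagger. Here is a quick sketch using the NLTK library in Python – my choice of tool and sentence, not anything the professor mentioned. The tagger dutifully labels the pieces of the past continuous without the faintest idea of what cooking lunch involves, or of how the sentence was said.

```python
# The "knows ABOUT language" level: grammatical labels without understanding.
import nltk

# One-off model downloads (resource names vary slightly across NLTK versions;
# newer versions may want "punkt_tab" and "averaged_perceptron_tagger_eng").
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("I was just cooking lunch, can I call you back?")
print(nltk.pos_tag(tokens))
# Output is roughly:
# [('I', 'PRP'), ('was', 'VBD'), ('just', 'RB'), ('cooking', 'VBG'),
#  ('lunch', 'NN'), (',', ','), ('can', 'MD'), ('I', 'PRP'),
#  ('call', 'VB'), ('you', 'PRP'), ('back', 'RB'), ('?', '.')]
# 'was' (VBD) + 'cooking' (VBG) is the past continuous, duly labelled, but
# the labels say nothing about tone, implicature, or sentence stress.
```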
It makes me wonder about the difference between “language” and “language-like behaviour”. You have probably all read a coursebook article about Alex the African Grey parrot, who lived for just over thirty years and eventually developed a vocabulary of over 100 words, being able both to answer and to ask simple questions. There is a lot of controversy over whether Alex actually understood, or whether he was simply picking up on external cues to guide a response. I remember working with a student who was doing homework from his regular school – essentially a grammar translation exercise – and performing it successfully (as I remember, it was a text on military tactics, for some reason), but who lacked the communicative ability to tell me what the text was about. In his case, he was able to produce language at a very basic communicative level, but to simulate understanding of language at a much higher level.
It seems to me that teaching language to AIs mirrors the development of language teaching methodology. Initially there was a lot of the grammar translation approach – words and grammatical patterns are treated as direct correlates of each other, and the translation occurs at the level of the pattern. This has proved somewhat ineffective, or at least not totally effective all of the time, and so it looks as if things have moved on to behaviourist methods and the stimulus-response approach. I can see the appeal of both from a programming perspective, because they must be, fundamentally, a lot easier to code. But both still end up with “language-like behaviour”.
All this is partly why the university professor told me that I would be safe in my job for a while yet. “Fifteen years?” I asked. “Probably about thirty,” he said. But it makes me wonder what it must be like to truly teach an AI language. Steven Pinker has cogently argued for the existence of an innate human ability for language, and in almost all language teaching there is already a linguistic framework in the student – the ability to map words to the concepts, ideas, and objects that we see in the world around us. If you were to truly teach an AI language, you would have to understand what that framework is and how to code it. I have no doubt that people are working on this now, but it seems an uphill task. Coding is a logical “if this, then that” process – language is rich with emotional content. Will an AI ever be capable of that?
***
Addendum: Since I originally wrote this, Phil Longwell has published a summary and review of the IATEFL LTSIG (Learning Technologies) webinar event “The Role of AI in English Language Teaching, Learning, and Assessment”. For a better idea of how AI is already affecting our profession, I highly recommend reading through it! You can find it here:
https://ltsig.iatefl.org/the-role-of-ai-in-english-language-teaching-learning-and-assessment/
Acknowledgement: This piece was sparked by “The Irreplaceable Human” by Rob Szabo, who also presented at the IATEFL LTSIG event mentioned above. I should say that Rob’s article has very little to do with mine, dealing with the equally interesting question of whether you can ever replace human teachers with computers or robotic teachers (aka “The Mitra Conundrum”) – but reading it reminded me of the conversation I had had previously. I would say, though, that if you are ever going to have a robot teacher, it will need to know a language, not just know about one.
Image credits:
Code Photo by Markus Spiske on Unsplash
Girl & Robot photo by Andy Kelly on Unsplash
***