AI and EAP: perspectives from BALEAP members, October 2023

Following the launch of ChatGPT in November 2022, more AI-based tools have become available to the general public, and discussion of the potential impact of AI on all aspects of society has broadened.

In the HE sector, students' use of AI tools in coursework is a particular source of interest (and concern?), and there has recently been much discussion on the BALEAP@jiscmail.ac.uk list about how we should understand the use of AI in an EAP context, and how we should respond. This post attempts to summarise the key themes and questions raised in that discussion, as a starting point for further discussion and research. I have focused on emails to the thread during the period 6-13 October 2023.


Student use of AI to assist in the learning process

All contributors to the thread either stated explicitly or implied that there is scope for students to use generative AI to develop their language and academic skills, and that there is a role for EAP specialists in advising them what AI can do for them, and what it can’t.


Impact of AI on the EAP curriculum

Many contributors thought that we have reached a point at which we need to radically re-evaluate the EAP curriculum.

Some thought there was an opportunity to spend less time on developing writing skills (because AI either has great potential to help students develop those skills, or will simply do the writing for them), and more time on oracy. Others thought we might spend less time teaching language skills and knowledge, freeing up time to work on academic skills such as critical thinking and genre analysis.

Most contributors suggested that we need to give our students guidance on using AI. In some institutions the use of AI has been integrated into the curriculum as a component of digital literacy, with a focus on critically evaluating AI tools and their output, and on using these tools in a way that is consistent with institutional policy on academic integrity.

Do we need to radically overhaul our EAP curricula in the light of developments in AI?

If so, what can be taken out (or dealt with in less detail), and what needs to be added?

 

AI, EAP and academia

Many contributors pointed out that concerns about the use of AI by students extend far beyond EAP, and that the questions raised by EAP specialists are very similar to those raised at different levels across the educational sector. 

Whatever we advise our students needs to be consistent with institutional policy; ideally EAP specialists should be part of teams shaping institutional and sectoral responses to the AI phenomenon.

In addition, whatever we feel about the future need to teach language skills, there are external stakeholders who may have different views. UKVI sets minimum language level requirements for EAP students who need a visa to study in the UK. I am not aware of UKVI having any policy on the use of AI tools, and I assume that at present the use of AI tools on SELTs (Secure English Language Tests) is proscribed.

On a more positive note, some contributors pointed out that AI may help remove some of the barriers facing students with weaker language skills, for example by correcting language errors in writing, or by identifying areas in which students consistently make errors. This potentially offers a more level playing field for the kind of students we teach, compared to other students.

How do we ensure any changes to our curricula and assessment, and the advice on the use of AI that we give to students, are consistent with institutional and sectoral policy, and with constraints placed by other stakeholders?

How do we ensure that EAP voices are heard in the shaping of educational policy on the use of AI?

 

Student use of AI for assessed coursework

There is clearly a lot of concern that students can use generative AI tools to obtain summaries of texts and responses to writing tasks such as essays, and use these to produce coursework (i.e. assessed work that is not produced under exam conditions). It is becoming increasingly difficult to know how much of a submission is the student's own work, and how much has been generated by AI.

One response is to replace existing tasks with ones for which the use of AI is unlikely to be helpful, for example reflective writing tasks.

In some institutions EAP assessment has moved from coursework to exams, whether in-person or online, presumably because it is easier to police student use of AI. Policing is clearly easier in in-person exams than in online ones: tools such as lockdown browsers and remote invigilation help minimise academic misconduct in online exams, but they are far from failsafe. Even in in-person exams, students have claimed to find ingenious ways of using technology to help them.

How can assessments be designed so that they encourage academic integrity, and discourage academic misconduct? 

 

Are claims made for the effectiveness of AI (in its present state) in producing or supporting coursework overstated?

Several contributors discussed the limitations of AI, despite the hype that surrounds it. Since the initial launch of ChatGPT in 2022 there has been a growing realisation that (a) it is not as easy to use as we were given to understand, and (b) its value as an aid in the production of academic texts is limited.

Generative AI will produce text that looks good (a high level of accuracy and appropriacy in its use of language), but it is less able to express stance, or to show evidence of analysis or critical thinking. In addition, ChatGPT does not at present produce reliable references to support the points it generates.

How can claims for the effectiveness of AI, made by tech companies, students and academic staff be tested?

What research has been conducted into the use of AI to produce academic texts, and what do its findings tell us?

Going forward, does the EAP community have the resources to track relevant developments in AI, and to test the claims made for technological advances?

 

It would be useful to take forward some of the questions raised in the discussion, looking for answers in the existing literature while at the same time conducting research projects to test some of the claims made for AI in relation to language development, coursework and assessment.

Comments

  1. I have been following discussions on this on various channels. My personal stance is that the use of GenAI clashes with what we teach in EAP - accessing and processing dense texts by engaging purposefully with their content in order to glean specific information, summarise it and incorporate it into an argument. Some of these steps can be delegated to AI - finding sources by keyword, reading and summarising content. However, I strongly believe this will lead to deskilling and cognitive regress over time. For this reason, I do not think the use of AI in EAP should be encouraged where assignments involve research and work with sources. Of course there are a lot of issues surrounding students' use of it and academic misconduct, so the solutions that have been proposed to modify or change assessments seem feasible to me.
    Mirena Nalbantova

