ChatGPT or CheatGPT: how to manage assignments and exams?
Published on 02/13/2023
Over the past few weeks, ChatGPT, the generative AI, has provoked a lot of discussion, ranging from fascination to concern, especially when it comes to assignments and exams. Alain Goudey, Chief Digital Officer, discusses banning or allowing the use of ChatGPT.
Regulating the use of ChatGPT
The first response is perhaps to prohibit the use of ChatGPT, as New York City schools have done.
Technically, blocking network access to the OpenAI.com domain (and to all other domains offering generative AI based on GPT-3) is a possibility. This solution is not particularly effective, because it can be easily circumvented and, above all, it is virtually impossible to maintain: the phenomenon will certainly expand and entry points will multiply. Once this tool is built into Office 365 by Microsoft, the approach will become completely pointless. It is also possible to use a proctoring approach, which only allows students to access authorised windows on their computer. Yet again, however, the tool remains accessible via a mobile phone, so this approach is only relatively effective.
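As a toy illustration of why such a blocklist is fragile, here is a minimal sketch of the kind of hostname filtering a school proxy might apply. The `some-gpt-wrapper.example` domain is a hypothetical stand-in for the third-party front-ends that multiply around the official entry point:

```python
# Naive domain blocklist, as a school network filter might implement it.
BLOCKLIST = {"openai.com", "chat.openai.com"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or one of its parent domains is blocklisted."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("chat.openai.com"))           # True: the official entry point is caught
print(is_blocked("some-gpt-wrapper.example"))  # False: a third-party front-end slips through
```

Every new wrapper or mirror requires a new blocklist entry, which is exactly the maintenance problem described above.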
The second possibility is to move assignments or exams back to the classroom where they are handwritten and the use of connected devices can be monitored (thereby avoiding any cheating during an exam). This very simple technique is very effective in limiting the use of ChatGPT.
A clear position on the use of generative AI needs to be outlined: what is and what is not permitted and in what context.
However, I think it is more important to set out a clear position on the use of generative AIs like ChatGPT within the institution and during examinations: what is allowed, what is not, and in what context. Clearly defining the permitted uses of the tool, and under what conditions, is necessary to avoid AI-assisted plagiarism (or "AIgiarism"). This implies giving students a clear policy that details the penalties for plagiarism (usually the same as for cheating).
Should AI be authorised, it is equally important to provide a clear framework for its use: students should be asked to indicate explicitly any passages where AI has been used, and obviously not to simply put "ChatGPT" in the bibliography, which is about as precise as citing "Internet" as a bibliographic source! It might be a good idea to have students indicate the prompt(s) used for the passage(s) concerned and add the result provided by the tool as an appendix; this should be the minimum requirement for citing ChatGPT correctly.
Having students indicate the prompt used and the result provided by ChatGPT should be the minimum requirement.
When ChatGPT is asked to give a clear (and restrictive) policy for use in university exams, this is the result:
And when asked for an authoritative and appropriate policy on the use of ChatGPT, the suggestions are:
I am sure that each institution will write its own policy to guarantee academic integrity and, above all, to properly prepare students for today's world.
In addition, the recommendations make it clear that more work must be done on the examination itself, which is the question this article turns to next.
Changing the form of the assignment or exam
Scary as it may be, there are a number of things that ChatGPT cannot do. Working on the form of the assessment is therefore important, because there are formats to which the tool is simply not able to respond. A number of possibilities are listed below:
Changing the assignment or exam to mislead ChatGPT (and make our students think)
As mentioned above, in many cases ChatGPT will have great difficulty answering with precision. This is because ChatGPT works by "predicting the next word" to construct the text, without understanding what has been written. Therefore, when information is missing, it will invent the most plausible text by "ungraceful degradation", which is not really a problem in a creative process, but is one when it comes to answering a question or producing a synthesis. Thanks to Yves Caseau for this hilarious example of doctors and their unreasonable 24-minute working time (due to the tool mistaking minutes for hours):
This shows that, in non-creative cases, using ChatGPT skilfully as a cognitive aid means asking broad questions for which its training data is plentiful and pertinent. Any specific question (with little underlying data), or any question requiring logic or original analysis, will render the tool useless. Many possibilities therefore open up:
Following the assignment or exam: identifying and countering ChatGPT
There are many "anti-ChatGPT" tools available today, such as GPTZero, AI Content Detector, Giant Language Model Test Room, Opportunity and OpenAI's GPT-2 Output Detector. Detection remains very complex, however, and these tools are often wrong… The best placed is quite probably OpenAI itself, which may soon be able to sell both the tool and its antidote: only OpenAI has in-depth knowledge of the most common ChatGPT text and vocabulary structures, and only OpenAI is in a position to introduce some kind of "watermarking", a genuine authentication signature for the content generated by its AI.
However, a human brain can quite easily identify content generated by ChatGPT: the sentence structures, the assertive tone, and the overall organisation of the response are more or less regular and recurrent (especially if the initial prompt is basic), and there is a lack of tone or emphasis, a "mechanical" writing style. Other elements should also raise questions: the absence of spelling mistakes, constant grammatical accuracy, an all-too-regular writing style and similarity between different copies are all warning signs.
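One of these warning signs, an all-too-regular writing style, can be roughly approximated in code: the variance of sentence lengths in a very uniform text is low. This is a minimal sketch of one such signal, not a validated detector, and the example texts are invented for illustration:

```python
import re
from statistics import pvariance

def sentence_length_variance(text: str) -> float:
    """Variance of sentence lengths in words; low values suggest a very regular style."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pvariance(lengths) if len(lengths) > 1 else 0.0

regular = "The cat sat on the mat. The dog lay on the rug. The bird sat in the tree."
varied = "Yes. The committee, after much heated debate, finally approved the budget. No appeal."
print(sentence_length_variance(regular) < sentence_length_variance(varied))  # True
```

A real detector would combine many such signals (spelling errors, vocabulary, structure), and, as noted above, would still often be wrong.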
Another possibility is to raise the standard of an “average” assignment and to award a bonus to any expression of an original, contrasting or critical point of view… or to encourage “risk-taking”, “innovation” or true “creativity” in the answer and the form of the answer. As for the assessment, clarity concerning the authorised or non-authorised use of the tool is a must.
Using the tool for assignments or during exams?
ChatGPT can also be a great ally for teachers. For example, it is easy to create quizzes, MCQs and dictations using the OpenAI tool.
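As a sketch of this teacher-side use, here is how one might build an MCQ-generation prompt and parse a response returned in a fixed format. The format, the helper names and the sample response are illustrative assumptions; the actual call to the OpenAI API is omitted:

```python
def make_mcq_prompt(topic: str, n_questions: int) -> str:
    """Build a prompt asking for MCQs in a strict, machine-parseable format."""
    return (
        f"Write {n_questions} multiple-choice questions on '{topic}'.\n"
        "Format each question as:\n"
        "Q: <question>\nA) <option>\nB) <option>\nC) <option>\nAnswer: <letter>"
    )

def parse_mcqs(response: str) -> list:
    """Split a response in the format above into question dictionaries."""
    questions = []
    for block in response.strip().split("\n\n"):
        lines = block.strip().split("\n")
        if lines and lines[0].startswith("Q:"):
            questions.append({
                "question": lines[0][2:].strip(),
                "options": [l for l in lines[1:] if l[:2] in ("A)", "B)", "C)")],
                "answer": next((l[7:].strip() for l in lines if l.startswith("Answer:")), None),
            })
    return questions

# A hypothetical model response in the requested format:
sample = "Q: What is 2 + 2?\nA) 3\nB) 4\nC) 5\nAnswer: B"
print(parse_mcqs(sample)[0]["answer"])  # B
```

Asking for a strict output format is what makes the result usable in a quiz platform; without it, each generated quiz would need manual reformatting.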
Beyond that, I would suggest reversing the roles and placing the student as "corrector" of ChatGPT's answers, so they can think about the prompts and about the elements or arguments provided by ChatGPT:
Here it is easy to ask students to illustrate ChatGPT's propositions in certain passages of the text, for example, to check the dates, or to ask for the origin of a claim… This student-as-corrector approach is, I find, extremely relevant: students are obliged to work with the different concepts they have learned, whilst also adopting a critical approach to the concepts and tools at their disposal.
In conclusion: ChatGPT and its impact on thinking
Like any tool or technological advance, generative AI will have an impact on our thinking, our way of seeing the world and our brains.
The use of ChatGPT could also have an impact on freedom of thought. On the one hand, ChatGPT can facilitate access to information and ideas by generating texts rapidly and efficiently. This can help users to explore new ideas and discover new perspectives.
On the other hand, the use of ChatGPT can also lead to a certain degree of dependence on automatic text generation (just look at current levels of usage when servers are regularly down) and can reduce users' ability to think for themselves and formulate their own ideas. In addition, ChatGPT is trained on a vast corpus of textual data, so it is certain to reproduce the biases and stereotypes present in that data, which could limit the perspectives and ideas suggested by the model. In fact, it is an excellent tool for revealing dominant thinking (within the corpus, of which not everything is known at this stage).
It is therefore important to use these generative AI tools in a mindful and critical way: they can help to explore new ideas, but they should not replace independent thought and personal reflection. Being aware of their limitations is essential to avoid falling into a statistically dominant pattern!
In the age of artificial intelligence, we need to cultivate human (and collective) intelligence! Join the “AI Generations & Education” LinkedIn group, if these topics interest you.