
NEOMA's world


Over the past few weeks, ChatGPT, the generative AI, has provoked a lot of discussion ranging from fascination to concern, especially when it comes to assignments and exams. Alain Goudey, Chief Digital Officer, discusses whether to ban or allow the use of ChatGPT.


Regulating the use of ChatGPT

The first response is perhaps to prohibit the use of ChatGPT, as New York City schools have done.

Technically, blocking network access to the OpenAI.com domain (and to every other domain offering generative AI based on GPT-3) is possible. This solution is not particularly effective because it can easily be circumvented and, above all, it is virtually impossible to maintain: the phenomenon will certainly expand and entry points will multiply. Once Microsoft includes this tool in Office 365, the approach will become completely pointless. It is also possible to use proctoring, which only allows students to access authorised windows on their computer. Yet again, however, the tool is accessible via a mobile phone, so this approach is only moderately effective.
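The weakness of domain blocking can be sketched in a few lines. The domain names below are illustrative, not a real blocklist, and this is a toy suffix check, not an actual firewall rule:

```python
# Toy domain blocklist, with illustrative domain names, showing why
# suffix-based blocking is easy to circumvent: only known entry points
# are caught, and new front-ends appear faster than the list is updated.
BLOCKLIST = {"openai.com"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("chat.openai.com"))         # True: the known entry point is caught
print(is_blocked("some-gpt-mirror.example")) # False: a new front-end slips through
```

Every new wrapper site or integrated product (such as the Office 365 scenario above) is a fresh hostname the list does not know about.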

The second possibility is to move assignments or exams back to the classroom, where they are handwritten and the use of connected devices can be monitored (thereby avoiding any cheating during an exam). This simple technique is very effective in limiting the use of ChatGPT.

A clear position on the use of generative AI needs to be outlined: what is and what is not permitted and in what context.

However, I think it is more important to set out a clear position on the use of generative AIs like ChatGPT within the institution and during examinations: what is and what is not allowed, and in what context? Clearly defining what may and may not be done with this tool, and under what conditions, is the way to avoid AI-assisted plagiarism (or “Algiarism”). This implies giving students a clear policy in which the penalties for plagiarism (usually the same as for cheating) are detailed.

Should AI be authorised, it is equally important to provide a clear framework for its use: students should be asked to explicitly indicate any passages where AI has been used, and obviously not to simply put “ChatGPT” in the bibliography, which is about as precise as citing “Internet” as a bibliographic source! It might be a good idea to have students indicate the prompt(s) used for the passage(s) concerned and add the result provided by the tool as an appendix; this should be the minimum requirement for citing ChatGPT correctly.

Having students indicate the prompt used and the result provided by ChatGPT should be the minimum requirement.

 When ChatGPT is asked to give a clear (and restrictive) policy for use in university exams, this is the result:



And when asked for an authoritative and appropriate policy on the use of ChatGPT, the suggestions are:

I’m sure that each institution will write its own policy to guarantee academic integrity and, above all, proper training for students in today’s world.

In addition, the recommendations make it clear that more work must be done on the examination itself. That is the question this article turns to next!

Changing the form of the assignment or exam

Scary as it may be, there are a number of things ChatGPT cannot do. Working on the form of the assignment is therefore important, because there are formats to which the tool is simply unable to respond. A number of possibilities are listed below:

  • Have students write by hand: this seems basic, but it will force them to think a little about what they are writing (and it is much more tedious than copy-pasting).
  • Use MCQs, true-false questions or logic: it is very easy to mislead a tool that does not understand what it is writing. Any form of reasoning will therefore outsmart it very quickly. A trivial example here:

  • Use graphics, mind maps or any form of expression other than simple text: I personally use this technique to teach my students how to convey concepts with more impact by ignoring the PowerPoint format or long, uninteresting texts.
  • Use video: video is omnipresent in our students’ lives and they often have a real talent for storytelling. It has become a very simple, effective and pertinent means of expression. And it is highly useful for them to know how to express themselves intelligently with video!

Changing the assignment or exam to mislead ChatGPT (and make our students think)

As mentioned above, in many cases ChatGPT will have great difficulty answering with precision. This is because ChatGPT works by “predicting the next word” to construct the text, without understanding what has been written. When information is missing, it therefore invents the most plausible text by “ungraceful degradation”: not really a problem in a creative process, but a real one when it comes to answering a question or producing a synthesis. Thanks to Yves Caseau for this hilarious example of doctors and their unreasonable 24-minute working time (the tool mistook minutes for hours):
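The “predict the next word” mechanism can be illustrated with a toy bigram model. This is a drastic simplification of GPT-3 (which predicts over a huge vocabulary with a neural network), intended only to give an intuition of why the output is plausible rather than true:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predicts the next word as the one most often
# seen after the current word in a tiny corpus. ChatGPT does the same kind of
# thing at vastly larger scale, with no notion of whether the result is true.
corpus = "the doctor works long hours and the doctor works long shifts".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most plausible continuation, correct or not."""
    return following[word].most_common(1)[0][0]

print(predict_next("doctor"))  # → "works"
```

The model happily continues any prompt, even when the honest answer would be “I don’t know”, which is exactly the failure mode behind the 24-minute doctors.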

This shows that, outside creative uses, ChatGPT only works well as a cognitive aid on broad questions, where its training data is likely to be pertinent. Any specific question (with little data behind it), or any question requiring logic or original analysis, will render the tool useless. Many possibilities therefore open up:

  • Apply the concepts to a specific case: by design, if the case is specific and “not well-known”, i.e. the GPT-3 corpus for it is weak, then the tool will not be much help. This forces students to apply the concept rather than memorise it.
  • More generally, any less mainstream or more recent topic (ChatGPT’s data only goes up to November 2021 for the moment) cannot be dealt with effectively by ChatGPT, due to its partial or total absence from the corpus.
  • Stop following a “one correct answer” approach in favour of one that requires individual reflection: for example, by having the students analyse the changes in versions of the documents submitted (if it is a dissertation), ask them to analyse a text/document from a number of angles with different viewpoints (and have them compare them), and work on the nuances of the points of view. This is more about reflecting on the response strategy than on the response itself.
  • Focus on skills rather than knowledge, focus on meaning rather than simple execution, focus on critical analysis rather than memorisation: is it better to know how to hammer in a nail or to know what a nail is? To define a nail or to ask why the nail? To propose relevant alternatives to the nail (the screw)? Or all of these simultaneously?
  • Develop strategies for monitoring, curation and reflection: once again, this means getting our students to develop genuine practices for collecting information and, above all, for analysing it. This skill will become essential in an era of continuous, automated content generation.


Following the assignment or exam: identifying and countering ChatGPT

There are many “anti-ChatGPT” tools available today, such as GPTZero, AI Content Detector, Giant Language Model Test Room, Opportunity and the OpenAI GPT-2 Output Detector. However, detection is still very complex and the tools are often wrong… The best placed is quite probably OpenAI itself, which will soon be able to sell both the tool and its antidote: only OpenAI has in-depth knowledge of the most common ChatGPT text and vocabulary structures, and only OpenAI is in a position to introduce some kind of “watermarking”, a genuine authentication signature for the content generated by its AI.
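To make the watermarking idea concrete, here is a minimal sketch of one scheme proposed in the research literature (not OpenAI’s actual, unpublished method): the generator is biased towards a “green” half of the vocabulary derived from a hash of the previous word, and the detector simply counts how many words land in their green list. The 50/50 split and the hash choice are assumptions for illustration:

```python
import hashlib

def is_green(previous: str, word: str) -> bool:
    """Pseudo-random 50/50 split of the vocabulary, keyed on the previous word."""
    digest = hashlib.sha256(f"{previous}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words falling in their context's 'green' list.
    Unwatermarked human text should score near 0.5; text from a generator
    biased towards green words would score significantly higher."""
    words = text.lower().split()
    hits = sum(is_green(prev, w) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```

The appeal of the approach is that detection needs only the secret hashing key, not access to the model, but it only works if the model provider builds the bias in at generation time.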

However, a human brain can quite easily identify content generated by ChatGPT: the sentence structures, the assertiveness and the organisation of the response are more or less regular and recurrent (especially if the initial prompt is basic), and there is a lack of tone or emphasis, a “mechanical” writing style. Other elements should also raise questions: the absence of spelling mistakes, constant grammatical accuracy, an all too regular writing style and similarity between different copies are all warning signs.
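That “all too regular writing style” can even be quantified crudely, for instance through the spread of sentence lengths (human writing tends to be burstier). This is a rough sketch of one weak signal, not a reliable detector:

```python
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Standard deviation of sentence length in words. A low value suggests a
    suspiciously even style; it is one weak signal among many, never proof."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0
```

A perfectly regular text scores 0.0, while varied human prose scores higher; real detectors combine many such statistical signals, which is why they are still often wrong.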

Another possibility is to raise the standard of an “average” assignment and to award a bonus to any expression of an original, contrasting or critical point of view… or to encourage “risk-taking”, “innovation” or true “creativity” in the answer and the form of the answer. As for the assessment, clarity concerning the authorised or non-authorised use of the tool is a must.

Using the tool for assignments or during exams?

ChatGPT can also be a great ally for teachers. For example, it is easy to create quizzes, MCQs and dictations using the OpenAI tool.
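In practice, this mostly comes down to writing a good, reusable prompt. The wording below is one possible template, not an official recipe; the function name and parameters are illustrative, and the resulting text can be pasted into ChatGPT or sent through the OpenAI API:

```python
def mcq_prompt(topic: str, n_questions: int = 5, n_options: int = 4) -> str:
    """Build a prompt asking a generative AI for a multiple-choice quiz.
    Illustrative template only; adjust the wording to your course."""
    return (
        f"Write {n_questions} multiple-choice questions on the topic "
        f"'{topic}'. For each question, give {n_options} answer options "
        f"labelled with letters, mark the correct answer, and add a "
        f"one-line explanation of why it is correct."
    )

print(mcq_prompt("consumer behaviour", n_questions=3))
```

Parameterising the prompt this way makes it easy to regenerate fresh quizzes per class or per topic, while keeping the format consistent.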

Beyond that, I’d suggest reversing roles and placing the student as “corrector” of ChatGPT’s answers, so they can think about the prompts and the elements or arguments provided by ChatGPT:

Here it is easy to ask for an illustration of ChatGPT’s propositions in certain passages of the text, for example to have dates checked or to ask for the origin of the text… This student-corrector approach is, I find, extremely relevant, because students are obliged to work with the different concepts they have learned, whilst also adopting a critical approach to the concepts and tools at their disposal.


In conclusion: ChatGPT and its impact on thinking


Like any tool or technological advance, generative AI will have an impact on our thinking, our way of seeing the world and our brain.


The use of ChatGPT could also have an impact on freedom of thought. On the one hand, ChatGPT can facilitate access to information and ideas by generating texts rapidly and efficiently. This can help users to explore new ideas and discover new perspectives.


On the other hand, the use of ChatGPT can also lead to a certain degree of dependence on automatic text generation (just look at current levels of usage whenever the servers go down) and can reduce users’ ability to think for themselves and formulate their own ideas. In addition, ChatGPT is trained on a vast corpus of textual data, so it is certain to reproduce the biases and stereotypes present in that data, which could limit the perspectives and ideas suggested by the model. In fact, it is an excellent tool for revealing dominant thinking (within a corpus of which not everything is known at this stage).

It is therefore important to use these generative AI tools in a mindful and critical way: they can help to explore new ideas, but they should not replace independent thought and personal reflection. Being aware of the limitations of these tools is the way to avoid falling into a statistically dominant pattern!

In the age of artificial intelligence, we need to cultivate human (and collective) intelligence! Join the “AI Generations & Education” LinkedIn group, if these topics interest you.