GenAI: Using (Gen)AI for peer review and evaluations

Generative AI (GenAI) is increasingly becoming part of our daily lives, and (Gen)AI applications can be used for many different tasks. However, this does not mean that their use is always permitted, ethical or responsible. Are you allowed to use (Gen)AI to evaluate an article, research proposal or abstract of a conference contribution? And what if you think your article, research proposal or contribution was evaluated using (generative) AI?

As a general rule, it is not allowed to completely outsource a peer review or evaluation to a (Gen)AI tool. But what exactly does this mean?

Would you like more information about peer reviews in general? Read this Research Tip.

 

Ghent University offers access to the AI tool Microsoft Copilot Chat. 
For more information, please consult the following Research Tip: GenAI: Copilot Chat as an AI assistant at Ghent University (choosing data security).

 

As a reviewer/evaluator: Rules about the use of (Gen)AI for peer review and evaluations

The existing guidelines and information provided by Ghent University on the use of (Gen)AI emphasise that it is best not to enter privacy-sensitive, personal, copyright-protected or confidential information into (free) (Gen)AI tools without a binding protection guarantee or explicit and informed consent. This is the golden rule when using GenAI in order to protect the intellectual property and privacy of others (and yourself). Unpublished articles, research proposals or abstracts for conference contributions belong to this type of information and are shared in confidence.

For more information: Please consult the Ghent University guidelines on confidential information (in Dutch).  

The new European “Living guidelines on the responsible use of generative AI in research” (henceforth: Living Guidelines) explicitly state that researchers should…

“Refrain from using generative AI tools substantially in sensitive activities that could impact other researchers or organisations (for example peer review, evaluation of research proposals, etc). Avoiding the use of generative AI tools eliminates the potential risks of unfair treatment or assessment that may arise from these tools’ limitations (such as hallucinations and bias). Moreover, this will safeguard the original unpublished work of fellow researchers from potential exposure or inclusion in an AI model […].” (Living Guidelines, 2025)

According to the Living Guidelines, it is therefore not permitted to outsource the evaluation of papers, research proposals, conference contributions, etc. (in whole or in part) to (Gen)AI applications. On the one hand, there is a risk of incorrect or unfair assessment as a result of the limitations of the (Gen)AI application used, such as hallucinations or bias. On the other hand, outsourcing the evaluation would mean that unpublished work shared in confidence, which may contain privacy-sensitive, personal, copyright-protected or confidential information, is disclosed without the knowledge and consent of the author(s). The paper or research proposal could be used by the (Gen)AI application as training data and could therefore later be reproduced (in part) as output elsewhere.

Using (Gen)AI to evaluate an article or research application may give rise to a breach of privacy (under the GDPR), as personal data may be processed without a valid legal basis and may be disclosed in an unsafe manner. In addition, it may involve (1) copyright infringement, (2) a breach of confidentiality obligations, and (3) an infringement of intellectual property rights when copyrighted data is shared without authorisation.

Ghent University endorses ALLEA’s “European Code of Conduct for Research Integrity”. This code includes a section on good research practices, which also contains recommendations regarding assessments and evaluations. Researchers must “review and assess submissions for publication, funding, appointment, promotion, or reward in a transparent and justifiable manner, and disclose the use of AI and automated tools.” (ALLEA code, 2023). They must also respect the rights of authors and, consequently, seek permission from the author(s) “to make use of the ideas, data or interpretations presented” (ALLEA code, 2023). This also includes entering the file into a (Gen)AI application. Furthermore, the principles laid down in copyright law (Art. XI. 165 WER) are even stricter on this point than the rules on scientific integrity (ALLEA).

The abovementioned guidelines stipulate that it is not appropriate for reviewers or evaluators to enter a paper, research proposal or abstract into a (Gen)AI application (without explicit permission). However, this does not mean that all use of (Gen)AI during the peer review process is automatically excluded. Unless (Gen)AI use is explicitly prohibited by the journal, publisher, funding institution or conference organisers, it is, in principle, permitted to use (Gen)AI, for example for language correction of your review, provided that you do not share any privacy-sensitive, personal, confidential or copyright-protected data from the paper, research proposal or abstract (even indirectly). What is definitely not acceptable is to enter a complete paper/research proposal/abstract into a (Gen)AI application (e.g. Copilot, ChatGPT, Gemini...) and ask it to write a review. This would mean outsourcing the evaluation, which is not permitted according to the above guidelines. Various journals, publishers and research funding institutions also explicitly prohibit any use of (Gen)AI by reviewers.

Summary: Outsourcing the evaluation of a paper, research proposal, or abstract to (Gen)AI is NOT permitted and constitutes a breach of scientific integrity.

 

Example: guidelines of a funding institution (FWO)

Not every journal, organisation or funding institution applies the same rules. So make sure you are well informed about what the rules are. For example, the FWO prohibits the use of (Gen)AI in the evaluation process (see Article 21 (§6) and Article 24 (§2 and §4) in their guidelines).

 

As an author: What if you suspect that your paper, research proposal or abstract has been evaluated using (Gen)AI?

As explained above, outsourcing the assessment of a paper, research proposal or conference contribution entirely to generative AI is not permitted, and substantial use must also be transparently disclosed. Using (Gen)AI when writing a peer review may constitute a breach of scientific integrity, a copyright infringement, and a violation of confidentiality obligations and intellectual property rights. If you suspect that your assessment or evaluation was generated by (Gen)AI, you should verify this (to the extent possible) and report it to the appropriate authorities.

How can you recognise a peer review or evaluation written by (Gen)AI?

Recognising (Gen)AI-generated content is not yet an exact science, and (Gen)AI detectors regularly miss the mark. Nevertheless, there are a few things that can give away the use of (Gen)AI. This list is not exhaustive, but it can serve as a useful tool:

- The peer review recommends referring to papers/publications that...

  1. … are unrelated to the subject of the proposal/paper.

  2. … do not appear to exist (and were therefore hallucinated).

- The assessment or evaluation contains a sentence (or part of a sentence) that explicitly reveals that it was generated by a chatbot, for example, “I am very sorry, but I don't have access to real-time information as I am an AI language model” or “of course, I can generate a peer review for you”.

- The wording of the review sometimes seems nonsensical or suspiciously vague and contains few or no concrete references to your research proposal, abstract or paper (see also the findings of María Ángeles Oviedo-García):

  - “In abstract, the author should add more scientific findings.”

  - “Discuss the novelty and clear application of the work in the abstract as well as in introduction section.”

 

What to do about it?

If you have reasonable grounds to suspect that the assessment or evaluation of your research proposal, article or abstract was generated using (Gen)AI and that your rights as an author may have been infringed as a result, you should report this as soon as possible to two authorities:

1. Contact the editors involved (and possibly also the journal or publisher), the funding institution and/or the conference organiser and inform them of this.

2. In view of the provision in the ALLEA code, which defines the scope of the Commission for Research Integrity (CWI), you should always report complaints about irresponsible AI use to that body as well.

 

Need more information about (Gen)AI?

Quite a lot of information regarding the use of (generative) AI, from different perspectives and with different objectives, is already available at Ghent University.

More information regarding (Gen)AI and research?

More information regarding (Gen)AI and education?

Do you want to learn, experiment and practice?

Do you want to know which information students have access to?


Translated tips


Last modified 10 July 2025, 15:26