The widespread availability of new Artificial Intelligence (AI) tools (such as ChatGPT) has given rise to debate about how they should be used in research. AI tools present opportunities for research and researchers, but there are also risks inherent in how they are used and credited in research outputs.

The Research and Innovation Committee (RIC) recently agreed that the University should align its position with that agreed by the major academic publishers, namely that:

  1. AI should not be credited as an author of a research output;
  2. For transparency, any use of AI in generating a research output, including, for example, the collection, analysis, and interpretation of data, should be cited in the methods or acknowledgements section of the output, as appropriate;
  3. Peer reviewers should be vigilant about the use of AI technologies and Large Language Models in the development of any research output they review, in particular in the collection, analysis, and interpretation of the data underpinning the output.

The University’s Publication and Authorship guidance has been updated to reflect this position.