Guidance on the use of generative Artificial Intelligence in PGR programmes

Generative Artificial Intelligence (AI) presents significant opportunities for you as a postgraduate researcher, but you must make sure that you use it appropriately.

Principles for the use of generative AI in PGR programmes

As a PGR:

  1. You are responsible for maintaining a critical oversight of your use of generative AI. You should be able to explain and justify any use you make of generative AI.  
  2. You must be transparent about your use of generative AI. It will be assumed that any research you present or any work you submit for your programme is your own unless you acknowledge the use of generative AI (or any other forms of assistance).
  3. Your use of generative AI must align with the University’s expectations for responsible research. The research you present must be a product of your own effort (with transparency about any help you have received) and be authentic (not fabricated or falsified). Your research must meet legal and University expectations in terms of ethics, data management (including data protection) and intellectual property rights (including copyright).  
  4. Your use of generative AI must align with the University's expectations for academic integrity. Any work you submit for your programme must be a product of your own effort and all sources must be fully acknowledged. 
  5. Your use of generative AI must align with your development as a researcher within your academic community. Undertaking a PGR programme is about more than undertaking research, it is also about learning how to research (for example, how to undertake a comprehensive literature search, or how to analyse data), acquiring a wider knowledge of your chosen field (and understanding the limits of your knowledge), and learning how to behave within an academic community (for example, how to constructively critique the work of others).

The limitations of generative AI

To use generative AI properly, you need to be aware of its limitations.

  • Generative AI can ignore intellectual property rights
    It may produce outputs that ignore intellectual property rights, for example by producing content that does not include appropriate acknowledgement or breaches copyright. 
  • Generative AI use may encourage users to breach intellectual property rights
    For example, by inputting or uploading articles, books or other material into a generative AI tool in contravention of the terms of use set by the publisher or other rights holder.
  • Generative AI can be wrong
    It may produce outputs that are incorrect or nonsensical.
  • Generative AI can make things up
    It may produce outputs that include ‘hallucinations’ - information (eg references, data) that is made up by the generative AI platform.
  • Generative AI can exhibit bias
    It may produce outputs that are biased or based on stereotypes, and may be weighted towards Western perspectives.
  • Generative AI is not omniscient 
    Its outputs may be missing key information or be outdated. 
  • Generative AI may be affected by the competency of the user 
    Outputs are partly determined by the prompt that a user inputs: an unclear prompt may produce an output that does not achieve what the user intends.

In addition, generative AI tools have significant environmental and human costs and may create a digital divide.

Take all of this into account and use generative AI tools with caution.

The risks of using generative AI 

As a PGR, you are responsible for maintaining a high standard of academic and research integrity and for adhering to the University's principles for the use of generative AI in PGR programmes. If you use, or misuse, generative AI in your PGR programme, you may be committing academic or research misconduct:

  • Plagiarism: for example, if you use a generative AI output (eg text, image, idea) without sufficient acknowledgement 
  • False authorship: for example, if you use generative AI to produce or adapt material (eg writing, code, images, data) that you present as your own
  • Cheating: for example, if you use generative AI to provide support in an oral examination  
  • Fabrication or falsification: for example, if you use generative AI to create, manipulate or select data, images, consents or references for your research
  • Misrepresentation: for example, if you present research produced by generative AI as your own.
  • Breach of duty of care: for example if, via your use of generative AI, you commit a breach of data protection rules, or share sensitive or confidential data.

Guidance on AI usage scenarios

Using generative AI wisely

Follow these steps to ensure that you're using generative AI wisely and adhering to the principles for the use of generative AI in PGR programmes:

Referencing the use of generative AI

You must correctly reference your use of generative AI.  

Glossary

Acknowledgements

This website has drawn from guidance issued by the Department of Computer Science at the University of York. It has also been influenced by guidance issued by other universities, with a particular mention to that issued by the University of Manchester and the University of Glasgow.