Guidance on the use of generative Artificial Intelligence in PGR programmes
Generative Artificial Intelligence (AI) presents huge opportunities for you as a postgraduate researcher, but you must make sure that you use it appropriately.
Principles for the use of generative AI in PGR programmes
As a PGR:
- You are responsible for maintaining a critical oversight of your use of generative AI. You should be able to explain and justify any use you make of generative AI.
- You must be transparent about your use of generative AI. It will be assumed that any research you present or any work you submit for your programme is your own unless you acknowledge the use of generative AI (or any other forms of assistance).
- Your use of generative AI must align with the University’s expectations for responsible research. The research you present must be a product of your own effort (with transparency about any help you have received) and be authentic (not fabricated or falsified). Your research must meet legal and University expectations in terms of ethics, data management (including data protection) and intellectual property rights (including copyright).
- Your use of generative AI must align with the University's expectations for academic integrity. Any work you submit for your programme must be a product of your own effort and all sources must be fully acknowledged.
- Your use of generative AI must align with your development as a researcher within your academic community. Undertaking a PGR programme is about more than undertaking research: it is also about learning how to research (for example, how to undertake a comprehensive literature search, or how to analyse data), acquiring a wider knowledge of your chosen field (and understanding the limits of your knowledge), and learning how to behave within an academic community (for example, how to constructively critique the work of others).
The limitations of generative AI
To use generative AI properly, you need to be aware of its limitations.
- Generative AI can ignore intellectual property rights: it may produce outputs that do not include appropriate acknowledgement or that breach copyright.
- Generative AI use may encourage users to breach intellectual property rights: for example, by inputting/uploading articles, books or other material into a generative AI tool in contravention of the terms of use set by the publisher or other rights holder.
- Generative AI can be wrong: it may produce outputs that are incorrect or nonsensical.
- Generative AI can make things up: it may produce outputs that include ‘hallucinations’ - information (eg references, data) that is made up by the generative AI platform.
- Generative AI can exhibit bias: it may produce outputs that are biased or based on stereotypes, and may be weighted towards Western perspectives.
- Generative AI is not omniscient: its outputs may be missing key information or be outdated.
- Generative AI may be affected by the competency of the user: outputs are partly determined by the prompt that a user inputs, so an unclear prompt may produce an output that does not achieve what the user intends.
In addition, using generative AI tools has significant environmental and human costs and may create a digital divide.
Take all of this into account and use generative AI tools with caution.
Generative AI tools:
- use statistical prediction - they are not rational or critical and typically there is no evaluation or fact-checking of material
- draw on the information they are trained on and have access to (typically the internet), and this information:
  - may be wrong, flawed, limited (eg excluding information that is non-digital, not on the internet, or behind an internet paywall) or lack currency (some generative AI platforms only have access to information up to a certain date)
  - is likely to have been created by other people and may be subject to copyright or other intellectual property rules and laws
- rely on algorithms which may bias the outputs in a non-transparent manner
- may not provide information on the sources used
- are subject to user error.
The risks of using generative AI
As a PGR, you are responsible for maintaining a high standard of academic and research integrity and for adhering to the University's principles for the use of generative AI in PGR programmes. If you use, or misuse, generative AI in your PGR programme you may be committing academic or research misconduct:
- Plagiarism: for example, if you use a generative AI output (eg text, image, idea) without sufficient acknowledgement
- False authorship: for example, if you use generative AI to produce or adapt material (eg writing, code, images, data) that you present as your own
- Cheating: for example, if you use generative AI to provide support in an oral examination
- Fabrication or falsification: for example, if you use generative AI to create or manipulate/select eg data/images/consents/references for your research
- Misrepresentation: for example, if you present research produced by generative AI as your own.
- Breach of duty of care: for example, if via your use of generative AI you commit a breach of data protection rules, or share sensitive or confidential data.
Guidance on AI usage scenarios
Using generative AI wisely
Follow these steps to ensure that you're using generative AI wisely and adhering to the principles for the use of generative AI in PGR programmes:
- Learn about the range of generative AI tools that are available, their similarities and differences, the opportunities and limitations they present, their terms and conditions of use, and how to use them to the best advantage. Currently (June 2024), the University only holds a licence for Microsoft's Copilot generative AI software; other generative AI packages are not licensed, and therefore not supported, by the University.
- Share and discuss work in progress with your supervisor(s) and TAP.
- Discuss any planned use of generative AI with your supervisor(s) and TAP, gain their approval where necessary, and record the outcomes of your discussions on SkillsForge.
- Be aware that generative AI is increasingly being integrated into standard software packages (eg Copilot is integrated into Microsoft products), so that you do not find yourself using generative AI when you did not intend to. Consider turning off any integrated generative AI features in the software you use on your personal computer, and be particularly vigilant when using a non-personal (eg University) computer, where integrated generative AI features may be activated by default.
- Undertake the Research Integrity Tutorial (this is mandatory and should be completed before your first TAP meeting).
- Read and make sure you understand the relevant University policies:
  - PGR Academic Misconduct Policy
  - Policy on Transparency of Authorship in PGR Programmes (including generative AI, proofreading and translation)
  - Responsible research and research misconduct
  - Academic Misconduct Policy (this only applies if you are undertaking taught modules)
- Consider undertaking a BRIC course on research integrity. Discuss any queries you have with your supervisor(s) and TAP.
- Ask yourself whether using generative AI in your research is the right approach or whether an alternative might be more transparent or more in keeping with your development as a researcher.
- Ask yourself if your use of generative AI is replacing your own effort or simply replacing another technological tool - be particularly wary of the former.
- Check whether your funding body (if applicable) has a policy on generative AI use.
See the guidance on AI usage scenarios for more information.
It is recommended that you avoid using generative AI - in any way - in the production or delivery of formally assessed work (including, but not limited to, reports for formal reviews of progress, the thesis or alternative assessment format, and the oral examination) and in formative tasks (eg drafts of work or written updates for your supervisor, TAP reports, ethical approval forms) that may feed into formally assessed work.
This is because the boundary between acceptable and unacceptable practice may not always be clear, and it may be tempting and easy to move from acceptable to unacceptable practice.
If you and/or your supervisor(s) think there may be any ethical implications to your use of generative AI you must apply for ethical approval as soon as possible. If in any doubt, contact your ethics committee and ask for advice.
You are required to have a data management plan: use this to help you consider any risks associated with generative AI use. If you upload private, sensitive, confidential or embargoed data to a generative AI tool you are likely to breach privacy or intellectual property rules, or break a contractual or licensing agreement.
Keep good records of your use of generative AI.
Always acknowledge your use of generative AI and correctly cite it in any work you submit for your programme - see the advice below on correct referencing.
Evaluate the reliability, accuracy, currency and impartiality of any generative AI output. Check for any infringement of intellectual property rights (including copyright).
Keep good (full and, where applicable, time-stamped) records of your work in progress (including notes, calculations, data generated/collected, and report/thesis drafts) and, where relevant, save different copies of your work in progress rather than overwriting the same file. Be prepared to explain how your work was produced and how your thinking has evolved over time, and to provide evidence to back this up.
Generative AI is a fast moving area. Keep up to date with developments in the technology and guidance on how to use it. Engage with your fellow PGRs, members of staff, and experts in the field to share knowledge and good practice.
If you are planning to publish your research, determine the likely publishers and find out what their policies are on use of generative AI to make sure you are not setting yourself up for issues in the future.
If your research is focused directly on the use of generative AI (for example, exploring biases in generative AI tools or looking at the impact of generative AI tools on the propensity of students to commit academic misconduct) then it is possible that, in order to conduct your research, you may need some exemptions from the standard restrictions on the use of generative AI set out in this document. You must discuss any such scenarios with your supervisor and TAP as soon as possible and potentially obtain ethical approval.
Referencing the use of generative AI
You must correctly reference your use of generative AI.
It is recommended that you set up your own prompt/response repository so that you keep accurate records and reference generative AI use correctly. This could be a spreadsheet or other document in which you record the information you need for referencing alongside the response you receive.
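If you are comfortable with a little scripting, one lightweight way to build such a repository is a short script that appends each prompt/response pair, with the date, tool name and purpose, to a CSV file you can later open as a spreadsheet. This is only an illustrative sketch, not a University-prescribed format: the file name, column names and helper function below are all the author's own choices.

```python
import csv
from datetime import date
from pathlib import Path

# Illustrative file name - keep this file alongside your research records.
LOG_FILE = Path("genai_log.csv")

# Columns mirror the information the referencing guidance asks for:
# platform, date of use, who entered the prompt, and the nature of the use.
FIELDS = ["date", "tool", "user", "prompt", "response", "purpose"]


def log_interaction(tool: str, user: str, prompt: str,
                    response: str, purpose: str) -> None:
    """Append one prompt/response pair to the CSV log,
    creating the file with a header row if it does not yet exist."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "user": user,
            "prompt": prompt,
            "response": response,
            "purpose": purpose,
        })
```

Recording every interaction at the moment it happens, rather than reconstructing it later, makes it straightforward to supply the details needed for citation and to evidence your use of generative AI if asked.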
If you include or refer to any generative AI output in your work, you must reference it correctly and not present the generative AI output as your own work (eg text/image) or thought. Failure to reference generative AI outputs correctly is academic misconduct.
Current consensus is to treat a generative AI output as if it were private correspondence. This is because, like private correspondence, a generative AI output cannot be easily replicated and verified, and each prompt and response session is unique to you at that moment in time.
It is expected that references to generative AI outputs will include: (i) the name of the generative AI platform, (ii) the date of use, (iii) the person who input the prompt.
You should keep records of prompts used and the responses received from generative AI, even if you do not use these in your submission.
Advice on how to cite private correspondence within the various referencing styles is available from the Library.
For example, with Harvard referencing the in-text citation might look like this: (OpenAI ChatGPT, 2024). The corresponding reference might look like this: OpenAI ChatGPT (2024). ChatGPT response to YOUR NAME, 1 January 2024.
If you use generative AI in your work, you must, in most cases, acknowledge this: failure to do so is academic and/or research misconduct (depending on the type of use).
It is expected that references to generative AI use will include: (i) the name of the generative AI platform, (ii) the date of use, (iii) the nature of the use.
You should keep records of your use of generative AI.
This is covered in the Policy on Transparency of Authorship in PGR Programmes (including generative AI, proofreading and translation).
It is recommended that you use the ‘database’ format, with the addition of information about the specifics of the use.
For example:
OpenAI (2024). ChatGPT. Version X. [Online]. Available at: URL [Accessed 9 April 2024]. To translate the Welsh poem Rhyfel (out of copyright) into English.
Glossary
Artificial intelligence (AI): machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks (JISC).
Generative AI: an artificial intelligence (AI) technology that automatically generates content [text, images, video, music, code etc.] in response to prompts written in natural-language conversational interfaces (UNESCO).
Generative AI tools include (this list is not exclusive and is constantly growing): ChatGPT, Microsoft Copilot, Google Gemini, Anthropic’s Claude, Meta’s Llama, Midjourney, DALL-E, RunwayML, Pika, AIVA.
Large Language Model (LLM): a form of generative AI that is tailored for text-based tasks.
Responsible research: research that meets the University’s research integrity principles of honesty, rigour, transparency and open communication, care and respect, and accountability.
Acknowledgements
This website has drawn from guidance issued by the Department of Computer Science at the University of York. It has also been influenced by guidance issued by other universities, with a particular mention to that issued by the University of Manchester and the University of Glasgow.