Guidelines
These guidelines are intended to provide timely advice to staff and students.
They will be reviewed and updated regularly, and will be developed into formal guidance. Microsoft 365 Copilot was used in the preparation of this document, to search for existing guidelines and to summarise texts, but no content generated by Copilot is included directly in the document.
Background
Artificial intelligence (AI) is the ability of machines to perform tasks that normally require human cognition, such as recognising speech, problem-solving, making decisions and translating languages. Many AI tools and resources are developed by ‘learning’ complex patterns and relationships from ‘training’ data.
An important subarea is Generative AI, which involves algorithms that generate new content including text, code, images and music, by training on very large datasets. A Large Language Model (LLM, for example ChatGPT) is a type of Generative AI system that has been trained on vast amounts of text data to generate plausible textual responses to prompts written in natural language.
Recent years have seen rapid advances in AI technology, which is already transforming how we live and work. In higher education, AI challenges existing models of teaching and learning, has become essential to conducting world-leading research in many fields, and has the potential to transform operational processes.
The University recognises the potential benefits and associated risks of embracing AI and is committed to promoting and supporting its responsible use.
Potential of AI
When used appropriately, AI technology has the potential to enhance many activities undertaken by University staff and students.
Examples include:
- summarising emails, documents and meetings;
- reviewing research literature;
- generating ideas;
- searching for information;
- planning and automating experiments;
- providing interactive and personalised learning;
- analysing and presenting data;
- supporting critical thinking;
- improving accessibility and inclusion;
- developing writing skills;
- planning teaching and learning activities;
- understanding complex concepts;
- making decisions;
- automating business processes;
- providing personal assistant support.
More detailed examples are given in Appendix 1.
Risks of AI
There are also inherent risks in using and developing AI tools and resources, particularly generative AI, that are important to recognise.
These include:
- ‘hallucinating’ facts and sources;
- violating the intellectual property rights of content owners;
- perpetuating ethnic, social, cultural and other biases present in the training data or underlying algorithms;
- repeating factual errors present in training data;
- misusing confidential and sensitive information;
- failing to understand and consider potential harms to individuals or society.
More detailed examples are given in Appendix 2.
Principles for appropriate use
AI technology is evolving rapidly and will pose particular challenges that are difficult to predict. There are, however, basic principles that should underpin any use or development of AI, particularly generative AI. The University has adopted the following core principles, building on existing frameworks for academic integrity and emerging guidance on AI.
All staff and students using or developing AI are personally responsible for adhering to these.
Transparency
Always make clear when and how you have used AI in a process or in producing an output, citing relevant details. Specifically:
- distinguish clearly your original contribution from that derived from AI tools and resources;
- cite the AI tools or resources you used and provide links to their documentation;
- keep detailed records to provide an audit trail for your use of AI tools and resources.
Accountability
Take responsibility for any outputs or outcomes resulting from your use of AI. Specifically:
- investigate the reliability of sources from which information has been drawn;
- apply critical thinking to verify the accuracy and reliability of outputs or outcomes;
- acknowledge primary sources appropriately.
Competence
If you use or develop AI, update your knowledge and skills regularly to ensure you are aware of its capabilities, limitations and risks, and are able to use it effectively. Specifically:
- follow the popular and, where appropriate, academic literature on AI;
- take advantage of training opportunities;
- seek opportunities to collaborate and share best practice.
Responsibility
Ensure your use of AI tools and resources is ethical, legal and fair. Specifically:
- avoid malicious, dishonest or harmful uses of AI and adhere to recognised ethical frameworks;
- understand and mitigate the risks of using AI, including embedded ethnic, social and cultural bias;
- avoid disclosing personal or sensitive information;
- respect the rights of copyright and intellectual property owners.
Respect
Ensure your use of AI tools and resources demonstrates respect for individuals, society and the environment. Specifically:
- respect the privacy, protect the confidentiality, and ensure the agency of individuals;
- consider and mitigate possible negative impacts on individuals or society;
- be aware of and mitigate negative impacts on the environment.
Applying the principles
The five core principles for appropriate use apply to all areas of University activity and should be used as the primary source of guidance for staff and students who use or develop AI tools and resources. In this section we show how these apply in specific contexts.
Use of AI and AI-Enabled Tools
Public vs enterprise services
Many generative AI tools (e.g. ChatGPT and Microsoft Copilot) are available as either free-to-use public services or paid-for enterprise services. Free-to-use public services often store user input, potentially disclosing the content to third parties. They should only be used with extreme caution due to the risk of disclosing sensitive or confidential information, which could violate our ethical and legal responsibilities, including those under UK GDPR and Export Control legislation.
The University has licensed enterprise versions of Microsoft Copilot that do not share information outside the University; these must be used whenever there is any risk of inappropriate disclosure.
Referencing the use of AI
There are currently no universally accepted rules for referencing the use of AI, but there is an emerging consensus around several principles. Where the use of AI is permitted and it has been used to create an output or outcome you could not have produced yourself (‘significant use’), you should:
- declare its use and explain the role it played in creating the output or outcome;
- include details of the AI tool(s) used and provide links to relevant documentation;
- for generative AI systems, document the queries used to generate the output;
- avoid listing AI tools as document authors, because being an author involves agency;
- treat any AI-generated content included in a document as a ‘private communication’.
For up-to-date information on referencing the use of AI, see the University Library Guidance on citation.
AI awareness
There is an increasing trend to include AI functionality in common software tools, in ways that may not always be obvious. This can create ambiguity regarding the need to declare the use of AI tools and runs the risk of inappropriate disclosure. In such cases it is your responsibility to take all reasonable steps to apply the core principles as rigorously as possible, noting the following:
- using AI simply to boost personal productivity does not constitute ‘significant use’;
- you must make others aware if an AI tool will be used to process their input (e.g. to a meeting);
- you should use University-approved tools wherever possible to avoid inappropriate disclosure;
- if you use other tools you are responsible for ensuring there is no inappropriate disclosure.
Teaching and Learning
University position
The University position is that, when used appropriately, AI tools have the potential to enhance teaching and learning and can support inclusivity and accessibility. Staff and students must treat output from AI systems in the same manner as work created by another person or persons: use it critically, within the terms of any applicable licence, and cite and acknowledge it appropriately.
Course unit variation
With approval at School level, this position may be broadened or narrowed for specific Course Units or assignments to encourage, require, or disallow specific uses of AI. In such cases students must be given detailed information that explains the rationale for the variation from the default position, as well as what is and is not allowed. Students may be required to sign up to a School-approved Code of Conduct.
Plagiarism
Work submitted by a student for assessment is expected to be their own original work and to provide an honest representation of their understanding of the subject. Students should declare any use of generative AI in preparing work for assessment and explain its role. Submitting work created by generative AI as their own, or using it to misrepresent their understanding of the subject, is plagiarism and will be dealt with in accordance with the University’s Academic Malpractice Procedure.
Proofreading
If a student uses an AI tool for proofreading work submitted for assessment, they should be mindful of the University Proofreading Policy and ensure that use of the tool does not result in substantive changes to the content or meaning of their work.
Access and choice
Where the use of AI is encouraged or required, the University must ensure students have equitable access to tools at no additional cost to themselves. Where the use of AI requires students to provide personal data to a third party, an alternative mechanism must be available that does not disadvantage any student who declines to provide such data. Students must be made aware of how third-party systems use their data, including their authored prompts and other uploaded content.
Detecting malpractice
Tools to detect AI-generated content are unreliable and biased and must not be used to identify academic malpractice in summative assessment. Output from such tools cannot currently be used as evidence of malpractice.
Research
University position
The University recognises the potential of AI to power research and innovation and encourages applications that adhere to the principles for appropriate use. It will seek to support appropriate use by providing equitable access to AI technology and training.
Data
Any use of AI to generate data should be completely transparent. Legitimate uses may include synthesising datasets for use in research on scarce or sensitive data, imputing missing data in real datasets or conducting research on generative AI. In such cases the distinction between real and synthesised data should be made clear, and the methods used should be detailed. Using AI to fabricate or manipulate data such as experimental measurements, interview texts or research images, without clear declaration, constitutes research misconduct.
Publication
The corresponding author of an academic publication carries ultimate responsibility for its accuracy, balance and transparency. It may be legitimate to use AI in preparing a manuscript for publication, for example, to review the literature or improve writing style, but the corresponding author remains responsible for all content, whether AI-generated material is used verbatim or paraphrased. Specifically the author should ensure:
- publishers’ guidelines on the use of AI-generated content are adhered to;
- any significant use of AI in preparing the manuscript is declared and properly referenced;
- all claims in the text are accurate and sources properly referenced;
- the selection of material is unbiased (e.g. in reviewing the literature);
- references used to support a claim or observation actually do so;
- the source of all content is properly acknowledged and referenced.
Reviewing
When reviewing papers for publication or applications for funding, AI should be used with extreme caution, and any use must be declared to the publisher or funder. Inappropriate use of AI can be considered research misconduct. Reviewers should avoid:
- breaching a duty of confidentiality by uploading parts of a document under review to an AI service, or writing prompts that contain confidential information;
- relying on AI tools when making reviewing decisions, rather than using their own judgment;
- using AI when it is specifically prohibited by the publisher or funder seeking the review.
Chatbots
There is growing use of AI-powered chatbots in research, for example to screen research participants or collect qualitative data. In such cases the use of AI should be made clear to participants. Where there is a need to elicit personal or sensitive information, it is not acceptable to use a public generative AI service, because doing so risks disclosing that information to third parties.
Students undertaking research
Doctoral students and others undertaking research as part of their studies are responsible for maintaining a high standard of academic integrity. They should apply the principles of appropriate use and follow the specific guidance for both teaching and learning and research. As they are relatively new to research, they may be at risk of using AI inappropriately. As part of their research training, they should discuss any use of AI with their academic supervisor on a regular basis, and supervisors should mentor them on appropriate use.
Appendices
Appendix 1: Potential of AI
When used appropriately, AI has many potential applications across the full spectrum of academic activity. Many of these will be discipline-specific, but the following is a non-exhaustive list of generic examples.
Personal assistant
Intelligent support for personal organisation including generating task reminders, summarising documents, meetings and email conversations, and searching for information.
Literature review
Finding relevant literature, identifying key publications, and providing summaries of the state-of-the-art in a field or sub-field.
Writing and editing
Real-time spelling and grammar correction, style suggestions, and text revision to improve clarity and coherence.
Accessibility and inclusion
Captioning and audio description, voice-to-text, text-to-voice, language translation and other assistive technologies.
Data analysis
Analysing data, finding patterns, generating insights, and creating graphs and charts to enhance the presentation and interpretation of research results.
Personalised learning
Creating personalised learning plans based on learners’ specific needs, and providing real-time tutoring.
Personalised feedback
Providing personalised feedback on work submitted for formative assessment to improve student experience.
Critical thinking
Encouraging critical thinking by challenging key points in a document, identifying gaps, and suggesting ways in which arguments could be strengthened.
Generating ideas
Generating initial drafts of academic papers, research proposals, business plans or policy documents as a foundation for further refinement and development.
Understanding complex ideas
Summarising key points, clarifying important concepts, and answering specific questions.
Planning T&L activities
Generating learning activity ideas, creating or enhancing lesson plans, and supporting the development of teaching and learning resources.
Personal research assistant
Suggesting relevant articles, identifying new avenues to explore, and summarising research papers.
Experiments and simulation
Optimising the design of experiments, automating experiments, and simulating complex systems.
Automating business processes
Providing tailored information or advice, automating decisions, and providing a concierge service for complex processes.
Appendix 2: Limitations and risks of using AI
There are inherent risks in using or developing AI systems, particularly generative AI. The following is a non-exhaustive list of some of the common challenges faced by users and developers.
Confidential information
Submitting confidential or sensitive information to an AI system may result in that information being unintentionally revealed to other users. Similarly, the output of generative AI systems may contain confidential or sensitive information that should not be shared.
Export control
When information about a research project is shared with a public AI system, there is a specific risk that this may violate export control regulations.
Accuracy and reliability
The accuracy and reliability of AI systems are only as good as the data on which they have been trained, and training sets often contain errors or have uncertain provenance.
Hallucination
Generative AI systems can make spurious connections between different data sources used in their training, leading them to fabricate ‘facts’ and misattribute sources.
Bias and stereotypes
Many generative AI systems are trained on data generated by humans – mainly from developed countries and self-selected ethnic and social groups. This means the outputs they generate are prone to ethnic, social and cultural bias and stereotyping.
Data cut-off
Generative AI systems are typically trained on a snapshot of data taken at a particular point in time, making them ‘blind’ to information generated since then.
User input
The quality and appropriateness of the outputs of generative AI systems depend on the ability of the user to provide appropriate, clear, concise prompts.
IP and copyright
Generative AI systems may generate content based on intellectual property or copyright material that the owners or creators have not licensed or consented to being used. Similarly, submitting copyright material, or material that otherwise embodies intellectual property, to a generative AI system is likely to infringe the owners’ rights.
Impeding the development of knowledge and skills
There is a real risk that students or staff who rely too heavily and/or non-critically on AI systems will fail to acquire key knowledge and skills that are crucial to their development.
Potential harms
Users and developers of AI systems may fail to foresee potential harms to individuals or society, and this is exacerbated by these systems’ lack of transparency or accountability.
Generic vs specialised tools
Users can be tempted to use generic generative AI tools where more specialised tools (possibly also using AI) are available, resulting in poor outcomes.
Appendix 3: Terminology
AI Agent
A computer program capable of performing tasks autonomously by making decisions based on its environment, inputs, and predefined goals.
Algorithm
A set of step-by-step instructions for solving a problem or performing a task.
Artificial General Intelligence
A type of AI that has the ability to understand, learn, and apply knowledge across a wide range of tasks, much like a human.
Artificial Intelligence (AI)
Technology that is designed to carry out tasks that typically require human intelligence to complete.
Data Abstraction
The process of hiding specific details and showing only essential information to simplify data handling and enhance efficiency.
Deep Learning
A subset of machine learning that uses multi-layered neural networks to learn complex patterns from large amounts of data.
Large Language Models (LLMs)
AI models trained to understand and use human language, capable of performing language-based tasks like text summarisation and answering questions.
Machine Learning (ML)
A subset of AI that focuses on enabling machines to learn and make decisions from data without being explicitly programmed.
Narrow AI
Artificial intelligence that is designed to perform a specific task or set of tasks within limited contexts.
Neural Networks
Computing systems inspired by the brain's network of neurons, designed to recognise patterns and learn from data.
Reinforcement Learning
A machine learning technique that develops strategies to perform tasks through trial and error, improving over time based on actions, observations, and rewards in a simulated or real environment.
Supervised Learning
A machine learning technique that learns from labelled examples in order to predict outcomes for new, unseen data.
Unsupervised Learning
A machine learning technique that discovers patterns and associations in data where no labels are provided.
Guidelines for staff and students using or developing AI (PDF version)
Version history:
Implementation date - 14 Feb 2025
Last edited - 14 Feb 2025
Owner - University AI Strategy Group
Date of next review - July 2025