The University has defined four classification levels: Public, Internal, Confidential and Secret.
The following use cases provide practical guidance and the reasoning behind why a classification is applied to a particular set of information.
Personal data of research participants, and information governed by contracts, confidentiality agreements or data sharing agreements, or where an assurance of confidentiality has been given or is expected, will usually be Confidential.
Likewise, commercially or strategically sensitive information, protected Intellectual Property, and 'keys' that allow the re-identification of participants or subjects whose identifying characteristics have been replaced with codes or pseudonyms will usually be Confidential.
Data that has been pseudonymised is still considered to be personal data and falls within the scope of data protection legislation. Pseudonymisation is a way of reducing risk and ensuring appropriate data security, for example in interview transcripts, but it does not transform personal data to the extent that data protection legislation and our other obligations no longer apply. In many cases, pseudonymisation will still allow re-identification through indirect methods or indirect identifiers. A de-identified transcript may therefore be Internal or Confidential, depending on the sensitivities involved and the risk of re-identification.
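As a minimal illustration of why pseudonymised data remains personal data, the sketch below replaces direct identifiers with random codes but retains a separate 'key' that allows re-identification. The file and column names are hypothetical, and this is only one possible approach to pseudonymising a dataset, not a prescribed method.

```python
import csv
import uuid

def pseudonymise(in_path: str, out_path: str, key_path: str) -> None:
    """Replace direct identifiers with random codes, retaining a re-identification key."""
    key = {}  # pseudonym -> original identifier: this is the 'key' referred to above
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["participant_code", "transcript"])
        writer.writeheader()
        for row in reader:
            code = uuid.uuid4().hex[:8]          # random replacement code
            key[code] = row["participant_name"]  # the link back to the person is retained
            writer.writerow({"participant_code": code, "transcript": row["transcript"]})
    # Because this key allows re-identification, it must be stored separately and
    # treated as Confidential; the pseudonymised output is still personal data.
    with open(key_path, "w", newline="") as kf:
        key_writer = csv.writer(kf)
        key_writer.writerow(["participant_code", "participant_name"])
        key_writer.writerows(key.items())

# Hypothetical usage: column names 'participant_name' and 'transcript' are assumptions.
pseudonymise("interviews.csv", "interviews_pseudonymised.csv", "reid_key.csv")
```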
If the data has been truly and irrevocably anonymised, is de-identified information that the participant has given informed and express consent to make publicly available, or is intended and prepared for public access, then it will be Public (as with publications and theses that are not embargoed).
Experimental project work and data analysis prior to publication will be Confidential, but may move to Public upon publication.
All information related to the delivery of research contracts with external organisations, where the work may potentially lead to Intellectual Property or where the contract requires results to be provided back to the external agency or organisation, will be Confidential.
All processing using the ‘Data Safe Haven’ or High Performance Computing facilities will be Confidential.
Social research that gathers personal information using services such as REDCap or Qualtrics will be Confidential.
Forensic analysis of any information (e.g. audio analysis) will be Confidential.
The long-term gathering and collation of information or specific data for health and welfare studies will be Confidential.
Engagement with Open Source coding activities will always be Public. Code developed internally for research with the intention of posting it to open GitHub repositories after publication will be Confidential until published, at which point it will become Public.
Code developed as part of Intellectual Property will be Confidential.
Exam question papers start life as Confidential, with limited and strictly controlled access before the exam. Once the exam has been held, they might become Internal (to the UoY and its students, to help protect intellectual property in module design and examination) or remain Confidential for a period of years where questions might be reused. They then become Public as their sensitivity declines over time and the question papers are archived.
As information relating to the health and wellbeing of individuals, constituting special category personal data under data protection legislation, fit notes and records about medical matters will always be Confidential.
Records and information relating to investigations, disciplinary matters or performance will be Confidential. These may be shared directly with data subjects in a form they request, or with external parties or representatives where agreement has been reached and an appropriate process is followed.
Minutes and action lists of University groups or forums will be Internal, with appropriate access controls applied on file stores or repositories. For example, Wiki pages may be accessible only to attendees or to members of a Department or Faculty. Documents and presentations provided to these meetings will carry at least the Internal classification but may be higher.
The use of AI in information preparation and analysis is becoming increasingly widespread, but the type of information provided when querying AI must be managed.
When preparing information that will be Public, the use of AI services for creation and analysis is appropriate and common AI services may be used. When the information is going to be Internal, you must use only a limited set of services, and you must also “Sign in with Google” and use your University account where possible.
When using AI to support the creation of Internal information, documentation or material, you must ensure that you are not providing any personal information (e.g. contact details from team lists) as part of the submission process. This is particularly important when using meeting recording or transcribing services. The topic of the meeting must also be considered: it is not appropriate to use transcription or note-taking services when personal matters, performance or circumstances may be discussed, or where topics may stray onto information that would be Confidential. Internally, all parties must agree to the use of AI services in meetings and must have the opportunity to review, comment on or amend the output.
You must not use any AI services to process or analyse Confidential or Secret information.
For additional support, contact: