FAQ
Basics
What is generative AI?
Generative AI refers to technical systems that can generate content - for example text, images, music or videos. The best-known example at present is ChatGPT: in response to an input (the so-called "prompt"), it generates suitable text in natural, human-sounding language. The term artificial intelligence (AI) has become established for models developed using various machine learning methods. These models are "trained" on large amounts of data in order to generate output on the basis of probabilities.
Source: Ruhr University Bochum - Centre for Science Didactics.
A for Chatbot Access
Does the University of Oldenburg provide its members with access to a chatbot (e.g. ChatGPT)?
Since Feb. 2025, UOL members have been able to use the Chat AI service. To do this, log in to the Academic Cloud with your university login details.
This provides access to various AI models (more precisely: large language models, LLMs), including various versions of OpenAI's ChatGPT and, most recently, DeepSeek.
This is currently the one exception to the rule that you are not allowed to log in to AI tools with your university email address.
Are AI-generated texts factually correct?
(Large) language models - such as ChatGPT - are essentially probability distributions: they calculate which word is likely to follow another in a given context. As a result, the output text is not always factually correct. For very general and well-known facts, the generated answers are usually valid, but the more specific the topic becomes, the more false content is generated (so-called "hallucinations"). Incorrect answers are also given in a linguistically plausible form, so they cannot be distinguished from correct answers at first glance and a critical examination is required.
Source: Ruhr University Bochum - Centre for Science Didactics.
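For illustration, here is a minimal sketch of the underlying principle of next-word prediction. It is a deliberate simplification and not how ChatGPT is actually built: the bigram model below simply counts word pairs in a tiny made-up corpus, whereas real systems use neural networks trained on vast text collections. The principle, however, is the same: the next word is sampled from a probability distribution conditioned on the context, which is also why fluent output need not be factually correct.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up "training corpus" (illustrative only).
corpus = (
    "the university library is open . "
    "the university cafeteria is open . "
    "the library is closed ."
).split()

# "Training": count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_distribution(word):
    """Relative frequencies of the words observed after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length=6):
    """Sample a plausible-sounding continuation, word by word."""
    words = [start]
    for _ in range(length):
        dist = next_word_distribution(words[-1])
        if not dist:  # no continuation observed in the training data
            break
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(next_word_distribution("is"))  # {'open': 0.67, 'closed': 0.33} (approx.)
print(generate("the"))               # e.g. "the library is open . the university"
```

The model can just as well produce "the library is closed" when the library is in fact open: the sentence is statistically plausible but unverified - which is exactly the hallucination problem described above.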
Rules & responsibilities
Where can I find information on the use of generative AI tools in teaching and for examinations at the University of Oldenburg?
The use of generative AI tools in teaching and examinations is determined by the respective examination regulations and by the lecturers.
Who decides whether AI-supported tools may be used in the course and for examinations?
Within the framework of good academic practice, lecturers can determine whether and to what extent AI tools may be used in teaching and examinations for their courses. The basic principles can be found in the relevant examination regulations and module handbooks.
Lecturers at the University of Oldenburg can use KIKO, the generative AI handout configurator, to communicate their requirements for the use of AI transparently. It will be available in a pilot phase from winter semester 2025/26.
Should the use of AI tools be prohibited?
A complete ban on generative AI generally appears neither feasible nor sensible, as AI tools are already being used in everyday private, academic and professional life. Accordingly, all university members should have the opportunity to learn how to use these tools in a reflective and critical manner.
However, this does not mean that there cannot be situations and contexts in which their use is prohibited. For example, the use of AI tools can be restricted or prohibited in the context of examinations for writing seminar papers, theses or comparable written formats. When making a decision, lecturers must consider the learning objectives of a course and how these can be achieved with or without AI applications.
If the use of AI applications is prohibited for certain tasks in teaching, this should be made clear (e.g. in a handout created with KIKO, the University of Oldenburg's AI handout configurator), as is customary for other permitted or non-permitted aids. AI tools are not generally permitted aids, as they can often replace the student's own performance in a task. However, lecturers can declare AI tools permissible within the framework of the applicable examination regulations and the principles of good academic writing.
What do I do as a student if teachers have not provided any information?
In principle, students must be informed at the beginning of the semester whether AI-supported tools are permitted; otherwise, the use of these tools is not permitted. In case of doubt or ambiguity, it is the students' responsibility (and in their own interest) to consult their lecturers.
Good academic practice & responsibility
U for Upload Guidelines
Which texts (and other files) can I upload to an AI chatbot?
A basic point first: since many AI providers include the content of prompts in their training data (see the respective terms of use), it matters which information is made available to the AI in this way - and this raises the question of which information (texts, data, files) may be made available at all. As far as uploading texts or other files is concerned, no legally binding answer can be given here.
As a non-binding tip, however, the following can be said (as of Jan. 2025):
If a text or other file is already publicly accessible via the Internet, e.g. made available on a website, then uploading it "to an AI" is possible, because the text is already publicly available and no additional damage to the originator is to be expected.
For the same reason, files that are stored behind a paywall or are otherwise protected may not be uploaded, because they are not generally accessible. This can also apply to titles made available via a library, as they may originate from a subscription and are therefore not freely usable. The same applies to literature or materials provided, for example, via a university seminar: they are not automatically usable for AI.
As a general rule, texts (and other formats) may be uploaded if permission has been granted for their use. Permission is granted if
- the material is in the public domain,
- it is your own material,
- the material is under an Open Access or OER licence (such as CC0, CC BY and CC BY-SA) or
- explicit and verifiable permission has been granted by the originator(s).
CC BY and CC BY-SA are in fact a grey area. Opinions differ here, but in the case of OER one can be a little more generous, as these materials are created precisely for sharing and re-use.
Note: You can find out more about CC licences and OER at www.uol.de/oer
The situation is different for locally operated AI applications: if you can ensure at all times that the data cannot be viewed or processed by unauthorised third parties, then processing the files may be possible. If in doubt, always ask (e.g. the operator of the AI, the originator of the data, etc.).
What do I need to look out for when using AI tools?
It is important that you follow the principles of good academic practice and comply with examination regulations and other instructions from your lecturers (e.g. in the form of a handout) on the use and labelling of AI tools or AI output. Do not adopt AI output unreflectively; check it for relevance, (scientific) correctness and bias (i.e. the reproduction of prejudices and the associated disadvantaging of certain groups) before using it in your own work.
What is changing for students when writing academic papers?
Basically, the general principles for academic work and writing do not change: transparency and comprehensibility must be ensured, and sources and resources used must be cited in a meaningful way, just as is currently the case for standard resources. As long as no recognised practice for the use and labelling of AI tools has been established (comparable to your discipline's citation guidelines for texts and passages by other authors), students should discuss this topic with their supervisor at an early stage: How can AI tools be used when writing an academic paper? How and in what form should this be labelled?
How do I cite AI-generated texts and/or code correctly in the context of coursework or examinations?
In accordance with the principles of good academic practice, the use of AI must always be documented. How this is to be done can vary and depends on the applicable examination regulations and the information provided by lecturers for the respective course or examination (e.g. in the form of a handout). Any ambiguities should be clarified at an early stage!
Examinations, assessment & law
I for Integrity Concerns
What do I do if I suspect cheating with AI in an examination?
Status: October 2024.
In cases of suspected cheating, no distinction is made between the use of AI and other methods - the case can therefore be treated like any other suspected attempt to cheat and, for example, be referred to the Examinations Office. Ideally, however, the persons concerned should talk to each other beforehand.
Teachers should contact the person(s) concerned directly if they suspect cheating. There may have been misunderstandings as to whether and how the use of AI should be documented; clarification should definitely be sought. Note, however, that the use of AI detection tools is generally not permitted, and the results of such tools are not reliable!
If no solution is found, the incident should be referred to the Examinations Office like any other suspected cheating.
Students should clarify in advance whether and under what conditions the use of AI tools is permitted for an examination.
Problems can arise if
- AI was used even though it was not explicitly permitted.
- AI was used, and this was permitted, but the use was not made transparent (citing, documenting and/or reflecting, depending on the requirements of the respective teacher).
If a mistake has been made, it is best to communicate this openly as soon as it is noticed. A solution can then be sought together with the teacher before the Examinations Office is involved.
If the suspicion is unfounded, students can explain in a discussion how the work was produced and thus dispel doubts. If this does not help, the relevant student bodies or examination boards can be asked for support.
Can people be obliged to use tools such as ChatGPT?
"If the use of AI tools is to be mandatory, the terms of use of the respective software must be observed. In particular, it depends on how user data is handled. This can vary greatly depending on which software or platform is used. If no data protection-compliant solution can be provided, use should be viewed critically and may only be voluntary." [1]
The University of Oldenburg offers a data protection-compliant solution in the form of the Academic Cloud (AC) AI tools.
Is it possible to recognise AI-generated texts - with the help of special software, for example?
Some providers of plagiarism detection software advertise that their programmes can recognise AI-generated text. Experience reports and a study show that this succeeds only insufficiently. Moreover, although the analysis returns a probability (for example: "This text was created with AI support with a probability of XX%"), the burden of proof remains with the teacher. In a legal dispute in Bavaria concerning an attempted deception in an essay written as part of an admission procedure, the software analysis was not recognised as sufficient justification; instead, the experience of the evaluating professor, who pointed out irregularities by comparing the text with those of other students, was accepted.

Software analyses can also fuel mistrust between teachers and students: even low probability values can suggest that a text could in principle have been generated by an AI, which could negatively influence the teacher's assessment. Accordingly, students express the concern that their self-written texts could be wrongly classified as AI-generated and fear unjustified negative consequences.
But even beyond questions of feasibility and accuracy, using software to detect AI-generated text does not appear sensible in the long term: if text-generating AI applications are used productively and are integrated into common word processing programmes - as is already partly the case - they may become part of everyday text production. Hybrid texts will then arise, and the question of whether a text originates from a human or an AI can no longer be meaningfully assessed (given appropriate transparency and context).
Source: Ruhr University Bochum - Centre for Science Didactics.
Further reading:
Baresel, K., Horn, J. & Schorer, S. (2025). The use of AI detectors to check examination performance - A statement. Published by the "Digitale Lehre Hub Niedersachsen". Licensed under CC BY-SA 4.0. DOI: https://doi.org/10.57961/fjg9-jr89
AI tools
AI tools that can be used at the UOL
Since January 2025, a service directive has stipulated that employees of the University of Oldenburg administration may use the AI tools of the Academic Cloud (AC), such as Chat AI. Data protection and copyright must be observed. You can access the AC using your university email address and log in with the corresponding university ID and password.
Students and lecturers can also use the AC's AI tools. However, use may be restricted for students, particularly in connection with examinations (see guidelines on the use of generative AI in studies and examinations).
Where use is permitted, it is generally voluntary for students. The use of AI tools may be made mandatory, e.g. if AI tools are the subject of teaching and examinations, but in that case only tools that comply with data protection regulations (such as Chat AI) and do not collect personal user data may be prescribed. In addition, students may use other AI tools at their own discretion, responsibility and expense. Note that commercial AI tools (especially the free versions) can collect various data from users and from prompt entries.
Only AI tools approved by the University of Oldenburg may be used by students for teaching purposes. Exceptions or the approval of additional tools must be agreed with the Data Protection and Information Security Unit, among others.