AI tools have become part of everyday student life. According to a Bitkom survey, 68% of people aged 16 and over in Germany regularly use AI. So the real question is no longer whether you use AI, but how you use it: in a way that strengthens your learning, protects your data, and stays within the rules.
At the University of Stuttgart, AI use is explicitly allowed, but under clear principles: it should support your personal engagement with the content and your scientific skills, not replace them. This article combines practical study strategies with the university’s key principles, so you can work with AI confidently and responsibly.
Why “letting AI do it” backfires
AI can feel like certainty on demand: instant explanations, summaries, structures, even entire drafts. The danger is subtle: if AI becomes your first and only step, you skip the mental work that actually builds competence. Over time, you risk weakening exactly the skills university is meant to develop: reasoning, argumentation, judgement, and creative problem-solving.
Also important: AI tools can be wrong. They can hallucinate, reproduce bias, and generate text that resembles existing works closely enough to create accidental plagiarism risks. That’s why the university’s principles emphasize your responsibility for both input and output, including careful checking before you use anything.
So yes, use AI, but use it like a smart calculator: powerful, helpful, and still something you steer.
The 7 principles, translated into everyday student decisions
Here’s what the University of Stuttgart principles mean in practice.
1) AI use is allowed, as support
Use AI to deepen understanding, practice, and structure your work. Don’t use it as a substitute for learning or for producing a “ready-made” assessment.
- Good use: explanations, examples, feedback on your reasoning, alternative perspectives
- Risky use: generating your full solution or full text and submitting it as-is
2) Exams and graded work: your examiner decides
For assessments, the level of allowed AI use is decided by the examiner. If it’s not clear, ask. Practical tip: Ask early, not the night before submission.
3) You are responsible for what you enter
Don’t paste confidential, internal, or personal data into external AI tools. Don’t upload copyrighted materials into systems that store or train on inputs, unless the university explicitly permits it. Rule of thumb: If you wouldn’t publish it on a public website, don’t feed it to an external chatbot.
4) You are responsible for what comes out
Check AI output carefully: accuracy, bias, and similarity to existing texts.
AI can help you draft, but you must validate. Minimum check: verify claims with reliable sources, check quotations, confirm definitions, and rewrite in your own scientific voice.
5) Transparency and fairness
In agreement with the examiner, document and label which AI system you used, and where and how, so your work remains traceable and fair.
Simple documentation idea: keep a short “AI use log” (tool, purpose, what changed after your review).
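If you prefer something more structured than a notes file, a tiny script can maintain the log for you. Here is a minimal Python sketch; the filename, column names, and example entry are illustrative choices, not a university requirement:

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical log file name; choose whatever fits your project.
LOG_FILE = Path("ai_use_log.csv")

def log_ai_use(tool: str, purpose: str, changes_after_review: str) -> None:
    """Append one entry (tool, purpose, what changed after your review) to a CSV log."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            # Write the header row only when the file is created.
            writer.writerow(["date", "tool", "purpose", "changes_after_review"])
        writer.writerow([date.today().isoformat(), tool, purpose, changes_after_review])

# Example entry (illustrative):
log_ai_use(
    tool="RAI",
    purpose="Feedback on essay outline",
    changes_after_review="Reordered sections 2 and 3; added a counterargument",
)
```

A plain CSV keeps the log easy to open in any spreadsheet tool, and the three content columns mirror the "tool, purpose, what changed after your review" idea above.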
6) Prefer university-operated tools and use them fairly
The university provides a data-protection-compliant tool called Responsible AI (RAI) and recommends using its GDPR-compliant systems where possible. RAI is accessible via the university network or VPN; according to the university’s information, it is operated in the EU and inputs are not used for training.
7) Build AI skills intentionally
Use training offers and learning resources to develop AI competence as part of your academic toolkit, for example the learning portal “KI für Studierende” (AI for students) on ILIAS für Lehre und Lernen – Universität Stuttgart.
A study workflow that keeps you in the driver’s seat
If you want a concrete way to use AI without drifting into “AI did it”, try this four-step routine:
Step 1: Attempt first (even briefly)
Before you ask AI, write down:
- what you already know
- what you don’t understand
- your first outline / hypothesis / solution approach
This protects your learning and makes the AI interaction higher quality.
Step 2: Ask for coaching, not answers
Examples of “learning-first” prompts:
- “Explain this concept at two levels: beginner and exam-level, then ask me 3 check questions.”
- “Here is my outline. Point out gaps, weak logic, and missing counterarguments.”
- “I solved it like this. Check each step and tell me where the reasoning breaks.”
Step 3: Verify with sources you can cite
Use library resources, lecture notes, textbooks, and peer-reviewed material. AI can suggest keywords and structure, but your references must come from real sources you have checked.
Step 4: Write and revise in your own academic voice
Treat AI output as raw material. You remain accountable for clarity, correctness, and integrity.
Choosing tools: what to use when
- For sensitive or university-related content: prefer Responsible AI (RAI)
- For external tools (if you use them): never enter sensitive, confidential, or personal data, and adjust privacy settings where possible. The university explicitly warns that you use external tools at your own responsibility.
A short checklist before you submit any work
- Did AI support my thinking, not replace it?
- Did I clarify allowed AI use with the examiner (if graded)?
- Did I avoid sensitive, personal, confidential, or risky copyrighted inputs?
- Did I fact-check and rewrite, and check for unintended plagiarism?
- Can I transparently document where and how I used AI?
AI won’t make you obsolete. But using it uncritically might. If you treat AI as a learning partner, stay transparent, protect data, and verify output, it can genuinely improve your studying: faster iteration, better practice, clearer structure, and more feedback loops.
Jose/ USUS Team