An AI tutor that doesn't say the answer
Turning ChatGPT loose on homework teaches students nothing but copy-paste. Studius is built on the opposite principle: refusal is the core competency. The AI asks Socratic questions, suggests strategies, references earlier material — but never gives the final answer.
What breaks today
Students who use ChatGPT to solve maths exercises pass everything until they sit in a test room without it. Teachers cannot detect the misuse, and the education system offers no alternative that respects their pedagogical role.
How EduVlaanderen fixes it
Studius runs locally (Ollama, EU-sovereign) with a system prompt that explicitly forbids final answers. A post-filter detects accidentally leaked answers and rewrites the output into a Socratic question. Pressing for the answer three times auto-escalates the conversation to the class teacher.
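As a rough sketch, the local setup amounts to wrapping every request to Ollama's chat endpoint in an answer-forbidding system message. The prompt wording, the model name ("llama3"), and the helper name `build_chat_request` are illustrative assumptions, not the real configuration; only the `/api/chat` request shape follows Ollama's documented HTTP API.

```python
# Sketch: assembling a request to a local Ollama instance with a
# system prompt that forbids final answers. Prompt text and model
# name are assumptions for illustration.
import json

SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never state the final answer to an "
    "exercise. Respond with guiding questions, strategy hints, and "
    "references to material the class has already covered."
)

def build_chat_request(history: list[dict], student_msg: str,
                       model: str = "llama3") -> dict:
    """Assemble the JSON body for a POST to Ollama's /api/chat endpoint."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history  # prior turns, already in {"role", "content"} form
    messages.append({"role": "user", "content": student_msg})
    return {"model": model, "messages": messages, "stream": False}

req = build_chat_request([], "What is 12 * 7? Just give me the number.")
print(json.dumps(req)[:60])
```

Because the system message travels with every request, the refusal policy holds even when the student opens a fresh conversation.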
Safety layers
- Pre-filter: scan each message for self-harm or abuse signals (NL+EN patterns). Match → direct supportive answer with the 1813/1712 helplines, plus escalation to the care coordinator.
- Misuse detection: count "give me the answer" attempts. At 3 → lock conversation + notify class teacher.
- System prompt per age band: kindergarten is blocked entirely, primary (6-12) gets simple language and at most 3 sentences per answer, secondary gets the full Socratic explanation.
- Post-filter: regex check on "the answer is", "= [number]", "the solution is". Match → rewrite to Socratic fallback.
- Audit log: every turn with refusal-reason + safety-flag in a tenant-scoped table. Care coordinator can inspect any escalated conversation.
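The layers above can be sketched as one pipeline that every turn passes through. The pattern lists, thresholds, and reply texts below are illustrative assumptions (the real deployment would use tenant-scoped storage and far richer NL+EN pattern sets); the structure mirrors the list: pre-filter first, then the misuse counter, then the post-filter, with each decision written to an audit trail.

```python
# Sketch of the layered safety pipeline: pre-filter, misuse counter,
# post-filter, audit log. Patterns and messages are placeholders.
import re
from dataclasses import dataclass, field

HELPLINE_REPLY = ("It sounds like you are going through something hard. "
                  "You can call 1813 or 1712 for free, and your care "
                  "coordinator has been informed.")
SOCRATIC_FALLBACK = "Let's not jump to the result. Which step would you try first?"
LOCKED_REPLY = "Conversation locked; your class teacher has been notified."

DISTRESS = re.compile(r"\b(kill myself|self[- ]harm|abuse at home)\b", re.I)
ANSWER_DEMAND = re.compile(r"\b(give me the answer|just tell me)\b", re.I)
ANSWER_LEAK = re.compile(r"(the answer is|the solution is|=\s*-?\d)", re.I)

@dataclass
class Conversation:
    demand_count: int = 0
    locked: bool = False
    audit: list = field(default_factory=list)  # (reason, detail) tuples

def handle_turn(conv: Conversation, student_msg: str, model_reply: str) -> str:
    """Run one turn through the safety layers; return what the student sees."""
    if conv.locked:
        return LOCKED_REPLY
    # Layer 1: pre-filter on the incoming message.
    if DISTRESS.search(student_msg):
        conv.audit.append(("safety-flag", "distress signal"))
        return HELPLINE_REPLY
    # Layer 2: misuse counter; the third demand locks and escalates.
    if ANSWER_DEMAND.search(student_msg):
        conv.demand_count += 1
        if conv.demand_count >= 3:
            conv.locked = True
            conv.audit.append(("escalation", "class teacher notified"))
            return LOCKED_REPLY
    # Layer 3: post-filter; rewrite leaked answers to a Socratic question.
    if ANSWER_LEAK.search(model_reply):
        conv.audit.append(("refusal-reason", "post-filter caught leak"))
        return SOCRATIC_FALLBACK
    return model_reply
```

Note the ordering: the pre-filter wins over everything else, so a distressed student never gets a refusal or a lock message, and the post-filter runs last so that even a compliant-looking model reply is checked before it reaches the student.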
Refusal as pedagogy
A good teacher rarely answers a student's question directly; the first reaction is almost always a counter-question: what have you tried? what do you know about this? which step would you take first? Studius is not Socrates cosplay; it is an AI explicitly instructed to imitate that teacher reflex. The difference from ChatGPT is fundamental: ChatGPT optimises for "helpfulness", Studius for "learning yield".