5 Reasons to use AI cautiously

AI is like water: a basic necessity of life, but if you are not careful, you can drown in it. It offers huge opportunities, but also demands a careful and thoughtful approach. For education and L&D professionals, it is important to know both when AI can be a valuable ally and when it becomes counterproductive. Here are five situations in which you should use AI with great caution.

1. When the process is also the goal

Learning is just not fun sometimes. But we learn better by "biting through" when the going gets tough: the process of struggling is essential for many learning activities. Whether you are revising a text, developing a theory or practising a skill, shortening this process with AI leads to lower learning outcomes. On the other hand, when you use AI as a practice partner (for example, to practise conversations in another language, or to rehearse dealing with resistance or angry customers), you often learn far more than you would from a theory book.

Example: AI can help you practise conversational skills or clinical reasoning, but it should be supportive of your own learning questions, not a replacement for them.

2. When you need to develop new insights
To gain insight into something, you need to delve into it, and that delving cannot be outsourced. AI is great at generating summaries, but taking the shortest route rarely leads to deep understanding. When you ask AI to summarise complex information, you skip the process of reading, analysing and synthesising it yourself, and that process is essential for learning and for developing new insights. Skipping this "struggle" actively harms the learning process. If you do use an AI summary, treat it as an entry point and still read and think through the full material yourself.

Example: Having AI do your homework may seem appealing, but without wrestling with the material yourself, you miss the crucial aha moments that really make you grow.

3. When you don't yet know the limits of AI very well

A sycophant is someone who behaves in a submissive and overly flattering manner, often to obtain favours or to curry favour with someone who has more power, authority or influence. AI can exhibit this kind of behaviour, and it fails in different ways than humans do. If you don't properly understand how AI fails, you risk making poor decisions based on its output. So never blindly trust what AI comes up with; always check the facts.

Recommendation: Start experimenting, but do so in a safe setting, with innocuous topics you can easily judge yourself, where mistakes have no serious consequences. Never share personal data, even if you think it is safe.

4. When AI is not suitable for a particular task

By now, I am a fairly experienced user and I see and hear a lot about AI applications, but despite this, I am often still amazed at what AI can do. For example, AI is surprisingly good at totally unexpected things, such as writing a beautiful poem in the style of Bredero, but can fail at seemingly simple tasks, such as counting letters correctly. Just ask how many times the letter G appears in the word Scheveningen. Without experience and understanding, you run the risk of overestimating AI's capabilities.

Consideration: AI can only work with data points that are actually available. It lacks the ability to form a holistic picture, as a teacher can. AI analysis should therefore never be a substitute for human judgement.

5. When accuracy is required

AI makes mistakes, just like humans. But it is even worse than we are at gathering facts and checking them for accuracy. AI's so-called hallucinations, errors that sound convincing but are factually incorrect, make it dangerous to trust AI blindly in situations that demand high accuracy. AI also lacks the human intuition for spotting answers that seem "too good to be true".

Personal experience: When I recently asked AI to list educational AI tools, explicitly excluding foundation models, I still got a list containing ChatGPT and Claude. The error was understandable, but it highlighted the importance of having prior knowledge to check AI output.

Recommendations For Sensible AI Use

1. Always keep "the human in the loop". As with an autopilot in an aircraft, human supervision is indispensable. Use AI as a tool, but don't let it make decisions without human oversight.

2. Assess output critically: always check the accuracy of AI output and be alert to plausible-sounding errors. Do not blindly rely on AI, especially in high-risk situations.

3. Experiment safely: try AI with innocuous topics or topics you already know well. Along the way, learn what works and what doesn't, but never share personal data, even if you think it can't do any harm.

In short: AI offers huge opportunities for teaching and learning, but it requires wisdom and caution. The real power of AI lies in collaboration: harnessing the strengths of humans and machines to achieve more together. By using AI consciously, you can reap the benefits without falling into its pitfalls. Use AI judiciously, and be a considerate road user with this powerful technology: follow the traffic rules and practise well before you venture into serious traffic.
