Anna Collard | SVP Content Strategy & Evangelist | KnowBe4 Africa
Artificial intelligence (AI) risks for children encompass privacy concerns, over-trust, misinformation and psychological effects, highlighting the need for careful oversight. It is challenging to maximise AI’s benefits for children’s education and growth while ensuring their privacy, healthy development and well-being.
In just two years, AI has undergone a revolution. Generative AI tools like ChatGPT, Google’s Gemini and Microsoft’s Copilot are now widespread. Meta has integrated AI chatbots into platforms like WhatsApp, Facebook and Instagram, making the technology more accessible than ever.
Children growing up in the AI-powered world
For children growing up in this AI-powered world, the implications are both exciting and concerning. These AI tools offer unprecedented opportunities for learning, creativity, and problem-solving. Children can create art, compose music, write stories and learn languages through engaging, interactive methods.
The personalised nature of AI chatbots, providing quick answers and tailored responses, makes them especially appealing to young minds. However, like any transformative technology, AI brings risks that parents, educators and policymakers must carefully consider. These include privacy concerns, over-trust, misinformation and psychological effects, among others.
As we step into this AI-driven era, we must weigh the incredible potential against the genuine risks. Our challenge is to harness AI’s power to enrich children’s lives while safeguarding their development, privacy and overall well-being.
AI use by children and privacy concerns
Parents need to know that chatbots, although seemingly harmless, collect data and may use it without proper consent, which could lead to privacy violations. According to a Canadian Standards Association report, privacy risks range from minor to highly significant.
Examples include using a child’s data for targeted advertising or creating detailed profiles based on tracked conversations, preferences and behaviours. Such profiles, if used maliciously, could enable manipulative tactics such as spreading misinformation, fuelling polarisation or grooming.
Large language models (LLMs) were not designed with children in mind. These systems train on massive amounts of adult-oriented data. This training process may fail to account for the special protections minors’ information requires.
AI for children and over-trust
Another concern for parents is that children may form emotional connections with chatbots and trust them too much. These chatbots are not human and are not their friends.
The over-trust effect is linked to the media equation theory, which holds that people anthropomorphise machines, assigning human attributes to them. This phenomenon can make people overestimate AI systems’ capabilities and place too much trust in them, leading to complacency.
Over-trust in generative AI could lead children to make poor decisions by failing to verify information provided by the AI. This compromises accuracy and can result in various negative outcomes. When children rely too much on AI, they may become complacent and reduce face-to-face interactions with real people.
Inaccurate and inappropriate information
AI chatbots, despite their sophistication, are not infallible. When unsure, these AI tools may ‘hallucinate’ by making up answers instead of admitting they don’t know. This could result in incorrect homework answers or, more seriously, false health diagnoses for minors experiencing illness.
AI systems are trained on biased data, which can reinforce these biases and provide misinformation. This misinformation can negatively affect children’s understanding of the world.
The potential exposure to harmful sexual material is particularly alarming for parents. This includes AI tools creating deepfake images or manipulating and exploiting children’s vulnerabilities. Such scenarios may subliminally influence children to behave in harmful ways.
Psychological impact and reduced critical thinking in children
As with most new technology, overuse of AI tools can lead to poor outcomes. Excessive AI use by kids and teens reduces social interactions and critical thinking.
We already see negative side effects from overusing social media, such as increased anxiety, depression, aggression and sleep deprivation, along with a loss of meaningful interactions with others.
The need for careful oversight
Navigating this brave new world is challenging for children, parents and teachers alike. I believe policymakers are catching up. In Europe, the AI Act aims to protect human rights by ensuring AI systems are safer.
Until proper safeguards exist, parents must monitor children’s AI usage and introduce family rules to mitigate negative effects. Prioritising non-screen play and reading helps boost children’s self-esteem and critical-thinking skills.