

New research suggests that children are forming emotional connections with AI chatbots and acting on the advice they receive.
According to the Vodafone research, 81% of 11-16-year-olds use AI chatbots regularly, while 31% of those users say the chatbot feels like a friend.
Most worryingly, 33% say they’ve shared something with a chatbot that they wouldn’t tell their parents, teachers or even friends.
According to Toni Koraza, founder of MADX Digital, this shows a new kind of online risk that parents need to be aware of.
“For years, online safety has focused on what children might see – harmful posts, strangers in chat rooms, inappropriate content,” Koraza says.
“With AI, the concern is different. It’s about how children interact with the technology, and how personal that interaction can feel.
“Chatbots are built to feel helpful, friendly and constantly available.
“For a child, that can make it very easy to forget they’re talking to a system, not a person.”
Vodafone’s research found that 39% of children think chatbots can understand emotions in the same way that people do, while 17% say speaking to technology feels safer than speaking to a person.
“Children are still developing their understanding of trust, authority and emotional connection,” says Koraza.
“When a system is designed to mimic empathy, it can create a false sense of security.
“Parents need to explain that chatbots don’t ‘care’ – they generate responses based on patterns in data.”
The expert went on to warn that this kind of design could discourage real-world conversations and affect how children form opinions and process information.
Koraza highlighted several key concerns families should be looking out for:
Emotional dependency: Children may confide in chatbots about friendships, mental health or personal worries. Vodafone’s study found 37% admit they confide in AI tools, with 16% seeking mental health-related advice.
Unchecked advice: With 55% of young users saying they struggle to tell whether information is accurate or biased, acting on chatbot guidance can be risky. “AI doesn’t verify truth in the way adults assume it does,” Koraza notes.
Academic over-reliance: Nearly half of teachers surveyed in the Vodafone research say students are increasingly turning to AI for schoolwork, and 29% have observed declines in independent thinking or problem-solving skills. “When AI completes the thinking process for them, children can develop a false sense of competence,” says Koraza. “They submit polished answers, but the learning hasn’t actually happened.”
Late-night usage and screen creep: Unlike social media feeds, AI chats can feel like ‘productive’ screen time. “Parents may not realise their child is awake at midnight asking a chatbot for homework help or personal advice,” Koraza adds. “That quiet interaction can disrupt sleep and emotional regulation.”
Experts stress that banning AI entirely isn’t realistic – or necessary. Instead, families should focus on structure, awareness and conversation.