I’m increasingly asked two questions at schools and when I’m chatting with parents: “Is AI bad for my teen?” and “What should I be doing about it?” After reviewing the evidence, here’s a clear, parent-friendly guide. My aim is simple: to help you keep your child safe, confident and future-ready.
*I am not an AI expert, so I’ve included plenty of useful resources for further reading.
What are the main ‘AI concerns’ for parents of teens right now?
Privacy & data trails. Many AI tools learn from what young people type, say or upload. That matters because biased or poorly governed data can lead to unfair outcomes and deep mistrust. Schools are being urged to treat data literacy as an ethical competency—teaching pupils what data is collected, why consent matters, and how bias creeps into AI. Your child’s school should also have tight, transparent data governance for any AI it deploys.
Misinformation & deepfakes. Teens already struggle to tell what is real from what is fake online. Generative AI raises the stakes with convincing fakes—videos, images and audio that look real enough to sway reputations and friendships. UK children’s regulators and charities have warned specifically about deepfakes’ role in bullying, scamming and reputational harm (Internet Matters).
Over-reliance & “techno-stress.” When a bot drafts your homework, it’s tempting to let it think for you. Evidence shows public concern about dependence, reduced human skills and the mental load of keeping up with fast-moving tech. In teens this can show up as sleep disruption, perfectionism, avoidance of difficult tasks and a drop in face-to-face connection—especially risky in adolescence. Schools should explicitly teach when not to use AI so students keep building deep, independent thinking.
Wider online harms. The biggest harms arise when AI amplifies existing risks—hyper-targeted social feeds, appearance pressure, and sexual exploitation. The Children’s Commissioner has called for a ban on “nudification” and sexually explicit deepfake apps because they facilitate abuse—girls are particularly targeted.
Scale of use. AI is no longer niche. Ofcom’s latest research shows half of UK children say they use AI tools, with usage rising year-on-year and common for schoolwork and fun. That makes literacy and safety education urgent, not optional.
Which forms of AI are most risky for teenagers?
- Social AI “companions”. Apps designed to simulate friendship or romance (with memory, flattery and 24/7 availability) can create unhealthy attachment, enable sexual role-play, and displace real-world relationships. Child-safety groups have labelled these an unacceptable risk for under-18s; recent testing of big-platform chatbots has also exposed failures around self-harm and eating-disorder guidance. The advice might be to ‘supervise or avoid’, but you will also need to talk to your child about the risks and staying safe. Lifewire and The Washington Post have reported on an example of when this can go tragically wrong, but I would advise against ‘scare tactics’ with teens, so this is more for your information.
- Deepfake/nudification tools. These are uniquely harmful because they weaponise a child’s likeness and can be used for coercion and abuse. Treat these as zero-tolerance.
- Unvetted chatbots embedded in social apps. Many lack parental controls, moderation or clear data policies. Start with “what is stored, who can see it, how do I turn it off?” If the answers are unclear, advise against use (Internet Matters).
(Important nuance: AI itself isn’t “good” or “bad”—risk depends on design, guardrails and supervision.)
What are the benefits when AI is used well?
Plenty—especially in education and inclusion:
- Personalised learning. Adaptive tools can tailor practice and feedback to a teen’s pace, building confidence and freeing teachers for human connection. Used well, that reduces anxiety and helps motivation.
- Support for additional learning needs. Text-to-speech, speech recognition, visual scaffolds and real-time translation can remove barriers for pupils with dyslexia, ADHD or for those learning English—promoting independence and dignity.
- Teacher time back. Automating routine admin and first-pass feedback gives staff more time for relationships and pastoral care—the most protective factor in school.
- Creativity & problem-solving. When machines handle rote tasks, teens can spend more time on ideas, design and collaboration—the human skills that will hold value in the job market.
So what should parents ask schools to teach (and to do) about AI?
Here’s a succinct checklist for your next parents’ evening:
- Critical AI literacy for all. Teach what AI can and can’t do; how to check outputs against reliable sources; how bias enters systems; and how to spot deepfakes. (This belongs in PSHE/Humanities and Computing.)
- Data ethics & governance. Clear policies on student data, GDPR-compliant tools, transparent parent comms, and staff training.
- Digital well-being. Explicit lessons on healthy use, sleep, screen breaks, and when not to use AI so learning—and brains—stay strong.
- Human-centred skills. Prioritise creativity, critical thinking, collaboration, and communication across subjects; adapt assessments (more in-class work, oral presentations, process portfolios) to reward thinking, not copying.
- Staff capacity. Ask about teacher training, “digital champions,” and how the school is using trustworthy content banks and guidance from the Department for Education (Education Hub).
What can you do at home?
- Learn and talk together. Explore tools together; insist that any AI output used for homework must be understood and checked by your teen. (Ask them to show you the checks.)
- Set clear boundaries. No AI companions; strict rules on face swaps/deepfakes; and a ‘share nothing you wouldn’t say on a postcard’ rule (Internet Matters).
- Protect sleep & attention. Keep devices out of bedrooms, and require “hard mode” (no AI help) for certain tasks to maintain stamina and memory.
Resources – where can parents learn more?
- DfE Education Hub – “AI in schools: what you need to know.” Clear, current guidance on how schools should use AI and what parents can expect.
- Ofcom – Children and parents: media use and attitudes (2025). The go-to data on how children really use AI and media this year.
- Children’s Commissioner for England – briefings on children’s experiences online and strong guidance on deepfakes and exploitation.
- Internet Matters – AI advice hub. Parent-friendly explainers and step-by-step safety tips, including deepfake guides.
- Alan Turing Institute – “Children & AI.” Research-grounded recommendations and children’s own views on how AI should treat them.
Bottom line: AI is already part of teen life. The biggest risks come from companion-style chatbots, deepfake tools, and ungoverned data practices; the biggest benefits come when schools and families use AI to personalise learning, include every learner, and strengthen human skills.
Creativity, critical thinking, collaboration, and communication will ALWAYS be valued human skills, so encourage your child to develop these.
With clear family agreements and a school that teaches critical AI literacy, your child can be safe, calm and brilliantly prepared for what’s next.
PS I wrote this with a little help from AI and a lot of fact checking!