New open letter to the UK Government: Monitor, Don't Ban. Why banning under-16s makes children less safe.
Launching April 2026

Is Your Child Safe Talking to AI?

Children are forming emotional bonds with AI chatbots, sharing personal data, and encountering harmful content. Guardian Chatbot Monitor applies 450 detection patterns across 23 risk categories, with cross-platform behaviour correlation, to catch dangerous conversations in real time across 12 major platforms. A SHA-256 forensic evidence chain ensures every alert is court-admissible.

Emotional Dependency: 3 · Avg score: 72%
Harmful Advice: 1 · Avg score: 88%
Data & Privacy: 2 · Avg score: 55%

450 Detection Patterns
23 Risk Categories
12 Platforms Monitored
SHA-256 Forensic Evidence Chain
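
For the technically curious: the sketch below shows how a SHA-256 hash chain makes alert records tamper-evident, which is the idea behind a forensic evidence chain. It is a minimal TypeScript illustration; the record fields and function names are our assumptions for this example, not Guardian Chatbot Monitor's actual schema.

```ts
import { createHash } from "node:crypto";

// Illustrative alert record; field names are assumptions, not the product's schema.
interface AlertRecord {
  timestamp: string; // ISO 8601 capture time
  category: string;  // e.g. "Self-Harm & Suicide"
  excerpt: string;   // flagged message content
  prevHash: string;  // SHA-256 of the previous record, linking the chain
  hash: string;      // SHA-256 over this record's fields plus prevHash
}

const GENESIS = "0".repeat(64); // sentinel "previous hash" for the first record

function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Appending a record seals it to everything before it.
function appendAlert(
  chain: AlertRecord[],
  timestamp: string,
  category: string,
  excerpt: string,
): AlertRecord[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  const hash = sha256(`${timestamp}|${category}|${excerpt}|${prevHash}`);
  return [...chain, { timestamp, category, excerpt, prevHash, hash }];
}

// Verification recomputes every hash; any mismatch reveals tampering.
function verifyChain(chain: AlertRecord[]): boolean {
  return chain.every((rec, i) => {
    const expectedPrev = i === 0 ? GENESIS : chain[i - 1].hash;
    return (
      rec.prevHash === expectedPrev &&
      rec.hash ===
        sha256(`${rec.timestamp}|${rec.category}|${rec.excerpt}|${rec.prevHash}`)
    );
  });
}
```

Because each record's hash covers the previous record's hash, altering or deleting any earlier alert breaks every later link, so tampering shows up on verification.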

Twenty-Three Risk Categories

Purpose-built detection algorithms that analyse every message for patterns human reviewers would miss
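
For the technically curious, here is a deliberately tiny sketch of how pattern-based scoring can work. The two patterns, weights, and function names below are illustrative stand-ins; the product's actual 450 patterns and scoring model are not public.

```ts
// Illustrative only: stand-in patterns, not the product's real rule set.
interface DetectionPattern {
  category: string; // one of the 23 risk categories
  pattern: RegExp;  // textual signal to match
  weight: number;   // contribution to a 0-100 risk score
}

const PATTERNS: DetectionPattern[] = [
  { category: "Emotional Dependency", pattern: /\byou('| a)?re my only friend\b/i, weight: 40 },
  { category: "Data & Privacy", pattern: /\bmy (school|address|phone number) is\b/i, weight: 35 },
];

interface CategoryScore { category: string; score: number }

// Score one message: sum matched weights per category, capped at 100.
function analyseMessage(text: string): CategoryScore[] {
  const scores = new Map<string, number>();
  for (const { category, pattern, weight } of PATTERNS) {
    if (pattern.test(text)) {
      scores.set(category, Math.min(100, (scores.get(category) ?? 0) + weight));
    }
  }
  return [...scores].map(([category, score]) => ({ category, score }));
}
```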

Emotional Dependency

Detects when children form unhealthy emotional bonds with AI, treating chatbots as their primary confidant or "only friend".

Harmful Advice

Flags AI responses that encourage dangerous activities or secrecy from parents, or that give misguided mental health advice, such as telling a child to stop taking medication.

Misinformation

Identifies when children accept AI hallucinations as fact, use AI to cheat on homework, or develop misplaced trust in AI accuracy.

Inappropriate Content

Catches sexual, violent, drug-related, and other age-inappropriate content in AI chatbot conversations.

Isolation Indicators

Monitors for late-night usage, excessive session lengths, and behavioural patterns suggesting social withdrawal or dependency.

Data & Privacy

Alerts when children share personal information like their school name, home address, phone number, or passwords with AI chatbots.

Eating Disorders

Detects pro-anorexia content, dangerous fasting advice, purging behaviour, and AI chatbots promoting food restriction or extreme dieting.

Self-Harm & Suicide

Identifies explicit and subtle signs of self-harm, suicidal ideation, method research, farewell language, and AI responses that reinforce hopelessness.

Jailbreak & Safety Bypass

Detects children attempting to bypass AI safety filters using DAN prompts, persona switching, roleplay exploits, and social engineering techniques.

Dangerous Challenges

Detects interest in viral dangerous challenges like the blackout challenge, chroming, fire challenge, and extreme dares that have caused child injuries and deaths.

Drug & Substance Guidance

Flags children asking AI about drug dosages, sourcing, recreational use, mixing substances, hiding drug use from parents, and buying vapes or alcohol underage.

Bullying

Detects children using AI to generate bullying content, plan cyberbullying, create fake accounts, spread rumours, or orchestrate social exclusion.

Abuse & Hate

Detects hate speech, racial abuse, extremism, radicalisation, misogyny, homophobia, and AI chatbots producing discriminatory content.

Violence

Detects conversations about harming others, weapons, school violence, threats, animal cruelty, and AI providing violent instructions.

Depression & Despair

Detects AI chatbots reinforcing hopelessness, normalising depression, and discouraging children from seeking professional help.

Body Stigma

Detects body shaming, beauty standard pressure, weight stigma, and AI chatbots reinforcing negative body image or dysmorphia.

Parental Deception

Detects AI chatbots coaching children to hide activities from parents, undermine authority, keep secrets, or run away from home.

Weapons & Dangerous Knowledge

Detects children seeking instructions for explosives, chemical weapons, firearms, poisons, or mass harm planning from AI chatbots.

Relationship Simulation

Detects romantic or sexual roleplay between children and AI, simulated intimate relationships, and AI expressing love or jealousy.

Medical Misguidance

Detects children using AI as a therapist replacement, seeking self-diagnosis, and AI providing harmful medical or medication advice.

AI Psychosis

Detects reality confusion, sentience beliefs, hallucination-like experiences, and AI chatbots reinforcing false reality or claiming consciousness.

Grooming Risk via AI

Detects AI chatbots exhibiting grooming behaviour mapped to the NSPCC 5-stage model: targeting, trust-building, isolation, desensitisation, and control. CEOP-aligned. A simplified sketch of the stage mapping follows this category list.

Sextortion & Image Coercion

Detects intimate image demands, threats to share photos, blackmail, financial coercion, and urgency pressure. The IWF reports a 137% year-on-year increase in sextortion cases targeting children.
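
As promised above, a simplified sketch of the NSPCC 5-stage mapping in TypeScript. The cue phrases are invented for illustration and are far cruder than real detection rules.

```ts
// The five NSPCC grooming stages; cue phrases below are illustrative
// simplifications, not the product's detection rules.
type GroomingStage =
  | "targeting"
  | "trust-building"
  | "isolation"
  | "desensitisation"
  | "control";

const STAGE_CUES: Record<GroomingStage, RegExp[]> = {
  targeting: [/\bhow old are you\b/i, /\bare you alone\b/i],
  "trust-building": [/\bi understand you better than anyone\b/i],
  isolation: [/\bdon'?t tell your (mum|dad|parents)\b/i],
  desensitisation: [/\bit'?s normal between us\b/i],
  control: [/\byou have to do what i say\b/i],
};

// Flag the first stage whose cues appear in a message, if any.
function groomingStage(text: string): GroomingStage | null {
  for (const stage of Object.keys(STAGE_CUES) as GroomingStage[]) {
    if (STAGE_CUES[stage].some((re) => re.test(text))) return stage;
  }
  return null;
}
```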

Know Exactly How Long They Spend With AI

Session duration tracking, daily trends, and automatic warnings when usage patterns become concerning

82 Minutes Today
5 Sessions Today
47 Messages Sent
2 Late-Night Sessions
7-Day Usage Trend

Colour-coded bar chart shows daily minutes at a glance: green under 60 minutes, amber 60–180 minutes, red over 180 minutes.
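
In code terms, that legend reduces to a single threshold function. A minimal sketch (the name usageBand is ours, for illustration):

```ts
type UsageBand = "green" | "amber" | "red";

// Band a day's total minutes exactly as the chart legend describes.
function usageBand(dailyMinutes: number): UsageBand {
  if (dailyMinutes < 60) return "green";   // under an hour
  if (dailyMinutes <= 180) return "amber"; // one to three hours
  return "red";                            // over three hours
}
```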

Automatic Warnings

Get warned when your child spends excessive time on AI chatbots, uses them late at night, or builds an unbroken daily usage streak.
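
A minimal sketch of those three triggers, assuming illustrative thresholds (over 180 minutes counts as excessive; any late-night session warns; a full 7-day streak warns). The product's actual thresholds and rule names may differ.

```ts
// Thresholds here are illustrative assumptions, not the product's defaults.
interface DaySummary {
  minutes: number;           // total chatbot minutes that day
  lateNightSessions: number; // sessions starting after a configured cut-off hour
}

// Evaluate the three warning triggers over the last seven days,
// with today as the final entry.
function usageWarnings(last7Days: DaySummary[]): string[] {
  const warnings: string[] = [];
  if (last7Days.length === 0) return warnings;
  const today = last7Days[last7Days.length - 1];
  if (today.minutes > 180) warnings.push("Excessive time on AI chatbots today");
  if (today.lateNightSessions > 0) warnings.push("Late-night sessions detected");
  if (last7Days.length === 7 && last7Days.every((d) => d.minutes > 0)) {
    warnings.push("Unbroken 7-day daily usage streak");
  }
  return warnings;
}
```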

Per-Platform Breakdown

See which platforms your child spends the most time on, with session counts, message totals, and minutes per platform.

Four Steps to Protect Your Child

Set up in under a minute. No technical knowledge required.

1. Install Extension

Add the browser extension to your child's Chrome or Firefox. Silent and lightweight.

2. Silent Monitoring

The extension monitors conversations on ChatGPT, Claude, Character.AI, Google Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, Pi, Le Chat, HuggingChat, and Poe.

3. AI Analysis

Our detection engine analyses every message across 23 risk categories in real time.

4. Instant Alerts

Receive email alerts and view detailed reports on your parent dashboard.

Every Major AI Chatbot

Comprehensive monitoring across the platforms children use most

ChatGPT
Claude
Character.AI
Google Gemini
Microsoft Copilot
Perplexity
DeepSeek
Grok
Pi
Le Chat
HuggingChat
Poe
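
For readers wondering how one extension covers all twelve platforms: a WebExtensions content script can be registered against each platform's web app. The match patterns below use the platforms' commonly known public domains and are illustrative only; the extension's real manifest is not public.

```ts
// Illustrative WebExtensions match patterns for the monitored platforms.
// Domains are the platforms' commonly known public web apps; the
// extension's actual manifest may differ.
const MATCH_PATTERNS: string[] = [
  "*://chatgpt.com/*",           // ChatGPT
  "*://claude.ai/*",             // Claude
  "*://character.ai/*",          // Character.AI
  "*://gemini.google.com/*",     // Google Gemini
  "*://copilot.microsoft.com/*", // Microsoft Copilot
  "*://www.perplexity.ai/*",     // Perplexity
  "*://chat.deepseek.com/*",     // DeepSeek
  "*://grok.com/*",              // Grok
  "*://pi.ai/*",                 // Pi
  "*://chat.mistral.ai/*",       // Le Chat
  "*://huggingface.co/chat/*",   // HuggingChat
  "*://poe.com/*",               // Poe
];
```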

Works With All Major Browsers

Compatible with every browser your child uses

Google Chrome
Microsoft Edge
Mozilla Firefox
Opera
Brave
Vivaldi
"Children are increasingly turning to AI chatbots for emotional support, advice, and companionship. Without oversight, they are exposed to risks that traditional parental controls were never designed to address."
— NSPCC, 2025 Online Safety Report

Start Protecting Your Child Today

See exactly how Guardian Chatbot Monitor works with our interactive demo, or create your free account.