“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.”
AI chatbots offer tremendous promise. The technology can support learning and creativity, provide entertaining conversation, and act as a virtual sounding board. But with those benefits come serious responsibilities. Experts, regulators, and parents increasingly recognize that, without meaningful guardrails, AI chatbots can expose vulnerable users, especially children and teens, to serious harm. The tech industry cannot afford to wait while state and federal lawmakers shape the rules.
Regulators, parents, and the general public expect AI companion chatbot platforms to prioritize the safety and well-being of young audiences, and BBB National Programs’ Center for Industry Self-Regulation (CISR) is well positioned to help industry explore standards that will last and evolve at the pace of technology.
Convening
CISR’s AI Chatbot Accountability Initiative will bring together AI chatbot developers and publishers, consumer brands, mental and developmental health professionals, and NGOs representing industry and consumer interests, backed by BBB National Programs’ 30 years of online child protection expertise, to prioritize discussion on the safety and well-being of young audiences.
Join this critical mission and affirm your commitment to the transparency and accountability needed to safeguard children and teens who use AI chatbots.
Complete the form below to set up a phone call.
To set the foundation for this initiative, BBB National Programs’ Children’s Advertising Review Unit (CARU) and its Supporters examined both the potential benefits and risks of generative AI, discussing the online advertising, privacy, and safety issues that brands and companies face when designing and developing generative AI-powered online services directed to children.
CARU’s AI Working Group released The Generative AI & Kids Risk Matrix, a set of realistic, actionable considerations and best practices that helps companies identify and mitigate AI-related risks specific to children, while also equipping parents and consumers with tools to understand potential harms and take informed, protective action.