Girls are increasingly turning to digital spaces for information and support—but too often find a world that isn't designed for them. Millions face barriers of stigma, limited resources, and privacy concerns when seeking trusted guidance and safe spaces to ask sensitive questions.
Since 2018, we’ve been using Artificial Intelligence (AI) and Machine Learning (ML) to provide girls with youth-friendly, accurate, and judgment-free health information and services.
Fine-Tuning
Central to our principles is ensuring the safety of our users above all else.
Through multiple layers of protection, including keyword detection, human moderation, and intervention, we have developed AI models designed to identify sensitive disclosures. These models enable us to detect users in need and connect them with real-time human support and professional services.
Safety by Design
Our LLM-powered evaluation framework is at the heart of our approach.
We understand AI models have inherent biases and that their outputs involve a degree of randomness, which is why we've developed our own LLM-powered tool to evaluate each response against custom metrics for safety, accuracy, resonance, and reliability. Designed by our team of experts and guided by girls, this evaluation framework ensures that our content is safe and accurate for users, in line with our commitment to "do no harm."
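A minimal sketch of this kind of "LLM as judge" loop, assuming a judge model that returns JSON scores per metric: the prompt format, threshold, and `ask_judge_llm` callable are all illustrative placeholders, not our actual framework.

```python
# Hedged sketch: score each candidate reply against custom rubrics
# (safety, accuracy, resonance, reliability) using a judge LLM.
# `ask_judge_llm` is a placeholder for a real model call.
import json

METRICS = ("safety", "accuracy", "resonance", "reliability")

def build_judge_prompt(question: str, reply: str) -> str:
    return (
        "Rate the reply to a young person's health question on each metric "
        f"from 0 to 1, returning JSON with keys {list(METRICS)}.\n"
        f"Question: {question}\nReply: {reply}"
    )

def evaluate(question: str, reply: str, ask_judge_llm) -> dict:
    raw = ask_judge_llm(build_judge_prompt(question, reply))
    scores = json.loads(raw)
    # Example gating rule: every metric must clear a threshold.
    scores["pass"] = all(scores[m] >= 0.7 for m in METRICS)
    return scores
```

Replies that fail the gate would be regenerated or routed to a human reviewer rather than sent to the user.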
When engaging with users, we don't rely on the LLM's built-in knowledge base. Instead, our model pulls from the high-quality, locally relevant content we've spent nearly a decade developing. Using Retrieval Augmented Generation (RAG) with protective guardrails, we ensure all responses come exclusively from our trusted knowledge base, not from the LLM's generic training data.
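A toy sketch of retrieval with a refusal guardrail, under stated assumptions: the embedding is a deliberately simple bag-of-characters stand-in (a real system would use a sentence-embedding model), and `generate` is a placeholder for a grounded LLM call.

```python
# Minimal RAG sketch: retrieve the closest passage from a curated corpus,
# refuse (and hand off to a human) when nothing is similar enough, and
# instruct the generator to use only the retrieved passage.
from math import sqrt

def embed(text: str) -> list[float]:
    # Placeholder: letter-frequency vector, not a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def answer(question: str, corpus: list[str], generate, min_sim: float = 0.5) -> str:
    q = embed(question)
    best = max(corpus, key=lambda doc: cosine(q, embed(doc)))
    if cosine(q, embed(best)) < min_sim:
        # Guardrail: no trusted content matches, so don't improvise.
        return "I'm not sure; let me connect you with a human operator."
    return generate(f"Answer using ONLY this passage:\n{best}\n\nQ: {question}")
```

The key design point mirrored from the text: answers are grounded in the curated corpus, and out-of-scope questions fall back to a human rather than to the model's own training data.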
AI-Driven, Human-Connected
Keeping Humans in the Loop for Real-Time Support and Health Access
Our AI chat companions (Big Sis in South Africa, WAZZII in Kenya, and Bol Behen in India) answer young people's pressing health questions, provide crucial information and support, and connect them to essential health services.
Equipped with AI, these companions generate responses by drawing from our existing content base and adhering to best practices in social behavior change communication. Strict guardrails ensure that all responses are safe, accurate, and appropriate.
However, AI doesn't replace human connection; it enhances it. That's why we keep humans in the loop through our trained chat operators, who step in when users need personalized support. Whether it's triaging concerns, offering real-time reassurance, or booking appointments with health services, our human team ensures that young people receive the care they need beyond AI-powered conversations.
With AI and human expertise working hand in hand, we're building a safer, more responsive digital ecosystem that truly meets young people's needs.
Local Large Language Models
For over ten years, we've been answering girls' questions about growing up. Through our chat companions, we have responded to more than 35 million questions and messages from young people, creating a vast dataset that has shaped the evolution of our AI-powered support.
Leveraging this data, we have built and trained custom Large Language Models (LLMs) to ensure our AI companions understand and communicate in the languages young people use today.
As a result, our AI companions can now understand and respond in Hinglish (a mix of Hindi and English) and Sheng (a mix of Swahili and English), making conversations more natural, relatable, and accessible for the girls who rely on them.
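As an illustration of how historical Q&A logs can feed model training, here is a hedged sketch of converting moderated chat records into supervised fine-tuning data (JSON Lines). The field names, file layout, and sample text are assumptions, not our real pipeline.

```python
# Illustrative only: turn approved Q&A chat log entries into one
# prompt/completion JSON record per line for supervised fine-tuning.
import json

def to_finetune_records(chat_logs: list[dict]) -> list[str]:
    """Each entry holds a user question and the approved, moderated answer."""
    records = []
    for entry in chat_logs:
        records.append(json.dumps({
            "prompt": entry["question"],
            "completion": entry["approved_answer"],
            "lang": entry.get("lang", "unknown"),  # e.g. "hinglish", "sheng"
        }, ensure_ascii=False))
    return records
```

Tagging each record with its (possibly code-mixed) language is one simple way such a dataset could teach a model to respond naturally in Hinglish or Sheng.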