
Girls are increasingly turning to digital spaces for information and support—but too often find a world that isn't designed for them. Millions face barriers of stigma, limited resources, and privacy concerns when seeking trusted guidance and safe spaces to ask sensitive questions.

Since 2018, we’ve been using Artificial Intelligence (AI) and Machine Learning (ML) to provide girls with youth-friendly, accurate, and judgment-free health information and services.


INUA Health, our 24/7 AI-powered digital health platform, delivers culturally relevant, life-changing health information and direct connection to services for millions of youth.

Ensuring User Safety


Fine Tuning


Central to our principles is ensuring the safety of our users above all else.

Through multiple layers of protection, including keyword detection, human moderation, and intervention, we have developed AI models designed to identify sensitive disclosures. These models enable us to detect users in need and connect them with real-time human support and professional services.
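As an illustration of the first of those layers, here is a minimal keyword-screening sketch. The keyword lists, category names, and function names are hypothetical examples, not our production rules; real deployments layer ML classifiers and human moderators on top of a screen like this.

```python
# Illustrative keyword screen: flag messages that may contain a sensitive
# disclosure and route them to a human instead of the AI companion.

SENSITIVE_KEYWORDS = {
    "self-harm": ["hurt myself", "end my life"],
    "abuse": ["he hits me", "forced me"],
}

def flag_sensitive(message: str) -> list[str]:
    """Return the categories of sensitive disclosure detected, if any."""
    text = message.lower()
    return [
        category
        for category, phrases in SENSITIVE_KEYWORDS.items()
        if any(phrase in text for phrase in phrases)
    ]

def route(message: str) -> str:
    """Escalate flagged messages to a human moderator; otherwise let the AI reply."""
    return "human_moderator" if flag_sensitive(message) else "ai_companion"
```

Keyword matching alone is deliberately over-sensitive; the point is that a flagged message reaches a person, not that the list is exhaustive.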

Providing Safe, Accurate, Relevant Information


Safety by Design


Our LLM-powered evaluation framework is at the heart of our approach.


We understand that AI models have inherent biases and that their outputs involve a degree of randomness, which is why we've developed our own LLM-powered tool to evaluate each response against custom metrics for safety, accuracy, resonance, and reliability. Designed by our team of experts and guided by girls, this evaluation framework ensures that our content is safe and accurate for users, in line with our commitment to "do no harm."
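The shape of such an evaluation loop can be sketched as follows. The metric names come from the text above; the threshold, the `ask_judge` placeholder, and all function names are assumptions for illustration, not our actual framework.

```python
# Hypothetical LLM-as-judge evaluation loop: score each response on every
# metric and pass it only if all scores clear an assumed threshold.

METRICS = ["safety", "accuracy", "resonance", "reliability"]
THRESHOLD = 0.8  # assumed pass mark per metric

def ask_judge(metric: str, question: str, answer: str) -> float:
    """Placeholder for a call to a judge model returning a score in [0, 1]."""
    raise NotImplementedError

def evaluate(question: str, answer: str, judge=ask_judge) -> dict:
    """Score `answer` on every metric and report whether all metrics pass."""
    scores = {m: judge(m, question, answer) for m in METRICS}
    return {
        "scores": scores,
        "passed": all(s >= THRESHOLD for s in scores.values()),
    }
```

Because model outputs vary run to run, an automated gate like this is typically applied to every response, not just a sample.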


When engaging with users, we don't rely on the LLM's built-in knowledge base. Instead, our model pulls from the high-quality, locally relevant content we've spent nearly a decade developing. Using Retrieval Augmented Generation (RAG) with protective guardrails, we ensure all responses come exclusively from our trusted knowledge base, not from the model's general training data.
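A toy version of that guardrail looks like this. The corpus snippets, word-overlap scoring, and fallback message are invented for illustration; production RAG systems use vector embeddings and a real generation step, but the guardrail is the same: if nothing is retrieved from the trusted corpus, no answer is generated from it.

```python
# Toy retrieval-augmented flow over a small trusted corpus. The guardrail:
# answers are grounded in retrieved passages, never in model training data.

TRUSTED_CORPUS = [
    "Contraception options include condoms, pills, and implants.",
    "Periods usually start between ages 10 and 15 and that is normal.",
]

def retrieve(question: str, corpus=TRUSTED_CORPUS, min_overlap: int = 2):
    """Return corpus passages sharing at least `min_overlap` words with the question."""
    q_words = set(question.lower().split())
    return [
        passage for passage in corpus
        if len(q_words & set(passage.lower().split())) >= min_overlap
    ]

def answer(question: str) -> str:
    passages = retrieve(question)
    if not passages:
        # Guardrail: never fall back to the model's own training data.
        return "Let me connect you with someone who can help with that."
    return f"Based on our content: {passages[0]}"
```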

AI-Driven, Human-Connected




Keeping Humans in the Loop for Real-Time Support and Health Access


Our AI chat companions (Big Sis in South Africa, WAZZII in Kenya, and Bol Behen in India) answer young people's pressing health questions, provide crucial information and support, and connect them to essential health services.


Equipped with AI, these companions generate responses by drawing from our existing content base and adhering to best practices in social behavior change communication. Strict guardrails ensure that all responses are safe, accurate, and appropriate.


However, AI doesn't replace human connection; it enhances it. That's why we keep humans in the loop through our trained chat operators, who step in when users need personalized support. Whether it's triaging concerns, offering real-time reassurance, or booking appointments with health services, our human team ensures that young people receive the care they need beyond AI-powered conversations.
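The hand-off itself can be sketched in a few lines. The intent labels and routing rules below are illustrative assumptions; in practice a classifier assigns the intent upstream, and operators handle far more nuance than a lookup can capture.

```python
# Minimal human-in-the-loop hand-off: some intents always go to a person.

HUMAN_INTENTS = {"sensitive_disclosure", "book_appointment", "needs_reassurance"}

def route_conversation(intent: str) -> str:
    """Hand flagged intents to a trained chat operator; AI handles the rest."""
    return "chat_operator" if intent in HUMAN_INTENTS else "ai_companion"
```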


With AI and human expertise working hand in hand, we're building a safer, more responsive digital ecosystem that truly meets young people's needs.

Speaking Her Language


Local Large Language Models

For over ten years, we've been answering girls' questions about growing up. Through our chat companions, we have responded to more than 35 million questions and messages from young people, creating a vast dataset that has shaped the evolution of our Al-powered support.


Leveraging this data, we have built and trained custom Large Language Models (LLMs) to ensure our Al companions understand and communicate in the languages young people use today.


As a result, our AI companions can now understand and respond in Hinglish (a mix of Hindi and English) and Sheng (a mix of Swahili and English), making conversations more natural, relatable, and accessible for the girls who rely on them.
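To show what "code-mixed" means in practice, here is a crude word-list heuristic for spotting a Hinglish message. The tiny word lists and function name are invented examples; our actual models learn code-mixing from millions of real messages rather than from lists like these.

```python
# Crude heuristic: a message is code-mixed if it contains words from
# both languages. Word lists here are tiny illustrative samples.

HINDI_WORDS = {"kya", "hai", "mera", "nahi", "kaise"}
ENGLISH_WORDS = {"period", "why", "is", "my", "late", "what", "how"}

def is_code_mixed(message: str) -> bool:
    """Return True when the message mixes Hindi and English words."""
    words = set(message.lower().split())
    return bool(words & HINDI_WORDS) and bool(words & ENGLISH_WORDS)
```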

OUR IMPACT

GENAI OUTCOMES

In a 4-week A/B test of our Generative AI companion in South Africa,
the number of questions asked by users increased by:



Compared to users of our non-generative AI companion, users of our Generative AI companion were:

13% more likely to access information about health services

17% more likely to know key information about depression after consuming our content

12% more likely to return

11% more likely to recommend Big Sis


In this time period, our evaluation framework found that:

100% of answers were safe

97% of answers were relevant


Artificial Intelligence and Machine Learning
Read our vision


We involve key stakeholders, including young people themselves, in the development of our AI. We are committed to ensuring the safety, privacy, and protection of our users.

The data we hold belongs to girls, and we treat it with the utmost respect and care. Privacy is our default setting, and we ensure that our practices comply with both national and international data protection laws.

We only collect necessary information for legitimate business purposes, with special safeguards to protect children and young people. We consider the most vulnerable girl as our baseline for determining privacy, safety, and security policies and practices.

We empower girls with age-appropriate choices and control over their personal data, maintaining transparency about our privacy practices.

We put girls' well-being above institutional benefit, viewing data protection as fundamental to our mission.

Collaborative partnerships power our impact

PARTNER WITH US

Ready to explore opportunities to unlock
the power of girls?