Join the Responsible AI Challenge

by Mozilla Builders

Challenge accomplished. Thank you for making this a success!

We are grateful to all participants, speakers, judges, and supporters whose passion and dedication paved the way for Responsible AI development. The event exceeded expectations, with insightful discussions and inspiring projects, reaffirming our commitment to shaping the future of AI.

Stay tuned

Announcing top prize winners

Three people holding a giant cheque for $50,000
Sanative AI

1st Place

Sanative AI provides anti-AI watermarks to protect images and artwork from being used as training data for diffusion models.

One person holding a giant cheque for $30,000
Kwanele Chat Bot

2nd Place

Kwanele Chat Bot aims to empower women in communities plagued by violence by enabling them to access help fast and ensure the collection of admissible evidence.

One person holding a giant cheque for $20,000
Nolano

3rd Place

Nolano enables trained language models for natural language processing to run on laptops and smartphones.

About the challenge

We at Mozilla believe in AI: in its power, its commercial opportunity, and its potential to solve the world’s most challenging problems. But we also recognize its risks and want to help develop it responsibly to serve society. That’s why we’ve created the Responsible AI Challenge, a 1-day in-person event to inspire and support a community of Builders working on Trustworthy AI products and solutions.

The event will convene some of the brightest thinkers, technologists, ethicists and business leaders for talks, workshops and working sessions that help entrepreneurs and creators get their ideas off the ground. Our goal is to identify and bring together entrepreneurs and builders who believe in our mission of trustworthy AI and give them resources, funding, support and cross-pollination, so they can make better products and build more responsible companies.

How applications were evaluated

Concept innovation

How is your project unique in its approach and how does it solve a specific problem or need in society?

Technical implementation

What technology stack and datasets will you leverage to make your AI solution work?

Responsible AI

Does your project consider things like agency, accountability, privacy, fairness, and safety? How does it address our Responsible AI Guidelines? (See below)

What types of projects are we looking for?

Projects could be interactive websites, games, apps or other tools that use AI for functional or creative purposes through the lens of responsible, ethical or trustworthy design. We encourage projects that have a strong mission and contribute to the well-being of individuals, society or the planet.

Preference will be given to:

Consumer technology projects

Think general purpose internet products and services aimed at a wide audience. This includes products and services ranging from social media platforms, search engines, and ride-sharing apps, to smart home devices and wearables, to e-commerce and algorithmic lending, as well as artistic expressions or media creation.

Generative AI projects

Generative AI projects and those using transformer models more broadly. We are eager for entries that explore and address the interplay between these recent trends and the timeless need for trustworthy AI.

What do we mean by Responsible AI?

We believe that Responsible AI is demonstrably worthy of trust. It’s technology that considers accountability, user agency, and individual and collective well-being.

These are the Responsible AI guidelines we will use to evaluate entries to this Challenge:


Agency

Is your AI designed with personal agency in mind? Do people have control over how they use the AI, how their data is used, and the algorithm’s output?

Accountability

Are you providing transparency into how your AI systems work? Are you set up to support accountability when things go wrong?

Privacy

How are you collecting, storing and sharing people’s data?

Fairness

  • Do your computational models, data, and frameworks reflect or amplify existing bias or assumptions, resulting in biased or discriminatory outcomes or an outsized impact on marginalized communities?
  • Are the computing and human labor used to build your AI system vulnerable to exploitation and overwork?
  • Is your AI accelerating the climate crisis through its energy consumption or by speeding up the extraction of natural resources?

Safety

Are bad actors able to carry out sophisticated attacks by exploiting your AI systems?

Meet our judges and speakers


Headshot of Raffi Krikorian

Raffi Krikorian

Raffi is an Armenian-American technology executive and the CTO of the Emerson Collective.

Headshot of Lauren Wagner

Lauren Wagner

Lauren is an early-stage investor and Fellow at the Berggruen Institute.

Headshot of Ramak Molavi

Ramak Molavi

Ramak is a digital rights lawyer and has led the 'Meaningful AI Transparency' research project at Mozilla since 2021.

Headshot of Raesetje Sefala

Raesetje Sefala

Raesetje is an AI Research Fellow at the Distributed AI Research Institute (DAIR).

Headshot of Damon Horowitz

Damon Horowitz

Damon is a technologist, philosophy professor and serial entrepreneur.

Headshot of James Hodson

James Hodson

James is the CEO of the AI for Good Foundation, which is building economic and community resilience through technology.

Headshot of Deb Raji

Deb Raji

Deb is a Nigerian-Canadian computer scientist and activist working on algorithmic bias, AI accountability, and algorithmic auditing.


Headshot of Margaret Mitchell

Margaret Mitchell

Margaret is the Chief Ethics Scientist at Hugging Face, with deep experience in ML development, ML data governance, and AI evaluation.

Headshot of Gary Marcus

Gary Marcus

Gary is Professor Emeritus of Psychology and Neural Science at NYU and the author of five books, including the New York Times bestseller Guitar Zero.

Our Partners

Cohere logo
Fiddler logo
Hugging Face logo
Center for Humane Technology logo
Credo AI logo
EleutherAI logo

Event Agenda

10:30am - 11:00am Registration and check-in
11:00am - 11:50am Workshop #1 (3 Choices):
Robust and Reliable ML Application Development - This workshop aims to provide an in-depth understanding of CAPSA, a unified framework for quantifying risk in deep neural networks developed by Themis AI, in addition to risk-aware applications built with this framework.
What Is Humane Innovation? - Join Center for Humane Technology Innovation Lead Andrew Dunn for an interactive presentation on building AI with principles from their Foundations course.
Prototyping Social Norms and Agreements in Responsible AI - In this workshop led by Mozilla Senior Fellow in Trustworthy AI Bogdana Rakova, we’ll explore questions related to AI risks, safeguards, transparency, human autonomy, and digital sovereignty through social, computational, and legal mechanisms, applying a hands-on approach grounded in participants’ projects.
12:00pm - 12:50pm Lunch
1:00pm - 1:50pm Workshop #2 (4 Choices):
Algorithmic Fairness: A Pathway to Developing Responsible AI Systems - Join Golnoosh Farnadi, Canada CIFAR AI Chair, Mila, as she discusses the importance of fairness and provides an overview of techniques for ensuring algorithmic fairness through the machine learning pipeline while also suggesting questions and future directions for building a responsible AI system.
Operationalizing AI Ethics - Dr. Karina Alexanyan, Director of Strategy at All Tech Is Human; Katya Klinova, Head of AI, Labor and the Economy at Partnership on AI; Chris McClean, Global Lead for Digital Ethics at Avanade; Gabriela de Queiroz, Principal Cloud Advocate at Microsoft; and Ehrik Aldana, Tech Policy Product Manager, will share insights and examples of how to be thoughtful and responsible about designing and deploying innovations in AI.
Building Trust into Gen AI: Model Visibility and Tracking Change in Data Distributions - The current generation of Generative AI excels at working directly with humans in human language, and for several reasons this increases the potential for harm. Join Fiddler AI to generate a dataset by collaborating on a toy genAI task and explore techniques for localizing model weaknesses and time-dependent semantic drift.
“Wait Wait, Don’t Push That Button!” Human-Centered Design for Responsible AI - Join Superbloom for a workshop on human-centered design and AI, where we’ll discuss designing for users in responsible ways that account for challenges, benefits, and what can go wrong.
2:00pm - 3:20pm Keynote Speakers
Imo Udom - SVP Innovation Ecosystems
Margaret Mitchell - Chief Ethics Scientist, Hugging Face
Gary Marcus - Emeritus Professor of Psychology and Neural Science at NYU
Kevin Roose - Speaker, reporter and author of Futureproof
3:35pm - 5:40pm Responsible AI Challenge Final Rounds
3-5 minute pitch + 5 minutes Q&A per team
Awards and prizes!
5:40pm - 7:30pm Closing Reception

Got questions?

Get more info and support on Discord.