
AI Equity Project

"AI is racing ahead in philanthropy and nonprofits.
But the people closest to communities aren’t writing the rules. We kept hearing fear, hope, and confusion in the same breath.
So we built a project that listens at scale—data, stories, disaggregated truths.
To make sure AI in our sector is shaped with nonprofits, not done to them."


– Meena Das, Founder of the AI Equity Project, on why this project exists

(Photo: Meena Das)

Project Sponsors

Media Partners
(critical analysis from the project has been published in)

SSIR
Hilborn
AFP Global
CharityVillage
Future of Good
Candid
Blackbaud

2024 & 2025

2 years of longitudinal AI Equity data

1,500

nonprofit staff and leaders surveyed across the U.S. and Canada

3,000+

report downloads and readers across the sector

20+

keynotes, webinars, and workshops shaped by the insights of the AI Equity Report

Things You Can Do NOW

The short version

What is the AI Equity Project?

The AI Equity Project is a multi-year research and listening initiative with nonprofits in the U.S. and Canada.


It systematically collects survey data and stories on how nonprofit staff are using, experiencing, and resourcing AI.


Data are disaggregated by factors like org size, geography, role, and identity to surface where benefits and harms are uneven.
The project’s purpose is to give nonprofits, funders, and infrastructure orgs evidence to make AI decisions that center equity, not just efficiency.


Findings inform AI strategies, funding priorities, policies, and practical supports tailored to smaller, rural, and equity-seeking organizations.

In 2023, we found that AI was being designed and deployed in ways that shape philanthropy, funding, and community impact—often without the voices of the very organizations working on the frontlines of social change. If we don’t actively shape AI’s role in our sector now, who will?


So, we collected data from 700+ nonprofits in the first year of this project (2024) to understand what is at stake. We found that AI is already influencing funding and outreach decisions—but we don’t know whether these systems are designed with nonprofit values in mind or are quietly reinforcing existing inequities. Most nonprofits use AI piecemeal, primarily for content generation, while deeper, mission-driven AI applications—and the collective process of designing those applications—have yet to become a disciplined reality. Without clear AI governance, organizations risk unintended harm: misrepresenting communities, reinforcing biases, and making decisions based on flawed algorithms.


In 2025, we expanded the project and deepened the analysis—disaggregating results by organizational size, geography, and identity. These crosstabs revealed that smaller and equity-seeking organizations are the most eager to experiment with AI and the least resourced to do so safely. Respondents told us they need training, hands-on support, and shared guardrails far more than they need another tool license. We also saw that people from historically marginalized communities report higher awareness of AI bias, even as their organizations struggle to fund governance and oversight. Together, the 2024 and 2025 findings position the AI Equity Project as an evidence base the sector can use to design AI strategies, funding, and policies that are grounded in nonprofit reality and centered on equity—not just efficiency.

Here is the longer story behind this project

Join AFP ICON 2026 (April) for the first in-person AI Equity workshop

(Image: AI Equity SWOT analysis)

What have we learned so far about
AI + the Nonprofit Sector?

1. AI is already here—just not yet mission-shaped

Across both years, most nonprofits are already using AI, but in narrow ways: drafting emails, social posts, and basic copy. Very few are using AI in ways that are clearly tied to mission, equity, or long-term strategy. AI is present in the sector, but it’s rarely being guided by intentional, community-informed design.


2. Small and equity-seeking orgs are the most eager—and the least resourced

Disaggregated results show that smaller, rural, and equity-seeking organizations are often the most curious and hopeful about AI and the least supported to use it safely. They are asking for training, time, and hands-on guidance, while larger organizations are more likely to have internal capacity, governance conversations, and access to external expertise.


3. Nonprofits want skills and accompaniment, not just tools

When asked how they’d like to be supported, respondents consistently prioritized training, practical learning spaces, and funder-provided technical assistance over more software or new pilots. The message is clear: nonprofits do not want “AI dropped on them”; they want partners who will walk beside them as they figure out what aligned, ethical AI looks like in their context.


4. There is a growing governance gap

Only a small share of organizations, mostly larger ones, are investing in AI governance—policies, decision frameworks, and accountability practices. Smaller organizations are often experimenting without clear guardrails, which increases the risk of misrepresenting communities, amplifying bias, or making decisions from flawed systems. Governance is emerging as a justice issue, not just a compliance issue.


5. Those closest to harm are doing the most learning about bias

People who identify as part of historically marginalized communities report higher familiarity with AI bias than those who do not. They are learning, naming risks, and calling for safeguards—even when their organizations have limited resources to act on those concerns. This confirms that any serious AI strategy in the sector must center the perspectives of those most impacted, not just those with the most power or budget.

(Image: AI Equity Report 2025 cover)
(Image: AI equity matters)