
When Responsible AI meets Community-Centricity



This article is part of NamasteData's LinkedIn-based newsletter, 'Data Uncollected'. Subscribe to it if you like this piece!




The idea of Responsible AI, to me, is like building a bridge that stays solid for many, many years to come.

You and I may have the tools and the manuals for using them, but until we understand the gap between where we came from and what our future should (not could) look like, you and I will never be able to build that solid bridge.

So, when I speak of Responsible AI, I want to focus less on tools and more on enabling the creativity, imagination, and accountability you and I can share. While it is a non-negotiable truth that each of us on this planet shares a responsibility towards the algorithms around us, it is equally true that our actions toward, and influence over, those algorithms differ. Some of us will always be end-users, while others will be algorithm designers and providers.

That's why, today, I want to explore what it looks like when the ideas of Responsible AI and community-centricity meet. But in two views – one as end-users (that is, our nonprofit industry) and the other as developers.

I. For Nonprofits: embracing AI responsibly.


Principle 1: The AI we use must be inclusive and equity-centered for all.
Imagine a nonprofit dedicated to improving educational outcomes in an underserved community. It decides to use AI to analyze student performance data and identify students at risk of falling behind. The AI solution is applied carefully so that every struggling student receives tailored support. It centers the community by addressing disparities and ensuring every student gets a fair chance to succeed.

Principle 2: We contribute to, share, and influence the building of an industry culture of collective wisdom around Responsible AI.
Say several local nonprofits join forces to tackle homelessness. Together, they can create space to learn how to protect data and use it ethically. This collaborative effort not only fosters mutual support but also ensures that the community's privacy is upheld. In a community where multiple nonprofits work toward shared goals, it's crucial to establish collective data ethics principles.

Principle 3: Any segmentation and comparison from AI must not diminish the spirit of generosity.
We need to understand that the spirit of generosity extends beyond monetary donations. When we speak of "community", it includes volunteers, staff, donors, and board members - that is, all the allies and advocates intending to support the mission with heart and care. Say a youth-focused nonprofit uses AI to track and appreciate volunteer hours. To use this AI responsibly, the segmentations from the AI solution must not be interpreted to compare or diminish the contributions of the volunteers. Instead, the segmentations must be used to evaluate how to meaningfully elevate every volunteer's generosity of time and wisdom.

Principle 4: AI must be included with a purpose and intention that centers human relationships.
When bringing AI solutions into an organization, why you need the solution comes before what the AI can do. Say a nonprofit uses generative AI (e.g., ChatGPT) to create drafts of community communication. This saves time, yes. But the intention shouldn't be to save time only, but to ask – what do we do with that time? It could be that the time saved allows staff to devote more hours to connecting with community members, fostering deeper relationships with them.

Principle 5: The use of AI must foster transparency within the community.
Say an environmental nonprofit employs AI to personalize donor communications. In doing so, it should create a set of guidelines under which it operates and engages with AI. This set of guidelines, or "AI values", must be communicated to community partners, such as donors. This transparency fosters trust and ensures your community knows your ethics in engaging with technology.


II. For AI Developers and Engineers: building Responsible AI for nonprofits.


Principle 1: The AI we build must be inclusive and equity-centered for all.
When developing AI solutions for nonprofits, we must ensure that the algorithms prioritize inclusion, equity, and justice in design. That means implementing fairness-aware AI models that actively reduce biases in decision-making. For example, an AI-powered education platform should prioritize identifying at-risk students, regardless of their background, to foster equitable learning opportunities.
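
To make this concrete, here is a minimal sketch of what an equity audit could look like in practice. It assumes a simple at-risk classifier whose predictions we can compare across demographic groups; the group labels, data, and 0.8 disparity threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch (not a production fairness toolkit): checking whether an
# at-risk-student model flags truly struggling students at similar rates
# across demographic groups.
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Per-group recall: of the students truly at risk, how many were flagged?"""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:                    # student is actually at risk
            totals[group] += 1
            hits[group] += int(pred == 1)
    return {g: hits[g] / totals[g] for g in totals}

def audit_equity(y_true, y_pred, groups, min_ratio=0.8):
    """Flag the model if any group's recall falls well below the best group's."""
    recalls = recall_by_group(y_true, y_pred, groups)
    best = max(recalls.values())
    gaps = {g: r for g, r in recalls.items() if r < min_ratio * best}
    return recalls, gaps   # a non-empty 'gaps' means some students are being missed

# Illustrative data: the model under-identifies at-risk students in group "B".
y_true = [1, 1, 1, 1, 0, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
grp    = ["A", "A", "B", "A", "B", "A", "B", "B"]
print(audit_equity(y_true, y_pred, grp))
```

One design note: auditing recall (rather than overall accuracy) asks the equity question directly – of the students who truly need support, how many does the model surface in each group?
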
Principle 2: We contribute to, share, and influence the building of an organizational culture of collective wisdom around Responsible AI.
To design AI solutions with data ethics in mind, we must enable our entire organization to engage with the ideas of centering ethics and accountability at the heart of the products. Say a tech org develops AI applications that allow for secure, privacy-preserving data sharing among nonprofit partners, using tools for data anonymization and encryption to protect sensitive community information while facilitating data-driven collaboration.
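
As one illustration, here is a minimal sketch of pseudonymizing records before they leave an organization. The shared coalition key, field names, and coarsening rules are assumptions for this example, not a prescribed schema; a real deployment would pair this with proper key management and encryption in transit.

```python
# A minimal sketch of pseudonymizing client records before sharing among
# partner nonprofits, assuming the coalition holds a shared secret key.
import hmac
import hashlib

SHARED_KEY = b"coalition-secret-key"   # in practice, managed in a secrets vault

def pseudonymize(identifier: str) -> str:
    """Keyed hash so partners can link records without seeing raw identities."""
    digest = hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]     # shortened for readability in this sketch

def prepare_for_sharing(record: dict) -> dict:
    """Replace direct identifiers; coarsen quasi-identifiers like full ZIP codes."""
    return {
        "person_id": pseudonymize(record["name"]),
        "zip3": record["zip"][:3],      # coarsened location, not a full address
        "services_used": record["services_used"],
    }

print(prepare_for_sharing(
    {"name": "Jane Doe", "zip": "94110", "services_used": ["shelter", "meals"]}
))
```
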

Principle 3: The value of AI solutions must be seen beyond quantified numbers (e.g., dollars saved, dollars earned, users obtained) or inputs to increased efficiency, innovation, market domination, or capital accumulation. It must also include the AI's contributions towards the well-being, safety, and non-extractive growth of those impacted by the solution.
AI tools, especially those created to support nonprofits, must recognize and value the diverse contributions of community members. For example, design AI solutions that track and acknowledge volunteer efforts in addition to financial donations, or develop user-friendly interfaces that allow nonprofits to celebrate all forms of engagement, fostering a sense of inclusivity and appreciation.

Principle 4: AI must be designed with a purpose and intention that centers human relationships.
The AI solutions we design must be questioned, challenged, and embraced appropriately, so that all those benefiting from them are held accountable with transparency. The real value of Responsible AI is in how the product (the AI solution) is designed and how it keeps its designers responsible for their work.

Principle 5: The AI solutions must foster transparency within the end-user community.
Design AI systems with transparency in mind, ensuring that the end-users understand
· how the AI is designed,
· how the AI can be used responsibly and meaningfully,
· and how to engage with the AI sustainably.
For example, create AI interfaces that clearly explain AI-driven decision-making processes, the rationale behind segmentations, and the specific steps taken to ensure ethical design (see the sketch below). All of this, then, encourages open conversations about AI usage within the end-user community.
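
As a small sketch of that idea, the interface below attaches a plain-language rationale to every segmentation it returns, so staff can share the "why" verbatim with their community. The segment names, thresholds, and fields are invented for this illustration.

```python
# A minimal sketch of a "transparent by default" interface: every segmentation
# carries a human-readable rationale alongside the result itself.
from dataclasses import dataclass

@dataclass
class Decision:
    segment: str
    reasons: list          # plain-language rationale, shown with the result

def segment_volunteer(hours_past_year: float, months_active: int) -> Decision:
    reasons = [
        f"Logged {hours_past_year} volunteer hours in the past year.",
        f"Active with the organization for {months_active} months.",
    ]
    # Illustrative rules; a real system would document its actual criteria.
    if months_active >= 12 and hours_past_year >= 40:
        return Decision("long-term anchor", reasons)
    if hours_past_year >= 40:
        return Decision("high-engagement newcomer", reasons)
    return Decision("growing contributor", reasons)

d = segment_volunteer(hours_past_year=55, months_active=6)
print(d.segment)             # "high-engagement newcomer"
print("\n".join(d.reasons))  # rationale a staff member can share verbatim
```
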

*********************************

At the heart of Responsible AI is not just technological progress but also a deep commitment to building better communities and a better planet. And the cost of looking at AI without that "Responsible" part is too high.

It is time we bring home a new outlook towards AI – a democratized, balanced, and shared view of its opportunities and threats.

One that can happen between you, me, and more like us already living in those algorithms – regardless of our roles and expertise. Together, as end-users and designers, we can work towards building more robust, more hopeful, and more inclusive communities.

I say we commit to holding ourselves and each other accountable every time we come face to face with making decisions about/from AI.

To individual accountability and collective power.

*** So, what do I want from you today (my readers)?
Which principles here resonate the most with you?


