Embracing Artificial Intelligence in Nonprofits: Crafting a Conscious Path Forward

Staff members in your nonprofit are using AI right now. Are they telling you which documents include AI-generated content? Do you have guidelines in place to help your staff determine whether and how to use AI effectively and ethically?

Ted Bilich

For the nonprofit sector, where resources are precious and missions are critical, AI offers a banquet of opportunities—provided we approach the table with wisdom and foresight.

The recent executive order (and fact sheet) from the Biden Administration isn’t just political prose; it’s a clarion call for organizations, including nonprofits, to take AI seriously—not just in use, but in governance. What does that mean for the nonprofit world? It's time to set internal policies for using off-the-shelf AI tools responsibly.

When it comes to policy development, the Society for Human Resource Management (SHRM) recently released a draft policy (paywalled) for generative AI chatbot usage that strikes the right tone of clarity and responsibility. Nonprofits can draw valuable insights from such templates, adapting them to a context where integrity and stewardship are not mere buzzwords but pillars of operation.

AI tools such as ChatGPT, Grok, Bard, and Midjourney are not just about automating tasks or analyzing data. They're about enhancing the nonprofit's capacity to serve. However, with data (including client data) being the new oil, there's a fine line between maximizing potential and breaching trust. That's where a well-crafted policy comes in, ensuring that the organization stays true to its mission while embracing innovation.

So, what should be included in a nonprofit’s AI use policy? Consider the following:

Purpose and Scope of AI Use:

  • Define the purpose behind using AI within the organization.
  • Clarify which aspects of operations AI tools will be applied to and how team members can suggest new use cases.
  • Identify the types of AI technologies that will be used, such as machine learning and natural language processing.

Alignment with Mission:

  • Ensure that the use of AI is in alignment with the nonprofit's mission and values.
  • Establish clear use cases where AI can advance the organizational goals and benefit stakeholders. 

Data Privacy and Protection:

  • Outline measures for protecting the privacy and security of data used by AI systems.
  • Specify guidelines for data collection, storage, and processing in compliance with relevant laws and best practices.

Transparency and Accountability:

  • Detail the level of transparency required in AI operations. Who needs to know within the organization? What about those outside the organization?
  • Set forth processes for documenting AI decision-making and maintaining accountability.

Equity and Inclusivity Considerations:

  • Address potential biases in AI algorithms and datasets.
  • Develop strategies to ensure AI tools do not perpetuate inequality or discrimination.

Governance and Oversight:

  • Describe the governance structure overseeing AI use, including roles and responsibilities.
  • Establish protocols for ongoing review and assessment of AI tools and their impacts.

Compliance with Laws and Ethical Standards:

  • Enumerate legal standards applicable to AI use, such as copyright laws for AI-generated content.
  • Articulate the ethical principles guiding AI use within the organization.

Risk Management:

  • Conduct regular risk assessments related to AI deployment and use.
  • Develop mitigation strategies for identified risks, including those related to technology failure or misuse.
  • Plan for ways to capture opportunities (upside risks) presented by new AI applications.

Staff Training and User Competence:

  • Provide training programs to ensure staff are competent with AI tools and understand how those tools may be used within your nonprofit.
  • Create awareness of the limitations and capabilities of AI within the organization.

User Engagement and Public Interaction:

  • Set guidelines for how AI-generated communication is identified and represented to the public.
  • Outline the role of AI in user engagement and service delivery.

Performance Measurement and Reporting:

  • Define metrics for evaluating the performance and impact of AI tools.
  • Establish reporting procedures to monitor AI effectiveness and inform stakeholders.

Review and Updating of AI Policy:

  • Implement a schedule for regular review and updates to the policy to reflect new challenges and developments in AI technology.
  • Keep in mind that AI tools (and the quandaries surrounding them) are multiplying quickly, meaning that risks in this area are fast-moving.

Exit Strategies and Contingency Planning:

  • Prepare for the possibility of discontinued use of certain AI tools or vendors.
  • Ensure the policy includes steps for data retrieval and transition in such events.

Adopting AI doesn't mean letting go of the human element. Nonprofits must ensure that AI usage reflects their values, respects privacy, and promotes equity. This starts with thorough staff training, regular monitoring, and a willingness to adapt policies as technology evolves.

Nonprofits are mirrors reflecting societal values, and the AI tools they use must do the same. It's not just about compliance with laws or regulations—it's about upholding a commitment to fairness and respect. The policies nonprofits set for AI use are less about controlling technology and more about guiding its use in a way that amplifies their core principles.

In the face of rapid technological advancement, nonprofits have the chance to lead by example. Crafting an AI policy isn't just administrative—it's a statement that technology can and should be harnessed for the greater good. The journey with AI is a perpetual beta test. Policies must be living documents that grow with the technology they govern. By doing so, nonprofits don't just protect themselves and their beneficiaries—they blaze a trail for ethical AI use.

The horizon is bright for nonprofits leveraging AI. With intentional policymaking and a commitment to ethical use, the sector can navigate these waters confidently and keep AI a tool for good. As we move forward, let's make sure our digital evolution reflects our highest aspirations, not just our operational needs. Nonprofits have long been the conscience of the business world, and in the age of AI, that role is more important than ever. By setting thoughtful policies for AI usage, nonprofits can safeguard their interests and those of their stakeholders while modeling responsible innovation for the wider world. After all, the true power of technology isn't in its circuits and algorithms; it's in the human hearts and hands that guide it.

If you're ready to navigate the complexities of AI integration and want to ensure your nonprofit harnesses this technology responsibly and effectively, reach out to Risk Alternatives today.