Communiti Labs

Engagement Institute NZ Symposium 2026

AI in Engagement: A Practical Toolkit

Your framework for assessing, implementing, and governing AI tools in community engagement practice.

AI is a tool, not a replacement. The best AI implementations in engagement are the ones where practitioners remain firmly in control of the judgments that matter. Think of AI as a copilot, not an autopilot. Your expertise, your relationships with community, and your professional judgment are irreplaceable.

Section 01

Workflow Assessment Framework

Not every task benefits from AI. Use this four-question filter to evaluate any workflow your team is considering for AI assistance. Pick a task your team does regularly and run it through all four questions.

Is this task repetitive, high-volume, or time-intensive?

AI is strongest when applied to tasks that follow predictable patterns and consume disproportionate time relative to the judgment they require.

Green

High-volume, pattern-based tasks with clear inputs and outputs.

Amber

Moderate volume, but requires some contextual judgment.

Red

Low-volume, unique, or highly contextual work where AI adds little value.

Does this task require subjective human judgment about community sentiment?

AI can support analysis, but it cannot replace the practitioner’s understanding of context, power dynamics, cultural nuance, or lived experience.

Green

Task is procedural or administrative. Human judgment is for review, not creation.

Amber

AI can assist, but a practitioner must interpret and validate every output.

Red

Task is fundamentally about human judgment, relationships, or cultural sensitivity.

What is the consequence if the AI gets it wrong?

Every AI tool produces errors. The question is not whether it will make mistakes, but what happens when it does.

Green

Low stakes. Internal drafting, preparation, or exploration. Errors are easily caught.

Amber

Moderate stakes. Output will be reviewed but informs decisions. Errors could mislead.

Red

High stakes. Output directly informs public reporting or council decisions. Errors erode trust.

Can you explain to the community how AI was used?

If you cannot clearly and transparently describe the AI’s role, you are not ready to use it. Transparency is not optional.

Green

You can write a clear, plain-language statement for your methodology section.

Amber

You could explain it, but it would require significant caveats or qualifications.

Red

You cannot clearly articulate what the AI did, or disclosure would undermine confidence.

Quick Assessment Grid

If you have two or more reds, this task is not ready for AI. Mostly greens with review processes? Likely a good candidate.

Theming open-text survey responses

Repetitive? Green
Judgment? Amber
Stakes? Amber
Explainable? Green

Likely yes, with review

Translating engagement materials

Repetitive? Green
Judgment? Green
Stakes? Green
Explainable? Green

Strong candidate

Summarising 3,000+ submissions

Repetitive? Green
Judgment? Amber
Stakes? Red
Explainable? Amber

Proceed with caution

Drafting engagement plans

Repetitive? Amber
Judgment? Red
Stakes? Amber
Explainable? Amber

Assist only, not generate

Generating synthetic community voices

Repetitive? Green
Judgment? Red
Stakes? Red
Explainable? Red

Not recommended

Transcribing audio/video recordings

Repetitive? Green
Judgment? Green
Stakes? Green
Explainable? Green

Strong candidate

Section 02

10 Questions Before You Deploy

Before any AI tool goes live in your engagement work, answer these ten questions. If you cannot confidently answer more than half, you are not ready to deploy. That is not a failure — it is a sign that the groundwork needs to happen first.

01
Data & Privacy

Where does community data go?

Know exactly where data is stored, processed, and whether it leaves your jurisdiction. For New Zealand and Australian councils, understand your obligations under the Privacy Act and any data sovereignty requirements. Ask the vendor directly: does data leave the country?

02
Data & Privacy

Who can access the data, and for how long?

Clarify access controls, retention periods, and deletion processes. What happens to community feedback after your engagement closes? Is it retained by the AI provider? Can it be used to train their models? Get this in writing.

03
Transparency

Can you write a clear methodology statement?

If AI was used in your engagement process, you should be able to describe its role in plain language in your engagement report. Draft this statement before you deploy, not after. If you struggle to write it, reconsider the approach.

04
Transparency

Would you be comfortable if a community member asked how their data was used?

Community trust is built on transparency. If a concerned resident or ratepayer asks whether AI was used to process their submission, you need a clear, honest, and reassuring answer. Prepare it in advance.

05
Governance

Is there a human review point before any AI output informs a decision?

Human-in-the-loop is not a checkbox. Define specifically where in your workflow a qualified person reviews, validates, or overrides the AI output. Document this process. The higher the stakes, the more rigorous the review.

06
Governance

Does your organisation have an AI use policy that covers engagement?

A general IT policy is not enough. Your engagement team needs guidance specific to how AI interacts with community voice, public trust, and democratic participation. If you do not have one, this toolkit is a starting point for drafting it.

07
Bias & Representation

Could this tool amplify existing biases or underrepresent voices?

AI models reflect the data they were trained on, which often underrepresents Indigenous communities, culturally and linguistically diverse groups, and people with lower digital literacy. Consider who might be excluded or misrepresented and build mitigation into your process.

08
Procurement

What questions should you be asking the vendor?

Ask: Is this a wrapper around a general-purpose model or purpose-built? What happens during an outage? What is the error rate and how is it measured? Can you provide references from similar organisations? What does “human-in-the-loop” mean specifically in your product?

09
Integrity

Is the output real community sentiment or AI-generated?

This is the line that must not be blurred. AI can help you process, translate, summarise, and theme real community feedback. It should never be used to generate, fabricate, or substitute for community voice. Synthetic audiences should be used only for preparation and stress-testing, never as a replacement for genuine input.

10
Readiness

Is your team ready, not just your technology?

Successful AI adoption requires change management, training, and ongoing support. Your team needs to understand what the tool does, what it does not do, and where their professional judgment remains essential. Technology readiness without team readiness leads to misuse or abandonment.

Section 03

Glossary

Plain-language definitions of key AI terms you are most likely to encounter in engagement contexts.

Artificial Intelligence (AI)

A broad term for computer systems that can perform tasks that typically require human intelligence, such as understanding language, recognising patterns, and making predictions. In engagement, AI is most commonly used for text analysis, translation, and summarisation.

Large Language Model (LLM)

The technology behind tools like ChatGPT and Claude. LLMs are trained on vast amounts of text and can generate, summarise, translate, and analyse written content. They are powerful but imperfect and can produce plausible-sounding errors.

Retrieval-Augmented Generation (RAG)

A technique where an AI model is connected to a specific set of documents or data so that its responses are grounded in that source material rather than its general training. This is how AI engagement tools can analyse your specific submissions rather than generating generic responses.

Sentiment Analysis

The automated identification of whether a piece of text expresses a positive, negative, neutral, or mixed tone. Useful for getting a broad overview of community feedback, but should never be used as the sole basis for decision-making, as it struggles with sarcasm, cultural context, and nuance.

Synthetic Audiences

AI-generated simulations of community members, created to mimic how different demographics might respond to engagement questions. Valuable for preparation such as stress-testing surveys, but should never be used as a substitute for genuine community voice.

Human-in-the-Loop (HITL)

A design principle where a human reviews, validates, or overrides AI outputs at defined points in a process. Essential for any AI application in engagement. Ask vendors what HITL means specifically in their product, as the term is sometimes used loosely.

Data Sovereignty

The principle that data is subject to the laws of the country in which it is stored or processed. For New Zealand and Australian engagement data, this means understanding whether community feedback is being sent overseas for processing by an AI provider.

Natural Language Processing (NLP)

A branch of AI focused on understanding and generating human language. Includes capabilities like text classification, entity recognition, translation, and summarisation. Most AI engagement tools rely heavily on NLP.

Hallucination

When an AI model generates information that sounds plausible but is factually incorrect or entirely fabricated. This is a known limitation of current AI technology and is a key reason why human review of AI outputs is essential.

Prompt Engineering

The practice of crafting specific instructions to an AI model to get better, more accurate, or more relevant outputs. Relevant for practitioners using general-purpose AI tools, as the quality of what you ask directly affects the quality of what you get back.

Fine-Tuning

The process of further training an AI model on a specific dataset to improve its performance for a particular use case. A fine-tuned model for engagement analysis will generally perform better than a general-purpose model used out of the box.

Algorithmic Bias

Systematic errors in AI outputs that reflect imbalances in training data or model design. In engagement contexts, this can mean certain community voices or perspectives being systematically over- or under-represented in AI-generated analysis.

Section 04

Next Steps

You have the framework and the checklist. Here is how to put them to work.

This Week

  • Pick one workflow your team does regularly and run it through the four-question filter. Start with the task that consumes the most time.
  • Share this toolkit with your team lead or manager. If your organisation does not have an AI policy for engagement, flag it.
  • If you are already using AI informally, write down what you are using it for. Visibility is the first step toward governance.

This Month

  • Run the Guardrails Checklist against any AI tool you are currently using or evaluating.
  • Have a team conversation about AI use in your engagement practice. Use the assessment grid to map your most common workflows.
  • If you are going to market for an AI tool, use the vendor questions from Section 02 as part of your evaluation criteria.

This Quarter

  • Draft or propose an AI use policy for your engagement team. Start with scope, transparency requirements, and human review points.
  • If you have run a pilot, document what worked, what did not, and what you would do differently. Share it internally and with the sector.
  • Connect with peers navigating the same questions. The Engagement Institute, IAP2, and professional networks are good places to start.

Ready to see AI in engagement in action?

We're selecting five organisations for a hands-on pilot. Full platform access, direct founder support, and a real project to show the ROI.

Keep in touch

Stay updated on AI in engagement practice, new toolkit resources, and what we're building at Communiti Labs.

If you are asking these questions, you are already ahead of most. Start small. Start transparent. Start now.