Engagement Institute NZ Symposium 2026
Your framework for assessing, implementing, and governing AI tools in community engagement practice.
Four sections to help you move forward with confidence.
AI is a tool, not a replacement. The best AI implementations in engagement are the ones where practitioners remain firmly in control of the judgments that matter. Think of AI as a copilot, not an autopilot. Your expertise, your relationships with community, and your professional judgment are irreplaceable.
Section 01
Not every task benefits from AI. Use this four-question filter to evaluate any workflow your team is considering for AI assistance. Pick a task your team does regularly and run it through all four questions.
AI is strongest when applied to tasks that follow predictable patterns and consume disproportionate time relative to the judgment they require.
High-volume, pattern-based tasks with clear inputs and outputs.
Moderate volume, but requires some contextual judgment.
Low-volume, unique, or highly contextual work where AI adds little value.
AI can support analysis, but it cannot replace the practitioner’s understanding of context, power dynamics, cultural nuance, or lived experience.
Task is procedural or administrative. Human judgment is for review, not creation.
AI can assist, but a practitioner must interpret and validate every output.
Task is fundamentally about human judgment, relationships, or cultural sensitivity.
Every AI tool produces errors. The question is not whether it will make mistakes, but what happens when it does.
Low stakes. Internal drafting, preparation, or exploration. Errors are easily caught.
Moderate stakes. Output will be reviewed but informs decisions. Errors could mislead.
High stakes. Output directly informs public reporting or council decisions. Errors erode trust.
If you cannot clearly and transparently describe the AI’s role, you are not ready to use it. Transparency is not optional.
You can write a clear, plain-language statement for your methodology section.
You could explain it, but it would require significant caveats or qualifications.
You cannot clearly articulate what the AI did, or disclosure would undermine confidence.
If you have two or more reds, this task is not ready for AI. Mostly greens with review processes? Likely a good candidate.
Theming open-text survey responses: likely yes, with review.
Translating engagement materials: strong candidate.
Summarising 3,000+ submissions: proceed with caution.
Drafting engagement plans: assist only, not generate.
Generating synthetic community voices: not recommended.
Transcribing audio/video recordings: strong candidate.
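The traffic-light filter above can be sketched in a few lines of code. This is an illustrative sketch only: the rating labels and the `assess_task` function are hypothetical, but the decision rule follows the toolkit's own rule of thumb (two or more reds means not ready; mostly greens with review means a likely candidate).

```python
def assess_task(ratings):
    """Apply the four-question filter.

    ratings: dict mapping each of the four questions
    (volume/pattern, judgment, stakes, transparency)
    to 'green', 'amber', or 'red'.
    """
    reds = sum(1 for r in ratings.values() if r == "red")
    greens = sum(1 for r in ratings.values() if r == "green")
    if reds >= 2:
        return "not ready"          # two or more reds: stop here
    if greens >= 3:
        return "good candidate, with review"
    return "proceed with caution"   # mixed picture: pilot carefully

# Example: theming open-text survey responses
ratings = {
    "volume_pattern": "green",
    "judgment": "amber",
    "stakes": "green",
    "transparency": "green",
}
print(assess_task(ratings))  # good candidate, with review
```

The point of writing the rule down, even informally, is that the whole team applies the same threshold rather than a gut feel.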
Section 02
Before any AI tool goes live in your engagement work, answer these ten questions. If you cannot confidently answer more than half, you are not ready to deploy. That is not a failure — it is a sign that the groundwork needs to happen first.
Know exactly where data is stored, processed, and whether it leaves your jurisdiction. For New Zealand and Australian councils, understand your obligations under the Privacy Act and any data sovereignty requirements. Ask the vendor directly: does data leave the country?
Clarify access controls, retention periods, and deletion processes. What happens to community feedback after your engagement closes? Is it retained by the AI provider? Can it be used to train their models? Get this in writing.
If AI was used in your engagement process, you should be able to describe its role in plain language in your engagement report. Draft this statement before you deploy, not after. If you struggle to write it, reconsider the approach.
Community trust is built on transparency. If a concerned resident or ratepayer asks whether AI was used to process their submission, you need a clear, honest, and reassuring answer. Prepare it in advance.
Human-in-the-loop is not a checkbox. Define specifically where in your workflow a qualified person reviews, validates, or overrides the AI output. Document this process. The higher the stakes, the more rigorous the review.
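One concrete way to define the review point is a routing rule: AI output goes to a qualified reviewer unless the model's confidence is high and the stakes are low, and even then a sample is spot-checked. The `route` function and the 0.9 threshold below are hypothetical, shown only to illustrate what "documented review process" can look like in practice.

```python
def route(item, confidence, stakes):
    """Decide who looks at an AI output before it is used.

    confidence: model-reported score between 0 and 1.
    stakes: 'low', 'moderate', or 'high' (see the Section 01 filter).
    """
    # High-stakes outputs and low-confidence outputs always get a
    # named human reviewer; nothing skips review entirely.
    if stakes == "high" or confidence < 0.9:
        return "human review"
    return "spot-check sample"

route("themed submission batch 7", 0.95, "high")  # human review
```

The higher the stakes, the more the rule should tilt toward full review; the code simply makes that tilt explicit and auditable.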
A general IT policy is not enough. Your engagement team needs guidance specific to how AI interacts with community voice, public trust, and democratic participation. If you do not have one, this toolkit is a starting point for drafting it.
AI models reflect the data they were trained on, which often underrepresents Indigenous communities, culturally and linguistically diverse groups, and people with lower digital literacy. Consider who might be excluded or misrepresented and build mitigation into your process.
Ask: Is this a wrapper around a general-purpose model or purpose-built? What happens during an outage? What is the error rate and how is it measured? Can you provide references from similar organisations? What does “human-in-the-loop” mean specifically in your product?
This is the line that must not be blurred. AI can help you process, translate, summarise, and theme real community feedback. It should never be used to generate, fabricate, or substitute community voice. Synthetic audiences should be used only for preparation and stress-testing, never as a replacement.
Successful AI adoption requires change management, training, and ongoing support. Your team needs to understand what the tool does, what it does not do, and where their professional judgment remains essential. Technology readiness without team readiness leads to misuse or abandonment.
Section 03
Plain-language definitions of key AI terms you are most likely to encounter in engagement contexts.
A broad term for computer systems that can perform tasks that typically require human intelligence, such as understanding language, recognising patterns, and making predictions. In engagement, AI is most commonly used for text analysis, translation, and summarisation.
The technology behind tools like ChatGPT and Claude. LLMs are trained on vast amounts of text and can generate, summarise, translate, and analyse written content. They are powerful but imperfect and can produce plausible-sounding errors.
A technique where an AI model is connected to a specific set of documents or data so that its responses are grounded in that source material rather than its general training. This is how AI engagement tools can analyse your specific submissions rather than generating generic responses.
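The grounding idea can be shown with a toy sketch. Real RAG systems use embedding-based vector search; here, simple keyword overlap stands in for retrieval, and the example submissions are invented. Everything in this block is illustrative, not a product implementation.

```python
import re

def words(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, documents, top_k=2):
    """Rank documents by word overlap with the question
    (a stand-in for real embedding search)."""
    q = words(question)
    return sorted(documents, key=lambda d: len(q & words(d)), reverse=True)[:top_k]

def build_grounded_prompt(question, documents):
    """Attach the retrieved source material so the model answers
    from your submissions, not its general training data."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return ("Answer using ONLY the community submissions below.\n"
            f"{context}\nQuestion: {question}")

submissions = [
    "The new cycleway on Main Street feels unsafe at night.",
    "Please plant more native trees in Riverside Park.",
    "Parking near the library is impossible on weekends.",
]
prompt = build_grounded_prompt("What do residents say about the cycleway?", submissions)
```

The key property is visible in the prompt itself: the model is instructed to answer from your documents, which is why grounded answers can be checked against the source submissions.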
The automated identification of whether a piece of text expresses a positive, negative, neutral, or mixed tone. Useful for getting a broad overview of community feedback, but should never be used as the sole basis for decision-making, as it struggles with sarcasm, cultural context, and nuance.
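A deliberately naive keyword-based scorer makes both the idea and its limits visible. The word lists below are illustrative only; production tools use trained models, but they fail on sarcasm in much the same way.

```python
import re

POSITIVE = {"great", "love", "support", "excellent", "welcome"}
NEGATIVE = {"bad", "oppose", "unsafe", "terrible", "worried"}

def sentiment(text):
    """Classify text by counting positive vs negative keywords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

sentiment("I love the new park design")        # positive
sentiment("Oh great, another rates increase")  # positive -- sarcasm misread
```

The second example is the cautionary one: the tool confidently mislabels an unhappy resident, which is exactly why sentiment scores should inform, never decide.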
AI-generated simulations of community members, created to mimic how different demographics might respond to engagement questions. Valuable for preparation such as stress-testing surveys, but should never be used as a substitute for genuine community voice.
A design principle where a human reviews, validates, or overrides AI outputs at defined points in a process. Essential for any AI application in engagement. Ask vendors what HITL means specifically in their product, as the term is sometimes used loosely.
The principle that data is subject to the laws of the country in which it is stored or processed. For New Zealand and Australian engagement data, this means understanding whether community feedback is being sent overseas for processing by an AI provider.
A branch of AI focused on understanding and generating human language. Includes capabilities like text classification, entity recognition, translation, and summarisation. Most AI engagement tools rely heavily on NLP.
When an AI model generates information that sounds plausible but is factually incorrect or entirely fabricated. This is a known limitation of current AI technology and is a key reason why human review of AI outputs is essential.
The practice of crafting specific instructions to an AI model to get better, more accurate, or more relevant outputs. Relevant for practitioners using general-purpose AI tools, as the quality of what you ask directly affects the quality of what you get back.
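A side-by-side example shows what "crafting specific instructions" means in practice. Both prompts below are invented for illustration; the structured one pins down role, rules, and output format, which a vague prompt leaves to chance.

```python
vague = "Summarise this feedback."

structured = """You are assisting a council engagement analyst.
Task: theme the community submissions below.
Rules:
- Quote directly; do not paraphrase or invent content.
- Group into at most 6 themes, each with a count.
- Flag any submission you cannot confidently theme.
Submissions:
{submissions}"""
```

Note the explicit instruction not to invent content: constraints like this are a practical guard against hallucination in engagement analysis.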
The process of further training an AI model on a specific dataset to improve its performance for a particular use case. A fine-tuned model for engagement analysis will generally perform better than a general-purpose model used out of the box.
Systematic errors in AI outputs that reflect imbalances in training data or model design. In engagement contexts, this can mean certain community voices or perspectives being systematically over- or under-represented in AI-generated analysis.
Section 04
You have the framework and the checklist. Here is how to put them to work.
We're selecting five organisations for a hands-on pilot. Full platform access, direct founder support, and a real project to show the ROI.
Stay updated on AI in engagement practice, new toolkit resources, and what we're building at Communiti Labs.
If you are asking these questions, you are already ahead of most. Start small. Start transparent. Start now.