
IAP2 Canada Webinar: Reframing AI as a Governed Advisory System

Tuesday, March 31, 2026
12:00 PM (Toronto)

About This Webinar

Responsibly Supporting Professional Judgement in Community Engagement

AI can effectively support community engagement when it is used as an advisory tool that supports professional judgement, with transparency, accountability, and human oversight. AI tools such as ChatGPT, Claude, and Gemini are increasingly being used across community engagement practice, often informally and without clear governance. As explored in Artificial Intelligence: Its potential and ethics in the practice of public participation (Boyco & Robinson, Jan 9, 2025), general-purpose large language models (LLMs) can support lower-risk tasks such as drafting communications, summarizing feedback, or early-stage ideation, but they introduce significant risks when applied to more complex, context-dependent engagement work.

This webinar provides a technology-focused explanation of how general-purpose LLMs function, when they can responsibly support engagement practice, and where they introduce real risk. Participants will explore why AI-generated outputs can appear confident even when they lack critical context, and why improved prompting alone cannot resolve issues such as misinformation, bias amplification, privacy exposure, or lack of auditability.

Participants are then introduced to Retrieval-Augmented Generation (RAG) as a more responsible technical approach. RAG-based systems ground AI outputs in trusted sources such as IAP2 resources, governing organizational frameworks, policies, plans, and project-specific and historical context. Through a practical case example, the session demonstrates how engagement-aware AI advisory systems can support planning, delivery, and reporting while preserving human oversight, professional judgement, and public trust.
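The grounding step at the heart of RAG can be illustrated with a minimal sketch: retrieve the most relevant trusted documents for a question, then build a prompt that constrains the model to answer only from those sources. The document store, the keyword-overlap scoring, and the prompt wording below are illustrative assumptions, not part of any IAP2 system; production RAG tools use semantic (embedding-based) retrieval rather than word overlap.

```python
# Minimal RAG-style grounding sketch (illustrative only).
# Real systems use embedding-based semantic search; keyword overlap
# is used here just to keep the example self-contained.

def retrieve(query, documents, top_k=2):
    """Rank documents by simple keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt that instructs the model to answer only from sources."""
    sources = retrieve(query, documents)
    context = "\n".join(f"- {s}" for s in sources)
    return (
        "Answer using ONLY the sources below; "
        "say so if they are insufficient.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical trusted sources for an engagement project.
docs = [
    "IAP2 Core Values: those affected by a decision have a right to be involved.",
    "Engagement plan 2024: the riverfront project will hold three open houses.",
    "Privacy policy: personal information from participants must not be shared.",
]

prompt = build_grounded_prompt(
    "What does the engagement plan say about open houses?", docs
)
print(prompt)
```

The prompt produced this way would then be sent to an LLM; because the answer is anchored to named sources, its claims can be audited against them, which is the property that generic prompting alone cannot provide.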
This webinar is intended for engagement practitioners, managers, and specialists seeking a clear, technically informed, and ethically grounded perspective on AI, one that strengthens practice while upholding the values, accountability, and legitimacy at the core of public participation.

Learning Outcomes

Participants will:
- Understand how general-purpose LLMs function and why confidence does not equal correctness
- Identify where AI can responsibly support engagement, and where risks outweigh benefits
- Recognize why governance, ethics, and professional accountability matter more than prompting
- Understand how RAG-based, engagement-aware AI differs from generic AI tools
- Apply IAP2’s Core Values and Code of Ethics as a lens for evaluating AI use in engagement

Register Here

Registration for this webinar is hosted on an external platform.

Register on Zoom