How To Facilitate The Agentic AI Workshops
Use this guide to plan and run the four workshops and the implementation follow-up. Sequence the work so that platform selection happens only after use-case classification, data readiness assessment, and governance evidence are in place.
Pre-Work
Complete pre-work at least five business days before Workshop 1.
| Item | Owner | Output |
|---|---|---|
| Confirm executive sponsor and business owner | Sponsor | Named sponsor, decision maker, and escalation path |
| Identify target business or operating context | Business owner | Process area, pain points, target users, in-scope systems |
| Collect current KPIs | Business owner | Baseline for cost, speed, quality, revenue, CX, EX, or risk |
| Identify data owners and system owners | IT/system owners | Initial source-system list and access constraints |
| Confirm governance stakeholders | Sponsor | Security, compliance, privacy, risk, AI CoE, platform team |
| Share template pack | Facilitation lead | Workbook and template set distributed |
Workshop 1: Strategy And Use-Case Selection
Purpose: Align AI ideas to measurable business outcomes and select use cases worth deeper assessment.
Recommended duration: 2.5 to 3 hours.
Required participants: executive sponsor, business owner, process SMEs, product owner, enterprise architect, AI/platform lead, change lead.
| Agenda | Time | Facilitation Notes | Output |
|---|---|---|---|
| Business outcome framing | 30 min | Ask what must improve and how it is measured today. Separate aspiration from measurable KPI. | Outcome map and KPI baseline |
| Workflow walkthrough | 45 min | Map the current process, decisions, systems, handoffs, exceptions, and pain points. | Workflow opportunity notes |
| Agent fit filter | 45 min | Classify ideas as agent, RAG/search, deterministic automation, analytics/model, or prebuilt SaaS. | Use-case inventory and "not an agent" log |
| Portfolio scoring | 45 min | Score business impact, feasibility, data readiness, user desirability, and risk/control complexity. | Prioritization matrix |
| Pilot shortlist and gates | 30 min | Pick one to three pilots and define what scale, redesign, pause, or stop would mean. | Pilot shortlist and go/no-go criteria |
Exit criteria:
- Each candidate use case has a business owner, target user, affected workflow, KPI, and classification.
- Non-agent ideas are preserved with a recommended path.
- Pilot candidates have success metrics and decision thresholds.
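The portfolio scoring step above can be sketched as a simple weighted sum. The five criteria come from the agenda; the weights, the 1-to-5 scale, and the function names are illustrative assumptions, not a fixed standard. Risk/control complexity is inverted so that a higher score always means a better candidate, which keeps every weight positive.

```python
# Weighted portfolio scoring for the use-case prioritization matrix.
# Weights and the 1-5 scale are illustrative assumptions; agree your
# own in the workshop. Higher is always better, so risk/control
# complexity is scored as "simplicity" (5 = low control burden).
WEIGHTS = {
    "business_impact": 0.30,
    "feasibility": 0.25,
    "data_readiness": 0.20,
    "user_desirability": 0.15,
    "risk_control_simplicity": 0.10,
}

def score_use_case(scores: dict) -> float:
    """Return a weighted 1-5 priority score for one use case."""
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing criteria: {sorted(missing)}")
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

def shortlist(candidates: dict, top_n: int = 3) -> list:
    """Rank all candidates and return the top pilot shortlist."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: score_use_case(kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]
```

For example, a candidate scoring 5, 4, 3, 4, 4 across the five criteria yields 4.1, and `shortlist` returns the one-to-three pilots to carry into Workshop 2. The hard failure on missing criteria is deliberate: a use case with an unscored dimension is not ready for the matrix.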
Workshop 2: Data And Architecture
Purpose: Confirm whether the pilot can be grounded safely and select the Microsoft solution pattern.
Recommended duration: 2.5 to 3 hours.
Required participants: business owner, data owners, system owners, enterprise architect, Microsoft 365/Power Platform lead, Azure/Foundry lead, security architect.
| Agenda | Time | Facilitation Notes | Output |
|---|---|---|---|
| Data source review | 40 min | Identify systems of record, knowledge repositories, operational systems, APIs, and data owners. | Data access map |
| Data readiness assessment | 45 min | Score accuracy, timeliness, cleanliness, completeness, permissions, compliance, residency, and availability. | Data readiness assessment |
| Grounding decision | 40 min | Decide whether the agent needs RAG/search, API/tool calls, MCP, connectors, or mixed grounding. | Retrieval decision register |
| Platform selection | 45 min | Apply the selection logic in order: prebuilt SaaS first, then a Microsoft 365 Copilot extension, Copilot Studio, Foundry, and finally a custom build. | Platform selection record |
| Architecture sketch | 40 min | Draft user channel, agent runtime, identity, tools/actions, data sources, logs, monitoring, and control points. | Target architecture |
Exit criteria:
- Authoritative data sources and access constraints are documented.
- Grounding and tool-use decisions have rationale.
- Target Microsoft platform pattern is selected with assumptions and tradeoffs.
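The data readiness assessment can be expressed as a two-part gate: every dimension must clear a floor, and the average must clear a bar. The eight dimensions come from the agenda above; the 1-to-5 scale and the threshold values are illustrative assumptions to replace with your own.

```python
# Data readiness gate for one candidate grounding source.
# Dimensions mirror the workshop agenda; the 1-5 scale and the
# min_each/min_avg thresholds are illustrative assumptions.
DIMENSIONS = ("accuracy", "timeliness", "cleanliness", "completeness",
              "permissions", "compliance", "residency", "availability")

def assess_source(scores: dict, min_each: int = 3, min_avg: float = 3.5):
    """Return (ready, findings) for one data source.

    A source passes only if every dimension meets the floor AND the
    average clears the bar: one weak dimension (e.g. permissions)
    blocks grounding even when the rest are strong.
    """
    findings = [d for d in DIMENSIONS if scores.get(d, 0) < min_each]
    avg = sum(scores.get(d, 0) for d in DIMENSIONS) / len(DIMENSIONS)
    ready = not findings and avg >= min_avg
    return ready, findings
```

A source scoring 4 everywhere but 2 on permissions fails with `findings == ["permissions"]`, which is exactly the evidence the retrieval decision register should record.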
Workshop 3: Governance And Risk
Purpose: Define ownership, policy, security, responsible AI, compliance, audit, and lifecycle controls.
Recommended duration: 3 hours.
Required participants: business owner, product owner, AI CoE, platform team, security, compliance, privacy, risk, operations, enterprise architect.
| Agenda | Time | Facilitation Notes | Output |
|---|---|---|---|
| Operating model | 40 min | Assign ownership across sponsor, product owner, workload team, AI CoE, platform, security, compliance, and operations. | Operating model and RACI |
| Agent lifecycle and registry | 35 min | Define registration metadata, identity, access scope, funding, monitoring, pause, retire, and review requirements. | Agent registry model |
| Agent charter | 45 min | Define purpose, scope, prohibited actions, tools, approvals, fallback, escalation, and memory/retention. | Agent charter |
| Threat model and controls | 55 min | Cover prompt injection, data leakage, privilege misuse, tool misuse, residency, model risk, audit gaps, and abuse. | Risk/control register |
| Responsible AI evidence | 25 min | Define evidence for fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability. | Responsible AI assessment inputs |
Exit criteria:
- Every material risk has an owner, control, evidence type, and residual risk decision path.
- Agent boundaries and prohibited actions are explicit.
- The agent can be paused, audited, monitored, reviewed, and retired.
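A minimal registry record makes the lifecycle requirements concrete: registration metadata, a workload identity, an access scope, and enforced pause/retire transitions with an audit trail. The field and state names below are illustrative assumptions; map them onto your organization's actual registry schema.

```python
# Minimal agent registry record sketching the lifecycle controls from
# the workshop. Field names and states are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class AgentState(Enum):
    REGISTERED = "registered"
    ACTIVE = "active"
    PAUSED = "paused"
    RETIRED = "retired"

@dataclass
class AgentRecord:
    name: str
    owner: str              # accountable product owner
    identity: str           # workload identity, never a personal account
    access_scope: list      # systems and data the agent may touch
    funding_source: str
    review_cadence_days: int = 90
    state: AgentState = AgentState.REGISTERED
    audit_log: list = field(default_factory=list)

    def pause(self, reason: str) -> None:
        """Pausing is always available until the agent is retired."""
        if self.state is AgentState.RETIRED:
            raise ValueError("retired agents cannot be paused")
        self.state = AgentState.PAUSED
        self.audit_log.append(("pause", reason))

    def retire(self, reason: str) -> None:
        self.state = AgentState.RETIRED
        self.audit_log.append(("retire", reason))
```

The audit log satisfies the "can be audited" exit criterion in miniature: every state change carries a reason, and retirement is terminal.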
Workshop 4: Pilot Design
Purpose: Define what will be built, how it will be tested, and how rollout and operations will work.
Recommended duration: 2.5 to 3 hours.
Required participants: business owner, product owner, engineering lead, QA/test lead, AI/platform lead, security, operations, change lead, support lead.
| Agenda | Time | Facilitation Notes | Output |
|---|---|---|---|
| Pilot scope | 30 min | Confirm included users, workflows, systems, data, tools, and exclusions. | Pilot scope statement |
| Validation design | 50 min | Define the golden test set, task-completion thresholds, quality metrics, safety tests, red-team cases, and cost and latency limits. | Pilot validation plan |
| ALM and environments | 35 min | Define dev/test/prod environments, promotion gates, prompt/version control, connector promotion, data refresh, and rollback. | ALM/environment strategy |
| Rollout and change | 35 min | Plan launch channel, Teams or business app placement, communications, training, support, and feedback collection. | Rollout/change plan |
| Operations and scale decision | 40 min | Define telemetry, dashboard, review cadence, cost controls, lifecycle review, and scale/redesign/pause/stop decision. | Observability spec and operations plan |
Exit criteria:
- Build scope and non-goals are locked for the pilot.
- Validation thresholds are known before development starts.
- Rollout, support, operations, cost controls, and lifecycle review are assigned.
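The "validation thresholds are known before development starts" criterion can be encoded as an explicit gate that the golden-test results must pass. The threshold values below are illustrative assumptions; fix yours in the pilot validation plan before any build work begins.

```python
# Pilot validation gate: compare measured results against the
# thresholds agreed before development. Values are illustrative
# assumptions, not recommended limits.
THRESHOLDS = {
    "task_completion_rate": 0.85,  # minimum share of golden tasks passed
    "safety_pass_rate": 1.00,      # every red-team/safety case must pass
    "p95_latency_s": 10.0,         # maximum acceptable p95 latency
    "cost_per_task_usd": 0.50,     # maximum acceptable unit cost
}

def validation_gate(results: dict):
    """Return (go, failures) for the pilot exit decision."""
    failures = []
    if results["task_completion_rate"] < THRESHOLDS["task_completion_rate"]:
        failures.append("task_completion_rate")
    if results["safety_pass_rate"] < THRESHOLDS["safety_pass_rate"]:
        failures.append("safety_pass_rate")
    if results["p95_latency_s"] > THRESHOLDS["p95_latency_s"]:
        failures.append("p95_latency_s")
    if results["cost_per_task_usd"] > THRESHOLDS["cost_per_task_usd"]:
        failures.append("cost_per_task_usd")
    return (not failures, failures)
```

Returning the named failures, rather than a bare boolean, feeds the defect backlog and the weekly validation review directly.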
Implementation Follow-Up
| Activity | Cadence | Expected Output |
|---|---|---|
| Build standup | 2-3 times weekly | Blockers, decisions, risk updates, scope control |
| Control evidence review | Weekly | Updated risk/control register and audit evidence |
| Validation review | Weekly during test phase | Test pass/fail, red-team results, defect backlog |
| Pilot telemetry review | Weekly after launch | Value, quality, safety, cost, latency, adoption |
| Scale decision review | End of pilot | Scale, redesign, pause, or stop decision |
| Lifecycle review | Monthly or quarterly | Improvement backlog, access review, cost review, retirement decision |
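The end-of-pilot scale decision can be framed as a small decision rule over the telemetry and control evidence reviewed above. The specific ordering and inputs below are illustrative assumptions; encode whatever thresholds the sponsor agreed in Workshop 1.

```python
# End-of-pilot decision rule mapping review evidence to the four
# outcomes. Inputs and ordering are illustrative assumptions.
def scale_decision(value_met: bool, controls_hold: bool,
                   adoption_healthy: bool) -> str:
    if not controls_hold:
        return "pause"      # unresolved control gaps block everything else
    if value_met and adoption_healthy:
        return "scale"
    if value_met:
        return "redesign"   # value is real but adoption or UX needs work
    return "stop"
```

Putting control evidence first mirrors the governance workshop: value never overrides an unresolved control gap.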
Facilitator Checklist
- Keep business outcomes visible in every workshop.
- Ask whether simpler automation, RAG, analytics, or a prebuilt SaaS agent would solve the need.
- Capture unresolved assumptions as decisions with owners and dates.
- Require evidence for governance and security claims.
- Do not allow platform selection to precede data readiness and use-case classification.
- Keep the pilot narrow enough to validate value and controls quickly.