Purpose: This intake pack is completed for every client engagement before any agent design or deployment begins. It establishes the complete picture of what processes exist, how data flows, what systems are involved, and where the boundaries of automation lie. Every field must be filled; "N/A" is acceptable only with justification.
A.1 Process Catalog
List every business process in scope for evaluation. Include both candidate processes for automation and processes that will remain manual (to establish boundary clarity).
| # | Process Name | Department | Frequency | Avg Duration | FTEs | Volume/Month | Error Rate | Automation Candidate |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | | | |
| 2 | | | | | | | | |
| 3 | | | | | | | | |
| 4 | | | | | | | | |
| 5 | | | | | | | | |
| 6 | | | | | | | | |
| 7 | | | | | | | | |
| 8 | | | | | | | | |
| 9 | | | | | | | | |
| 10 | | | | | | | | |
A.2 Data Map
Document every data flow relevant to the processes under evaluation. This map determines agent data access requirements, privacy constraints, and integration points.
| # | Data Element | Classification | Source System | Destination | Format | Frequency | Retention | PII/Sensitive |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | | | |
| 2 | | | | | | | | |
| 3 | | | | | | | | |
| 4 | | | | | | | | |
| 5 | | | | | | | | |
| 6 | | | | | | | | |
| 7 | | | | | | | | |
| 8 | | | | | | | | |
A.3 System List (Integrations)
List every system that touches the processes under evaluation, including internal systems, SaaS platforms, third-party APIs, and manual tools.
| # | System Name | Type | Vendor | Integration | Auth Method | SLA | Data Classification | Owner |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | | | |
| 2 | | | | | | | | |
| 3 | | | | | | | | |
| 4 | | | | | | | | |
| 5 | | | | | | | | |
| 6 | | | | | | | | |
A.4 Risk Classification
Classify each process by risk level. This classification drives the agent design constraints, testing requirements, and approval thresholds.
| # | Process Name | Risk Level | Risk Factors | Impact if Failed | Regulatory Exposure | Mitigation Required |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
| 6 | | | | | | |
Risk Level Definitions:
- HIGH — Financial impact >100K EUR, regulatory penalties, reputational damage, or PII at risk. Requires human-in-the-loop and dual control.
- MEDIUM — Financial impact 10K–100K EUR, operational disruption, or quality degradation. Requires human review of agent outputs.
- LOW — Financial impact <10K EUR, easily reversible, no regulatory exposure. Agent can operate autonomously with audit logging.
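The definitions above can be sketched as a small classifier. This is a minimal illustration of the thresholds, not part of the pack: the function name, parameter names, and the two boolean flags are assumptions about how a team might encode the criteria.

```python
def classify_risk(financial_impact_eur: float,
                  regulatory_exposure: bool,
                  pii_at_risk: bool) -> str:
    """Map a process to the A.4 risk levels using the stated thresholds."""
    # Any HIGH criterion dominates: >100K EUR, regulatory, or PII exposure.
    if financial_impact_eur > 100_000 or regulatory_exposure or pii_at_risk:
        return "HIGH"    # requires human-in-the-loop and dual control
    if financial_impact_eur >= 10_000:
        return "MEDIUM"  # requires human review of agent outputs
    return "LOW"         # autonomous operation with audit logging

print(classify_risk(250_000, False, False))  # HIGH
print(classify_risk(50_000, False, False))   # MEDIUM
print(classify_risk(5_000, False, True))     # HIGH (PII dominates low impact)
```

Note that a non-financial HIGH factor (regulatory exposure, PII) overrides a low financial impact, matching the definition that LOW requires no regulatory exposure.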
A.5 Non-Delegable Decisions List
List the decisions that must always remain with a human, regardless of agent capability. These are hard boundaries that no automation level can cross.
| # | Decision | Process Area | Reason for Non-Delegation | Decision Authority | Agent Role (if any) |
| --- | --- | --- | --- | --- | --- |
| 1 | | | | | |
| 2 | | | | | |
| 3 | | | | | |
| 4 | | | | | |
| 5 | | | | | |
| 6 | | | | | |
| 7 | | | | | |
| 8 | | | | | |
A.6 Escalation SLA
| # | Escalation Trigger | Severity | Response Time | Resolution Time | Escalation Path | Notification Method |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
A.7 Success Metrics
| # | Metric | Current Baseline | Target (3 months) | Target (12 months) | Measurement Method | Review Frequency |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
| 6 | | | | | | |
A.8 Legal Constraints
| # | Constraint | Regulation/Law | Jurisdiction | Impact on Automation | Compliance Mechanism | Review Date |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
Section B: Agent Suitability Score
Purpose: Evaluate whether a given process is suitable for AI agent automation, human-agent collaboration (assist mode), or should remain fully manual. The score is based on 7 weighted criteria. Each criterion is scored 1–5, where 1 = least suitable for automation and 5 = most suitable.
Suitability Calculator

Process being evaluated:

| Criterion | Question | Weight | Score (1–5) |
| --- | --- | --- | --- |
| Repeatability | How standardized and repeatable is the process? | 20% | 3 |
| Error Risk Tolerance | How tolerant is the process of errors? (5 = highly tolerant) | 20% | 3 |
| Rule Clarity | Are rules explicit and unambiguous? | 15% | 3 |
| Data Sensitivity (inverse) | How sensitive is the data? (5 = low sensitivity, easier to automate) | 15% | 3 |
| Interpretation Need (inverse) | How much judgment/interpretation is needed? (5 = minimal, rules-based) | 10% | 3 |
| Empathy/Negotiation Need (inverse) | Does the process require empathy or negotiation? (5 = none needed) | 10% | 3 |
| Automation Value | How much value does automation deliver? (cost savings, speed, scale) | 10% | 3 |

Weighted Suitability Score: 3.00 out of 5.00 → ASSIST MODE (worked example with every criterion at the default score of 3)
| Score Range | Recommendation | Description |
| --- | --- | --- |
| 4.0–5.0 | AUTOMATE | Process is highly suitable for full agent automation. Deploy agent with standard monitoring and audit logging. Human oversight via periodic review. |
| 2.5–3.9 | ASSIST | Process benefits from agent assistance but requires human-in-the-loop. Agent prepares, drafts, or recommends; human approves and executes final step. |
| 1.0–2.4 | KEEP MANUAL | Process is not suitable for agent automation at this time. High judgment, sensitivity, or empathy requirements. Revisit when capabilities mature or rules clarify. |
Score Threshold Rules: Processes scoring below 2.5 are flagged as “Keep Manual” and excluded from automation. Processes between 2.5–3.9 receive assisted automation with mandatory human review at every decision point. Only processes scoring 4.0+ proceed to full automation with standard monitoring and periodic audit.
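The weighted-score calculation and threshold mapping can be sketched as follows. This is a minimal illustration, assuming one integer score (1–5) per criterion; the dictionary keys are shorthand for the calculator's criteria, not names defined by this pack.

```python
# Weights mirror the Section B calculator; inverse criteria are already
# expressed so that 5 = most suitable for automation.
WEIGHTS = {
    "repeatability": 0.20,
    "error_risk_tolerance": 0.20,
    "rule_clarity": 0.15,
    "data_sensitivity_inv": 0.15,
    "interpretation_need_inv": 0.10,
    "empathy_negotiation_inv": 0.10,
    "automation_value": 0.10,
}

def suitability(scores: dict) -> tuple:
    """Return (weighted score, recommendation) per the threshold rules."""
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if total >= 4.0:
        rec = "AUTOMATE"       # full automation, standard monitoring
    elif total >= 2.5:
        rec = "ASSIST"         # human review at every decision point
    else:
        rec = "KEEP MANUAL"    # excluded from automation
    return round(total, 2), rec

# The all-3 default from the calculator lands in the ASSIST band:
print(suitability({k: 3 for k in WEIGHTS}))  # (3.0, 'ASSIST')
```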
Section C: Weakness Register
Purpose: A living document that captures identified weaknesses across all operational areas. Every weakness must have an owner, a severity rating, a mitigation plan, and a status. This register is reviewed weekly during operations review and monthly during governance review.
Review Cadence: Weakness Register is reviewed weekly at the Operations Review meeting. Critical and High severity items are escalated to the monthly Governance Review. Items in "Open" status for more than 30 days without a mitigation plan trigger an automatic escalation to the Governor.
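The 30-day automatic-escalation rule above can be sketched as a simple check over register entries. The record fields (`status`, `opened`, `mitigation_plan`) are illustrative assumptions about how the register might be stored, not a prescribed schema.

```python
from datetime import date, timedelta

def needs_escalation(item: dict, today: date) -> bool:
    """Open for more than 30 days with no mitigation plan -> escalate to Governor."""
    return (item["status"] == "Open"
            and not item.get("mitigation_plan")
            and (today - item["opened"]) > timedelta(days=30))

# Illustrative register entries:
register = [
    {"code": "W4", "status": "Open", "opened": date(2025, 1, 1),
     "mitigation_plan": None},
    {"code": "W2", "status": "Open", "opened": date(2025, 1, 1),
     "mitigation_plan": "Consolidate wiki sources"},
]
flagged = [w["code"] for w in register if needs_escalation(w, date(2025, 3, 1))]
print(flagged)  # ['W4'] — has no plan; W2 has one, so it is not flagged
```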
Standard Weakness Codes (W1-W10)
Every identified weakness must be tagged with one of the following standard codes for cross-client comparison and pattern analysis:
| Code | Weakness Type | Description | Typical Impact |
| --- | --- | --- | --- |
| W1 | No process owner | Process runs without clear accountability — nobody owns decisions, escalations, or outcomes | High |
| W2 | Dispersed knowledge sources | Information scattered across wikis, emails, Slack, undocumented tribal knowledge | Medium |
| W3 | Excessive exceptions | More than 20% of cases require non-standard handling, manual intervention, or workaround | High |
| W4 | Weak audit trail | Decisions not logged, no replay capability, incomplete evidence chain | Critical |
| W5 | Late escalation | Issues detected too late — after damage, not before. Escalation triggers missing or miscalibrated | High |
| W6 | Inconsistent KPIs | Different teams measure the same process with different metrics, thresholds, or cadences | Medium |
| W7 | Unclear responsibility | Handoff points between agent/human or team/team lack defined ownership | High |
| W8 | Low data quality | Missing fields, stale data, inconsistent formats, no validation at input | High |
| W9 | Duplicated work | Multiple teams or agents doing the same task independently without awareness | Medium |
| W10 | No policy gating | Actions proceed without approval checks, compliance gates, or authorization verification | Critical |
Usage: Tag every entry in the Weakness Register with its W-code. This enables: cross-client pattern analysis, prioritization by code frequency, and automated detection of systemic issues across the K0nsult portfolio.
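The code-frequency analysis described above might look like this in practice: tally W-codes across client registers and rank them. Client names and entries here are illustrative.

```python
from collections import Counter

# Illustrative per-client W-code tags pulled from Weakness Registers:
registers = {
    "client_a": ["W4", "W8", "W4", "W10"],
    "client_b": ["W4", "W2", "W8"],
}

# Flatten all registers and count code frequency across the portfolio.
counts = Counter(code for entries in registers.values() for code in entries)
for code, n in counts.most_common():
    print(code, n)  # the most frequent code points at a systemic issue
```

A high portfolio-wide count for a Critical-impact code such as W4 (weak audit trail) would be the kind of systemic pattern this analysis is meant to surface.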
Process Assessment Dimensions (7-Point Framework)
Every process evaluated by K0nsult is scored across these 7 dimensions (1-5 scale each):
| Dimension | Question | Scale | Low (1) | High (5) |
| --- | --- | --- | --- | --- |
| Business criticality | How important is this process? | 1–5 | Nice-to-have | Mission-critical |
| Error harm | What is the cost of a mistake? | 1–5 | Cosmetic | Legal/financial/safety |
| Data sensitivity | What data is touched? | 1–5 | Public only | PII/financial/regulated |
| Rule clarity | Are the rules clear and documented? | 1–5 | Ambiguous/tribal | Fully documented |
| Exception load | How many exceptions per 100 cases? | 1–5 | >40% | <5% |
| Human necessity | Does this require human judgment? | 1–5 | Always | Rarely/never |
| Automation value | Will automation deliver real ROI? | 1–5 | Marginal | Transformative |
Scoring rule: Processes scoring 28+ (out of 35) proceed to automation design. Processes scoring 20-27 proceed with human-assist model. Processes scoring below 20 are flagged as "Keep Manual" with documented reasoning.
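A sketch of this scoring rule, assuming one integer score (1–5) per dimension; the dictionary keys are shorthand for the seven dimensions in the table above.

```python
DIMENSIONS = [
    "business_criticality", "error_harm", "data_sensitivity",
    "rule_clarity", "exception_load", "human_necessity", "automation_value",
]

def disposition(scores: dict) -> tuple:
    """Sum the seven 1-5 dimension scores and apply the stated thresholds."""
    total = sum(scores[d] for d in DIMENSIONS)  # range: 7 to 35
    if total >= 28:
        return total, "Automation design"
    if total >= 20:
        return total, "Human-assist model"
    return total, "Keep Manual"   # with documented reasoning

print(disposition({d: 4 for d in DIMENSIONS}))  # (28, 'Automation design')
print(disposition({d: 2 for d in DIMENSIONS}))  # (14, 'Keep Manual')
```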