Client-Facing Document Template

Process Standards

Process Intake Pack • Agent Suitability Score • Weakness Register

Document ID: CNC-PRC-STD-001 • Version: 2.0 • Effective: 2026-03-23 • Owner: K0NSULT Process Engineering

Table of Contents

  1. Section A: Process Intake Pack
    1. A.1 Process Catalog
    2. A.2 Data Map
    3. A.3 System List
    4. A.4 Risk Classification
    5. A.5 Non-Delegable Decisions
    6. A.6 Escalation SLA
    7. A.7 Success Metrics
    8. A.8 Legal Constraints
  2. Section B: Agent Suitability Score
  3. Section C: Weakness Register

Section A: Process Intake Pack

Purpose: This intake pack is completed for every client engagement before any agent design or deployment begins. It establishes the complete picture of what processes exist, how data flows, what systems are involved, and where the boundaries of automation lie. Every field must be filled; "N/A" is acceptable only with justification.

A.1 Process Catalog

List every business process in scope for evaluation. Include both candidate processes for automation and processes that will remain manual (to establish boundary clarity).

| # | Process Name | Department | Frequency | Avg Duration | FTEs | Volume/Month | Error Rate | Automation Candidate |
|---|--------------|------------|-----------|--------------|------|--------------|------------|----------------------|
| 1 |  |  |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  |  |
| 6 |  |  |  |  |  |  |  |  |
| 7 |  |  |  |  |  |  |  |  |
| 8 |  |  |  |  |  |  |  |  |
| 9 |  |  |  |  |  |  |  |  |
| 10 |  |  |  |  |  |  |  |  |

A.2 Data Map

Document every data flow relevant to the processes under evaluation. This map determines agent data access requirements, privacy constraints, and integration points.

| # | Data Element | Classification | Source System | Destination | Format | Frequency | Retention | PII/Sensitive |
|---|--------------|----------------|---------------|-------------|--------|-----------|-----------|---------------|
| 1 |  |  |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  |  |
| 6 |  |  |  |  |  |  |  |  |
| 7 |  |  |  |  |  |  |  |  |
| 8 |  |  |  |  |  |  |  |  |

A.3 System List (Integrations)

Every system that touches the processes under evaluation, including internal systems, SaaS platforms, third-party APIs, and manual tools.

| # | System Name | Type | Vendor | Integration | Auth Method | SLA | Data Class. | Owner |
|---|-------------|------|--------|-------------|-------------|-----|-------------|-------|
| 1 |  |  |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |  |  |
| 6 |  |  |  |  |  |  |  |  |

A.4 Risk Classification

Classify each process by risk level. This classification drives the agent design constraints, testing requirements, and approval thresholds.

| # | Process Name | Risk Level | Risk Factors | Impact if Failed | Regulatory Exposure | Mitigation Required |
|---|--------------|------------|--------------|------------------|---------------------|---------------------|
| 1 |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |
| 6 |  |  |  |  |  |  |

Risk Level Definitions:

- HIGH — Financial impact >100K EUR, regulatory penalties, reputational damage, PII at risk. Requires human-in-the-loop and dual control.
- MEDIUM — Financial impact 10K–100K EUR, operational disruption, quality degradation. Requires human review of agent outputs.
- LOW — Financial impact <10K EUR, easily reversible, no regulatory exposure. Agent can operate autonomously with audit logging.
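The A.4 risk levels can be expressed as a simple decision rule. A minimal sketch, assuming the classifier receives the financial impact in EUR plus boolean flags for the other risk factors (the function name and parameters are ours, not part of the standard):

```python
def classify_risk(financial_impact_eur, regulatory_exposure, pii_at_risk, reversible):
    """Illustrative mapping of the A.4 definitions to a risk level.

    Assumption: any regulatory exposure or PII at risk forces HIGH,
    and a non-reversible process cannot be LOW, per the definitions above.
    """
    if financial_impact_eur > 100_000 or regulatory_exposure or pii_at_risk:
        return "HIGH"    # requires human-in-the-loop and dual control
    if financial_impact_eur >= 10_000 or not reversible:
        return "MEDIUM"  # requires human review of agent outputs
    return "LOW"         # autonomous operation with audit logging
```

In practice the classification is a judgment call recorded in the table above; a rule like this only serves as a consistency check on the entries.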

A.5 Non-Delegable Decisions List

Decisions that must always remain with a human, regardless of agent capability. These are hard boundaries that no automation level can cross.

| # | Decision | Process Area | Reason for Non-Delegation | Decision Authority | Agent Role (if any) |
|---|----------|--------------|---------------------------|--------------------|---------------------|
| 1 |  |  |  |  |  |
| 2 |  |  |  |  |  |
| 3 |  |  |  |  |  |
| 4 |  |  |  |  |  |
| 5 |  |  |  |  |  |
| 6 |  |  |  |  |  |
| 7 |  |  |  |  |  |
| 8 |  |  |  |  |  |

A.6 Escalation SLA

| # | Escalation Trigger | Severity | Response Time | Resolution Time | Escalation Path | Notification Method |
|---|--------------------|----------|---------------|-----------------|-----------------|---------------------|
| 1 |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |

A.7 Success Metrics

| # | Metric | Current Baseline | Target (3 months) | Target (12 months) | Measurement Method | Review Frequency |
|---|--------|------------------|-------------------|--------------------|--------------------|------------------|
| 1 |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |
| 6 |  |  |  |  |  |  |
A.8 Legal Constraints

| # | Constraint | Regulation/Law | Jurisdiction | Impact on Automation | Compliance Mechanism | Review Date |
|---|------------|----------------|--------------|----------------------|----------------------|-------------|
| 1 |  |  |  |  |  |  |
| 2 |  |  |  |  |  |  |
| 3 |  |  |  |  |  |  |
| 4 |  |  |  |  |  |  |
| 5 |  |  |  |  |  |  |

Section B: Agent Suitability Score

Purpose: Evaluate whether a given process is suitable for AI agent automation, human-agent collaboration (assist mode), or should remain fully manual. The score is based on 7 weighted criteria. Each criterion is scored 1–5, where 1 = least suitable for automation and 5 = most suitable.

Suitability Calculator

Process being evaluated: ______________________

| Criterion | Question | Weight | Score (1–5) |
|-----------|----------|--------|-------------|
| Repeatability | How standardized and repeatable is the process? | 20% |  |
| Error Risk Tolerance | How tolerant is the process of errors? (5 = highly tolerant) | 20% |  |
| Rule Clarity | Are rules explicit and unambiguous? | 15% |  |
| Data Sensitivity (inverse) | How sensitive is the data? (5 = low sensitivity, easier to automate) | 15% |  |
| Interpretation Need (inverse) | How much judgment/interpretation is needed? (5 = minimal, rules-based) | 10% |  |
| Empathy / Negotiation Need (inverse) | Does the process require empathy or negotiation? (5 = none needed) | 10% |  |
| Automation Value | How much value does automation deliver? (cost savings, speed, scale) | 10% |  |

Worked example: with every criterion scored 3, the Weighted Suitability Score is 3.00 out of 5.00, which maps to ASSIST MODE.

| Score Range | Recommendation | Description |
|-------------|----------------|-------------|
| 4.0 – 5.0 | AUTOMATE | Process is highly suitable for full agent automation. Deploy agent with standard monitoring and audit logging. Human oversight via periodic review. |
| 2.5 – 3.9 | ASSIST | Process benefits from agent assistance but requires human-in-the-loop. Agent prepares, drafts, or recommends; human approves and executes final step. |
| 1.0 – 2.4 | KEEP MANUAL | Process is not suitable for agent automation at this time. High judgment, sensitivity, or empathy requirements. Revisit when capabilities mature or rules clarify. |
Score Threshold Rules: Processes scoring below 2.5 are flagged as “Keep Manual” and excluded from automation. Processes between 2.5–3.9 receive assisted automation with mandatory human review at every decision point. Only processes scoring 4.0+ proceed to full automation with standard monitoring and periodic audit.
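The weighted calculation and threshold rules above can be sketched as follows. This is an illustrative implementation; the dictionary keys and function name are ours, while the weights and cutoffs come from the tables above:

```python
# Criteria and weights from the Section B suitability table (weights sum to 1.0).
CRITERIA_WEIGHTS = {
    "repeatability": 0.20,
    "error_risk_tolerance": 0.20,
    "rule_clarity": 0.15,
    "data_sensitivity_inverse": 0.15,
    "interpretation_need_inverse": 0.10,
    "empathy_negotiation_inverse": 0.10,
    "automation_value": 0.10,
}

def suitability(scores):
    """scores: dict mapping each criterion to a 1-5 rating.

    Returns (weighted_score, recommendation) per the threshold rules:
    4.0+ -> AUTOMATE, 2.5-3.9 -> ASSIST, below 2.5 -> KEEP MANUAL.
    """
    total = sum(CRITERIA_WEIGHTS[k] * scores[k] for k in CRITERIA_WEIGHTS)
    if total >= 4.0:
        return total, "AUTOMATE"
    if total >= 2.5:
        return total, "ASSIST"
    return total, "KEEP MANUAL"
```

Scoring every criterion 3 reproduces the worked example: a weighted score of 3.00 and an ASSIST recommendation.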

Section C: Weakness Register

Purpose: A living document that captures identified weaknesses across all operational areas. Every weakness must have an owner, a severity rating, a mitigation plan, and a status. This register is reviewed weekly during operations review and monthly during governance review.

Weakness Categories

Procedural • Technical • Communication • Compliance • Organizational • Data • Dependencies

| # | Category | Weakness Description | Severity | Likelihood | Owner | Status | Mitigation / Action Plan |
|---|----------|----------------------|----------|------------|-------|--------|--------------------------|
| W-001 |  |  |  |  |  |  |  |
| W-002 |  |  |  |  |  |  |  |
| W-003 |  |  |  |  |  |  |  |
| W-004 |  |  |  |  |  |  |  |
| W-005 |  |  |  |  |  |  |  |
| W-006 |  |  |  |  |  |  |  |
| W-007 |  |  |  |  |  |  |  |
| W-008 |  |  |  |  |  |  |  |
| W-009 |  |  |  |  |  |  |  |
| W-010 |  |  |  |  |  |  |  |
Review Cadence: Weakness Register is reviewed weekly at the Operations Review meeting. Critical and High severity items are escalated to the monthly Governance Review. Items in "Open" status for more than 30 days without a mitigation plan trigger an automatic escalation to the Governor.
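The 30-day auto-escalation rule can be checked programmatically against register exports. A minimal sketch, assuming register items are dicts with `status`, `mitigation_plan`, and `opened` fields (these field names are our assumption, not part of the standard):

```python
from datetime import date, timedelta

def needs_escalation(item, today=None):
    """Flag items in "Open" status >30 days with no mitigation plan.

    Per the review cadence rule, such items trigger an automatic
    escalation to the Governor.
    """
    today = today or date.today()
    return (
        item["status"] == "Open"
        and not item.get("mitigation_plan")
        and (today - item["opened"]) > timedelta(days=30)
    )
```

Running a check like this before each weekly Operations Review keeps the escalation rule from depending on manual inspection of the register.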

Standard Weakness Codes (W1-W10)

Every identified weakness must be tagged with one of the following standard codes for cross-client comparison and pattern analysis:

| Code | Weakness Type | Description | Typical Impact |
|------|---------------|-------------|----------------|
| W1 | No process owner | Process runs without clear accountability — nobody owns decisions, escalations, or outcomes | High |
| W2 | Dispersed knowledge sources | Information scattered across wikis, emails, Slack, undocumented tribal knowledge | Medium |
| W3 | Excessive exceptions | More than 20% of cases require non-standard handling, manual intervention, or workaround | High |
| W4 | Weak audit trail | Decisions not logged, no replay capability, incomplete evidence chain | Critical |
| W5 | Late escalation | Issues detected too late — after damage, not before. Escalation triggers missing or miscalibrated | High |
| W6 | Inconsistent KPIs | Different teams measure the same process with different metrics, thresholds, or cadences | Medium |
| W7 | Unclear responsibility | Handoff points between agent/human or team/team lack defined ownership | High |
| W8 | Low data quality | Missing fields, stale data, inconsistent formats, no validation at input | High |
| W9 | Duplicated work | Multiple teams or agents doing the same task independently without awareness | Medium |
| W10 | No policy gating | Actions proceed without approval checks, compliance gates, or authorization verification | Critical |
Usage: Tag every entry in the Weakness Register with its W-code. This enables: cross-client pattern analysis, prioritization by code frequency, and automated detection of systemic issues across the K0nsult portfolio.

Process Assessment Dimensions (7-Point Framework)

Every process evaluated by K0nsult is scored across these 7 dimensions (1-5 scale each):

| Dimension | Question | Scale | Low (1) | High (5) |
|-----------|----------|-------|---------|----------|
| Business criticality | How important is this process? | 1-5 | Nice-to-have | Mission-critical |
| Error harm | What is the cost of a mistake? | 1-5 | Cosmetic | Legal/financial/safety |
| Data sensitivity | What data is touched? | 1-5 | Public only | PII/financial/regulated |
| Rule clarity | Are the rules clear and documented? | 1-5 | Ambiguous/tribal | Fully documented |
| Exception load | How many exceptions per 100 cases? | 1-5 | >40% | <5% |
| Human necessity | Does this require human judgment? | 1-5 | Always | Rarely/never |
| Automation value | Will automation deliver real ROI? | 1-5 | Marginal | Transformative |
Scoring rule: Processes scoring 28+ (out of 35) proceed to automation design. Processes scoring 20-27 proceed with human-assist model. Processes scoring below 20 are flagged as "Keep Manual" with documented reasoning.
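Unlike the weighted Section B score, this framework uses an unweighted sum of the seven 1-5 ratings. A minimal sketch of the routing rule (the function name is ours):

```python
def assess(dimension_scores):
    """dimension_scores: seven 1-5 ratings, one per framework dimension.

    Routing per the scoring rule: 28+ of 35 -> automation design,
    20-27 -> human-assist model, below 20 -> keep manual.
    """
    if len(dimension_scores) != 7:
        raise ValueError("expected exactly seven dimension scores")
    total = sum(dimension_scores)
    if total >= 28:
        return "AUTOMATE"
    if total >= 20:
        return "HUMAN-ASSIST"
    return "KEEP MANUAL"
```

For example, a process rated 4 on every dimension totals 28 and proceeds to automation design, while straight 3s (total 21) route to the human-assist model.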