ALD mapped to ITIL 4

ALD is a disciplined engineering method that makes ITIL’s plan/build/run loop more predictable, more auditable, and safer to change—especially when using AI agents.

In ITIL terms: ALD is a practice-level execution discipline that strengthens Design & Transition, Obtain/Build, and Change Enablement using contracts and tests.

Where ALD fits

ALD provides the “control plane” between intent and execution: role-based interfaces + DTOs + contract tests define what must be true, while implementations (human or AI) satisfy those constraints.

  • Lower change risk
  • Clear decision boundaries
  • Auditable behavior
  • Standard-change candidates
  • Less incident churn
  • Better knowledge artifacts
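A minimal sketch of this control plane, assuming a hypothetical PricingStrategy role (all names here are illustrative, not from any specific codebase): the interface and DTO define what must be true, and a contract test enforces it against any implementation.

```python
from dataclasses import dataclass
from typing import Protocol

# DTO: part of the service vocabulary, with an explicit invariant.
@dataclass(frozen=True)
class Quote:
    amount_cents: int
    currency: str

    def __post_init__(self) -> None:
        if self.amount_cents < 0:
            raise ValueError("amount_cents must be non-negative")

# Role-based interface: a narrow decision boundary, not a mega service.
class PricingStrategy(Protocol):
    def price(self, sku: str, qty: int) -> Quote: ...

# Contract test: states what must be true of ANY implementation,
# whether it was written by a human or an AI agent.
def contract_test_pricing(impl: PricingStrategy) -> None:
    q = impl.price("SKU-1", 3)
    assert q.amount_cents >= 0, "prices are never negative"
    assert q.currency == "USD", "quotes use the service's base currency"

# One possible implementation satisfying the contract.
class FlatPricing:
    def price(self, sku: str, qty: int) -> Quote:
        return Quote(amount_cents=100 * qty, currency="USD")

contract_test_pricing(FlatPricing())
```

The implementation can be swapped or rewritten freely; the contract test is the stable review artifact.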

ALD in the Service Value System (SVS)

ALD aligns naturally with ITIL 4 guiding principles by turning intent into enforceable, reviewable engineering artifacts.

Guiding principles ALD amplifies

  • Focus on value → contracts and tests map to outcomes and policies
  • Start where you are → extract roles/contracts incrementally from legacy systems
  • Progress iteratively with feedback → tests provide rapid feedback loops
  • Collaborate and promote visibility → contract artifacts are reviewable and shareable
  • Think and work holistically → explicit decision boundaries reduce hidden coupling
  • Keep it simple and practical → micro-roles avoid “mega services”
  • Optimize and automate → AI accelerates implementation under constraints

SVS value proposition

  • Governance clarity: contract changes are explicit and reviewable
  • Risk reduction: implementation-only changes behind stable contracts are safer
  • Faster throughput: AI produces boilerplate and tests; architects approve intent
  • Better audit posture: tests serve as evidence for “what behavior is guaranteed”
  • Knowledge durability: contracts/tests outlive tickets and wiki pages

ITIL framing: ALD improves the quality and governability of service design and change without adding heavy process.

Mapping ALD to the Service Value Chain (SVC)

ALD shows up differently in each value chain activity. Think of it as “how engineering executes” within ITIL’s flow.

  1. Plan: Define service policies and decision boundaries; build a role catalog (interfaces) aligned to business intent.
  2. Engage: Translate stakeholder needs into behavioral expectations expressed as contract tests and DTO vocabulary.
  3. Design & Transition: ALD’s home turf. Approve role-based interfaces/DTOs and contract tests as design artifacts.
  4. Obtain/Build: AI/teams implement behind contracts; enforce DI, pattern fit, and framework-at-the-edges discipline.
  5. Deliver & Support: Stable contracts reduce incident volume; adapters isolate vendor churn; decorators standardize telemetry and retries.
  6. Improve: Refactor safely under locked contracts; improvements become smaller, test-backed changes rather than risky rewrites.

Practical outcome: The “Design & Transition” step becomes concrete—interfaces + tests become the reviewed transition artifacts.
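The framework-at-the-edges discipline in Obtain/Build can be sketched as a port plus adapter with dependency injection (the PaymentPort and class names are hypothetical):

```python
from typing import Protocol

# Port: the role the core depends on. No vendor types leak through it.
class PaymentPort(Protocol):
    def charge(self, amount_cents: int) -> str: ...

# Adapter: the only place that knows about the vendor SDK.
class FakeVendorAdapter:
    def charge(self, amount_cents: int) -> str:
        # A real adapter would call the vendor SDK here.
        return f"txn-{amount_cents}"

# Core logic receives the port via dependency injection,
# so vendor churn stays at the edge.
class CheckoutService:
    def __init__(self, payments: PaymentPort) -> None:
        self._payments = payments

    def checkout(self, amount_cents: int) -> str:
        return self._payments.charge(amount_cents)

receipt = CheckoutService(FakeVendorAdapter()).checkout(250)
```

Replacing the vendor means writing a new adapter against the same port; the core and its contract tests are untouched.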

ALD across key ITIL practices

This section focuses on the practices where ALD provides the most tangible operational and governance benefits.

Change Enablement

  • Risk classification becomes objective: contract changes vs implementation-only changes
  • Standard changes are realistic when contracts are stable and tests pass
  • Normal changes apply to interface/DTO changes or broad cross-cutting shifts
  • Emergency changes can be constrained behind existing contracts with a focused test subset

ALD provides evidence: contract tests + CI results become change documentation, not narrative guesses.
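One way to make the contract/implementation risk split objective in CI is to classify a diff by the paths it touches. This is a sketch; the directory layout is an assumption, not a prescription.

```python
# Hypothetical CI helper: classify a change by whether it touches
# contract artifacts (interfaces, DTOs, contract tests) or only internals.
CONTRACT_PATHS = ("contracts/", "dtos/", "tests/contract/")

def classify_change(changed_files: list[str]) -> str:
    if any(f.startswith(CONTRACT_PATHS) for f in changed_files):
        # Contract change: requires design review and a normal change.
        return "normal"
    # Implementation-only: standard-change candidate if contract tests pass.
    return "standard-candidate"
```

The classification, together with the CI test results, becomes the evidence attached to the change record.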

Service Design

  • Service requirements map to a role catalog (policies/strategies/validators/ports)
  • DTOs define the service vocabulary (invariants, error models)
  • Design reviews focus on decision boundaries, not “service/repository” buckets

Designs stay durable because they describe decisions that persist even when implementation tech changes.
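The error-model half of the DTO vocabulary can also be made explicit, for example as a result type carrying stable reason codes (a sketch; the type and code names are assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidationError:
    reason_code: str   # stable and machine-readable; part of the contract
    message: str       # human-readable; free to change without review

@dataclass(frozen=True)
class ValidationResult:
    ok: bool
    errors: tuple[ValidationError, ...] = ()

# A validator role expresses its failure modes through the error model,
# not through ad hoc exceptions scattered across implementations.
def validate_quantity(qty: int) -> ValidationResult:
    if qty <= 0:
        return ValidationResult(
            False,
            (ValidationError("QTY_NOT_POSITIVE", "quantity must be > 0"),),
        )
    return ValidationResult(True)
```

Because reason codes are contractually stable, design reviews can focus on which failure modes exist rather than on wording.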

Service Validation & Testing

  • Contract tests are the authoritative behavior spec
  • Edge cases and error modes are explicit and regression-proof
  • Acceptance criteria can be traced to tests (and vice versa)

Testing becomes “definition of done,” not a discretionary activity.
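Tracing acceptance criteria to tests can be as simple as tagging each contract check with a requirement ID, so the same suite runs against every implementation (illustrative sketch; the IDs and discount role are invented):

```python
# One contract suite, many implementations: each check doubles as a
# traceable acceptance criterion via a requirement ID.
def check_discount_contract(impl) -> list[str]:
    failures = []
    # REQ-101: no discount on empty orders
    if impl.discount(order_total=0) != 0:
        failures.append("REQ-101")
    # REQ-102: discount never exceeds the order total
    if not (0 <= impl.discount(order_total=10_000) <= 10_000):
        failures.append("REQ-102")
    return failures

class TieredDiscount:
    def discount(self, order_total: int) -> int:
        return order_total // 10 if order_total >= 5_000 else 0

assert check_discount_contract(TieredDiscount()) == []
```

An empty failure list is the mechanical "definition of done"; a non-empty one names exactly which acceptance criteria regressed.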

Release & Deployment Management

  • Releases become smaller and safer: “new policy implementation” rather than “service rewrite”
  • Strategy selection + feature flags enable canary/gradual rollout behind stable contracts
  • Frameworks and vendors remain at the edges via adapters

ALD enables incremental change that fits modern deployment controls.
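Strategy selection plus a flag makes canary rollout a routing decision rather than a code change, because both strategies satisfy the same contract. A sketch, with the flag mechanism assumed rather than prescribed:

```python
import random

class OldPricing:
    def price(self, qty: int) -> int:
        return 100 * qty

class NewPricing:
    def price(self, qty: int) -> int:
        return 95 * qty  # candidate implementation behind the same contract

def select_pricing(canary_percent: int):
    # Route a percentage of traffic to the new implementation.
    # Both satisfy the same contract, so rollback is just canary_percent=0.
    if random.randrange(100) < canary_percent:
        return NewPricing()
    return OldPricing()

strategy = select_pricing(canary_percent=10)
result = strategy.price(2)
```

Because the contract is stable, rollback and ramp-up are configuration changes that fit existing deployment controls.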

Incident Management

  • Triage improves because failures map to a specific role (e.g., PricingStrategy)
  • Cleaner seams reduce blast radius and simplify isolation
  • Operational decorators can standardize correlation and telemetry

Role-based boundaries reduce the “giant service” incident black box problem.
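The operational decorators mentioned above can wrap any role implementation to standardize correlation IDs and timing without touching business logic (a sketch; the telemetry field names are assumptions):

```python
import time
import uuid

class TelemetryDecorator:
    """Wraps any PricingStrategy-like role: same contract, added signals."""

    def __init__(self, inner, log: list) -> None:
        self._inner = inner
        self._log = log

    def price(self, sku: str, qty: int):
        correlation_id = str(uuid.uuid4())
        start = time.perf_counter()
        try:
            return self._inner.price(sku, qty)
        finally:
            # Emitted on success AND failure, so incidents map to a role.
            self._log.append({
                "role": "PricingStrategy",
                "correlation_id": correlation_id,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })

class FlatPricing:
    def price(self, sku: str, qty: int) -> int:
        return 100 * qty

log: list[dict] = []
decorated = TelemetryDecorator(FlatPricing(), log)
decorated.price("SKU-1", 2)
```

During triage, the role name in every telemetry record points straight at the responsible decision boundary.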

Problem Management

  • Root causes map to a role implementation and a missing/weak contract test
  • Prevent recurrence by adding the failing test first
  • Refactor under SOLID while keeping contracts stable

Problems become enforceable learnings, not “we think we fixed it” anecdotes.

Knowledge Management

  • Interfaces + DTOs + tests are durable knowledge artifacts
  • “Role Catalog” pages link responsibilities, owners, and test suites
  • Less reliance on tribal knowledge and stale documentation

ALD artifacts are the documentation that stays current because they are executable.

Information Security Management

  • Security decisions become explicit roles: AuthorizationPolicy, DataClassificationPolicy
  • Auditable behavior via tests and reason codes
  • Swappable implementations for policy updates

Security becomes a set of decision boundaries, not scattered conditionals.
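An AuthorizationPolicy role that returns reason codes might look like the following sketch (the interface shape, grants, and codes are illustrative):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason_code: str  # machine-readable, auditable evidence

class AuthorizationPolicy(Protocol):
    def authorize(self, user_role: str, action: str) -> Decision: ...

class RoleBasedPolicy:
    GRANTS = {("admin", "delete"), ("admin", "read"), ("viewer", "read")}

    def authorize(self, user_role: str, action: str) -> Decision:
        if (user_role, action) in self.GRANTS:
            return Decision(True, "GRANT_ROLE_MATCH")
        return Decision(False, "DENY_NO_GRANT")
```

Updating the security posture means swapping in a new implementation behind the same interface, with contract tests as the audit evidence for what is and is not permitted.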

ALD design control point: If interfaces/DTOs change, treat it as higher risk. If only implementations change and contract tests pass, risk is lower.

ALD as an ITIL-friendly control for AI-assisted development

ITIL stakeholders often ask: “How do we control risk when AI writes code?” ALD provides clear inspection points.

Control points (what to review)

  • Design control: interfaces/DTOs reviewed as formal design artifacts
  • Behavior control: contract tests approved as acceptance criteria
  • Build control: CI gates (tests, linting, SAST) enforce compliance
  • Change control: contract changes are explicit, reviewable, and rare
  • Knowledge control: contracts/tests serve as living documentation

What this enables

  • Smaller, classifiable changes (standard vs normal vs emergency)
  • Clear evidence for CAB/approvals (tests + diffs)
  • Reduced regression risk via locked behavior contracts
  • Safer modernization: refactor internals without changing contracts
  • Faster throughput: delegate boilerplate while keeping architectural authority

Bottom line: ALD doesn’t “trust” AI. It constrains AI behind contracts and proves behavior through tests.

Roles and responsibilities (RACI-style guidance)

ALD clarifies who owns intent, contracts, implementation, and operational outcomes.

Business & service roles

  • Service Owner: owns value, policy outcomes, success measures
  • Product Owner: clarifies intent and acceptance; collaborates on test semantics
  • Change Manager / CAB: focuses on contract changes and risk classification

Engineering & operations roles

  • Architect / Lead Engineer: owns role boundaries, contracts, and design reviews
  • Developers / AI agents: implement behind contracts; improve internals without changing behavior
  • SRE / Operations: standardizes operational decorators (telemetry, retries) and monitors role-level signals

Accountability stays human: The architect remains accountable for correctness; AI accelerates execution.

A simple policy statement you can adopt

If you want one crisp ITIL-aligned governance rule for ALD, use this.

Policy

Contract changes (interfaces/DTOs) require design review and a normal change. Implementation-only changes behind approved contracts may qualify as standard changes when contract tests pass and deployment safeguards are in place.

Suggested standard-change qualification criteria

  • No changes to public interfaces/DTOs
  • Contract test suite passes + relevant regression suite passes
  • Security/static checks pass (SAST/dep scanning as applicable)
  • Deployment guardrails exist (rollback, canary/feature flag if needed)
  • Change record links to diffs + CI evidence
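The qualification criteria above lend themselves to an automated pre-check before a change is filed as standard (a sketch; the evidence keys are assumptions about what a CI pipeline exposes):

```python
# Hypothetical gate mirroring the criteria list: every item must hold
# for a change to qualify as a standard change.
def qualifies_as_standard_change(evidence: dict) -> bool:
    return all([
        not evidence["contract_files_changed"],   # no interface/DTO edits
        evidence["contract_tests_passed"],        # behavior is proven
        evidence["regression_tests_passed"],      # no collateral breakage
        evidence["security_checks_passed"],       # SAST / dep scanning
        evidence["rollback_available"],           # deployment guardrail
        evidence["ci_evidence_linked"],           # change record traceability
    ])
```

Anything that fails the gate falls back to the normal-change path with design review, keeping the policy enforceable rather than aspirational.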