ALD Architecture Checklist
Use this checklist to keep ALD work role-based, test-defined, and governable. It works for architecture reviews, PR gating, and “ready for AI implementation” handoffs.
Recommended review flow: start with tests, validate role boundaries, then examine implementation details.
At a glance
This checklist is grouped into nine sections (A–I), plus a concise “Ready for AI” gate.
A. Context and scope
B. Role interfaces
C. DTOs and domain types
D. Contract tests
E. Patterns and composition
F. Implementation
G. Adapters and integrations
H. Security, compliance, and governance
I. Operational readiness
How to use this checklist
Best use cases
- Architecture/design review (before implementation)
- PR review (contract changes vs implementation changes)
- Standard change qualification (contract stable, tests pass)
- AI handoff: “implement this role to satisfy these tests”
Fast review order (recommended)
- Start with tests (D) — do they define intent?
- Validate roles (B) — SRP/ISP clean?
- Check DTO meaning (C) — invariants explicit?
- Confirm edges (G) — adapters at boundaries?
- Verify governance hooks (H)
- Then implementation (F)
Pass condition: reviewing interfaces + DTOs + contract tests should make intended behavior unambiguous.
A. Context and scope (module / bounded context)
- A1. Bounded context is explicit — clear purpose and “what it does not own.”
- A2. Ubiquitous language exists — key terms reflected in type/DTO and role names.
- A3. Inbound/outbound boundaries are explicit — externals accessed only through ports.
- A4. Change and risk hotspots identified — which decisions/policies are volatile or high-risk?
Pass condition: you can state ownership, language, and dependencies in 60 seconds.
B. Role interfaces (the contract surface)
- B1. Interfaces are role-based, not layer-based — names describe domain responsibility (e.g., EligibilityPolicy).
- B2. Single responsibility is enforceable — one reason to change per interface.
- B3. ISP is respected — no “kitchen sink” interfaces; clients depend only on what they use.
- B4. Inputs/outputs are intention-revealing — DTOs/types express meaning (avoid primitives when meaning exists).
- B5. Stability — contract changes are rare; role reflects decisions that outlast implementation tech.
- B6. Error model is explicit — typed errors or reason codes; avoid “stringly-typed” ambiguity.
- B7. Side effects are explicit — pure where possible; side effects via separate ports.
Pass condition: an engineer can implement the role from interface + DTOs + tests without guessing.
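A minimal Python sketch of what such a role contract might look like. The EligibilityPolicy name comes from B1; everything else (the DTO fields, the reason code, the LTV limit) is an illustrative assumption, not part of the checklist:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical DTOs; field names are illustrative assumptions.
@dataclass(frozen=True)
class LoanApplication:
    applicant_id: str
    ltv_percent: float  # loan-to-value ratio, as a percentage

@dataclass(frozen=True)
class EligibilityDecision:
    approved: bool
    reason_codes: tuple[str, ...]  # B6: typed reason codes, not free-form strings

class EligibilityPolicy(Protocol):
    """Role-based contract (B1): one domain responsibility, one reason to change."""
    def evaluate(self, application: LoanApplication) -> EligibilityDecision: ...

class MaxLtvPolicy:
    """One possible implementation; callers depend only on the role interface."""
    def __init__(self, ltv_limit: float) -> None:
        self._ltv_limit = ltv_limit

    def evaluate(self, application: LoanApplication) -> EligibilityDecision:
        if application.ltv_percent > self._ltv_limit:
            return EligibilityDecision(False, ("LTV_EXCEEDS_POLICY_LIMIT",))
        return EligibilityDecision(True, ())
```

Note that the interface, the DTOs, and a contract test suite are enough to implement MaxLtvPolicy without guessing — which is exactly the pass condition.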
C. DTOs and domain types
- C1. Ubiquitous language is encoded — Money, RiskBand, CustomerId vs primitives.
- C2. Invariants are explicit — constructors/validators prevent invalid states.
- C3. Context-specific DTOs — no shared domain model across bounded contexts.
- C4. Versioning strategy exists — additive DTO evolution preferred; translations handled at boundaries.
- C5. Serialization boundaries are clean — domain types don’t leak transport concerns.
Pass condition: meaning is obvious from types, and invalid states are hard to represent.
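A small sketch of C1/C2 using the Money type named above. Storing minor units and the three-letter currency check are illustrative design choices, not requirements from the checklist:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Invariants enforced at construction (C2): invalid states can't be built."""
    amount_minor: int  # store minor units (e.g., cents) to avoid float rounding
    currency: str      # ISO 4217 alpha code, e.g., "USD"

    def __post_init__(self) -> None:
        if len(self.currency) != 3 or not self.currency.isalpha():
            raise ValueError(f"invalid currency code: {self.currency!r}")
        if self.amount_minor < 0:
            raise ValueError("amount must be non-negative")
```

Because the dataclass is frozen and validates in `__post_init__`, every Money value in the system is well-formed by construction — the type makes invalid states hard to represent.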
D. Contract tests (tests as agreement)
- D1. Contract tests exist for each role — behavior is defined before implementation.
- D2. Tests read like business rules — intention-revealing names; includes edge cases.
- D3. Reason codes / audit data covered — evidence requirements are asserted.
- D4. Negative cases included — invalid inputs, policy limits, missing permissions.
- D5. Tests are deterministic — time/randomness/external dependencies controlled.
- D6. Change semantics are explicit — changing a contract test is changing policy (review required).
Example contract test naming style
rejects_application_when_ltv_exceeds_policy_limit()
returns_reason_codes_for_compliance_reporting()
denies_access_when_actor_lacks_required_permission()
Pass condition: reviewing tests alone makes intended behavior unambiguous.
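A sketch of one such contract test in pytest style, using the first naming example above. The evaluate_ltv stand-in and the 80% policy limit are assumptions made so the test is runnable in isolation:

```python
from dataclasses import dataclass

# Minimal stand-ins so the test reads like a business rule (D2); names are illustrative.
@dataclass(frozen=True)
class Decision:
    approved: bool
    reason_codes: tuple[str, ...]

POLICY_MAX_LTV = 80.0  # assumed policy limit, fixed so the test is deterministic (D5)

def evaluate_ltv(ltv_percent: float) -> Decision:
    if ltv_percent > POLICY_MAX_LTV:
        return Decision(False, ("LTV_EXCEEDS_POLICY_LIMIT",))
    return Decision(True, ())

def test_rejects_application_when_ltv_exceeds_policy_limit():
    decision = evaluate_ltv(95.0)
    assert decision.approved is False
    # D3: the audit evidence (reason code) is asserted, not just the outcome.
    assert "LTV_EXCEEDS_POLICY_LIMIT" in decision.reason_codes
```

Changing the 80.0 in this test is a policy change, not a refactor — which is the point of D6.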
E. Patterns and composition (SOLID + GoF)
- E1. Pattern choice is intentional — Strategy for variability; Adapter for integrations; Decorator for cross-cutting.
- E2. Dependency direction is correct — domain depends on abstractions, not infrastructure.
- E3. Composition over inheritance — orchestrators and decorators assemble roles.
- E4. Framework leakage is prevented — framework-specific types remain at edges.
Pass condition: you can point to where variability lives — and it isn’t scattered.
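A compact sketch of E1/E3: Strategy for the variable behavior, Decorator for a cross-cutting concern, assembled by composition. The pricing domain and class names are illustrative assumptions:

```python
from typing import Protocol

class PricingStrategy(Protocol):
    """Strategy (E1): the point of variability, behind one small role."""
    def price(self, base: float) -> float: ...

class StandardPricing:
    def price(self, base: float) -> float:
        return base

class RecordingPricing:
    """Decorator (E1/E3): cross-cutting concern wrapped around the role,
    not inherited into every implementation."""
    def __init__(self, inner: PricingStrategy) -> None:
        self._inner = inner
        self.calls: list[float] = []

    def price(self, base: float) -> float:
        self.calls.append(base)  # stand-in for logging/metrics at the seam
        return self._inner.price(base)
```

Callers hold a PricingStrategy and never know whether it is decorated — dependency direction stays pointed at the abstraction (E2).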
F. Implementation (AI-friendly execution constraints)
- F1. Implementation stays behind contracts — no extra public surface that bypasses tests/contracts.
- F2. Minimal coupling — no “reach around” into other roles’ internals.
- F3. Performance concerns are isolated — caching/batching via decorators or internal modules.
- F4. Correctness before cleverness — readability; tricky logic is justified and tested.
Pass condition: swapping implementations doesn’t affect callers.
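The pass condition can be sketched directly: two implementations of one role, and a caller that cannot tell them apart. The scoring formulas and the 0.5 threshold are illustrative assumptions:

```python
from typing import Protocol

class RiskScorer(Protocol):
    def score(self, income: float, debt: float) -> float: ...

class RatioScorer:
    def score(self, income: float, debt: float) -> float:
        return debt / income if income else 1.0

class ConservativeScorer:
    """Alternative implementation; same contract, stricter scoring."""
    def score(self, income: float, debt: float) -> float:
        base = debt / income if income else 1.0
        return min(1.0, base * 1.2)

def decide(scorer: RiskScorer, income: float, debt: float) -> bool:
    """Caller depends only on the contract (F1); implementations swap freely."""
    return scorer.score(income, debt) < 0.5
```

Swapping RatioScorer for ConservativeScorer changes the decision policy but touches no caller code — the contract surface is unchanged.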
G. Adapters and integrations (Ports & Adapters / ACL)
- G1. Ports are stable — external dependencies accessed only via ports.
- G2. Adapters translate at the boundary — mappers/translators exist; no vendor DTOs in the domain.
- G3. Resilience is standardized — retries/timeouts/circuit breakers consistent (decorators).
- G4. Idempotency strategy exists — especially for message/event handlers.
- G5. Observability is consistent — correlation IDs, structured logs, traces, metrics.
Pass condition: vendor changes don’t force domain refactors.
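A sketch of G2: an anti-corruption-layer adapter that translates a vendor payload into a domain type at the boundary. The vendor field names ("FICO_SCR", "DLQ_CT") are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CreditReport:
    """Domain type: no vendor field names leak past the adapter (G2)."""
    score: int
    delinquencies: int

class BureauAdapter:
    """Anti-corruption layer: all vendor-shape knowledge lives here."""
    def to_domain(self, payload: dict) -> CreditReport:
        return CreditReport(
            score=int(payload["FICO_SCR"]),            # hypothetical vendor key
            delinquencies=int(payload.get("DLQ_CT", 0)),
        )
```

If the vendor renames a field, only this adapter changes — the domain and its tests are untouched, which is the pass condition.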
H. Security, compliance, and governance (NIST/ITIL-friendly)
- H1. Authorization is a role — AuthorizationPolicy exists where needed and is tested.
- H2. Data classification is explicit — handling rules enforced by types/policies.
- H3. Audit evidence is produced — decisions include required metadata; tests assert it.
- H4. Change classification is clear — contract changes vs implementation changes are distinguishable.
- H5. Least privilege boundaries exist — permissions and dependency boundaries are deliberate.
Pass condition: you can answer “where is control X enforced?” with a role name + test suite.
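A sketch of H1 in the same style: authorization as its own role, so the question "where is control X enforced?" has a one-line answer. The permission-string scheme is an illustrative assumption:

```python
from typing import Protocol

class AuthorizationPolicy(Protocol):
    """H1: the control is a named role, covered by its own contract tests."""
    def allows(self, actor_permissions: frozenset[str], required: str) -> bool: ...

class PermissionSetPolicy:
    def allows(self, actor_permissions: frozenset[str], required: str) -> bool:
        return required in actor_permissions  # H5: least privilege, explicit check

def test_denies_access_when_actor_lacks_required_permission():
    policy = PermissionSetPolicy()
    assert policy.allows(frozenset({"loans:read"}), "loans:approve") is False
```

The answer to "where is access to loan approval enforced?" is then: PermissionSetPolicy, plus the contract test above.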
I. Operational readiness
- I1. Failure modes are designed — timeouts, partial failures, retries, and fallbacks are intentional.
- I2. SLO/SLA signals exist — role-level metrics map to business outcomes.
- I3. Rollback strategy exists — feature flags/strategy selection enable safe rollback.
- I4. Runbooks and ownership are clear — on-call ownership and escalation are known.
Pass condition: you can operate it without reading the entire codebase.
“Ready for AI” gate (concise)
A change is ready to hand to an AI agent when the contract surface and proof of intent are complete.
- Interfaces and DTOs are complete and intention-revealing.
- Contract tests fully describe behavior, including edge cases and negative cases.
- Error model is explicit (typed errors/reason codes).
- Adapters/ports are defined for all external dependencies.
- Observability requirements are specified (correlation, logs/metrics/traces).
- Policy decisions are encoded in tests, not left implicit.
Hand-off statement: “Implement these roles so all contract tests pass. Do not change contracts without approval.”