ALD Interview Questionnaire — Level 4

Level 4 (Architect / Principal Engineer) candidates succeed in ALD by owning decision boundaries, designing contract-first architectures, treating tests as governance, and enabling teams (and AI) to implement safely behind stable interfaces and DTOs.

Architect / Principal Engineer

This interview emphasizes architectural judgment, risk control, and scalable delivery—not coding trivia.

Level 4 focus
Decision boundaries · Contracts & governance · Tests as policy · Ports & adapters · Change control · Scalability · AI-safe delivery · Leadership

ALD expectation: candidate designs “what must be true” and enables others to deliver “how” safely.

Purpose

What this evaluates

  • Ability to define and defend decision boundaries
  • Contract-first architecture (role interfaces + DTOs + invariants)
  • Contract tests as governance (policy evidence, auditability)
  • Integration strategy (ports/adapters, replaceability, failure modes)
  • Change governance (breaking vs additive, deprecation, versioning)
  • Operational thinking (observability, rollback, reliability)
  • Leading multiple teams and aligning delivery practices
  • Using AI as a workforce without “AI drift”

What this does not require

  • Enterprise-wide policy ownership (Level 5 territory)
  • Org-wide transformation leadership
  • Vendor procurement strategy
ALD framing: Level 4 owns architecture and risk boundaries across a product or large domain area.

Suggested interview format (90 minutes)

Recommended flow

  1. 10 min — background + scope of past systems
  2. 20 min — decision boundaries & contract-first architecture
  3. 15 min — governance via tests + change classification
  4. 15 min — integrations, reliability, and ops readiness
  5. 25 min — architecture exercise (delta + governance plan)
  6. 5 min — wrap-up & candidate questions

Optional pre-read / take-home

  • Provide a short domain brief and current high-level architecture
  • Ask for a written proposal: boundaries, contracts, tests, risks
  • Evaluate clarity, tradeoffs, and governance orientation
Best ALD take-home: “Define the contract surfaces and the proofs (tests) that prevent drift.”

Question bank

Choose 12–18 questions depending on time. Strong Level 4 candidates speak in responsibilities, boundaries, proofs, and risk posture—not frameworks and buzzwords.

1) Decision boundaries & architecture intent

  1. How do you identify decision boundaries from requirements? What signals tell you a boundary deserves its own role/policy?
  2. Describe how you separate policy, orchestration, and integration in a design.
  3. What makes a contract “stable”? How do you design contracts that survive organizational change and vendor churn?
  4. Give an example where the wrong boundary caused long-term pain. How would you redesign it today?
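The policy / orchestration / integration separation asked about above can be sketched concretely. A minimal Python sketch, assuming a hypothetical credit-eligibility example (all class and field names are illustrative, not part of any ALD standard):

```python
from dataclasses import dataclass
from typing import Protocol


# --- Policy: pure decision logic, no I/O (hypothetical rule) ---
@dataclass(frozen=True)
class Applicant:
    credit_score: int


class EligibilityPolicy(Protocol):
    def decide(self, applicant: Applicant) -> bool: ...


class MinScorePolicy:
    """Policy role: encodes the rule and nothing else."""

    def __init__(self, threshold: int) -> None:
        self.threshold = threshold

    def decide(self, applicant: Applicant) -> bool:
        return applicant.credit_score >= self.threshold


# --- Integration: a port for fetching data from the outside world ---
class ScoreProvider(Protocol):
    def fetch_score(self, applicant_id: str) -> int: ...


# --- Orchestration: wires policy and integration, owns the flow ---
class EligibilityService:
    def __init__(self, provider: ScoreProvider, policy: EligibilityPolicy) -> None:
        self.provider = provider
        self.policy = policy

    def check(self, applicant_id: str) -> bool:
        score = self.provider.fetch_score(applicant_id)
        return self.policy.decide(Applicant(credit_score=score))
```

Note the direction of dependencies: the policy knows nothing about providers or orchestration, so the rule can be tested and governed in isolation.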

2) Contract-first design (roles + DTOs)

  1. Walk through your approach to designing role-based interfaces (not layer-based). How do you keep SRP/ISP intact at scale?
  2. How do you design DTOs to reflect ubiquitous language and enforce invariants without creating friction?
  3. How do you handle “shared” concepts across multiple domains without creating a giant shared model? (Look for: anti-corruption layers, translation, bounded contexts.)
  4. When do you introduce versioned contracts? What triggers the move from additive to versioned evolution?
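As a reference point for the discussion above, here is one possible shape for a role-based interface plus a DTO that enforces its invariants at construction. This is a sketch under assumed names (`DecisionRequest`, `DecisionMaker`, `DecisionAuditor` are hypothetical), not a prescribed contract:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class DecisionRequest:
    """DTO in ubiquitous language; invariants enforced when constructed."""

    applicant_id: str
    requested_amount: int

    def __post_init__(self) -> None:
        # Invalid states are unrepresentable past this point.
        if not self.applicant_id:
            raise ValueError("applicant_id must be non-empty")
        if self.requested_amount <= 0:
            raise ValueError("requested_amount must be positive")


# Role-based interfaces (ISP): one responsibility each,
# rather than a single fat layer-based "Service" interface.
class DecisionMaker(Protocol):
    def decide(self, request: DecisionRequest) -> str: ...


class DecisionAuditor(Protocol):
    def record(self, request: DecisionRequest, outcome: str) -> None: ...
```

Keeping each role narrow is what lets SRP/ISP survive at scale: consumers depend only on the role they actually use.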

3) Tests as governance (proof of policy)

  1. In ALD, contract tests are governance. How do you enforce this culturally and in CI?
  2. How do you structure contract tests so they are stable across refactors but strict on behavior?
  3. How do you handle policy changes safely (who approves, what artifacts change, what evidence is required)?
  4. How do you prevent “silent behavior change” over time in a large system? (Look for: contract tests, change classification, review gates.)
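One way to make "contract tests as governance" tangible in the interview is a shared contract suite that every implementation must pass. A minimal sketch, assuming a hypothetical eligibility threshold of 650 (the rule and names are illustrative):

```python
def contract_eligibility_policy(policy_factory):
    """Executable policy proof, parameterized over implementations.

    Any implementation passed in must satisfy these behavioral
    guarantees; refactors may change the internals, never the proof.
    """
    policy = policy_factory()
    # Policy invariant: scores at or above the threshold are eligible.
    assert policy.is_eligible(650) is True
    assert policy.is_eligible(800) is True
    # Policy invariant: below-threshold scores are rejected.
    assert policy.is_eligible(649) is False


class ThresholdPolicy:
    """One implementation; replaceable, while the contract suite stays."""

    def is_eligible(self, score: int) -> bool:
        return score >= 650


# Run the proof against this implementation.
contract_eligibility_policy(ThresholdPolicy)
```

Because the suite asserts observable behavior rather than internals, it stays stable across refactors; wiring it into CI as a required gate is what turns it from QA into governance.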

4) Integrations & boundaries (ports/adapters)

  1. Describe your approach to integrating external systems while keeping the domain clean. Where do translations live?
  2. How do you design ports for stability when adapters and vendors change frequently?
  3. What reliability concerns belong at the adapter boundary (timeouts, retries, idempotency), and how do you test them?
  4. When do you add an anti-corruption layer vs a simple adapter?
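A concrete port/adapter sketch may help anchor these questions. This assumes a hypothetical vendor payload shape and a simple retry loop; in real adapters the reliability concerns (timeouts, idempotency, backoff) would be fuller:

```python
from typing import Optional, Protocol


class CreditScorePort(Protocol):
    """Port: the stable, domain-language interface the core depends on."""

    def score_for(self, applicant_id: str) -> int: ...


class VendorClient:
    """Stand-in for a vendor SDK with its own payload shapes (hypothetical)."""

    def get(self, path: str) -> dict:
        return {"bureau_score": {"value": 712}}


class VendorScoreAdapter:
    """Adapter: owns translation plus reliability concerns (retries shown)."""

    def __init__(self, client: VendorClient, retries: int = 2) -> None:
        self.client = client
        self.retries = retries

    def score_for(self, applicant_id: str) -> int:
        last_error: Optional[Exception] = None
        for _ in range(self.retries + 1):
            try:
                payload = self.client.get(f"/scores/{applicant_id}")
                # Anti-corruption: the vendor shape never leaks past this line.
                return int(payload["bureau_score"]["value"])
            except (KeyError, TypeError, ValueError) as exc:
                last_error = exc
        raise RuntimeError("score unavailable") from last_error
```

Swapping vendors then means writing a new adapter against the same port; the domain and its contract tests are untouched.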

5) Change governance & risk management

  1. Explain your change classification model: contract vs implementation vs operational. What review gates do you apply?
  2. Describe how you introduce breaking changes without disrupting dependent teams. (Deprecation windows, parallel contracts, migrations.)
  3. How do you manage “contract sprawl” (too many roles/DTOs) while preserving SRP/ISP?
  4. How do you evaluate whether a proposed contract change is justified versus keeping it internal?
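The additive-vs-breaking distinction in these questions can be illustrated on a DTO. A sketch under assumed names (one possible convention, not a mandated one):

```python
from dataclasses import dataclass
from typing import Optional


# Additive evolution: a new optional field with a default.
# Existing consumers keep working; no version bump is needed.
@dataclass(frozen=True)
class DecisionResult:
    status: str
    reason_code: str
    correlation_id: Optional[str] = None  # added later, additively


# Breaking evolution: a renamed/retyped field forces a parallel
# v2 contract, published alongside v1 for a deprecation window.
@dataclass(frozen=True)
class DecisionResultV2:
    status: str
    reason_codes: tuple[str, ...]  # was a single reason_code in v1
```

The governance question is then explicit: an additive change ships behind normal review, while a v2 contract triggers the deprecation, migration, and consumer-notification machinery.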

6) Operational readiness (prod safety)

  1. What observability evidence do you require for decision-heavy systems (reason codes, metrics, traces)?
  2. How do you build rollback and feature-flag strategies into a contract-first approach?
  3. How do you think about security and authorization boundaries in ALD terms (policy roles, evidence, least privilege)?
  4. Give an example of a production incident and how you would adjust contracts/tests to prevent recurrence.
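For the observability question, it can help to show what "evidence" looks like as an artifact. A minimal sketch of a structured decision-evidence line, assuming JSON logs keyed by correlation ID (field names are illustrative):

```python
import json
import logging

logger = logging.getLogger("decisions")


def emit_decision_evidence(correlation_id: str, status: str,
                           reason_codes: list, metrics: dict) -> str:
    """Emit one structured, queryable evidence line per decision."""
    record = {
        "correlation_id": correlation_id,
        "status": status,
        "reason_codes": reason_codes,
        "metrics": metrics,  # the inputs the decision actually evaluated
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line
```

Every decision then leaves an auditable trace that can be joined to requests, rollouts, and incidents by correlation ID.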

7) AI-safe delivery at scale

  1. How do you use AI as an implementation workforce while preventing policy drift?
  2. What constraints do you put into prompts and workflows to ensure AI proposes deltas rather than rewrites?
  3. How do you review AI-assisted PRs efficiently? What do you look at first? (Look for: contracts/tests first, boundaries, then implementation.)
  4. Where do you limit or ban AI usage (e.g., security-critical code, regulated policy), and why?

8) Leadership & enablement

  1. How do you align multiple teams on contract standards without becoming a bottleneck?
  2. How do you mentor seniors to write contracts and tests instead of just “shipping code”?
  3. Describe how you resolve disagreements about boundaries or contract shapes. (Look for: smallest viable contract, tests as truth, impact analysis.)
Strong Level 4 signal: candidate consistently turns ambiguity into explicit contracts and executable proofs, while keeping teams moving.

Architecture exercise (contract-first + governance plan)

Use this as a live exercise (25 minutes). The goal is to see how the candidate designs contracts, defines proofs, and manages risk.

Scenario

Context: A regulated decision workflow (eligibility/approval) must be auditable. Policies change quarterly. Multiple teams implement components. AI is used for implementation.
  • Decision output must include status + reason codes + evaluated metrics + timestamp
  • External provider supplies one input (e.g., credit score)
  • System must support gradual rollout and rollback
  • Multiple consumers depend on decision contracts
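As a shared reference during the exercise, the scenario's required decision output can be sketched as a contract DTO. This is one possible shape under assumed names and statuses, not the expected answer:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionOutput:
    """Decision contract per the scenario: status, reason codes,
    evaluated metrics, and timestamp. Names are hypothetical."""

    status: str                    # e.g. "APPROVED" or "REJECTED"
    reason_codes: tuple[str, ...]  # auditable explanation of the outcome
    evaluated_metrics: dict        # inputs actually used (e.g. credit score)
    decided_at: datetime

    def __post_init__(self) -> None:
        # Invariants: the contract refuses unexplained or unknown outcomes.
        if self.status not in {"APPROVED", "REJECTED"}:
            raise ValueError("unknown status")
        if not self.reason_codes:
            raise ValueError("at least one reason code is required")


out = DecisionOutput(
    status="APPROVED",
    reason_codes=("SCORE_ABOVE_THRESHOLD",),
    evaluated_metrics={"credit_score": 712},
    decided_at=datetime.now(timezone.utc),
)
```

Strong candidates will go beyond the shape: they will say who owns it, which contract tests prove it, and how its consumers are migrated when it changes.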

Candidate deliverable

  1. Decision boundaries and role ownership
  2. Contract surfaces (DTOs + interfaces) and invariants
  3. Contract tests as policy proof + evidence
  4. Integration boundaries (ports/adapters + failure handling)
  5. Change governance (breaking vs additive, deprecation plan)
  6. AI usage model (where AI helps; where it’s constrained)
  7. Operational plan (observability, rollout/rollback)

Evaluator notes (what “good” looks like)

  • Clear separation of policy roles, orchestrators, and adapters
  • Stable contract surfaces that consumers can rely on
  • Tests that define policy and produce audit evidence (reason codes, metrics)
  • Explicit change governance: versioning, deprecations, migration paths
  • AI usage framed as constrained implementation, not policy authoring
  • Operational readiness: correlation IDs, logs/metrics/traces, rollback strategies
Evaluator tip: Ask “How would you prevent drift six months from now?” Great candidates answer with contracts, tests, and governance gates.

Scoring rubric (example)

Level 4 should demonstrate architectural judgment, governance thinking, and team enablement.

Category | 0 — Concern | 1 — Meets | 2 — Strong
Decision boundaries | Unclear responsibilities; muddled separation | Reasonable separation | Clean, defensible boundaries; scalable ownership model
Contract-first design | Layer-based contracts; unstable surfaces | Mostly role-based; some stability | Minimal, stable contracts with clear invariants
Tests as governance | Tests = QA only; weak policy proof | Understands test-first | Executable policy proof; change gates; drift prevention
Ports/adapters boundaries | Vendor leakage; weak integration strategy | Basic boundary understanding | Stable ports, clean adapters, reliability concerns placed correctly
Change governance | No versioning/deprecation thinking | Some compatibility awareness | Strong migration strategy; minimal blast radius; contract lifecycle management
Operational readiness | Ignores rollout/rollback/observability | Mentions basics | Clear plan for evidence, safety, and production stability
AI-safe delivery | AI as authority; no drift plan | Uses constraints; reviews | AI workforce model with enforceable guardrails and review efficiency
Leadership & enablement | Bottleneck tendencies | Collaborative | Enables teams with standards, tools, and clear review models

Hiring guidance

  • Recommend hire: multiple 2s across boundaries, governance, and enablement
  • Borderline: mostly 1s; limited 2s; no critical 0s
  • No hire: 0s in decision boundaries, governance, or operational readiness

Common red flags

  • Defaults to “service/repository layering” instead of responsibility boundaries
  • Cannot explain how tests function as policy proof
  • Optimizes for implementation convenience over contract stability
  • Vendor/framework leakage accepted as normal
  • No concrete plan for drift prevention, versioning, or rollback
  • Becomes an architecture bottleneck rather than enabling teams