ALD Interview Questionnaire — Level 3

Level 3 (Senior Developer / Technical Lead) candidates succeed in ALD by designing and evolving contracts (DTOs + role-based interfaces) with discipline, defining behavior via contract tests, and keeping integrations behind ports/adapters.


This template assumes candidates can implement well; the focus is on contract design, boundaries, and leadership judgment.

Level 3 focus
Role-based contracts · DTO invariants · Contract tests · Ports & adapters · Change impact · Mentorship · AI deltas

ALD expectation: candidate can propose the smallest safe contract delta, justify it, and lead implementation behind it.

Purpose

What this evaluates

  • Role-based interface design (SRP/ISP at contract level)
  • DTO/domain modeling using ubiquitous language + invariants
  • Contract tests as policy/governance (not just verification)
  • Ports/adapters boundary discipline and integration strategy
  • Change classification (contract vs implementation; breaking vs additive)
  • Technical leadership: guiding others and maintaining code quality
  • AI usage at the contract level: asking for deltas, not rewrites

What this does not require

  • Enterprise-wide standards ownership
  • Full platform governance and risk posture decisions
  • Deep organizational transformation strategy

ALD framing: Level 3 can design contracts for a bounded area and lead delivery safely within those guardrails.

Suggested interview format (75–90 minutes)

Recommended flow

  1. 5 min — intro + scope
  2. 15 min — contract design & SRP/ISP
  3. 15 min — tests as policy & determinism
  4. 15 min — ports/adapters and integration boundaries
  5. 20–30 min — delta exercise (story → DTO/interface delta + contract tests)
  6. 10 min — AI usage and leadership judgment

Optional take-home

  • Provide a small module with contracts and tests
  • Add a new requirement with compliance/audit evidence
  • Ask for a written proposal: decision boundaries + delta + test plan

Best ALD take-home: “Propose the smallest viable contract changes and the contract tests that prove them.”

Question bank

Choose 10–16 questions. Strong Level 3 candidates can justify contract boundaries, anticipate change impact, and use tests as an executable definition of policy.

1) Role-based interface design (SRP/ISP)

  1. Describe “role-based, not layer-based” interfaces. Give an example of a bad layer-based interface and how you’d redesign it.
  2. You inherit a single interface with 25 methods used by many clients. Walk through how you would split it safely. (Look for: deprecation, adapters, incremental migration.)
  3. When is it acceptable to keep an interface broader? What signals tell you it’s time to split?
  4. How do you avoid “contract churn” (too many changes to public surfaces) while still evolving a system?
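A good answer to question 2 can be sketched in code: the broad layer-based interface, its role-based replacement, and a deprecated facade that lets clients migrate incrementally. A minimal TypeScript sketch; all names (`LoanService`, `EligibilityPolicy`, etc.) are illustrative, not from any real codebase:

```typescript
// Minimal supporting types (illustrative).
type LoanApplication = { ltv: number; creditScore: number };
type Decision = { eligible: boolean; reasons: string[] };

// Layer-based: one broad surface serving unrelated clients.
interface LoanService {
  evaluateEligibility(app: LoanApplication): Decision;
  saveApplication(app: LoanApplication): void;
  sendDecisionEmail(decision: Decision): void;
}

// Role-based split (ISP): each client depends only on the role it uses.
interface EligibilityPolicy {
  evaluate(app: LoanApplication): Decision;
}
interface ApplicationStore {
  save(app: LoanApplication): void;
}
interface DecisionNotifier {
  notify(decision: Decision): void;
}

// Keep the old interface alive as a deprecated facade that delegates to
// the new roles, so clients migrate incrementally instead of all at once.
class LoanServiceFacade implements LoanService {
  constructor(
    private policy: EligibilityPolicy,
    private store: ApplicationStore,
    private notifier: DecisionNotifier,
  ) {}
  evaluateEligibility(app: LoanApplication): Decision {
    return this.policy.evaluate(app);
  }
  saveApplication(app: LoanApplication): void {
    this.store.save(app);
  }
  sendDecisionEmail(decision: Decision): void {
    this.notifier.notify(decision);
  }
}
```

Candidates who reach for a facade like this are showing the deprecation-plus-adapter migration path the question is probing for.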

2) DTO modeling & invariants (ubiquitous language)

  1. Give an example of converting primitives into value objects. What invariants would you enforce and where?
  2. How do you decide whether a new concept needs its own DTO/type versus adding a field to an existing DTO? (Look for: meaning mismatch, context leakage, versioning.)
  3. Explain how you prevent integration details from leaking into domain DTOs. (Look for: mapping layers, anti-corruption layer, adapter DTOs.)
  4. If a DTO must be auditable, what fields or evidence do you ensure are present? (Reason codes, evaluated metrics, timestamps, decision metadata.)
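Question 1 above can be made concrete with a small value-object sketch: a `LoanToValue` type that replaces a bare number and enforces its invariants at construction time. The name and rules here are illustrative assumptions, not part of the exercise:

```typescript
// Value object: replaces a primitive ratio and enforces invariants
// exactly once, at construction (illustrative sketch).
class LoanToValue {
  private constructor(readonly ratio: number) {}

  static of(loanAmount: number, propertyValue: number): LoanToValue {
    // Invariants live here, not scattered across callers.
    if (loanAmount < 0) throw new RangeError("loan amount must be non-negative");
    if (propertyValue <= 0) throw new RangeError("property value must be positive");
    return new LoanToValue(loanAmount / propertyValue);
  }

  // Intent-revealing behavior instead of raw comparisons in callers.
  exceeds(max: number): boolean {
    return this.ratio > max;
  }
}
```

Strong candidates will also say where this lives (domain, not the adapter layer) and why callers can now trust every `LoanToValue` instance without re-validating.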

3) Contract tests as policy/governance

  1. In ALD, why are contract tests considered governance artifacts? How do you communicate that to a team?
  2. How do you structure contract tests so they read like business rules and remain stable over refactors?
  3. What’s your approach when the business changes the rule? What changes first: tests, DTOs, or implementation? (Look for: tests first, then contract changes as needed.)
  4. How do you handle non-determinism (time, randomness, external calls) in a contract-test suite?
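As a discussion aid for questions 2 and 4, a minimal sketch of a contract test that reads like the business rule and stays deterministic by injecting time through a `Clock` port. All names and thresholds are hypothetical:

```typescript
// A Clock port keeps time out of the policy; tests inject a fixed clock.
interface Clock {
  now(): Date;
}

type Decision = { status: "ELIGIBLE" | "INELIGIBLE"; decidedAt: Date };

// Policy under test depends on the Clock role, never on Date.now().
function decide(ltv: number, clock: Clock): Decision {
  return {
    status: ltv > 0.8 ? "INELIGIBLE" : "ELIGIBLE",
    decidedAt: clock.now(),
  };
}

// The test name is the business rule; no framework needed for the sketch.
function test_ltv_above_80_percent_is_ineligible(): void {
  const fixedClock: Clock = { now: () => new Date("2024-01-01T00:00:00Z") };
  const decision = decide(0.81, fixedClock);
  if (decision.status !== "INELIGIBLE") throw new Error("rule violated");
  if (decision.decidedAt.toISOString() !== "2024-01-01T00:00:00.000Z")
    throw new Error("timestamp must come from the injected clock");
}
```

The same move generalizes: randomness behind a seeded generator role, external calls behind ports with fakes, so the contract suite stays deterministic across refactors.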

4) Ports/adapters & integration boundaries

  1. Describe a time you replaced or insulated a vendor dependency. What port did you define and how did the adapter translate?
  2. A developer imports a vendor SDK type into the domain layer. What do you do? (Look for: boundary correction, adapter mapping, policy about edge-only.)
  3. How do you design ports to be stable while adapters can change frequently?
  4. What failure modes should adapters handle (timeouts, retries, idempotency), and where do those concerns live?
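Questions 3–4 can be anchored with a sketch: a domain-owned port, a stand-in vendor client, and an adapter that owns both translation and a failure mode (timeout). `VendorClient` and `bureau_score` are invented for illustration; no real SDK is implied:

```typescript
// Port: defined by the domain, in domain terms.
interface CreditScoreProvider {
  scoreFor(applicantId: string): Promise<number>;
}

// Hypothetical vendor SDK shape; its types must never cross the port.
type VendorReport = { bureau_score: string };
interface VendorClient {
  fetchReport(id: string): Promise<VendorReport>;
}

// Adapter: owns translation plus timeout handling, at the edge only.
class VendorCreditScoreAdapter implements CreditScoreProvider {
  constructor(private client: VendorClient, private timeoutMs = 2000) {}

  async scoreFor(applicantId: string): Promise<number> {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error("credit provider timeout")), this.timeoutMs);
    });
    try {
      const report = await Promise.race([
        this.client.fetchReport(applicantId), timeout]);
      const score = Number(report.bureau_score); // vendor string -> domain number
      if (!Number.isFinite(score)) throw new Error("malformed vendor report");
      return score;
    } finally {
      if (timer !== undefined) clearTimeout(timer); // no stray rejection
    }
  }
}
```

Retries and idempotency keys would live in the same adapter (or an edge decorator around it), never in the policy that consumes `CreditScoreProvider`.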

5) Change impact & backwards compatibility

  1. Walk through how you classify a change as “contract vs implementation.” What review level does each require?
  2. Give an example of an additive change that looks safe but is actually breaking.
  3. How do you introduce a breaking change responsibly (versioning, deprecation, migrations)?
  4. How do you minimize blast radius when multiple teams depend on the same contracts?
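For question 2, a classic instance is worth having on hand: adding a value to a shared status type is additive for producers but breaking for consumers that match exhaustively. A TypeScript sketch with illustrative types:

```typescript
// v1 of a shared contract type.
type Status = "ELIGIBLE" | "INELIGIBLE";
// v2 would add "REFERRED" — additive on the producer side…

function describe(s: Status): string {
  switch (s) {
    case "ELIGIBLE":
      return "approved path";
    case "INELIGIBLE":
      return "rejection path";
    default: {
      // Exhaustiveness guard: under v2 this stops compiling, surfacing
      // the consumer the "additive" change actually broke. Without the
      // guard, "REFERRED" would fall through silently at runtime.
      const unreachable: never = s;
      throw new Error(`unhandled status: ${unreachable}`);
    }
  }
}
```

The leadership point is the same in any language: a change is breaking relative to how consumers use the contract, not how the producer perceives it.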

6) AI usage at Level 3 (deltas, not rewrites)

  1. What prompt would you use to ask an AI agent to propose a DTO/interface delta from a user story? (Look for: constraints, strict output format, repo search first.)
  2. AI proposes a broad interface expansion. How do you evaluate it using SRP/ISP?
  3. When AI-generated code passes tests, what else do you review before approving a PR? (Security, readability, boundary leakage, missing negative tests.)
  4. How do you prevent “AI drift” (silent behavior changes) over time? (Contract tests, change classification, code review focus on contracts.)

7) Technical leadership signals

  1. How do you mentor a Level 1 developer to implement behind contracts without inventing requirements?
  2. You disagree with another senior dev about a contract change. How do you resolve it? (Look for: tests as truth, smallest viable contract, compatibility.)
  3. What does “review contracts, not implementations” mean in practice? What do you actually look at first?

8) Pattern literacy (intentional GoF)

  1. When do you use Strategy vs conditional logic for business rules?
  2. Give an example of a Decorator or middleware-style pattern for cross-cutting concerns in a way that preserves ALD boundaries.
  3. Describe a design where an Orchestrator coordinates policies and adapters without becoming a “god class.”
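For question 1, one way to frame the Strategy-vs-conditional trade-off is a sketch where each rule is a strategy behind a single role, so new rules are additive, local changes rather than edits to a growing conditional. All names are illustrative:

```typescript
type Application = { ltv: number; creditScore: number };
type RuleResult = { ok: boolean; reasonCode?: string };

// The Strategy role: one rule, one responsibility.
interface EligibilityRule {
  check(app: Application): RuleResult;
}

const ltvRule: EligibilityRule = {
  check: (a) =>
    a.ltv > 0.8 ? { ok: false, reasonCode: "LTV_EXCEEDS_MAX" } : { ok: true },
};

const creditRule: EligibilityRule = {
  check: (a) =>
    a.creditScore < 620
      ? { ok: false, reasonCode: "CREDIT_SCORE_BELOW_MIN" }
      : { ok: true },
};

// The policy composes strategies; adding a rule never touches this code.
function evaluate(app: Application, rules: EligibilityRule[]): string[] {
  return rules
    .map((rule) => rule.check(app))
    .filter((result) => !result.ok)
    .map((result) => result.reasonCode!);
}
```

Strong candidates will also note when this is overkill: two stable branches do not justify the indirection; Strategy earns its keep when rules multiply or vary by context.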

Strong Level 3 signal: candidate can articulate a minimal contract delta and a contract-test plan that proves intent—before implementation details.

Delta exercise (story → decision boundaries → contracts)

Use this as a live exercise (20–30 minutes). The goal is not to code; the goal is to produce a crisp, reviewable contract proposal.

Scenario

User story: As an underwriter, I want an eligibility decision that includes reason codes and evaluated metrics so decisions are auditable.
  • If LTV > 0.80 → ineligible with LTV_EXCEEDS_MAX
  • If credit score < 620 → ineligible with CREDIT_SCORE_BELOW_MIN
  • Otherwise eligible with ELIGIBLE
  • Decision must include status, reason codes, metrics, timestamp
  • Credit score comes from an external provider

Candidate deliverable (what you ask for)

  1. Decision boundaries (policy vs orchestration vs integration)
  2. DTO delta: existing changed + new DTOs/types + invariants
  3. Interface delta: new roles/ports + responsibilities
  4. Contract test plan: tests by role (names + intent)
  5. Risks: breaking changes + migration plan

What a “good” answer typically includes (high level)

  • Roles: EligibilityPolicy, LoanToValueCalculator, CreditScoreProvider, plus a use case/orchestrator role
  • DTOs: EligibilityDecision, EligibilityMetrics, ReasonCode, and intentful value objects (e.g., Money)
  • Contract tests that encode thresholds and evidence requirements
  • Adapter boundary plan for the external credit score provider
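One possible shape of such an answer, sketched in TypeScript purely for interviewer reference. The names mirror the list above, but the details are assumptions, not a model answer the candidate must match:

```typescript
// DTO delta: decision with audit evidence baked into the contract.
type ReasonCode = "ELIGIBLE" | "LTV_EXCEEDS_MAX" | "CREDIT_SCORE_BELOW_MIN";
type EligibilityMetrics = { ltv: number; creditScore: number };
type EligibilityDecision = {
  status: "ELIGIBLE" | "INELIGIBLE";
  reasonCodes: ReasonCode[];      // evidence: why
  metrics: EligibilityMetrics;    // evidence: what was evaluated
  decidedAt: string;              // evidence: when (ISO timestamp)
};

// Interface delta: one role per responsibility.
interface LoanToValueCalculator {
  ltv(loanAmount: number, propertyValue: number): number;
}
interface CreditScoreProvider {                    // port to external provider
  scoreFor(applicantId: string): Promise<number>;
}
interface EligibilityPolicy {
  decide(metrics: EligibilityMetrics, now: Date): EligibilityDecision;
}

// Minimal policy encoding the story's thresholds (pure; time injected).
const policy: EligibilityPolicy = {
  decide(metrics, now) {
    const reasonCodes: ReasonCode[] = [];
    if (metrics.ltv > 0.8) reasonCodes.push("LTV_EXCEEDS_MAX");
    if (metrics.creditScore < 620) reasonCodes.push("CREDIT_SCORE_BELOW_MIN");
    const status = reasonCodes.length ? "INELIGIBLE" : "ELIGIBLE";
    if (status === "ELIGIBLE") reasonCodes.push("ELIGIBLE");
    return { status, reasonCodes, metrics, decidedAt: now.toISOString() };
  },
};
```

An orchestrator role would fetch the score through `CreditScoreProvider`, compute metrics via `LoanToValueCalculator`, and call the pure policy — keeping the integration edge out of the decision logic.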

Evaluator tip: Listen for “smallest viable contract” reasoning and explicit treatment of tests as policy.

Scoring rubric (example)

Level 3 should demonstrate strong contract design judgment, not just implementation skill.

| Category | 0 — Concern | 1 — Meets | 2 — Strong |
| --- | --- | --- | --- |
| Role-based interface design (SRP/ISP) | Layer-based or overly broad contracts | Reasonable roles; some boundary clarity | Clean responsibilities; minimal surface; strong justification |
| DTO modeling & invariants | Primitive soup; unclear meaning | Mostly meaningful DTOs | Strong ubiquitous language; invariants explicit; leakage prevented |
| Contract tests as policy | Tests treated as afterthought | Understands tests-first | Tests define behavior + evidence; stable over refactors |
| Ports/adapters boundary discipline | SDKs leak into domain; weak separation | Understands boundaries | Can design stable ports and clean adapters; handles failure modes |
| Change impact & compatibility | No awareness of breaking changes | Some impact awareness | Clear migration strategy; minimal blast radius |
| AI usage & review judgment | AI as authority; contract churn | Uses constraints; verifies | Uses AI to propose deltas; strengthens tests; prevents drift |
| Leadership & communication | Unclear or rigid; can’t mentor | Works well with others | Mentors effectively; resolves disagreements via contracts/tests |

Hiring guidance

  • Recommend hire: multiple 2s across contract design, tests, boundaries, change impact
  • Borderline: mostly 1s with limited 2s; no critical 0s
  • No hire: 0s in contract design or tests-as-policy; weak boundary discipline

Common red flags

  • Defaulting to repository/service layers rather than roles
  • Cannot articulate why tests define policy
  • Proposes large contract changes as the first solution
  • Allows vendors/frameworks to leak into core logic
  • Uses AI to “rewrite everything” instead of proposing deltas