ALD Interview Questionnaire — Level 3
Level 3 (Senior Developer / Technical Lead) candidates succeed in ALD by designing and evolving contracts (DTOs plus role-based interfaces) with discipline, defining behavior through contract tests, and keeping integrations behind ports and adapters.
This template assumes candidates can already implement well; the focus is contract design, boundaries, and leadership judgment.
ALD expectation: the candidate can propose the smallest safe contract delta, justify it, and lead implementation behind it.
Purpose
What this evaluates
- Role-based interface design (SRP/ISP at contract level)
- DTO/domain modeling using ubiquitous language + invariants
- Contract tests as policy/governance (not just verification)
- Ports/adapters boundary discipline and integration strategy
- Change classification (contract vs implementation; breaking vs additive)
- Technical leadership: guiding others and maintaining code quality
- AI usage at the contract level: asking for deltas, not rewrites
What this does not require
- Enterprise-wide standards ownership
- Full platform governance and risk posture decisions
- Deep organizational transformation strategy
Suggested interview format (75–90 minutes)
Recommended flow
- 5 min — intro + scope
- 15 min — contract design & SRP/ISP
- 15 min — tests as policy & determinism
- 15 min — ports/adapters and integration boundaries
- 20–30 min — delta exercise (story → DTO/interface delta + contract tests)
- 10 min — AI usage and leadership judgment
Optional take-home
- Provide a small module with contracts and tests
- Add a new requirement with compliance/audit evidence
- Ask for a written proposal: decision boundaries + delta + test plan
Question bank
Choose 10–16 questions. Strong Level 3 candidates can justify contract boundaries, anticipate change impact, and use tests as an executable definition of policy.
1) Role-based interface design (SRP/ISP)
- Describe “role-based, not layer-based” interfaces. Give an example of a bad layer-based interface and how you’d redesign it.
- You inherit a single interface with 25 methods used by many clients. Walk through how you would split it safely. (Look for: deprecation, adapters, incremental migration.)
- When is it acceptable to keep an interface broader? What signals tell you it’s time to split?
- How do you avoid “contract churn” (too many changes to public surfaces) while still evolving a system?
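To calibrate answers on the interface-splitting question, it can help to have a concrete before/after in mind. A minimal Python sketch, assuming hypothetical names (`OrderService`, `OrderReader`, `OrderCanceller` are illustrative, not from any real codebase):

```python
from typing import Protocol


class Order:
    """Hypothetical domain DTO shared by both roles."""
    def __init__(self, order_id: str, total: float) -> None:
        self.order_id = order_id
        self.total = total


# Layer-based: one broad contract every client must depend on.
class OrderService(Protocol):
    def get_order(self, order_id: str) -> Order: ...
    def cancel_order(self, order_id: str) -> None: ...
    def export_orders_csv(self) -> str: ...


# Role-based: each client depends only on the capability it uses.
class OrderReader(Protocol):
    def get_order(self, order_id: str) -> Order: ...


class OrderCanceller(Protocol):
    def cancel_order(self, order_id: str) -> None: ...


class InMemoryOrders:
    """One implementation can satisfy several roles at once, so the
    split is additive for implementers and non-breaking for clients."""
    def __init__(self) -> None:
        self._orders = {"o-1": Order("o-1", 100.0)}

    def get_order(self, order_id: str) -> Order:
        return self._orders[order_id]

    def cancel_order(self, order_id: str) -> None:
        self._orders.pop(order_id, None)


def report_total(reader: OrderReader, order_id: str) -> float:
    # This client can never be broken by changes to cancellation or export.
    return reader.get_order(order_id).total
```

A strong candidate will describe something like this migration shape unprompted: introduce the narrow roles alongside the broad interface, move clients over incrementally, then deprecate the broad surface.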
2) DTO modeling & invariants (ubiquitous language)
- Give an example of converting primitives into value objects. What invariants would you enforce and where?
- How do you decide whether a new concept needs its own DTO/type versus adding a field to an existing DTO? (Look for: meaning mismatch, context leakage, versioning.)
- Explain how you prevent integration details from leaking into domain DTOs. (Look for: mapping layers, anti-corruption layer, adapter DTOs.)
- If a DTO must be auditable, what fields or evidence do you ensure are present? (Reason codes, evaluated metrics, timestamps, decision metadata.)
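As a reference point for the value-object and auditability questions, here is a minimal sketch; all names (`CreditScore`, `EligibilityDecision`) are hypothetical and the 300–850 range is just an illustrative invariant:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class CreditScore:
    """Value object: the invariant lives with the type, not with every caller."""
    value: int

    def __post_init__(self) -> None:
        if not 300 <= self.value <= 850:
            raise ValueError(f"credit score out of range: {self.value}")


@dataclass(frozen=True)
class EligibilityDecision:
    """Auditable DTO: the status plus the evidence behind it."""
    status: str                    # e.g. "ELIGIBLE" / "INELIGIBLE"
    reason_codes: tuple[str, ...]  # e.g. ("CREDIT_SCORE_BELOW_MIN",)
    evaluated_metrics: dict        # inputs as seen at decision time
    decided_at: datetime           # timestamp for the audit trail


decision = EligibilityDecision(
    status="INELIGIBLE",
    reason_codes=("CREDIT_SCORE_BELOW_MIN",),
    evaluated_metrics={"credit_score": 580},
    decided_at=datetime.now(timezone.utc),
)
```

Frozen dataclasses make the evidence immutable once the decision is emitted, which is the property auditors usually care about.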
3) Contract tests as policy/governance
- In ALD, why are contract tests considered governance artifacts? How do you communicate that to a team?
- How do you structure contract tests so they read like business rules and remain stable over refactors?
- What’s your approach when the business changes the rule? What changes first: tests, DTOs, or implementation? (Look for: tests first, then contract changes as needed.)
- How do you handle non-determinism (time, randomness, external calls) in a contract-test suite?
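For the determinism question, one answer worth listening for is "make time a dependency of the contract." A hedged sketch (names and the 620 threshold are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable


@dataclass(frozen=True)
class Decision:
    status: str
    decided_at: datetime


# The clock is part of the contract, so tests can pin it.
Clock = Callable[[], datetime]


def decide(score: int, clock: Clock) -> Decision:
    status = "ELIGIBLE" if score >= 620 else "INELIGIBLE"
    return Decision(status=status, decided_at=clock())


def test_score_below_min_is_ineligible() -> None:
    # Reads like the business rule it encodes, and is fully deterministic.
    fixed_now = datetime(2024, 1, 1, tzinfo=timezone.utc)
    decision = decide(score=619, clock=lambda: fixed_now)
    assert decision.status == "INELIGIBLE"
    assert decision.decided_at == fixed_now
```

The same injection pattern generalizes to randomness (inject the RNG) and external calls (inject the port), which keeps a contract-test suite stable over refactors.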
4) Ports/adapters & integration boundaries
- Describe a time you replaced or insulated a vendor dependency. What port did you define and how did the adapter translate?
- A developer imports a vendor SDK type into the domain layer. What do you do? (Look for: boundary correction, adapter mapping, policy about edge-only.)
- How do you design ports to be stable while adapters can change frequently?
- What failure modes should adapters handle (timeouts, retries, idempotency), and where do those concerns live?
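A compact way to anchor this section in the interview: ask the candidate to narrate something like the sketch below, where the port stays vendor-free and retries/translation live in the adapter. Everything here is hypothetical (`FakeVendorClient` stands in for a real SDK):

```python
from typing import Optional, Protocol


class CreditScoreProvider(Protocol):
    """Port: stable, domain-facing, vendor-free."""
    def score_for(self, applicant_id: str) -> int: ...


class FakeVendorClient:
    """Stand-in for a vendor SDK; its types never cross the port."""
    def fetch(self, ref: str) -> dict:
        return {"ref": ref, "bureau_score": "701"}


class VendorCreditScoreAdapter:
    """Adapter: translation plus edge concerns (retries, timeouts) live here."""
    def __init__(self, client: FakeVendorClient, attempts: int = 3) -> None:
        self._client = client
        self._attempts = attempts

    def score_for(self, applicant_id: str) -> int:
        last_error: Optional[Exception] = None
        for _ in range(self._attempts):          # simple retry at the edge
            try:
                raw = self._client.fetch(applicant_id)
                return int(raw["bureau_score"])  # vendor shape -> domain type
            except (KeyError, ValueError) as exc:
                last_error = exc
        raise RuntimeError("credit score unavailable") from last_error
```

Note what the domain sees: an `int` from a `CreditScoreProvider`, never the vendor's `dict`. Swapping providers means writing a new adapter, not touching policy code.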
5) Change impact & backwards compatibility
- Walk through how you classify a change as “contract vs implementation.” What review level does each require?
- Give an example of an additive change that looks safe but is actually breaking.
- How do you introduce a breaking change responsibly (versioning, deprecation, migrations)?
- How do you minimize blast radius when multiple teams depend on the same contracts?
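The "additive but actually breaking" question has a classic instance: adding a new enum-like value to a contract. A sketch under assumed names (the `DTI_EXCEEDS_MAX` code is invented for illustration):

```python
# "Additive" producer-side change: a new reason code is introduced.
REASON_CODES_V1 = {"LTV_EXCEEDS_MAX", "CREDIT_SCORE_BELOW_MIN"}
REASON_CODES_V2 = REASON_CODES_V1 | {"DTI_EXCEEDS_MAX"}  # hypothetical addition

# A consumer written against v1. An exhaustive mapping that raised on
# unknown codes would break the moment v2 ships, even though nothing
# was removed from the contract.
MESSAGES = {
    "LTV_EXCEEDS_MAX": "Loan-to-value too high",
    "CREDIT_SCORE_BELOW_MIN": "Credit score too low",
}


def describe(code: str) -> str:
    # Tolerant reader: unknown codes degrade gracefully instead of raising,
    # which is what makes the producer's addition genuinely additive.
    return MESSAGES.get(code, f"Ineligible ({code})")
```

Strong candidates name both sides of the fix: producers document that the value set is open, and consumers are reviewed (or contract-tested) for tolerant handling before the addition ships.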
6) AI usage at Level 3 (deltas, not rewrites)
- What prompt would you use to ask an AI agent to propose a DTO/interface delta from a user story? (Look for: constraints, strict output format, repo search first.)
- AI proposes a broad interface expansion. How do you evaluate it using SRP/ISP?
- When AI-generated code passes tests, what else do you review before approving a PR? (Security, readability, boundary leakage, missing negative tests.)
- How do you prevent “AI drift” (silent behavior changes) over time? (Contract tests, change classification, code review focus on contracts.)
7) Technical leadership signals
- How do you mentor a Level 1 developer to implement behind contracts without inventing requirements?
- You disagree with another senior dev about a contract change. How do you resolve it? (Look for: tests as truth, smallest viable contract, compatibility.)
- What does “review contracts, not implementations” mean in practice? What do you actually look at first?
8) Pattern literacy (intentional GoF)
- When do you use Strategy vs conditional logic for business rules?
- Give an example of a Decorator or middleware-style pattern for cross-cutting concerns in a way that preserves ALD boundaries.
- Describe a design where an Orchestrator coordinates policies and adapters without becoming a “god class.”
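For the Strategy-vs-conditionals question, the shape you want the candidate to reach for looks roughly like this (a minimal sketch with invented pricing names, not a prescribed design):

```python
from typing import Protocol


class PricingStrategy(Protocol):
    """Role interface: one rule variant per implementation."""
    def price(self, base: float) -> float: ...


class StandardPricing:
    def price(self, base: float) -> float:
        return base


class PromoPricing:
    def __init__(self, discount: float) -> None:
        self._discount = discount

    def price(self, base: float) -> float:
        return base * (1 - self._discount)


def checkout_total(base: float, strategy: PricingStrategy) -> float:
    # The orchestrating code depends on the role, not on rule variants,
    # so a new rule arrives as a new strategy, not a new branch.
    return strategy.price(base)
```

The litmus test worth probing: Strategy earns its keep when variants multiply or need independent testing/deployment; a stable two-way `if` does not justify the indirection.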
Delta exercise (story → decision boundaries → contracts)
Use this as a live exercise (20–30 minutes). The goal is not to code; the goal is to produce a crisp, reviewable contract proposal.
Scenario
- If LTV > 0.80 → ineligible with `LTV_EXCEEDS_MAX`
- If credit score < 620 → ineligible with `CREDIT_SCORE_BELOW_MIN`
- Otherwise eligible with `ELIGIBLE`
- The decision must include status, reason codes, metrics, and a timestamp
- Credit score comes from an external provider
Candidate deliverable (what you ask for)
- Decision boundaries (policy vs orchestration vs integration)
- DTO delta: existing changed + new DTOs/types + invariants
- Interface delta: new roles/ports + responsibilities
- Contract test plan: tests by role (names + intent)
- Risks: breaking changes + migration plan
What a “good” answer typically includes (high level)
- Roles: `EligibilityPolicy`, `LoanToValueCalculator`, `CreditScoreProvider`, plus a use case/orchestrator role
- DTOs: `EligibilityDecision`, `EligibilityMetrics`, `ReasonCode`, and intentful value objects (e.g., `Money`)
- Contract tests that encode thresholds and evidence requirements
- Adapter boundary plan for the external credit score provider
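For interviewer calibration only (the candidate is not asked to code), a minimal sketch of the policy role the proposal should imply. Names follow the exercise; the structure and the `ELIGIBLE` reason handling are one reasonable reading, not the only acceptable one:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Protocol


@dataclass(frozen=True)
class EligibilityMetrics:
    ltv: float
    credit_score: int


@dataclass(frozen=True)
class EligibilityDecision:
    status: str
    reason_codes: tuple[str, ...]
    metrics: EligibilityMetrics   # evidence: inputs as evaluated
    decided_at: datetime          # evidence: when the decision was made


class CreditScoreProvider(Protocol):
    """Port for the external provider; the adapter lives at the edge."""
    def score_for(self, applicant_id: str) -> int: ...


class EligibilityPolicy:
    MAX_LTV = 0.80
    MIN_CREDIT_SCORE = 620

    def decide(self, metrics: EligibilityMetrics, now: datetime) -> EligibilityDecision:
        reasons: list[str] = []
        if metrics.ltv > self.MAX_LTV:
            reasons.append("LTV_EXCEEDS_MAX")
        if metrics.credit_score < self.MIN_CREDIT_SCORE:
            reasons.append("CREDIT_SCORE_BELOW_MIN")
        status = "INELIGIBLE" if reasons else "ELIGIBLE"
        if not reasons:
            reasons = ["ELIGIBLE"]
        return EligibilityDecision(status, tuple(reasons), metrics, now)
```

Note that the policy takes `now` as an argument and the credit score arrives through the port, so the contract tests the candidate proposes can be fully deterministic.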
Scoring rubric (example)
Level 3 should demonstrate strong contract design judgment, not just implementation skill.
| Category | 0 — Concern | 1 — Meets | 2 — Strong |
|---|---|---|---|
| Role-based interface design (SRP/ISP) | Layer-based or overly broad contracts | Reasonable roles; some boundary clarity | Clean responsibilities; minimal surface; strong justification |
| DTO modeling & invariants | Primitive soup; unclear meaning | Mostly meaningful DTOs | Strong ubiquitous language; invariants explicit; leakage prevented |
| Contract tests as policy | Tests treated as afterthought | Understands tests-first | Tests define behavior + evidence; stable over refactors |
| Ports/adapters boundary discipline | SDKs leak into domain; weak separation | Understands boundaries | Can design stable ports and clean adapters; handles failure modes |
| Change impact & compatibility | No awareness of breaking changes | Some impact awareness | Clear migration strategy; minimal blast radius |
| AI usage & review judgment | AI as authority; contract churn | Uses constraints; verifies | Uses AI to propose deltas; strengthens tests; prevents drift |
| Leadership & communication | Unclear or rigid; can’t mentor | Works well with others | Mentors effectively; resolves disagreements via contracts/tests |
Hiring guidance
- Recommend hire: multiple 2s across contract design, tests, boundaries, change impact
- Borderline: mostly 1s with limited 2s; no critical 0s
- No hire: 0s in contract design or tests-as-policy; weak boundary discipline
Common red flags
- Defaulting to repository/service layers rather than roles
- Cannot articulate why tests define policy
- Proposes large contract changes as the first solution
- Allows vendors/frameworks to leak into core logic
- Uses AI to “rewrite everything” instead of proposing deltas