ALD Interview Questionnaire — Level 2

Level 2 (Contract-Aware Developer) candidates succeed in ALD by understanding contract intent, refactoring safely under tests, identifying SRP/ISP issues, and using AI to accelerate work without changing policy by accident.

Contract-Aware Developer

This is a template—swap examples to match your domain (finance, data platforms, integrations, etc.).

Level 2 focus
Contract intent · SRP + ISP awareness · DTO meaning · Refactoring safety · Deterministic tests · AI judgment

ALD expectation: candidate can read contracts, detect design smells, and propose safe changes without “inventing” requirements.

Purpose

What this evaluates

  • Contract literacy: interfaces/DTOs as behavioral promises
  • Ability to spot SRP/ISP violations and coupling
  • Refactoring with confidence (tests as guardrails)
  • Writing and improving tests (including negative cases)
  • Understanding boundaries (what belongs at the edge vs core)
  • AI usage discipline (constraints, verification, safe deltas)

What this does not require

  • Full system architecture design
  • Owning business policy decisions independently
  • Enterprise governance or standards ownership
  • Deep DDD bounded-context strategy (helpful but not required)

ALD framing: Level 2 understands “what must be true” from contracts/tests and can evolve code safely.

Suggested interview format (60 minutes)

Recommended flow

  1. 5 min — intro + candidate background
  2. 10 min — contracts, SRP/ISP, and testing fundamentals
  3. 20 min — code reading + refactor discussion (behavior preserved)
  4. 15 min — “delta proposal” scenario (changed vs new)
  5. 10 min — AI usage and review judgment

Optional exercise

  • Provide a small module with an interface + DTOs + tests
  • Introduce a new acceptance criterion
  • Ask the candidate to propose: existing changed vs new

Best ALD exercise: “Propose the smallest contract-surface delta that satisfies this new behavior.”

Question bank

Pick 10–14 questions depending on time. Strong Level 2 candidates demonstrate consistent “contract-first” thinking and safe change habits.

1) Contract literacy (interfaces + DTOs)

  1. In ALD, what makes an interface a “contract”? What should and should not change casually? (Look for: downstream impact, stability, versioning mindset.)
  2. How do you tell whether a failing test indicates a bug vs a spec/policy change?
  3. If a DTO adds a new required field, what risks and migrations do you consider? (Breaking change, defaults, versioning, adapters.)
  4. Describe a situation where you would create a new DTO instead of reusing an existing one. (Look for: context leakage, meaning mismatch, bounded contexts.)
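For question 3 above, a minimal sketch of the additive-versus-breaking distinction (all type and function names here are illustrative, not from any specific codebase): the new field is introduced as optional, and a boundary adapter supplies a default so core code can treat it as present.

```typescript
// Hypothetical DTO evolution. v1 payloads keep working because the new
// field is optional rather than required.
interface DecisionV1 {
  decision: "Approved" | "Denied";
}

// Additive change: optional field, no break for existing producers.
interface DecisionV2 extends DecisionV1 {
  reasonCodes?: string[];
}

// Adapter at the edge upgrades old payloads with a safe default, so the
// core can rely on reasonCodes always being an array.
function toV2(dto: DecisionV1 | DecisionV2): Required<DecisionV2> {
  const codes = "reasonCodes" in dto && dto.reasonCodes ? dto.reasonCodes : [];
  return { decision: dto.decision, reasonCodes: codes };
}
```

Making the field *required* instead would be the breaking path: every existing producer would need a coordinated migration or a versioned endpoint.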

2) SRP/ISP awareness (role vs layer thinking)

  1. What’s wrong with a “God interface”? Give an example and how you’d split it. (Look for: clients depend on methods they don’t use, mixed reasons to change.)
  2. You see a repository interface exposing full CRUD for every consumer. What concerns do you raise? (Look for: ISP violation, leaky abstraction, overexposure.)
  3. What does “role-based, not layer-based” mean in practical terms? (Look for: responsibility naming, capabilities, not “service/repo”.)
  4. When would you keep an interface broad, and when would you split it? (Look for: cohesive clients, change reasons, stability.)
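A sketch of the kind of split questions 1–2 above are probing for (interface and class names are hypothetical): the broad repository becomes role-based interfaces, and each client declares only the role it uses.

```typescript
// Role-based interfaces instead of one "God" repository.
interface OrderReader {
  findById(id: string): { id: string; total: number } | undefined;
}

interface OrderWriter {
  save(order: { id: string; total: number }): void;
}

// One class may still implement both roles; ISP is about what *clients*
// depend on, not about how many classes exist.
class InMemoryOrders implements OrderReader, OrderWriter {
  private store = new Map<string, { id: string; total: number }>();
  findById(id: string) {
    return this.store.get(id);
  }
  save(order: { id: string; total: number }) {
    this.store.set(order.id, order);
  }
}

// A reporting component takes only the reader role: it cannot mutate
// orders, and a change to the write side is no reason for it to change.
function totalFor(reader: OrderReader, id: string): number {
  return reader.findById(id)?.total ?? 0;
}
```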

3) Tests beyond basics (determinism & coverage)

  1. How do you design tests that are stable during refactoring? (Behavior vs implementation; avoid over-mocking.)
  2. What negative/edge tests do you add when a new rule is introduced? (Boundary values, invalid inputs, missing permissions, etc.)
  3. How would you structure tests when multiple rules contribute reason codes? (Look for: clear scenarios, deterministic ordering, explicit expectations.)
  4. If tests rely on time, what patterns do you use to make them deterministic? (Clock abstraction, fixed time provider, injectable time.)
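The clock-abstraction pattern from question 4 can be sketched like this (a minimal illustration, not a prescribed API): time-dependent logic takes the clock as a dependency, so a test pins it to a fixed instant instead of racing `Date.now()`.

```typescript
// Injectable clock: production wires in the system clock, tests wire in a
// fixed one.
type Clock = () => Date;

const systemClock: Clock = () => new Date();

// Time-dependent policy receives the clock instead of reading global time.
function isExpired(expiresAt: Date, clock: Clock): boolean {
  return clock().getTime() > expiresAt.getTime();
}

// In a test, a frozen clock makes the outcome deterministic.
const fixedClock: Clock = () => new Date("2024-01-01T00:00:00Z");
```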

4) Refactoring & maintainability

  1. A teammate wants to refactor a large module. What rules do you impose to keep behavior safe? (Small steps, keep tests green, no contract changes without review.)
  2. You spot duplicated policy logic spread across classes. What’s your refactor strategy? (Extract role/policy, centralize decision logic, add/strengthen tests.)
  3. How do you balance “cleaner design” vs “shipping value” when contracts are involved?
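For question 2 above, a tiny before/after sketch of centralizing duplicated policy logic (the rule and names are invented for illustration): callers stop re-implementing the comparison inline and share one extracted policy function, which then gets its own focused tests.

```typescript
// Extracted policy: the single source of truth for the limit rule.
interface LimitPolicy {
  maxAmount: number;
}

function exceedsLimit(amount: number, policy: LimitPolicy): boolean {
  return amount > policy.maxAmount;
}

// Before the refactor, each caller inlined its own `amount > 1000` check;
// after, they all delegate, so a rule change happens in exactly one place.
function canApproveRefund(amount: number, policy: LimitPolicy): boolean {
  return !exceedsLimit(amount, policy);
}
```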

5) Boundaries & integrations (edge vs core)

  1. Why does ALD want vendor/framework types at the edges? What problems does it prevent?
  2. If a core module starts importing SDK types, how do you fix it? (Introduce port/interface, adapter translation, mapping DTOs.)
  3. What’s a good sign you need an adapter or translation layer? (Schema mismatch, vendor churn, unstable dependency, transport concerns.)
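The fix described in question 2 can be sketched as a port plus adapter (the vendor record shape and all names are hypothetical stand-ins for a real SDK): the core owns the interface and DTO, and only the edge adapter knows the vendor's field names and units.

```typescript
// Stand-in for a vendor SDK type. Core code must never import this.
interface VendorPaymentRecord {
  txn_id: string;
  amt_cents: number;
  st: "OK" | "ERR";
}

// Core-owned DTO and port: stable names, domain units.
interface Payment {
  id: string;
  amount: number; // major currency units, not cents
  succeeded: boolean;
}

interface PaymentLookup {
  find(id: string): Payment | undefined;
}

// Edge adapter: translates vendor records into the core DTO, absorbing
// vendor churn so the core contract stays stable.
class VendorPaymentAdapter implements PaymentLookup {
  constructor(private fetchRecord: (id: string) => VendorPaymentRecord | undefined) {}

  find(id: string): Payment | undefined {
    const rec = this.fetchRecord(id);
    if (!rec) return undefined;
    return { id: rec.txn_id, amount: rec.amt_cents / 100, succeeded: rec.st === "OK" };
  }
}
```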

6) AI usage (ALD-safe deltas)

  1. Describe how you would use AI to implement a feature while ensuring you don’t change policy. (Constraints, “do not change interfaces,” make tests pass, review.)
  2. AI proposes expanding an interface to simplify implementation. How do you respond? (Treat as contract change; evaluate ISP/SRP; propose alternative.)
  3. What do you check when AI-generated code passes tests but still “feels wrong”? (Readability, hidden assumptions, missing tests, security/perf concerns.)
  4. Give an example of a structured prompt that asks for a safe delta (Changed + New) rather than a rewrite.
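One shape of answer to question 4 above, written here as an illustrative template (the interface and DTO names in it are hypothetical):

```text
Context: IDecisionService and its DTOs are frozen contracts.
Task: add reason-code support for audit.
Constraints:
- Do not modify IDecisionService or any existing DTO field.
- All existing tests must pass unchanged.
- Respond with two lists: CHANGED (implementation only) and NEW
  (new DTOs, roles, and tests).
Output: the delta only, not a rewritten module.
```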

Discussion scenario: “delta proposal” prompt for candidates

Use this as a live exercise. Give the candidate an existing interface and DTOs, then add a new requirement:

Scenario: A decision response must now include reason codes and evaluated metrics for audit. The current API returns only Approved/Denied.
  1. What DTO changes are required? What new DTOs would you introduce?
  2. Would you change an existing interface or add a new role? Why?
  3. What contract tests define the new behavior?
  4. What makes the change breaking vs additive?
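One possible additive answer to the scenario, sketched for the interviewer's reference (all names are illustrative, and candidates may reasonably propose a different shape): audit data rides on a new optional structure, so existing consumers of Approved/Denied are untouched, and a contract test pins the new behavior.

```typescript
type Decision = "Approved" | "Denied";

interface DecisionResponse {
  decision: Decision;
  // NEW and optional: additive, so existing consumers keep working.
  audit?: {
    reasonCodes: string[];
    evaluatedMetrics: Record<string, number>;
  };
}

// Toy implementation so the contract test below has something to pin.
// The single limit rule stands in for real policy logic.
function decide(amount: number, limit: number): DecisionResponse {
  const reasonCodes = amount > limit ? ["LIMIT_EXCEEDED"] : [];
  return {
    decision: reasonCodes.length > 0 ? "Denied" : "Approved",
    audit: { reasonCodes, evaluatedMetrics: { amount, limit } },
  };
}
```

Making `audit` required instead, or renaming `decision`, would be the breaking variants: both force every existing consumer to change at once.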

Strong Level 2 signal: candidate consistently separates “policy/contract” from “implementation,” and proposes small, safe deltas.

Scoring rubric (example)

Use a 0–2 scale per category. Level 2 should score higher on contract literacy and safe refactoring than Level 1.

Contract literacy
  0 — Concern: Treats interfaces/DTOs as casual code
  1 — Meets: Understands stability and impact
  2 — Strong: Proactively protects contracts; thinks in migrations

SRP/ISP awareness
  0 — Concern: Accepts broad interfaces; unclear boundaries
  1 — Meets: Recognizes common violations
  2 — Strong: Suggests clean splits and responsibility-based roles

Testing maturity
  0 — Concern: Brittle tests; little negative coverage
  1 — Meets: Solid tests; avoids most flakiness
  2 — Strong: Deterministic, behavioral tests with strong edge coverage

Refactoring safety
  0 — Concern: Big risky edits; weak guardrails
  1 — Meets: Uses small steps; keeps tests green
  2 — Strong: Systematic refactor plans; strengthens tests first

Boundary discipline
  0 — Concern: Leaky SDK/framework types in core logic
  1 — Meets: Understands edge/core separation
  2 — Strong: Can design ports/adapters and translations cleanly

AI judgment
  0 — Concern: Trusts AI output; changes contracts casually
  1 — Meets: Uses constraints; verifies output
  2 — Strong: Uses AI to propose deltas; reviews intent and adds tests

Hiring guidance

  • Recommend hire: mostly 1s with multiple 2s in contract literacy, testing, refactoring
  • Borderline: 1s across the board, few 2s, no repeated 0s
  • No hire: 0s in contract literacy or refactoring safety; weak test discipline

Common red flags

  • Proposes “just add methods” without SRP/ISP reasoning
  • Uses tests as an afterthought instead of a guardrail
  • Over-mocks or can’t explain deterministic testing
  • Allows SDK/framework types into core domain “because it’s easier”
  • Treats AI as authoritative instead of draft output