ALD Interview Questionnaire — Level 2
Purpose
Level 2 (Contract-Aware Developer) candidates succeed in ALD by understanding contract intent, refactoring safely under tests, identifying SRP/ISP issues, and using AI to accelerate work without accidentally changing policy.
This is a template—swap examples to match your domain (finance, data platforms, integrations, etc.).
ALD expectation: the candidate can read contracts, detect design smells, and propose safe changes without “inventing” requirements.
What this evaluates
- Contract literacy: interfaces/DTOs as behavioral promises
- Ability to spot SRP/ISP violations and coupling
- Refactoring with confidence (tests as guardrails)
- Writing and improving tests (including negative cases)
- Understanding boundaries (what belongs at the edge vs core)
- AI usage discipline (constraints, verification, safe deltas)
What this does not require
- Full system architecture design
- Owning business policy decisions independently
- Enterprise governance or standards ownership
- Deep DDD bounded-context strategy (helpful but not required)
Suggested interview format (60 minutes)
Recommended flow
- 5 min — intro + candidate background
- 10 min — contracts, SRP/ISP, and testing fundamentals
- 20 min — code reading + refactor discussion (behavior preserved)
- 15 min — “delta proposal” scenario (changed vs new)
- 10 min — AI usage and review judgment
Optional exercise
- Provide a small module with an interface + DTOs + tests
- Introduce a new acceptance criterion
- Ask the candidate to propose a delta: what existing code changes vs what is added as new
Question bank
Pick 10–14 questions depending on time. Strong Level 2 candidates demonstrate consistent “contract-first” thinking and safe change habits.
1) Contract literacy (interfaces + DTOs)
- In ALD, what makes an interface a “contract”? What should and should not change casually? (Look for: downstream impact, stability, versioning mindset.)
- How do you tell whether a failing test indicates a bug vs a spec/policy change?
- If a DTO adds a new required field, what risks and migrations do you consider? (Breaking change, defaults, versioning, adapters.)
- Describe a situation where you would create a new DTO instead of reusing an existing one. (Look for: context leakage, meaning mismatch, bounded contexts.)
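The DTO-change question above can be grounded with a small sketch. The `PaymentRequest` DTO and `idempotencyKey` field here are hypothetical, chosen only to contrast an additive change (optional field) with a breaking one (required field), plus the adapter-with-default migration path the question probes for:

```typescript
// Hypothetical DTO, version 1: what existing producers send today.
interface PaymentRequestV1 {
  amount: number;
  currency: string;
}

// Additive change: the new field is optional, so existing producers
// keep compiling and old payloads stay valid.
interface PaymentRequestV2 {
  amount: number;
  currency: string;
  idempotencyKey?: string;
}

// Breaking change: the field is now required, forcing every caller
// to change at once unless a migration/adapter bridges the gap.
interface PaymentRequestV3 {
  amount: number;
  currency: string;
  idempotencyKey: string;
}

// Adapter that upgrades old payloads by supplying a safe default,
// so consumers of V3 never observe a missing key.
function upgradeToV3(req: PaymentRequestV1, defaultKey: string): PaymentRequestV3 {
  return { ...req, idempotencyKey: defaultKey };
}
```

Strong candidates name this trade-off unprompted: optional-plus-default keeps the contract additive, while a hard requirement demands coordinated migration or versioned endpoints.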
2) SRP/ISP awareness (role vs layer thinking)
- What’s wrong with a “God interface”? Give an example and how you’d split it. (Look for: clients depend on methods they don’t use, mixed reasons to change.)
- You see a repository interface exposing full CRUD for every consumer. What concerns do you raise? (Look for: ISP violation, leaky abstraction, overexposure.)
- What does “role-based, not layer-based” mean in practical terms? (Look for: responsibility naming, capabilities, not “service/repo”.)
- When would you keep an interface broad, and when would you split it? (Look for: cohesive clients, change reasons, stability.)
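A minimal sketch of the split the “God interface” question is looking for. The `UserReader`/`UserWriter` names are illustrative; the point is that clients declare only the capability they use, while one implementation may still satisfy several roles:

```typescript
interface User { id: string; name: string }

// Role-based interfaces: each client depends only on the capability it uses.
interface UserReader {
  findById(id: string): User | undefined;
}
interface UserWriter {
  save(user: User): void;
}

// A single implementation can satisfy both roles.
class InMemoryUsers implements UserReader, UserWriter {
  private store = new Map<string, User>();
  findById(id: string): User | undefined { return this.store.get(id); }
  save(user: User): void { this.store.set(user.id, user); }
}

// A read-only consumer names only the role it needs, so a change to
// the write path is no longer a reason for this code to change.
function greet(reader: UserReader, id: string): string {
  const u = reader.findById(id);
  return u ? `Hello, ${u.name}` : "Hello, stranger";
}
```

Compare this with a single interface exposing full CRUD to every consumer: the read-only client would be coupled to write-side churn it never uses.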
3) Tests beyond basics (determinism & coverage)
- How do you design tests that are stable during refactoring? (Behavior vs implementation; avoid over-mocking.)
- What negative/edge tests do you add when a new rule is introduced? (Boundary values, invalid inputs, missing permissions, etc.)
- How would you structure tests when multiple rules contribute reason codes? (Look for: clear scenarios, deterministic ordering, explicit expectations.)
- If tests rely on time, what patterns do you use to make them deterministic? (Clock abstraction, fixed time provider, injectable time.)
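The injectable-time pattern from the last question can be sketched in a few lines. `Clock`, `FixedClock`, and `isExpired` are hypothetical names for illustration; any equivalent time-provider abstraction works:

```typescript
// Injectable clock: code asks the clock for "now" instead of calling
// Date directly, so tests can pin time to a fixed instant.
interface Clock {
  now(): Date;
}

// Production implementation delegates to the real system time.
const systemClock: Clock = { now: () => new Date() };

// Test implementation always returns the same instant.
class FixedClock implements Clock {
  constructor(private readonly fixed: Date) {}
  now(): Date { return this.fixed; }
}

// Example time-dependent rule: a token is expired at or after its deadline.
function isExpired(expiresAt: Date, clock: Clock): boolean {
  return clock.now().getTime() >= expiresAt.getTime();
}
```

With `FixedClock`, boundary cases (exactly at the deadline, one millisecond before) become deterministic assertions instead of flaky timing races.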
4) Refactoring & maintainability
- A teammate wants to refactor a large module. What rules do you impose to keep behavior safe? (Small steps, keep tests green, no contract changes without review.)
- You spot duplicated policy logic spread across classes. What’s your refactor strategy? (Extract role/policy, centralize decision logic, add/strengthen tests.)
- How do you balance “cleaner design” vs “shipping value” when contracts are involved?
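The duplicated-policy question above can be made concrete with a small sketch. The free-shipping rule and both call sites are hypothetical; the point is that extraction leaves exactly one reason-to-change location:

```typescript
// Hypothetical rule, previously inlined in both checkout and invoicing:
// "orders at or above 100 ship free." Centralized into one policy function.
const FREE_SHIPPING_THRESHOLD = 100;

function qualifiesForFreeShipping(orderTotal: number): boolean {
  return orderTotal >= FREE_SHIPPING_THRESHOLD;
}

// Both call sites now delegate to the single policy, so changing the
// threshold (or the rule itself) touches one place and one test suite.
function shippingCost(orderTotal: number): number {
  return qualifiesForFreeShipping(orderTotal) ? 0 : 5.99;
}
function invoiceLineForShipping(orderTotal: number): string {
  return qualifiesForFreeShipping(orderTotal) ? "Shipping: free" : "Shipping: 5.99";
}
```

A strong candidate also notes the ordering: strengthen tests around the duplicated behavior first, then extract, keeping tests green at every step.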
5) Boundaries & integrations (edge vs core)
- Why does ALD want vendor/framework types at the edges? What problems does it prevent?
- If a core module starts importing SDK types, how do you fix it? (Introduce port/interface, adapter translation, mapping DTOs.)
- What’s a good sign you need an adapter or translation layer? (Schema mismatch, vendor churn, unstable dependency, transport concerns.)
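The port/adapter fix from the second question can be sketched as follows. The vendor payload shape and `EmailSender` port are invented for illustration; the in-memory `sent` array stands in for a real SDK call:

```typescript
// Vendor SDK shape (stays at the edge; note the vendor's own naming).
interface VendorEmailPayload { to_addr: string; subj: string; body_html: string }

// Core port: the domain's own contract, free of vendor types and naming.
interface EmailSender {
  send(to: string, subject: string, body: string): void;
}

// Adapter translates the core call into the vendor's shape. Swapping
// vendors means writing a new adapter, not touching core logic.
class VendorEmailAdapter implements EmailSender {
  sent: VendorEmailPayload[] = []; // stands in for the real SDK call
  send(to: string, subject: string, body: string): void {
    this.sent.push({ to_addr: to, subj: subject, body_html: body });
  }
}

// Core logic depends only on the port, never on VendorEmailPayload.
function notifyUser(sender: EmailSender, email: string): void {
  sender.send(email, "Welcome", "<p>Hi!</p>");
}
```

This is the shape to listen for: a core-owned interface, vendor types confined to the adapter, and translation done once at the boundary.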
6) AI usage (ALD-safe deltas)
- Describe how you would use AI to implement a feature while ensuring you don’t change policy. (Constraints, “do not change interfaces,” make tests pass, review.)
- AI proposes expanding an interface to simplify implementation. How do you respond? (Treat as contract change; evaluate ISP/SRP; propose alternative.)
- What do you check when AI-generated code passes tests but still “feels wrong”? (Readability, hidden assumptions, missing tests, security/perf concerns.)
- Give an example of a structured prompt that asks for a safe delta (Changed + New) rather than a rewrite.
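One possible shape for the structured prompt asked about above (the wording is illustrative; any prompt that fixes contracts, forbids rewrites, and demands a Changed/New breakdown fits):

```text
Context: <paste interface + DTOs + current tests>
Task: implement the new acceptance criterion below.
Constraints:
- Do NOT modify existing interfaces or DTOs.
- Do NOT change existing test expectations.
- If the criterion cannot be met without a contract change, stop and
  list the change as a proposal instead of implementing it.
Output format:
- Changed: files edited, with a one-line reason each
- New: files added, with a one-line reason each
- Tests: new tests that pin the new behavior
```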
Discussion scenario: “delta proposal” prompt for candidates
Use this as a live exercise. Give the candidate an existing interface and DTOs, then add a new requirement, for example: decisions must now return an explicit Approved/Denied outcome. Then ask:
- What DTO changes are required? What new DTOs would you introduce?
- Would you change an existing interface or add a new role? Why?
- What contract tests define the new behavior?
- What makes the change breaking vs additive?
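A minimal sketch of the kind of contract test a strong answer produces, assuming a hypothetical `Decision` DTO with an explicit Approved/Denied outcome and reason codes (all names are illustrative):

```typescript
// Hypothetical contract for the exercise: an explicit outcome plus
// reason codes, so "why denied" is part of the behavioral promise.
type Outcome = "Approved" | "Denied";

interface Decision {
  outcome: Outcome;
  reasonCodes: string[];
}

// Reference implementation for the test below. A contract test pins
// behavior, not implementation: any conforming implementation must
// deny, with a reason code, when a required permission is missing.
function decide(hasPermission: boolean): Decision {
  return hasPermission
    ? { outcome: "Approved", reasonCodes: [] }
    : { outcome: "Denied", reasonCodes: ["PERMISSION_MISSING"] };
}
```

Tests like this also make the breaking/additive question concrete: adding `reasonCodes` as a new field is additive for consumers that ignore it, while renaming or retyping `outcome` would break every consumer.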
Scoring rubric (example)
Use a 0–2 scale per category. Level 2 should score higher on contract literacy and safe refactoring than Level 1.
| Category | 0 — Concern | 1 — Meets | 2 — Strong |
|---|---|---|---|
| Contract literacy | Treats interfaces/DTOs as casual code | Understands stability and impact | Proactively protects contracts; thinks in migrations |
| SRP/ISP awareness | Accepts broad interfaces; unclear boundaries | Recognizes common violations | Suggests clean splits and responsibility-based roles |
| Testing maturity | Brittle tests; little negative coverage | Solid tests; avoids most flakiness | Deterministic, behavioral tests with strong edge coverage |
| Refactoring safety | Big risky edits; weak guardrails | Uses small steps; keeps tests green | Systematic refactor plans; strengthens tests first |
| Boundary discipline | Leaky SDK/framework types in core logic | Understands edge/core separation | Can design ports/adapters and translations cleanly |
| AI judgment | Trusts AI output; changes contracts casually | Uses constraints; verifies output | Uses AI to propose deltas; reviews intent and adds tests |
Hiring guidance
- Recommend hire: mostly 1s with multiple 2s in contract literacy, testing, refactoring
- Borderline: 1s across the board, few 2s, no repeated 0s
- No hire: 0s in contract literacy or refactoring safety; weak test discipline
Common red flags
- Proposes “just add methods” without SRP/ISP reasoning
- Uses tests as an afterthought instead of a guardrail
- Over-mocks or can’t explain deterministic testing
- Allows SDK/framework types into core domain “because it’s easier”
- Treats AI as authoritative instead of draft output