ALD Interview Questionnaire — Level 1
Level 1 (Implementation Developer) candidates succeed in ALD by implementing behind existing contracts, writing reliable tests, and using AI safely to accelerate delivery without changing intent.
Use this as a template. Tailor language/framework specifics (C#/.NET, Java, Python, etc.) to your stack.
ALD expectation: candidate can deliver correct code behind interfaces/DTOs and verify behavior with tests.
Purpose
What this evaluates
- Basic software engineering fundamentals
- Ability to read and implement from an existing interface/DTO
- Unit test fluency (Arrange/Act/Assert, edge cases)
- Comfort with refactoring without changing behavior
- Safe AI usage (assist, verify, do not “invent intent”)
What this does not require
- Designing new architecture or interfaces
- Choosing domain boundaries or patterns
- Owning policy decisions
- Deep system design
Suggested interview format (45–60 minutes)
Recommended flow
- 5 min — intro + candidate background
- 10 min — fundamentals + testing questions
- 15–20 min — small implementation exercise discussion
- 10 min — debugging/refactoring scenario
- 5–10 min — AI usage and judgment questions
Optional take-home
- Provide an interface + DTO + failing tests
- Ask the candidate to implement so all tests pass
- Require a brief explanation of any tradeoffs
Question bank
Pick 8–12 questions depending on time. The strongest Level 1 signals are clarity, correctness, and test discipline.
1) Fundamentals (implementation readiness)
- You’re given an interface and DTOs. What’s your first step before writing code? (Look for: read tests/specs, identify edge cases, confirm assumptions.)
- What’s the difference between changing a public interface vs changing an implementation? (Look for: compatibility, downstream impact, contract stability.)
- Explain “composition over inheritance” in simple terms. When have you used it? (Look for: basic understanding; not required to be deep.)
- How do you decide what belongs in a method vs extracted into helper functions? (Look for: readability, SRP at a micro level.)
2) Testing (unit tests & reliability)
- What does a good unit test look like? Walk me through Arrange/Act/Assert.
- How do you test edge cases without writing brittle tests? (Look for: test behavior, avoid internal details.)
- What makes tests flaky? How do you prevent flakiness? (Time, randomness, network, shared state.)
- When is mocking helpful, and when does it cause problems? (Look for: over-mocking, testing implementation vs behavior.)
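To anchor the testing discussion, a small Python sketch of the answer you are listening for: Arrange/Act/Assert structure, and determinism achieved by injecting the clock instead of calling `datetime.now()` inside the test. The function and names are illustrative assumptions:

```python
from datetime import datetime, timezone


# Hypothetical function under test: takes "now" as a parameter,
# so tests never depend on the real wall clock.
def is_expired(expires_at: datetime, now: datetime) -> bool:
    return now >= expires_at


def test_token_not_expired_just_before_deadline():
    # Arrange: fixed timestamps, no hidden time dependency
    expires_at = datetime(2025, 1, 1, tzinfo=timezone.utc)
    now = datetime(2024, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
    # Act
    result = is_expired(expires_at, now)
    # Assert: behavior at the boundary, not implementation details
    assert result is False
```

A strong candidate will point at the boundary (`now == expires_at`) unprompted and add a test for it.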
3) Reading contracts (interfaces/DTOs)
- If a DTO has a field named `ReasonCodes`, what does that imply about expected behavior? (Look for: evidence/audit intent, not just error messages.)
- If an interface method returns a `Result` object instead of throwing exceptions, how would you implement and test it?
- What are “invariants” and how do you enforce them in code? (Look for: validation, constructors, avoiding invalid states.)
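One shape the `Result` answer can take, sketched in Python. The `Result` fields and reason-code strings here are assumptions for illustration; the signal you want is that failures become data you can assert on, rather than exceptions you have to catch:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


# Hypothetical Result type: a success flag plus reason codes
# instead of control flow via exceptions.
@dataclass(frozen=True)
class Result:
    ok: bool
    value: Optional[str] = None
    reason_codes: Tuple[str, ...] = ()


def parse_account_id(raw: str) -> Result:
    if not raw.strip():
        return Result(ok=False, reason_codes=("EMPTY_INPUT",))
    if not raw.isdigit():
        return Result(ok=False, reason_codes=("NON_NUMERIC",))
    return Result(ok=True, value=raw)


def test_rejects_non_numeric_input():
    result = parse_account_id("abc")
    assert result.ok is False
    assert "NON_NUMERIC" in result.reason_codes
```

Note how the test asserts on the returned data directly; no `try/except` scaffolding, and each failure mode gets its own explicit reason code.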
4) Debugging & refactoring
- A test started failing after a refactor. What’s your step-by-step approach to diagnose the issue?
- You find duplicated logic in two classes. How do you refactor safely? (Look for: keep tests passing, small steps, commit frequently.)
- How do you improve code readability without changing behavior?
5) AI usage (ALD-safe behavior)
- When using AI to generate code, what do you verify manually before committing? (Look for: tests, edge cases, security, correctness, style.)
- If AI suggests changing an interface “to make it easier,” what do you do? (Look for: stop, consult, treat as contract change, assess impact.)
- Describe a good prompt you would use to ask AI for implementation help without changing intended behavior. (Look for: constraints, “do not change contracts,” “make tests pass.”)
- AI produced code that passes tests but looks suspicious. What now? (Look for: review for correctness, readability, hidden edge cases, add tests.)
6) Mini scenario (discussion-based exercise)
Provide this scenario verbally or as a snippet in the interview:
You’re given an `EligibilityPolicy` interface and its DTOs. The tests specify:
ineligible if LTV > 0.80, ineligible if credit score < 620,
otherwise eligible. The decision must include reason codes and metrics.
- What would you implement first and why?
- What tests would you add if you suspect an edge case?
- How would you keep your implementation readable?
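For calibration, a minimal Python sketch of what a solid Level 1 implementation of the scenario might look like. The DTO shapes, field names, reason-code strings, and metrics keys are assumptions; only the thresholds (LTV > 0.80, credit score < 620) come from the scenario:

```python
from dataclasses import dataclass
from typing import Dict, Tuple


# Hypothetical input DTO; field names are assumptions.
@dataclass(frozen=True)
class Application:
    ltv: float          # loan-to-value ratio
    credit_score: int


# Hypothetical decision DTO: carries reason codes and metrics, per the spec.
@dataclass(frozen=True)
class Decision:
    eligible: bool
    reason_codes: Tuple[str, ...]
    metrics: Dict[str, float]


class EligibilityPolicy:
    MAX_LTV = 0.80
    MIN_CREDIT_SCORE = 620

    def evaluate(self, app: Application) -> Decision:
        reasons = []
        if app.ltv > self.MAX_LTV:
            reasons.append("LTV_TOO_HIGH")
        if app.credit_score < self.MIN_CREDIT_SCORE:
            reasons.append("CREDIT_SCORE_TOO_LOW")
        return Decision(
            eligible=not reasons,
            reason_codes=tuple(reasons),
            metrics={"ltv": app.ltv, "credit_score": float(app.credit_score)},
        )
```

Good follow-up probes: is LTV exactly 0.80 eligible (the spec says `>`, so yes), and does an application that fails both checks report both reason codes rather than short-circuiting on the first?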
Scoring rubric (example)
Use a simple 0–2 scale per category to keep interviews consistent.
| Category | 0 — Concern | 1 — Meets | 2 — Strong |
|---|---|---|---|
| Implementation fundamentals | Struggles to explain approach; guesses | Clear steps; implements correctly | Efficient, systematic, anticipates edge cases |
| Testing discipline | Tests are vague/brittle; limited edge cases | Writes solid unit tests; avoids brittleness | Great at determinism, negative cases, readability |
| Contract awareness | Casually changes interfaces/DTOs | Understands contract impact | Protects contracts; flags breaking changes early |
| Debugging/refactoring | Ad-hoc; large risky edits | Uses small steps; keeps tests passing | Very systematic; improves clarity without regressions |
| AI judgment | Trusts AI output blindly | Verifies output; uses constraints | Uses AI effectively; adds tests for suspicious areas |
Hiring guidance
- Recommend hire: mostly 1s with at least two 2s, no critical 0s
- Borderline: mostly 1s, at most one 0 in a non-critical category
- No hire: repeated 0s in testing, contract awareness, or judgment
Common red flags
- Doesn’t rely on tests/specs; “just codes”
- Changes interfaces to “make it easier” without impact analysis
- Over-mocks everything or can’t explain mocking tradeoffs
- Treats AI output as authoritative