ALD Interview Questionnaire — Level 1

Level 1 (Implementation Developer) candidates succeed in ALD by implementing behind existing contracts, writing reliable tests, and using AI safely to accelerate delivery without changing intent.

Implementation Developer

Use this as a template. Tailor language/framework specifics (C#/.NET, Java, Python, etc.) to your stack.

Level 1 focus
Implementation · Unit testing · Contract discipline · Refactoring safety · AI usage

ALD expectation: candidate can deliver correct code behind interfaces/DTOs and verify behavior with tests.

Purpose

What this evaluates

  • Basic software engineering fundamentals
  • Ability to read and implement from an existing interface/DTO
  • Unit test fluency (Arrange/Act/Assert, edge cases)
  • Comfort with refactoring without changing behavior
  • Safe AI usage (assist, verify, do not “invent intent”)

What this does not require

  • Designing new architecture or interfaces
  • Choosing domain boundaries or patterns
  • Owning policy decisions
  • Deep system design
ALD framing: Level 1 implements “how” behind contracts; architects define “what must be true.”

Suggested interview format (45–60 minutes)

Recommended flow

  1. 5 min — intro + candidate background
  2. 10 min — fundamentals + testing questions
  3. 15–20 min — small implementation exercise discussion
  4. 10 min — debugging/refactoring scenario
  5. 5–10 min — AI usage and judgment questions

Optional take-home

  • Provide an interface + DTO + failing tests
  • Ask the candidate to implement so all tests pass
  • Require a brief explanation of any tradeoffs
Best ALD take-home: “Make these contract tests pass without changing contracts.”

Question bank

Pick 8–12 questions depending on time. The strongest Level 1 signals are clarity, correctness, and test discipline.

1) Fundamentals (implementation readiness)

  1. You’re given an interface and DTOs. What’s your first step before writing code? (Look for: read tests/specs, identify edge cases, confirm assumptions.)
  2. What’s the difference between changing a public interface vs changing an implementation? (Look for: compatibility, downstream impact, contract stability.)
  3. Explain “composition over inheritance” in simple terms. When have you used it? (Look for: basic understanding; not required to be deep.)
  4. How do you decide what belongs in a method vs extracted into helper functions? (Look for: readability, SRP at a micro level.)
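For question 3, a tiny Python sketch can anchor the discussion (the `AuditLog`/`PaymentService` names are invented for illustration): the service *has* a logger rather than *being* one, so the collaborator can be swapped or faked in tests without subclassing.

```python
class AuditLog:
    def __init__(self) -> None:
        self.entries: list[str] = []

    def record(self, message: str) -> None:
        self.entries.append(message)


class PaymentService:
    # Composition: the service holds an AuditLog instead of inheriting from it,
    # so logging behavior can be replaced in tests without touching this class.
    def __init__(self, audit: AuditLog) -> None:
        self._audit = audit

    def pay(self, amount: float) -> None:
        self._audit.record(f"paid {amount}")


audit = AuditLog()
PaymentService(audit).pay(25.0)
# audit.entries == ["paid 25.0"]
```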

2) Testing (unit tests & reliability)

  1. What does a good unit test look like? Walk me through Arrange/Act/Assert.
  2. How do you test edge cases without writing brittle tests? (Look for: test behavior, avoid internal details.)
  3. What makes tests flaky? How do you prevent flakiness? (Time, randomness, network, shared state.)
  4. When is mocking helpful, and when does it cause problems? (Look for: over-mocking, testing implementation vs behavior.)
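As a concrete anchor for Arrange/Act/Assert and determinism, here is a minimal Python example (the `is_weekend` function is invented for illustration). Note the fixed date: using `date.today()` here would make the test flaky.

```python
import unittest
from datetime import date


def is_weekend(day: date) -> bool:
    return day.weekday() >= 5  # Saturday == 5, Sunday == 6


class IsWeekendTests(unittest.TestCase):
    def test_saturday_is_weekend(self) -> None:
        # Arrange: a fixed date, not date.today(), keeps the test deterministic
        day = date(2024, 6, 1)  # a Saturday
        # Act
        result = is_weekend(day)
        # Assert
        self.assertTrue(result)
```

A strong candidate will also name the negative case (a weekday returning `False`) without prompting.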

3) Reading contracts (interfaces/DTOs)

  1. If a DTO has a field named ReasonCodes, what does that imply about expected behavior? (Look for: evidence/audit intent, not just error messages.)
  2. If an interface method returns a Result object instead of throwing exceptions, how would you implement and test it?
  3. What are “invariants” and how do you enforce them in code? (Look for: validation, constructors, avoiding invalid states.)
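Questions 2 and 3 can be grounded with a short Python sketch (`Percentage`, `check_ltv`, and the reason code are illustrative, not from a real contract): the `Result` type makes failure a value the caller must handle, and the constructor enforces the invariant so an invalid `Percentage` can never exist.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Result:
    # Result-style return: failures are values, not exceptions, so callers
    # handle both branches and tests can assert directly on reason codes.
    ok: bool
    reason_codes: tuple[str, ...] = ()


@dataclass(frozen=True)
class Percentage:
    # Invariant enforced at construction: an out-of-range Percentage
    # cannot be created, so downstream code never re-validates.
    value: float

    def __post_init__(self) -> None:
        if not 0.0 <= self.value <= 1.0:
            raise ValueError("value must be between 0 and 1")


def check_ltv(ltv: Percentage, limit: Percentage) -> Result:
    if ltv.value > limit.value:
        return Result(ok=False, reason_codes=("LTV_EXCEEDED",))
    return Result(ok=True)
```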

4) Debugging & refactoring

  1. A test started failing after a refactor. What’s your step-by-step approach to diagnose the issue?
  2. You find duplicated logic in two classes. How do you refactor safely? (Look for: keep tests passing, small steps, commit frequently.)
  3. How do you improve code readability without changing behavior?

5) AI usage (ALD-safe behavior)

  1. When using AI to generate code, what do you verify manually before committing? (Look for: tests, edge cases, security, correctness, style.)
  2. If AI suggests changing an interface “to make it easier,” what do you do? (Look for: stop, consult, treat as contract change, assess impact.)
  3. Describe a good prompt you would use to ask AI for implementation help without changing intended behavior. (Look for: constraints, “do not change contracts,” “make tests pass.”)
  4. AI produced code that passes tests but looks suspicious. What now? (Look for: review for correctness, readability, hidden edge cases, add tests.)

6) Mini scenario (discussion-based exercise)

Provide this scenario verbally or as a snippet in the interview:

Scenario: You’re given EligibilityPolicy and DTOs. Tests specify: ineligible if LTV > 0.80, ineligible if credit score < 620, otherwise eligible. Decision must include reason codes and metrics.
  1. What would you implement first and why?
  2. What tests would you add if you suspect an edge case?
  3. How would you keep your implementation readable?
Strong Level 1 signal: candidate repeatedly references tests/contracts as the source of truth and avoids inventing requirements.
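One way the scenario's rules could be satisfied, sketched in Python (the DTO shapes and names like `Application` and `LTV_TOO_HIGH` are illustrative; only `EligibilityPolicy`, the thresholds, reason codes, and metrics come from the scenario):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Application:
    # Input DTO; field names are assumptions for this sketch.
    ltv: float
    credit_score: int


@dataclass(frozen=True)
class Decision:
    eligible: bool
    reason_codes: tuple[str, ...]
    metrics: dict  # the values the decision was based on, for audit


class EligibilityPolicy:
    MAX_LTV = 0.80
    MIN_CREDIT_SCORE = 620

    def evaluate(self, app: Application) -> Decision:
        reasons = []
        if app.ltv > self.MAX_LTV:
            reasons.append("LTV_TOO_HIGH")
        if app.credit_score < self.MIN_CREDIT_SCORE:
            reasons.append("CREDIT_SCORE_TOO_LOW")
        return Decision(
            eligible=not reasons,  # eligible only when no rule fired
            reason_codes=tuple(reasons),
            metrics={"ltv": app.ltv, "credit_score": app.credit_score},
        )
```

Edge cases worth probing: values exactly at the boundaries (LTV of 0.80, score of 620) and whether both reason codes appear when both rules fail.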

Scoring rubric (example)

Use a simple 0–2 scale per category to keep interviews consistent.

| Category | 0 — Concern | 1 — Meets | 2 — Strong |
| --- | --- | --- | --- |
| Implementation fundamentals | Struggles to explain approach; guesses | Clear steps; implements correctly | Efficient, systematic, anticipates edge cases |
| Testing discipline | Tests are vague/brittle; limited edge cases | Writes solid unit tests; avoids brittleness | Great at determinism, negative cases, readability |
| Contract awareness | Casually changes interfaces/DTOs | Understands contract impact | Protects contracts; flags breaking changes early |
| Debugging/refactoring | Ad hoc; large risky edits | Uses small steps; keeps tests passing | Very systematic; improves clarity without regressions |
| AI judgment | Trusts AI output blindly | Verifies output; uses constraints | Uses AI effectively; adds tests for suspicious areas |

Hiring guidance

  • Recommend hire: mostly 1s with at least two 2s, no critical 0s
  • Borderline: many 1s, one 0 in non-critical category
  • No hire: repeated 0s in testing, contract awareness, or judgment

Common red flags

  • Doesn’t rely on tests/specs; “just codes”
  • Changes interfaces to “make it easier” without impact analysis
  • Over-mocks everything or can’t explain mocking tradeoffs
  • Treats AI output as authoritative