ALD Interview Questionnaire — Level 5

Level 5 (Enterprise Architect / Distinguished Engineer) candidates succeed in ALD by enabling safe delivery at organizational scale. They establish standards and governance that preserve contract integrity, align to frameworks (e.g., ITIL/NIST/SAFe), and make AI adoption auditable and controllable without stalling delivery.

Enterprise Architect / Distinguished Engineer

This interview evaluates system stewardship, risk posture, and organization-wide enablement—not coding ability.

Level 5 focus
Org-wide governance · Risk posture · Standards · Portfolio boundaries · AI policy · Evidence & audit · Operating model · Enablement

ALD expectation: candidate makes architecture a force multiplier for the org by turning intent into enforceable, scalable guardrails.

Purpose

What this evaluates

  • Defining organizational architectural guardrails and contract standards
  • Balancing autonomy vs governance (avoid becoming a bottleneck)
  • Portfolio-level boundary strategy (bounded contexts, shared capabilities)
  • Change governance at scale (versioning, deprecation, compatibility models)
  • Aligning ALD to risk/compliance frameworks (ITIL, NIST, regulatory needs)
  • AI adoption strategy: policy, controls, auditability, safe delegation
  • Operating model design (reviews, councils, enablement, templates)
  • Measuring outcomes: speed, risk, cost, quality, drift prevention

What this does not require

  • Owning every architecture decision personally
  • Hands-on implementation across teams
  • Procurement or vendor negotiations (unless your org combines roles)

ALD framing: Level 5 defines the “rules of the road” that let many teams move fast safely.

Suggested interview format (90–120 minutes)

Recommended flow

  1. 15 min — background + scope (orgs, domains, scale)
  2. 20 min — governance & operating model
  3. 15 min — portfolio boundary strategy & contract standards
  4. 15 min — risk/compliance alignment and evidence model
  5. 25–35 min — enterprise exercise (guardrails + adoption plan)
  6. 10 min — metrics, outcomes, and failure modes

Optional pre-read / take-home

  • Provide a portfolio overview (systems, teams, pain points)
  • Ask for a 2–3 page memo: ALD adoption strategy + governance model
  • Evaluate clarity, practicality, and measurable outcomes

Best ALD take-home: “How do you make contract-first delivery and AI usage safe across 30+ teams?”

Question bank

Choose 12–20 questions. Strong Level 5 candidates consistently reduce organizational friction while improving control, auditability, and delivery throughput.

1) Governance & operating model

  1. What should be centrally governed vs delegated to teams in an ALD organization? Why?
  2. How do you design an architecture review process that provides guardrails without becoming a bottleneck? (Look for: clear thresholds, self-service templates, “contract changes only” gates; a gate sketch follows this list.)
  3. Describe an operating model you’ve used (or would design) to keep standards alive (councils, communities of practice, champions).
  4. How do you prevent “architecture theater” (documents that nobody uses) and keep architecture executable? (Look for: contracts/tests as proof, CI gates, reference implementations.)
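
If the candidate reaches for concrete mechanisms, a useful probe is how they would encode the “contract changes only” gate. A minimal TypeScript sketch, assuming contracts and policy live under dedicated paths (the paths and review-level names are illustrative, not prescriptive):

```typescript
// A “contract changes only” review gate: strict review triggers only when
// governed artifacts change. Paths and review-level names are illustrative.
type ReviewLevel = "self-service" | "architecture-review";

function requiredReview(changedFiles: string[]): ReviewLevel {
  const governedPaths = [/^contracts\//, /^policy\//]; // assumed repo layout
  const touchesGoverned = changedFiles.some((file) =>
    governedPaths.some((path) => path.test(file))
  );
  // Ordinary feature work stays on the self-service path, so the review
  // board never becomes a bottleneck for non-contract changes.
  return touchesGoverned ? "architecture-review" : "self-service";
}

// A feature-only change passes; a contract delta is gated.
console.log(requiredReview(["src/orders/handler.ts"]));     // "self-service"
console.log(requiredReview(["contracts/pricing/v2.yaml"])); // "architecture-review"
```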

2) Portfolio boundary strategy

  1. How do you define bounded contexts and shared capabilities across an enterprise portfolio?
  2. How do you avoid the “shared model trap” while still enabling interoperability? (Look for: translation, canonical contracts only where justified, anti-corruption layers; see the translation sketch after this list.)
  3. When do you mandate a platform capability versus letting teams build locally?
  4. How do you manage contracts across teams: ownership, documentation, versioning, and deprecation?
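
To ground the “shared model trap” discussion, candidates can be asked to sketch translation at a boundary. A minimal sketch, assuming two hypothetical customer models (all type and field names are illustrative):

```typescript
// Each context keeps its own model; the anti-corruption layer owns the
// mapping at the boundary. All names here are hypothetical.
interface BillingCustomer {
  accountId: string;
  vatNumber: string | null;
}

interface SupportCustomer {
  customerRef: string;
  taxId?: string;
}

// Translation lives in one place, so neither context leaks its internal
// model into the other: interoperability without a shared canonical model.
function toSupportCustomer(src: BillingCustomer): SupportCustomer {
  return {
    customerRef: src.accountId,
    taxId: src.vatNumber ?? undefined,
  };
}
```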

3) Contract standards & drift prevention

  1. What contract standards do you establish (naming, versioning, invariants, error semantics, evidence fields; see the contract-header sketch after this list)?
  2. How do you treat changes to contract tests? Who approves and what evidence is required?
  3. What’s your approach to contract lifecycle management (new, active, deprecated, retired)?
  4. How do you prevent drift when many teams contribute and AI is generating code? (Look for: CI gates, contract tests, review on deltas, policy ownership.)
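
A concrete artifact to probe for: a contract header standard with explicit lifecycle states that CI can check mechanically. A minimal sketch, assuming the field set mirrors the standards above (names are illustrative):

```typescript
// A contract header standard that a CI gate can validate mechanically.
type LifecycleState = "new" | "active" | "deprecated" | "retired";

interface ContractHeader {
  name: string;             // e.g. "pricing.quote" under a naming standard
  version: string;          // semver; major bump signals a breaking change
  owner: string;            // accountable team, not an individual
  state: LifecycleState;
  sunsetDate?: string;      // required once state === "deprecated"
  invariants: string[];     // human-readable, enforced by contract tests
  evidenceFields: string[]; // fields every decision emits for audit
}

// Governance violations CI can flag without waiting for a human review.
function lifecycleViolations(c: ContractHeader): string[] {
  const issues: string[] = [];
  if (c.state === "deprecated" && !c.sunsetDate) {
    issues.push(`${c.name}@${c.version}: deprecated without a sunset date`);
  }
  if (c.state === "active" && c.evidenceFields.length === 0) {
    issues.push(`${c.name}@${c.version}: active contract emits no audit evidence`);
  }
  return issues;
}
```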

4) Risk posture & compliance alignment

  1. How would you align ALD with frameworks like ITIL and NIST while keeping delivery fast?
  2. What does “auditability by design” mean in your architecture standards? (Look for: reason codes, metrics, traceability, decision evidence; see the evidence-record sketch after this list.)
  3. How do you handle separation of duties and approvals for contract/policy changes?
  4. How do you balance security controls with developer experience?
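
One way to make “auditability by design” concrete is a structured evidence record emitted at every governed decision point. A minimal sketch; the field names are assumptions, not a prescribed schema:

```typescript
// A structured evidence record emitted at every governed decision point.
interface DecisionEvidence {
  decisionId: string;    // unique, for end-to-end traceability
  contract: string;      // contract name + version that produced the outcome
  reasonCodes: string[]; // machine-readable “why”, e.g. ["INCOME_BELOW_MIN"]
  inputsHash: string;    // hash of inputs, so sensitive data is not copied
  actor: string;         // system or user that triggered the decision
  approvedBy?: string;   // separation of duties: approver is not the author
  timestamp: string;     // ISO 8601
}

// Emitting evidence is a one-liner at the decision site; auditors query a
// store instead of reconstructing behavior from unstructured logs.
function emitEvidence(record: DecisionEvidence): void {
  console.log(JSON.stringify(record)); // stand-in for an append-only audit store
}
```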

5) AI policy & safe enablement

  1. Where do you allow AI generation, and where do you prohibit or constrain it? Why?
  2. What does an “AI-safe SDLC” look like under ALD (prompt standards, review gates, test requirements)?
  3. How do you ensure AI usage is auditable (what was generated, what was reviewed, what evidence was produced; see the audit-record sketch after this list)?
  4. What training and templates do you provide so teams use AI as a force multiplier rather than a risk multiplier?
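
Strong candidates can usually sketch the audit record behind these answers. A minimal sketch; the shape below is an assumption, to be adapted to whatever review tooling the org already runs:

```typescript
// One record per AI-assisted change: what was generated, who reviewed it,
// and whether the contract tests passed. The shape is an assumption.
interface AiChangeAudit {
  changeId: string;             // PR or changeset identifier
  generatedFiles: string[];     // artifacts that came from a model
  promptRef: string;            // pointer to the stored prompt
  model: string;                // model name and version used
  reviewedBy: string;           // human accountable for the delta
  contractTestsPassed: boolean; // tests-first: nothing ships untested
}

// A merge gate can refuse AI-generated deltas that lack a named human
// reviewer or green contract tests.
function aiGateOk(audit: AiChangeAudit): boolean {
  return audit.reviewedBy.length > 0 && audit.contractTestsPassed;
}
```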

6) Modernization & legacy transformation

  1. How would you apply ALD to modernize a legacy monolith without stopping feature delivery? (Look for: strangler patterns, contract extraction, adapter boundaries; see the adapter sketch after this list.)
  2. How do you manage contract evolution when legacy systems can’t change quickly?
  3. What’s your approach to reducing technical debt without large rewrites?
  4. How do you measure modernization progress in a way executives understand?
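
A quick way to test the strangler/adapter answer is to have the candidate sketch the boundary itself. A minimal sketch, assuming a hypothetical QuoteService contract; the routing rule is purely illustrative:

```typescript
// Callers depend on the extracted contract; an adapter decides whether the
// legacy system or the new service answers. All names are hypothetical.
interface QuoteService {
  quote(productId: string): Promise<number>;
}

class LegacyQuoteAdapter implements QuoteService {
  async quote(productId: string): Promise<number> {
    return 100; // stands in for a call into the legacy monolith
  }
}

class NewQuoteService implements QuoteService {
  async quote(productId: string): Promise<number> {
    return 99; // stands in for the extracted service
  }
}

// Routing by domain lets migration proceed incrementally while feature
// delivery continues against one stable contract.
function quoteServiceFor(productId: string): QuoteService {
  return productId.startsWith("modernized-")
    ? new NewQuoteService()
    : new LegacyQuoteAdapter();
}
```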

7) Metrics, outcomes, and failure modes

  1. What metrics would you use to prove ALD is improving outcomes? (Look for: speed, risk, quality, cost; see the scorecard sketch after this list.)
  2. What are the top failure modes of ALD adoption, and how do you mitigate them? (Look for: contract sprawl, bottlenecks, weak tests, cultural resistance.)
  3. How do you detect and correct contract sprawl across a portfolio?
  4. How do you handle “policy fragmentation” when different teams implement similar rules inconsistently?
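
Good answers pin definitions down so the same metric means the same thing in every domain. A minimal scorecard sketch; metric names, definitions, and directions are illustrative assumptions:

```typescript
// An exec-facing scorecard with explicit definitions, so “speed, risk,
// quality, cost” are computed the same way in every domain.
interface AldMetric {
  name: string;
  definition: string;       // how it is computed, to prevent metric drift
  direction: "up" | "down"; // which way is good
}

const aldScorecard: AldMetric[] = [
  { name: "lead time to production", definition: "merge to deploy, p50 in days", direction: "down" },
  { name: "contract-drift incidents", definition: "Sev1/Sev2 caused by contract drift", direction: "down" },
  { name: "open audit findings", definition: "findings outstanding per audit cycle", direction: "down" },
  { name: "governed-change latency", definition: "submission to approval decision, p50", direction: "down" },
];
```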

8) Executive communication

  1. How do you explain ALD to C-level leadership in terms of business outcomes and risk?
  2. How do you negotiate tradeoffs with stakeholders (delivery timelines vs governance needs)?
  3. What’s your approach to building a coalition of champions across engineering and security?

Strong Level 5 signal: candidate designs governance that increases throughput while improving auditability—architecture as enablement, not control theater.

Enterprise exercise (guardrails + adoption plan)

Use this as a live exercise (25–35 minutes). The goal is to see how the candidate scales ALD across teams while maintaining control and reducing friction.

Scenario

Context: 30 product teams, regulated decision workflows, high integration complexity, and a mandate to use AI to accelerate delivery.

  • Multiple shared “decision” capabilities (eligibility, pricing, authorization)
  • Legacy systems must remain operational during modernization
  • Security/compliance requires audit evidence for policy changes
  • Teams complain architecture reviews slow them down

Candidate deliverable

  1. Guardrails: what is centrally governed vs delegated
  2. Contract standards: naming, versioning, invariants, evidence fields
  3. Change governance: approval thresholds, deprecation policy, migration model
  4. AI policy: allowed usage, constraints, review gates, auditability
  5. Enablement: templates, reference implementations, training ladder
  6. Metrics: prove improvements in speed, risk, and cost
  7. Adoption plan: phased rollout, pilot domains, feedback loop

Evaluator notes (what “good” looks like)

  • Clear review thresholds: only contracts/policy changes go through strict gates
  • Self-service templates for teams to propose deltas and evidence
  • Contract lifecycle management (versioning + deprecation)
  • AI rules focused on preventing drift: tests-first, delta-only prompts, audit trails
  • Operating model: champions/community + lightweight governance tooling
  • Metrics tied to exec outcomes (time-to-market, incidents, audit findings, maintenance cost)

Evaluator tip: Ask “How do you increase delivery throughput while adding controls?” Great candidates answer with better guardrails, not heavier reviews.

Scoring rubric (example)

Level 5 should demonstrate portfolio-level clarity, practical governance, and measurable outcomes.

| Category | 0 — Concern | 1 — Meets | 2 — Strong |
| --- | --- | --- | --- |
| Governance & operating model | Heavy process; bottlenecks | Some structure; limited enablement | Guardrails that increase throughput; self-service + clear thresholds |
| Portfolio boundary strategy | Unclear ownership; shared model sprawl | Reasonable separation | Clean boundaries, translation strategy, ownership model that scales |
| Contract standards & lifecycle | No versioning/deprecation model | Basic compatibility thinking | Strong lifecycle governance; drift prevention; measurable compliance |
| Risk/compliance alignment | Governance disconnected from frameworks | Basic alignment | Auditability by design with evidence model and separation of duties |
| AI policy & enablement | AI as authority; no audit plan | Constraints + reviews exist | Clear policy, auditability, templates, training; safe scale-out |
| Modernization strategy | Big-bang rewrite mindset | Incremental approach | Contract extraction + phased migration; continuous delivery maintained |
| Metrics & outcomes | No measurable success criteria | Some metrics | Exec-ready metrics tied to speed, risk, cost; feedback loops |
| Executive communication | Overly technical; not outcome-focused | Clear but limited | Consistently ties architecture to business outcomes and risk posture |

Hiring guidance

  • Recommend hire: multiple 2s across governance, AI policy, lifecycle, and outcomes
  • Borderline: mostly 1s; limited 2s; no critical 0s
  • No hire: 0s in governance model, risk alignment, or AI auditability

Common red flags

  • Architecture as centralized control rather than enablement
  • No clear thresholds for when governance applies
  • Shared model sprawl accepted as inevitable
  • AI adoption without drift prevention or audit trails
  • Modernization plan relies on big rewrites and freezes
  • Cannot define success metrics executives will care about