ALD Interview Questionnaire — Level 5
Purpose
Level 5 (Enterprise Architect / Distinguished Engineer) candidates succeed in ALD by enabling safe delivery at organizational scale. They establish standards and governance that preserve contract integrity, align with frameworks (e.g., ITIL/NIST/SAFe), and make AI adoption auditable and controllable without stalling delivery.
This interview evaluates system stewardship, risk posture, and organization-wide enablement—not coding ability.
ALD expectation: candidate makes architecture a force multiplier for the org by turning intent into enforceable, scalable guardrails.
What this evaluates
- Defining organizational architectural guardrails and contract standards
- Balancing autonomy vs governance (avoid becoming a bottleneck)
- Portfolio-level boundary strategy (bounded contexts, shared capabilities)
- Change governance at scale (versioning, deprecation, compatibility models)
- Aligning ALD to risk/compliance frameworks (ITIL, NIST, regulatory needs)
- AI adoption strategy: policy, controls, auditability, safe delegation
- Operating model design (reviews, councils, enablement, templates)
- Measuring outcomes: speed, risk, cost, quality, drift prevention
What this does not require
- Owning every architecture decision personally
- Hands-on implementation across teams
- Procurement or vendor negotiations (unless your org combines roles)
Suggested interview format (90–120 minutes)
Recommended flow
- 15 min — background + scope (orgs, domains, scale)
- 20 min — governance & operating model
- 15 min — portfolio boundary strategy & contract standards
- 15 min — risk/compliance alignment and evidence model
- 25–35 min — enterprise exercise (guardrails + adoption plan)
- 10 min — metrics, outcomes, and failure modes
Optional pre-read / take-home
- Provide a portfolio overview (systems, teams, pain points)
- Ask for a 2–3 page memo: ALD adoption strategy + governance model
- Evaluate clarity, practicality, and measurable outcomes
Question bank
Choose 12–20 questions. Strong Level 5 candidates consistently reduce organizational friction while improving control, auditability, and delivery throughput.
1) Governance & operating model
- What should be centrally governed vs delegated to teams in an ALD organization? Why?
- How do you design an architecture review process that provides guardrails without becoming a bottleneck? (Look for: clear thresholds, self-service templates, “contract changes only” gates.)
- Describe an operating model you’ve used (or would design) to keep standards alive (councils, communities of practice, champions).
- How do you prevent “architecture theater” (documents that nobody uses) and keep architecture executable? (Look for: contracts/tests as proof, CI gates, reference implementations.)
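A strong answer to the "contract changes only" gate question can be made concrete in CI. The sketch below routes a change set to strict review only when it touches governed paths; the `contracts/` and `policies/` prefixes are illustrative assumptions, not a prescribed repository layout.

```python
# Sketch of a "contract changes only" review gate: send a change set to
# strict review only when it touches contract or policy files. The path
# prefixes are illustrative assumptions, not a prescribed layout.

STRICT_REVIEW_PREFIXES = ("contracts/", "policies/")

def needs_strict_review(changed_paths: list[str]) -> bool:
    """Return True if any changed file falls under a governed prefix."""
    return any(p.startswith(STRICT_REVIEW_PREFIXES) for p in changed_paths)
```

Everything outside the governed prefixes flows through ordinary team-level review, which is what keeps the central gate off the critical path.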
2) Portfolio boundary strategy
- How do you define bounded contexts and shared capabilities across an enterprise portfolio?
- How do you avoid the “shared model trap” while still enabling interoperability? (Look for: translation, canonical contracts only where justified, anti-corruption layers.)
- When do you mandate a platform capability versus letting teams build locally?
- How do you manage contracts across teams: ownership, documentation, versioning, and deprecation?
3) Contract standards & drift prevention
- What contract standards do you establish (naming, versioning, invariants, error semantics, evidence fields)?
- How do you treat changes to contract tests? Who approves and what evidence is required?
- What’s your approach to contract lifecycle management (new, active, deprecated, retired)?
- How do you prevent drift when many teams contribute and AI is generating code? (Look for: CI gates, contract tests, review on deltas, policy ownership.)
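The drift-prevention questions above can be grounded with a small contract-test sketch: encode a contract invariant so CI catches violations regardless of who, or what, generated the implementation. The decision schema below is a hypothetical example, not a real contract from any system.

```python
# Minimal contract-test sketch for a hypothetical "decision" capability:
# CI runs checks like this on every delta so drift is caught mechanically,
# including in AI-generated code. The schema is illustrative only.

REQUIRED_FIELDS = {"decision", "reason_code", "evaluated_at"}
ALLOWED_DECISIONS = {"approved", "denied", "referred"}

def check_decision_contract(response: dict) -> list[str]:
    """Return a list of contract violations (empty means compliant)."""
    violations = []
    missing = REQUIRED_FIELDS - response.keys()
    if missing:
        violations.append(f"missing fields: {sorted(missing)}")
    if response.get("decision") not in ALLOWED_DECISIONS:
        violations.append(f"invalid decision: {response.get('decision')!r}")
    return violations
```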
4) Risk posture & compliance alignment
- How would you align ALD with frameworks like ITIL and NIST while keeping delivery fast?
- What does “auditability by design” mean in your architecture standards? (Look for: reason codes, metrics, traceability, decision evidence.)
- How do you handle separation of duties and approvals for contract/policy changes?
- How do you balance security controls with developer experience?
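The separation-of-duties question has a simple mechanical core that strong candidates often articulate: an approval only counts if the approver is not the author and holds the governing role. The role name below (`contract-steward`) is a hypothetical label for illustration.

```python
# Separation-of-duties sketch: a contract/policy change needs approval
# from someone other than its author who holds the governing role.
# The "contract-steward" role name is a hypothetical assumption.

def approval_is_valid(author: str, approver: str, approver_roles: set[str]) -> bool:
    """An approval counts only if the approver differs from the author
    and holds the governing role for contract changes."""
    return approver != author and "contract-steward" in approver_roles
```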
5) AI policy & safe enablement
- Where do you allow AI generation, and where do you prohibit or constrain it? Why?
- What does an “AI-safe SDLC” look like under ALD (prompt standards, review gates, test requirements)?
- How do you ensure AI usage is auditable (what was generated, what was reviewed, what evidence was produced)?
- What training and templates do you provide so teams use AI as a force multiplier rather than a risk multiplier?
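The AI-auditability question above can be made concrete with a sketch of an audit record: what was generated, who reviewed it, and what evidence backed the approval. The field names are assumptions for illustration, not a standard; hashing the prompt and diff gives a tamper-evident reference without storing sensitive content in the log itself.

```python
import hashlib

# Sketch of an AI-usage audit record: hashes of the prompt and the
# generated diff, the human reviewer, and whether tests passed. Field
# names are illustrative assumptions, not a standard.

def audit_entry(prompt: str, generated_diff: str, reviewer: str,
                tests_passed: bool) -> dict:
    """Build a tamper-evident audit record for one AI-assisted change."""
    return {
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "diff_sha256": hashlib.sha256(generated_diff.encode()).hexdigest(),
        "reviewer": reviewer,
        "tests_passed": tests_passed,
    }
```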
6) Modernization & legacy transformation
- How would you apply ALD to modernize a legacy monolith without stopping feature delivery? (Look for: strangler patterns, contract extraction, adapter boundaries.)
- How do you manage contract evolution when legacy systems can’t change quickly?
- What’s your approach to reducing technical debt without large rewrites?
- How do you measure modernization progress in a way executives understand?
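A strangler-pattern answer to the modernization questions reduces to per-capability routing: traffic for a capability goes to the new contract-backed service once it has migrated, and to the legacy system otherwise, so migration proceeds capability by capability without a feature freeze. The capability names below are illustrative.

```python
# Strangler-pattern routing sketch: serve each capability from either
# the legacy system or the new contract-backed service, migrating one
# capability at a time. Names are illustrative assumptions.

MIGRATED = {"eligibility"}  # capabilities already served by the new system

def route(capability: str) -> str:
    """Return which backend should serve this capability."""
    return "new_service" if capability in MIGRATED else "legacy"
```

Flipping a capability into `MIGRATED` is the cutover; flipping it back out is the rollback, which keeps migration risk incremental.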
7) Metrics, outcomes, and failure modes
- What metrics would you use to prove ALD is improving outcomes? (Look for: speed, risk, quality, cost.)
- What are the top failure modes of ALD adoption, and how do you mitigate them? (Look for: contract sprawl, bottlenecks, weak tests, cultural resistance.)
- How do you detect and correct contract sprawl across a portfolio?
- How do you handle “policy fragmentation” when different teams implement similar rules inconsistently?
8) Executive communication
- How do you explain ALD to C-level leadership in terms of business outcomes and risk?
- How do you negotiate tradeoffs with stakeholders (delivery timelines vs governance needs)?
- What’s your approach to building a coalition of champions across engineering and security?
Enterprise exercise (guardrails + adoption plan)
Use this as a live exercise (25–35 minutes). The goal is to see how the candidate scales ALD across teams while maintaining control and reducing friction.
Scenario
- Multiple shared “decision” capabilities (eligibility, pricing, authorization)
- Legacy systems must remain operational during modernization
- Security/compliance requires audit evidence for policy changes
- Teams complain architecture reviews slow them down
Candidate deliverable
- Guardrails: what is centrally governed vs delegated
- Contract standards: naming, versioning, invariants, evidence fields
- Change governance: approval thresholds, deprecation policy, migration model
- AI policy: allowed usage, constraints, review gates, auditability
- Enablement: templates, reference implementations, training ladder
- Metrics: prove improvements in speed, risk, and cost
- Adoption plan: phased rollout, pilot domains, feedback loop
Evaluator notes (what “good” looks like)
- Clear review thresholds: only contracts/policy changes go through strict gates
- Self-service templates for teams to propose deltas and evidence
- Contract lifecycle management (versioning + deprecation)
- AI rules focused on preventing drift: tests-first, delta-only prompts, audit trails
- Operating model: champions/community + lightweight governance tooling
- Metrics tied to exec outcomes (time-to-market, incidents, audit findings, maintenance cost)
Scoring rubric (example)
Level 5 should demonstrate portfolio-level clarity, practical governance, and measurable outcomes.
| Category | 0 — Concern | 1 — Meets | 2 — Strong |
|---|---|---|---|
| Governance & operating model | Heavy process; bottlenecks | Some structure; limited enablement | Guardrails that increase throughput; self-service + clear thresholds |
| Portfolio boundary strategy | Unclear ownership; shared model sprawl | Reasonable separation | Clean boundaries, translation strategy, ownership model that scales |
| Contract standards & lifecycle | No versioning/deprecation model | Basic compatibility thinking | Strong lifecycle governance; drift prevention; measurable compliance |
| Risk/compliance alignment | Governance disconnected from frameworks | Basic alignment | Auditability by design with evidence model and separation of duties |
| AI policy & enablement | AI as authority; no audit plan | Constraints + reviews exist | Clear policy, auditability, templates, training; safe scale-out |
| Modernization strategy | Big-bang rewrite mindset | Incremental approach | Contract extraction + phased migration; continuous delivery maintained |
| Metrics & outcomes | No measurable success criteria | Some metrics | Exec-ready metrics tied to speed, risk, cost; feedback loops |
| Executive communication | Overly technical; not outcome-focused | Clear but limited | Consistently ties architecture to business outcomes and risk posture |
Hiring guidance
- Recommend hire: multiple 2s across governance, AI policy, lifecycle, and outcomes
- Borderline: mostly 1s; limited 2s; no critical 0s
- No hire: 0s in governance model, risk alignment, or AI auditability
Common red flags
- Architecture as centralized control rather than enablement
- No clear thresholds for when governance applies
- Shared model sprawl accepted as inevitable
- AI adoption without drift prevention or audit trails
- Modernization plan relies on big rewrites and freezes
- Cannot define success metrics executives will care about