I use AI heavily for architecture reviews, but I never treat its output as final truth. This is the checklist I run before shipping a design.

1) Problem clarity

  • Is the problem statement clear enough that two engineers would interpret it the same way?
  • Are scale assumptions explicit (QPS, data size, retention, latency)?
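Making scale assumptions explicit is cheapest as back-of-envelope math. A minimal sketch, with every number a hypothetical placeholder you would replace with your own assumptions:

```python
# Back-of-envelope capacity math for a hypothetical event-ingestion service.
# All numbers are illustrative assumptions, not recommendations.
peak_qps = 2_000          # assumed peak write rate
event_bytes = 1_200       # assumed average event size
retention_days = 90       # assumed retention window

daily_events = peak_qps * 86_400                       # seconds per day
daily_bytes = daily_events * event_bytes
retained_tb = daily_bytes * retention_days / 1e12      # decimal terabytes

print(f"{daily_events:,} events/day, ~{retained_tb:.1f} TB retained")
# → 172,800,000 events/day, ~18.7 TB retained
```

Writing the arithmetic down forces the QPS, size, and retention assumptions onto the page where a reviewer can challenge them.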

2) Tradeoff map

  • Did I identify the strongest alternative design?
  • Did I document failure modes and recovery plans?

3) Data model pressure test

  • Are write/read patterns clear?
  • Is schema evolution safe for the next 12 months?
  • Is there an archive strategy?
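The schema-evolution question can be partially automated: check that a proposed version only makes additive, optional changes. A minimal sketch assuming a toy schema representation (field name to type string, with `?` marking nullable); real systems would use their schema registry's compatibility rules instead:

```python
# Minimal sketch: verify a new schema version is backward compatible
# with the old one (additive-only changes). Schema format is hypothetical:
# a dict of field name -> type string, "?" suffix meaning nullable.
def is_backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    # Every existing field must survive with the same type...
    for name, ftype in old_fields.items():
        if new_fields.get(name) != ftype:
            return False
    # ...and any added field must be nullable so existing writers still work.
    added = set(new_fields) - set(old_fields)
    return all(new_fields[name].endswith("?") for name in added)

v1 = {"id": "int", "created_at": "timestamp"}
v2 = {"id": "int", "created_at": "timestamp", "source": "string?"}
print(is_backward_compatible(v1, v2))  # → True
```

A check like this can run in CI against every migration, which is what makes "safe for the next 12 months" an enforced property rather than a hope.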

4) Operational readiness

  • Are alerting and SLO signals defined?
  • Is a runbook written for common incidents?
  • Are cost guardrails set?
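For the SLO signals, one concrete artifact worth producing at review time is the paging rule itself. A minimal sketch of a burn-rate check for an assumed 99.9% availability SLO; the 14.4x threshold follows the common multiwindow burn-rate pattern, and all numbers are assumptions:

```python
# Minimal sketch of a multiwindow burn-rate check for an availability SLO.
# SLO target and thresholds are illustrative assumptions.
SLO = 0.999
ERROR_BUDGET = 1 - SLO  # fraction of requests allowed to fail

def burn_rate(error_ratio: float) -> float:
    # How fast the error budget is being consumed relative to plan.
    return error_ratio / ERROR_BUDGET

def should_page(short_window_errors: float, long_window_errors: float) -> bool:
    # Require both a short and a long window to burn fast,
    # which filters out brief blips that self-resolve.
    return (burn_rate(short_window_errors) > 14.4
            and burn_rate(long_window_errors) > 14.4)

print(should_page(0.02, 0.016))  # → True: both windows burn 16-20x budget
```

If the review can't state what error ratio pages a human, the alerting box isn't really checked.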

5) AI output verification

  • Validate against official documentation.
  • Validate with a small runnable test.
  • Confirm assumptions against a production-like data shape.
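"A small runnable test" can be only a few lines. As a sketch, suppose an AI review claims Python's `sorted()` is stable (it is, but the point is to check rather than trust); the claim is testable in seconds:

```python
# A small runnable check of an AI-suggested claim before relying on it.
# Claim under test: Python's sorted() is stable, so records with equal
# keys keep their original relative order.
records = [("b", 2), ("a", 1), ("c", 2), ("d", 1)]
by_value = sorted(records, key=lambda r: r[1])

# Equal keys preserve input order: ("a",1) before ("d",1), ("b",2) before ("c",2).
print(by_value)  # → [('a', 1), ('d', 1), ('b', 2), ('c', 2)]
```

The habit matters more than the example: every load-bearing claim in an AI-generated design should be reducible to a check this cheap, or traceable to official documentation.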

AI is a force multiplier. Verification is still your job.