The most dangerous thing an AI can do is act without permission.

Why approvals matter

AI systems that act autonomously create unpredictable risks. An email sent to the wrong person, a payment processed incorrectly, a database record updated with bad data — these aren't theoretical risks; they're daily realities of autonomous systems.

The approval model

Stewart separates thinking from doing. Analysis, drafting, planning, and organizing happen freely — these don't affect anything outside the system. External actions — sending, publishing, purchasing, updating — require your explicit approval every time.
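The thinking/doing split above can be sketched as a simple gate. This is a hypothetical illustration, not Stewart's actual implementation: the `Action` type, the verb categories, and the function names are all assumptions.

```python
from dataclasses import dataclass

# External verbs always require explicit approval; everything else
# (analysis, drafting, planning) runs freely. The verb list is an
# assumption for illustration, mirroring the examples in the text.
EXTERNAL_VERBS = {"send", "publish", "purchase", "update"}

@dataclass
class Action:
    verb: str          # e.g. "draft", "send"
    description: str   # human-readable summary shown at approval time

def requires_approval(action: Action) -> bool:
    """External actions need explicit user approval every time."""
    return action.verb in EXTERNAL_VERBS

def execute(action: Action, approved: bool = False) -> str:
    # Ask first, act second: block any external action that has not
    # been explicitly approved for this specific invocation.
    if requires_approval(action) and not approved:
        return f"BLOCKED: '{action.description}' needs your approval"
    return f"DONE: {action.description}"
```

Note that approval is per-invocation: passing `approved=True` once does not persist, so every external action prompts again by default.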

Building trust gradually

Over time, as you see consistent, reliable behavior, you may choose to pre-approve certain routine actions. But the default is always: ask first, act second. Trust is earned through demonstrated reliability, not assumed from the start.
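Gradual pre-approval can be sketched as an explicit allowlist the user opts into, one pattern at a time. The rule shape here (a verb plus a target prefix) is an assumption for illustration; the point is that nothing is auto-approved unless the user has deliberately added it.

```python
# Patterns the user has explicitly trusted; empty by default, so the
# starting posture is always "ask first, act second".
pre_approved: set[tuple[str, str]] = set()

def pre_approve(verb: str, target_prefix: str) -> None:
    """User opts a routine action pattern into auto-approval."""
    pre_approved.add((verb, target_prefix))

def needs_prompt(verb: str, target: str) -> bool:
    """Prompt unless this exact verb/target pattern was pre-approved."""
    return not any(
        verb == v and target.startswith(prefix)
        for v, prefix in pre_approved
    )
```

Because the allowlist starts empty and only grows through explicit `pre_approve` calls, trust accumulates from demonstrated reliability rather than being assumed up front.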