Spin up several LLM agents (ChatGPT, Claude, local models) that share one repo, chat in real time, and stream patches back to Cursor or VS Code while you act as PM.
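The flow above can be sketched as a minimal orchestration loop. Everything here is hypothetical scaffolding, not the project's actual API: each "agent" is a stub standing in for a real LLM call, the repo is an in-memory dict, and `pm_review` is where you, as PM, would approve or reject each streamed patch.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    agent: str      # which agent proposed this change
    path: str       # file in the shared repo
    new_text: str   # proposed file contents
    note: str       # the agent's chat message explaining the change

# Shared repo: path -> file contents (stand-in for a real git checkout)
repo = {"app.py": "print('hello')\n"}

# Stub agents; a real implementation would call an LLM API here
def claude_stub(files):
    return Patch("claude", "app.py", "print('hello, world')\n", "polish greeting")

def local_model_stub(files):
    return Patch("local", "README.md", "# Demo\n", "add a README")

def pm_review(patch):
    # You act as PM: approve or reject each patch; auto-approve in this sketch
    return True

def run_round(agents, repo):
    chat_log = []
    for agent in agents:
        patch = agent(repo)
        chat_log.append(f"[{patch.agent}] {patch.note}")
        if pm_review(patch):
            repo[patch.path] = patch.new_text  # apply the approved patch
    return chat_log

chat_log = run_round([claude_stub, local_model_stub], repo)
```

In a real setup, the approved patches would be written to the working tree so Cursor or VS Code picks them up, and the chat log would stream to the editor in real time.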