# FortWin Execution Board

This file is the current implementation-priority board for FortWin. It is intentionally dependency-ordered, not aspiration-ordered. The guiding rule is:

> make critique, promotion, learning, and enforcement authoritative before making the models smarter

## Epic 1: Promotion and Learning Authority

These items turn learned critique into real platform state instead of transient inference output.

### 1. Emit `PromotionAssessment` and `PromotionCritique` for every pre-council candidate

Why this matters:

- establishes the dual-network boundary operationally
- gives the Generator structured corrective feedback
- creates the first authoritative training labels

### 2. Persist `PromotionAssessment` and `PromotionCritique` in the Control Plane event log and projections

Why this matters:

- makes assessment and critique authoritative, queryable state
- enables dashboard visibility, audit, and learning corpus creation
- prevents the assurance layer from becoming side-channel theater

### 3. Make Council consume `PromotionAssessment` after deterministic hard gates

Why this matters:

- ensures the assurance layer affects real promotion decisions
- keeps deterministic hard gates authoritative
- improves council choice quality without replacing rule-based safety

### 4. Write `CrossForgeLearningRecord` from compile, canary, rollback, and attestation outcomes

Why this matters:

- turns outcomes into training truth
- lets Generator and Promotion Assurance learn from actual success/failure
- creates the long-term corpus for stronger models

## Epic 2: Evidence-Driven Validation

These items improve promotion quality by strengthening evidence before activation.

### 5. Add replay-backed validation before council

Why this matters:

- promotion should learn from stronger evidence than heuristics alone
- improves both Generator and Promotion Assurance usefulness
- is one of the highest-leverage differentiators in the platform

### 6. Add sandboxed exploit/repair validation for high-risk artifact kinds

Priority focus:

- `BehaviorGuard`
- `AutomatedRepair`

Why this matters:

- provides stronger safety evidence before canary
- reduces unsafe promotions and rollback-heavy outcomes

### 7. Build the labeled training-data pipeline from trace/candidate/outcome history

Why this matters:

- turns platform history into usable training data
- is required before replacing current surrogate models with incident-trained ones

## Epic 3: Real Enforcement Adapters

These items turn artifact classes from architecture into actual host behavior.

### 8. Add per-host compatibility and admission checks

Why this matters:

- prevents invalid stage/canary attempts
- reduces noisy failures and false rollback pressure
- makes enforcer behavior feel materially more professional

### 9. Implement real native `ServiceHardening` adapters with snapshot, validation, health-check, and rollback discipline

Why this matters:

- delivers a high-value artifact kind with bounded risk
- makes self-healing infrastructure real instead of template-backed

### 10. Implement real native `NetworkContainment` adapters on Windows

Why this matters:

- gives immediate endpoint value
- makes containment artifacts genuinely enforceable on the host

### 11. Implement minimal real `BehaviorGuard` runtime enforcement

Why this matters:

- moves `BehaviorGuard` from strategy generation into real enforcement
- should begin with a narrow, typed rule family and strict rollback signals

### 12. Implement bounded `AutomatedRepair` execution

Why this matters:

- is a major differentiator
- must come only after validation, rollback, and compatibility checks are credible

## Epic 4: Trust and Delivery Hardening

These items make distribution and admission more production-real.

### 13. Strengthen feed trust policy

Include:

- key rotation
- trust-root management
- issuer-scoped admission
- expiry enforcement

Why this matters:

- tightens the artifact trust chain
- prevents feed/publisher drift from becoming a silent risk

### 14. Add separate Promotion Assurance inference service

Why this matters:

- keeps the Generator and Assurance boundaries clean
- prepares the architecture for independently trained/served models

### 15. Add incident-trained retrieval memory

Why this matters:

- improves grounded generation without requiring larger models first
- helps suppress rollback-heavy historical patterns

### 16. Replace surrogate ONNX models with incident-trained models in phases

Recommended phase order:

1. `interpret`
2. `rerank`
3. `ranker` / assurance scoring

Why this matters:

- should happen only after the data pipeline and assurance boundary are authoritative

## Epic 5: Observability and Durability

These items make the platform inspectable and resilient as it gets more autonomous.

### 17. Add dashboard pages for Assurance/Critique and Learning

Why this matters:

- lets operators inspect why candidates were revised, rejected, or promoted
- makes the new learned layer explainable

### 18. Add full end-to-end tests for each artifact kind through the full lifecycle

Target path: `generate -> assess -> council -> feed -> enforcer -> attest`

Why this matters:

- keeps the platform from regressing as enforcement and learning deepen

### 19. Add a long-run soak harness with typed-failure assertions

Why this matters:

- prevents silent degradation from creeping back in
- keeps long-run reliability measurable

### 20. Add operator controls for archive retention, incident replay, and artifact retirement

Why this matters:

- keeps historical state useful rather than noisy
- makes the live system easier to operate as volume grows

## Top 7 Next Build Sequence

This is the recommended implementation order for the next serious cycle:

1. Emit `PromotionAssessment` and `PromotionCritique`
2. Persist them in the Control Plane event log and projections
3. Make Council consume `PromotionAssessment` after hard gates
4. Write `CrossForgeLearningRecord` from real outcomes
5. Add replay-backed validation before council
6. Add per-host compatibility/admission checks
7. Implement real native `ServiceHardening` adapters with rollback discipline

## Single Highest-Leverage Item

The center of gravity is:

- authoritative persisted `PromotionAssessment`
- authoritative persisted `PromotionCritique`
- `CrossForgeLearningRecord`

That combination makes promotion measurable, makes critique inspectable, and turns operational outcomes into the learning substrate for both Generator and Promotion Assurance.

## Most Underrated Item

`Per-host compatibility/admission checks`

This prevents:

- invalid target-surface installs
- noisy canary failures
- false rollback pressure
- operator distrust

## Riskiest Item

`Bounded AutomatedRepair execution`

Do not rush this ahead of:

- replay validation
- sandbox validation
- rollback enforcement
- host compatibility checks
- stronger promotion evidence

## Execution Rule

Do not optimize for smarter models first. Optimize for:

- stronger authoritative learning signals
- better promotion evidence
- real enforcement adapters
- safer rollbacks
- inspectable critique and outcome state
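The decision ordering this rule implies (item 3 above: deterministic hard gates stay authoritative, and the learned assessment only routes among candidates that already passed them) can be sketched minimally. All names, types, and thresholds below are hypothetical illustrations, not FortWin's actual API:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    REJECT = "reject"    # failed a deterministic hard gate
    REVISE = "revise"    # passed gates, but assessment routes it back with critique
    PROMOTE = "promote"  # passed gates and cleared the learned score floor

@dataclass
class PromotionAssessment:  # hypothetical shape
    score: float            # learned confidence in [0, 1]
    critique: list          # structured corrective feedback for the Generator

def council_decide(candidate, assessment, hard_gates, score_floor=0.5):
    # Deterministic hard gates run first and remain authoritative:
    # a learned score can never override a gate failure.
    for gate in hard_gates:
        if not gate(candidate):
            return Verdict.REJECT
    # Only after every gate passes does the learned assessment
    # influence the choice, routing weak candidates to revision.
    if assessment.score < score_floor:
        return Verdict.REVISE
    return Verdict.PROMOTE
```

The point of the sketch is the ordering, not the scoring: replacing the score floor with a richer policy changes nothing about the rule that gates run first.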
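The "stronger authoritative learning signals" item — item 4's rule that real outcomes, not model predictions, become training truth — can likewise be sketched. Every field name and the append-only JSON-line framing here are hypothetical, not the platform's actual schema:

```python
from dataclasses import dataclass, asdict
import json

# The four outcome stages named in item 4 (hypothetical constant).
OUTCOME_STAGES = {"compile", "canary", "rollback", "attestation"}

@dataclass(frozen=True)
class CrossForgeLearningRecord:
    candidate_id: str
    stage: str        # which lifecycle stage produced this outcome
    succeeded: bool   # the actual result, never a model's prediction
    detail: str       # free-form context for later corpus building

def record_outcome(candidate_id, stage, succeeded, detail=""):
    """Serialize one real outcome as an append-only learning record."""
    if stage not in OUTCOME_STAGES:
        raise ValueError(f"unknown outcome stage: {stage}")
    rec = CrossForgeLearningRecord(candidate_id, stage, succeeded, detail)
    return json.dumps(asdict(rec))
```

Rejecting unknown stages at write time is the point: a learning corpus is only authoritative if every record maps to a lifecycle event that actually happened.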