# FortWin Architecture README

This document is the as-built architecture map for the current repository state. It is intentionally narrower than the whitepaper and describes what is real in the repo today.

- Canonical vision/spec: [FORTWIN_AUTOIMMUNE_WHITEPAPER.md](FORTWIN_AUTOIMMUNE_WHITEPAPER.md)
- Repo entrypoint: [README.md](../README.md)
- Operations guide: [FORTWIN_MANUAL.md](FORTWIN_MANUAL.md)
- Execution board: [FORTWIN_EXECUTION_BOARD.md](FORTWIN_EXECUTION_BOARD.md)
- Attack-graph strategy spec: [ATTACK_GRAPH_STRATEGY_GENERATION_SPEC.md](ATTACK_GRAPH_STRATEGY_GENERATION_SPEC.md)

## Architecture Status

FortWin is currently a working single-node split-brain runtime with these implemented planes:

- Control Plane
- Forge Plane
- Forge GPU inference service
- Feed Plane
- Node Enforcer
- Unified Dashboard
- JSONL event store + JSON projection store

Current implemented loop:

`ingest -> classify -> matrix lookup -> retrieve -> generate -> compile -> validate -> promotion assurance -> submit -> council -> approve/stage/canary/promote -> control-plane signed export -> feed pull/verify -> enforcer admit/apply -> attestation -> learning`

Important truthfulness notes:

- Control Plane `POST /api/feed/export` is the authoritative signed export path.
- Feed is the read/distribution boundary; direct feed publish remains available but is not the authoritative promotion source.
- Forge now uses a real Exploit Classification Engine plus a Repair Artifact Matrix policy slice before generation.
- Council now emits phased `DefenseStrategyPackage` objects for the Redis-first attack-graph strategy slice.
- Platform mismatch is a hard boundary, not a soft preference.
- `BehaviorGuard` and `ServiceHardening` have real bounded execution paths.
- `NetworkContainment` and `AutomatedRepair` are still not host-native backends.
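The loop above is strictly ordered and fails closed: if any stage rejects a trace, no later stage runs. A minimal sketch of that contract — stage names mirror the loop, but the runner and gate shape are hypothetical, not the repo's implementation:

```python
# Illustrative only: models the implemented loop as an ordered, fail-closed
# stage chain. Stage names mirror the loop above; the runner is hypothetical.
from typing import Callable, Dict

STAGES = [
    "ingest", "classify", "matrix lookup", "retrieve", "generate",
    "compile", "validate", "promotion assurance", "submit", "council",
    "approve/stage/canary/promote", "control-plane signed export",
    "feed pull/verify", "enforcer admit/apply", "attestation", "learning",
]

def run_loop(trace: Dict, gates: Dict[str, Callable[[Dict], bool]]) -> str:
    """Walk the stages in order; stop at the first stage whose gate rejects."""
    for stage in STAGES:
        gate = gates.get(stage, lambda t: True)  # stages without a gate pass
        if not gate(trace):
            return f"blocked:{stage}"  # fail closed: later stages never run
    return "learning-recorded"

# Example: platform mismatch is a hard boundary, enforced at validation time.
trace = {"platform": "windows", "candidate_platform": "linux"}
gates = {"validate": lambda t: t["platform"] == t["candidate_platform"]}
print(run_loop(trace, gates))  # -> blocked:validate
```

The point of the sketch is the ordering guarantee: a candidate that fails validation never reaches council, export, or the enforcer.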
## Service Topology

### Control Plane

`src/FortWin.ControlPlane`

Responsibilities:

- accepts operator commands through `CommandEnvelope`
- owns authoritative promotion state
- owns authoritative signed feed export
- evaluates promotion readiness and lane selection
- evaluates strategy candidates and emits `DefenseStrategyPackage` objects
- enforces emergency override, safety-budget, and compatibility gates
- persists runtime artifact state and blockers
- persists promotion assessments, critiques, and learning records
- exposes pattern memory and retrieval APIs
- emits incident timeline data by `trace_id`
- archives old runtime artifacts out of the live runtime projection

Implemented command set:

- `Approve`
- `Stage`
- `Canary`
- `Promote`
- `Rollback`
- `Suppress`
- `SetAutonomyLevel`
- `SetEmergencyOverride`

Implemented control-plane HTTP surface:

- `GET /health`
- `GET /api/state`
- `GET /api/state/summary`
- `GET /api/runtime/artifacts?page=&pageSize=`
- `GET /api/runtime/archive`
- `GET /api/runtime/archive/items?page=&pageSize=`
- `GET /api/events`
- `POST /api/commands`
- `GET /api/artifacts`
- `GET /api/host-facts`
- `GET /api/artifacts/{artifactId}/compatibility`
- `POST /api/artifacts/register`
- `POST /api/feed/build`
- `POST /api/feed/export`
- `POST /api/feed/verify`
- `POST /api/strategy/evaluate`
- `POST /api/strategy/assurance/record`
- `GET /api/strategy/verdicts`
- `GET /api/strategy/packages`
- `GET /api/strategy/assessments`
- `GET /api/strategy/critiques`
- `GET /api/strategy/learning-records`
- `GET /api/memory/patterns`
- `POST /api/memory/retrieve`
- `GET /api/incidents/{traceId}`

### Forge Plane

`src/FortWin.Forge`

Responsibilities:

- persists per-trace pipeline state
- builds `TraceSummary` from `AttackGraph`
- classifies traces into `ExploitClassification`
- resolves a `RepairArtifactMatrixRow` before generation
- builds `ForgeContext` and routes generation using platform, service, phase, and safety tier
- retrieves structured memory and historical outcome patterns
- generates multiple repair intents and candidates inside matrix policy boundaries
- generates Redis-first phased strategy candidates from attack-graph paths
- compiles candidates into bounded artifact sets
- scores candidates deterministically, with optional neural assistance
- validates candidates before council and runs promotion assurance
- exposes pipeline APIs for ingest/generate/validate/submit

Implemented forge HTTP surface:

- `GET /health`
- `POST /api/pipeline/ingest`
- `POST /api/pipeline/generate/{traceId}`
- `POST /api/pipeline/validate/{traceId}`
- `POST /api/pipeline/submit/{traceId}`
- `GET /api/pipeline/{traceId}`
- `GET /api/pipeline`
- `GET /api/forge/neural/status`
- `GET /api/forge/neural/telemetry`

### Forge GPU Service

`src/FortWin.ForgeGpu`

Responsibilities:

- exposes the remote ONNX inference boundary
- performs bounded attack interpretation
- performs pattern reranking
- performs candidate ranking
- performs remote promotion assurance assessment
- reports model-by-model readiness and provider state

Implemented GPU service HTTP surface:

- `GET /health`
- `GET /api/neural/status`
- `GET /api/neural/service-status`
- `POST /api/neural/interpret`
- `POST /api/neural/rerank-patterns`
- `POST /api/neural/rank`
- `POST /api/neural/assess-promotion`

### Feed Plane

`src/FortWin.Feed`

Responsibilities:

- serves current signed feeds
- serves platform-partitioned feed views
- lists and retrieves historical envelopes
- verifies feed signatures
- acts as the read/distribution boundary for nodes

Implemented feed HTTP surface:

- `GET /health`
- `GET /api/feed/current`
- `GET /api/feed/list`
- `GET /api/feed/{feedId}`
- `POST /api/feed/publish`
- `POST /api/feed/verify`
- `GET /api/feed/pull`

### Node Enforcer

`src/FortWin.Enforcer`

Responsibilities:

- loads a signed feed file
- verifies feed signature and issuer key
- prefers platform-partitioned feed paths when present
- enforces admission policy and capability allowlists
- stages, canaries, activates, or rolls back artifacts
- writes attestation records and runtime summaries
- runs the persistent `BehaviorGuard` user-mode guard agent

Current implementation model:

- `BehaviorGuard` has a real bounded user-mode runtime backend
- `ServiceHardening` has a real bounded execution path with snapshot, validation, health checks, and rollback
- `NetworkContainment` is still not a host-native backend
- `AutomatedRepair` is bounded but not yet a mature real remediation backend
- feed processing is still a transitional batch-oriented node path, with a persistent guard-agent sidecar for behavior rules

### Unified Dashboard

`src/FortWin.Dashboard`

Responsibilities:

- shows service health
- shows control-plane summary and feed/enforcer state
- exposes operator commands
- exposes incident reconstruction by `trace_id`
- shows forge/GPU readiness and inference telemetry
- shows paged runtime, archive, event, and pipeline views
- downloads the authoritative signed feed through Control Plane export

Implemented dashboard tabs:

- Overview
- Promotion Throughput
- Forges
- Events
- History
- Incidents
- Commands

Implemented dashboard HTTP surface:

- `GET /health`
- `GET /api/dashboard/overview`
- `GET /api/dashboard/runtime-artifacts?page=&pageSize=`
- `GET /api/dashboard/runtime-archive?page=&pageSize=`
- `GET /api/dashboard/recent-events?page=&pageSize=`
- `GET /api/dashboard/pipelines?page=&pageSize=`
- `POST /api/dashboard/commands`
- `GET /api/dashboard/incidents/{traceId}`
- `GET /api/dashboard/export/promoted`

## Exploit Classification and Matrix Routing

FortWin now includes the first authoritative slice of the Exploit Classification Engine and the FortWin Repair Artifact Matrix.
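Matrix rows are keyed by `platform.service.phase.pattern`, with per-platform generic and fallback rows. A minimal sketch of that routing shape — the row ids come from the starter set listed in this section, but the exact fallback order here (exact row, then generic containment, then platform fallback) is an illustrative assumption, not the engine's actual lookup logic:

```python
# Illustrative sketch of matrix-row resolution. Row ids are the starter rows
# listed in this section; the fallback order is an assumption.
STARTER_ROWS = {
    "windows.redis.execution.child_shell",
    "linux.redis.execution.child_shell",
    "linux.ssh.access.brute_force",
    "windows.web_server.execution.payload_drop",
    "windows.generic.execution.containment",
    "linux.generic.execution.containment",
    "windows.unknown.unknown.fallback",
    "linux.unknown.unknown.fallback",
}

def resolve_matrix_row(platform: str, service: str, phase: str, pattern: str) -> str:
    """Try the exact row, then generic execution containment, then the
    platform-level fallback row. Platform never crosses over."""
    for row_id in (
        f"{platform}.{service}.{phase}.{pattern}",
        f"{platform}.generic.execution.containment",
        f"{platform}.unknown.unknown.fallback",
    ):
        if row_id in STARTER_ROWS:
            return row_id
    raise LookupError(f"no matrix row for platform '{platform}'")

print(resolve_matrix_row("windows", "redis", "execution", "child_shell"))
# -> windows.redis.execution.child_shell
print(resolve_matrix_row("linux", "nginx", "execution", "payload_drop"))
# -> linux.generic.execution.containment
```

Note that every fallback stays inside the caller's platform: there is no path from a Windows trace to a `linux.*` row, which is the same hard boundary the platform isolation model enforces.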
Shared contracts:

- `ExploitClassification`
- `RepairArtifactMatrixRow`
- `MatrixPromotionPolicy`
- `ArtifactPreference`
- `ExploitFamily`
- `SafetyTier`

Current flow:

`AttackGraph -> TraceSummary -> ExploitClassification -> RepairArtifactMatrix row -> ForgeContext -> ForgeExecutionPlan -> RepairIntent -> RepairCandidate`

Current implemented classifier behavior:

- uses host facts as authoritative for platform family
- uses attack graph phase as authoritative for attack phase
- classifies by behavioral signals, not CVE names
- resolves service family from graph and summary evidence
- returns a `matrix_row_id` for generation and validation

Current implemented exploit-family starter set:

- `RedisChildShellExecution`
- `SshBruteForce`
- `ExposedServiceRecon`
- `DroppedExecutablePayload`
- `WebRcePayloadDrop`
- `SuspiciousOutboundBeacon`
- `CredentialReuseLateralMovement`
- `GenericExecutionContainment`
- `Unknown`

Current implemented matrix starter rows:

- `windows.redis.execution.child_shell`
- `linux.redis.execution.child_shell`
- `linux.ssh.access.brute_force`
- `windows.web_server.execution.payload_drop`
- `windows.generic.execution.containment`
- `linux.generic.execution.containment`
- `windows.unknown.unknown.fallback`
- `linux.unknown.unknown.fallback`

What the matrix currently constrains:

- allowed artifact kinds by profile
- allowed control families
- allowed and forbidden target surfaces
- required validations
- safety tier and promotion policy metadata

What the matrix does not do yet:

- load rows from an external policy store
- cover a broad production exploit-family library
- drive every control-plane approval rule directly

## Platform Isolation Model

Platform is treated as a first-class constraint.

Current enforced boundaries:

1. host facts and trace normalization tag platform and OS family
2. `ForgeContext` carries platform, service, exploit family, and matrix row
3. generator/compiler constrain the candidate space using the matrix
4. forge validation hard-fails `CandidatePlatformMismatch`
5. compatibility/admission rejects incompatible artifacts before stage
6. Control Plane exports platform-partitioned feeds
7. Enforcer prefers platform-partitioned feed files and revalidates compatibility locally

Current result:

- a Windows Redis execution trace routes to Windows containment/hardening policy
- Linux-only surfaces such as `xdp_ingress`, `iptables`, and `systemd` are blocked for Windows candidates
- manual cross-platform artifacts can still be registered, but they are rejected by compatibility/admission before promotion

## Trust and Signing

The current signer implementation is in `src/FortWin.Security/ArtifactSigner.cs`.

Actual behavior:

- the class name still carries legacy naming
- the implementation is asymmetric Ed25519 signing with NSec
- keys are persisted under `data/runtime/keys/signing-keys.json`
- Control Plane export is the authoritative signing path
- feed envelopes include:
  - `schemaVersion`
  - `keyId`
  - `signatureAlgorithm`
  - `signature`
  - trust metadata such as platform family

## Persistence Model

### Event Store

`src/FortWin.Eventing`

Implemented storage:

- append-only JSONL event files under `data/events`
- cross-process append lock for multi-service safety
- paged reads for dashboard/event inspection

### Projection Store

Implemented JSON projections under `data/projections`:

- `runtime-state.json`
- `artifact-registry.json`
- `strategy-council.json`
- `runtime-artifact-archive.json`
- promotion assurance and memory-related projections

### Feed Files

Implemented feed storage under `data/feed`:

- `current-feed.json`
- `envelopes/*.json`
- `windows/current-feed.json`
- additional platform partitions as generated

### Runtime Files

Implemented runtime files under `data/runtime`:

- forge pipeline state under `forge-pipelines/*.json`
- `enforcer-last-run.json`
- `attestations-*.jsonl`
- `behavior-guards/agent-state.json`
- `service-hardening/snapshots/*`
- `adapters/*.json`
- `keys/signing-keys.json`

## Control Plane State Model

Authoritative runtime state is `ControlPlaneRuntimeState`. It currently owns:

- `AutonomyLevel`
- `EmergencyOverride`
- `SafetyBudget`
- `ProcessedCommandIds`
- `Artifacts`
- `Version`
- `UpdatedUtc`

Per-artifact runtime state tracks:

- `ArtifactId`
- `State`
- `Lane`
- `PrimaryBlocker`
- `SecondaryBlockers`
- `NextStep`
- `Readiness`
- `Health`
- `LastCommandId`
- `LastCommandType`
- `LastReason`
- `Version`
- `UpdatedUtc`

Operational single-node states in real use:

- `Candidate`
- `PromotionEligible`
- `Staged`
- `CanaryActive`
- `LocallyActive`
- `VerificationFailed`
- `PromotionBlocked`
- `RolledBack`
- `Retired`
- `Suppressed`

Reserved legacy/future contract states still exist in the shared enum but are not part of the active single-node runtime:

- `ShadowPending`
- `ShadowComplete`
- `VerificationPass`
- `ArtifactBuildPending`
- `FederationAdvertised`
- `FleetEligible`
- `CanaryFailed`

### Readiness and Lane Selection

Current readiness behavior:

- `Approve` evaluates promotion readiness and compatibility
- target lane is persisted as `Policy` or `Repair`
- missing registry artifacts, unsupported surfaces, and incomplete rollback plans fail closed
- matrix and platform mismatches are treated as blockers, not suggestions

### Safety Gating

Current safety gating checks:

- emergency override blocks `Stage`, `Canary`, and `Promote`
- canary concurrency budget
- local activation budget
- promotion-per-hour budget
- rollback-pressure budget
- host compatibility and runtime health gates

### Runtime Artifact Retention

Current maintenance service:

- archives old terminal runtime artifacts
- archives deterministic compatibility failures out of live rollback pressure
- archives orphaned active artifacts that no longer exist in the registry
- keeps a bounded live runtime set for dashboard/operator use

## Forge As Implemented

The Forge is a bounded reasoning and compilation stack, not an unconstrained model wrapper.
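"Bounded" is concrete in the scoring stage: the scorecard is deterministic, and any neural contribution is clamped to a small adjustment rather than allowed to override it. A minimal sketch of that shape — the weights, field names, and clamp bound are illustrative assumptions, not the repo's actual scorer:

```python
# Illustrative sketch of "deterministic score, bounded neural adjustment".
# Weights, field names, and the 0.1 clamp bound are assumptions.
from typing import Optional

def score_candidate(scorecard: dict, neural_delta: Optional[float] = None,
                    max_adjustment: float = 0.1) -> float:
    """Deterministic weighted scorecard; a neural delta may nudge the result,
    but only within +/- max_adjustment, so the model cannot override policy."""
    base = (0.5 * scorecard["containment"]
            + 0.3 * scorecard["safety"]
            + 0.2 * scorecard["rollback_confidence"])
    if neural_delta is not None:
        base += max(-max_adjustment, min(max_adjustment, neural_delta))
    return max(0.0, min(1.0, base))

card = {"containment": 0.9, "safety": 0.8, "rollback_confidence": 1.0}
print(score_candidate(card))                    # deterministic score
print(score_candidate(card, neural_delta=5.0))  # delta clamped to +0.1
```

The design point is that the Forge stays operational and its ranking stays explainable when remote inference is unavailable: the neural term is an optional, tightly bounded nudge on top of a score that can always be computed deterministically.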
Implemented stages in `src/FortWin.Forge/AttackGraphStrategyEngine.cs`:

1. `TraceUnderstandingEngine`
2. `ExploitClassificationEngine`
3. `RepairArtifactMatrix` lookup through `ForgeContextBuilder`
4. `AttackStateModeler`
5. `PatternMemoryService` retrieval bundle
6. `EventOutcomeRetriever`
7. `RepairHypothesisGenerator`
8. `ConstrainedArtifactCompiler`
9. `ForgeCandidateScorer`
10. `PromotionAssurance`

### What the stages actually do today

#### TraceUnderstandingEngine

- builds `TraceSummary`
- uses heuristic attack interpretation by default
- can use remote interpretation through the GPU service boundary

#### ExploitClassificationEngine

- classifies exploit family using behavioral signals
- uses host facts as authoritative for platform family
- resolves service family and attack phase
- returns `matrix_row_id`

#### ForgeContextBuilder and Router

- build `ForgeContext` with classification and matrix row
- inject `exploit_family`, `matrix_row_id`, `safety_tier`, and required validations into constraints
- route profile families using phase, scenario, platform, and safety tier

#### AttackStateModeler

- turns graph edges into `AttackTransition` objects
- marks choke points heuristically

#### PatternMemoryService / EventOutcomeRetriever

- retrieve structured memory and event-history priors
- filter memory by platform/service/phase
- rerank retrieved patterns through the GPU service when enabled

#### RepairHypothesisGenerator

- emits profile-specific intents inside matrix policy boundaries
- includes scenario-specific intent shaping for Redis execution containment
- carries matrix and exploit-family metadata into constraints

#### ConstrainedArtifactCompiler

- validates `RepairIntent`
- applies matrix control-family checks
- applies matrix required-validation checks
- applies matrix artifact-kind and surface checks
- selects matching registered artifacts when available
- generates template artifacts when needed
- enforces per-kind action allowlists
- enforces rollback presence
- estimates performance cost heuristically

#### ForgeCandidateScorer

- computes deterministic scorecards
- optionally applies bounded neural score adjustment
- records per-request inference telemetry

### Validation Reality

The current forge validation path is implemented, but it is lighter than the whitepaper target.

Implemented today:

- intent validation
- matrix validation
- candidate action allowlist validation
- risk budget checks
- rollback contract presence checks
- deterministic performance heuristics
- heuristic replay coverage before council
- promotion assurance before council
- hard-gate filtering before council

Not implemented today:

- true symbolic execution
- exploit replay harness
- sandbox exploit/repair validation as an authoritative gate
- live differential execution harness
- exploit-variant fuzzing

## Forge Manager

`src/FortWin.Forge/ForgeManager.cs`

Implemented responsibilities:

- builds `ForgeContext`
- resolves `ExploitClassification`
- resolves the active matrix row
- selects `ForgeQualityProfile`
- routes to profile families
- normalizes strategy candidates
- applies a deterministic pre-council filter
- records typed rejection reasons

Persisted/generated manager artifacts:

- `forgeContext`
- `executionPlan`
- `preCouncilCandidates`

## GPU Inference Boundary

Current scope of the GPU service boundary:

- attack interpretation
- pattern reranking
- candidate ranking
- promotion assurance assessment

Current behavior:

- the launcher prefers GPU service startup
- repo-local models are bootstrapped when missing
- the runtime tries DirectML first when configured
- if a provider is unavailable, the service can fall back to CPU for individual model roles
- the Forge remains operational if remote inference is unavailable

Current model state:

- models under `models/` are trained local surrogate ONNX models
- they are real trained ONNX assets
- they are not yet incident-trained production models

## Enforcer As Implemented

The enforcer is partially real and partially transitional, depending on artifact kind.

Implemented admission checks:

- signature valid
- artifact not expired
- risk score within policy
- manual approval policy
- capability allowlist
- action-type allowlist per artifact kind
- host/platform/surface compatibility

Implemented lifecycle:

- `Staged`
- `CanaryActive`
- `LocallyActive`
- `RolledBack`

Implemented artifact-kind reality:

- `BehaviorGuard`: real bounded user-mode runtime guard backend, with persistent guard-agent state under `data/runtime/behavior-guards`
- `ServiceHardening`: real bounded snapshot/validate/restart/health-check/rollback path
- `NetworkContainment`: still not a native backend
- `AutomatedRepair`: still not a mature real remediation backend

This is enough for a real single-node control loop on the currently implemented kinds, but it is not yet a full native endpoint enforcement stack.

## Dashboard As Implemented

The dashboard is operationally useful and now more truthful about current state.

Current behavior:

- `/api/dashboard/overview` returns summary, service health, feed/enforcer status, and inference status
- runtime artifacts, events, archive items, and pipelines are loaded through separate paged endpoints
- throughput/state panels hide reserved states when they are zero
- the Forges tab shows:
  - forge neural status
  - GPU service readiness
  - model-specific readiness
  - inference telemetry summaries
  - recent inference request records

Current executable behavior:

- the dashboard resolves `wwwroot` correctly when launched as a compiled `.exe`
- it no longer depends on the `dotnet run` layout to serve static assets

## Executable Behavior

Current Windows executable support exists for:

- Control Plane
- Forge
- ForgeGpu
- Feed
- Dashboard
- Enforcer

Supporting scripts:

- `scripts/publish-executables.ps1`
- `scripts/install-enforcer-node.ps1`
- `scripts/uninstall-enforcer-node.ps1`
- `scripts/start-forge-dashboard-enforcer.ps1`
- `scripts/stop-forge-dashboard-enforcer.ps1`

Recent runtime hardening:

- direct executable data-root resolution finds the repo `data` directory correctly
- the dashboard executable resolves its static content root correctly
- DevLab runtime tuning is applied for the current workstation profile

## Verification Scripts

Current useful verification scripts:

- `scripts/verify-phase1.ps1`
- `scripts/verify-sprint1.ps1`
- `scripts/verify-strategy-council.ps1`
- `scripts/verify-production-loop.ps1`
- `scripts/verify-forge-neural.ps1`
- `scripts/verify-forge-gpu-service.ps1`
- `scripts/verify-gpu-models.ps1`
- `scripts/verify-controlplane-paging.ps1`
- `scripts/verify-dashboard-paging.ps1`
- `scripts/verify-dual-net-memory-e2e.ps1`
- `scripts/verify-live-redis-loop.ps1`

## Known Current Limitations

These are real limitations in the current app:

- Forge ONNX models are trained local surrogate models, not production incident-trained models
- Forge validation is still heuristic and deterministic; replay is coverage-based, not a real exploit replay harness
- matrix coverage is still a starter set, not a broad policy corpus
- there is one Forge service with platform namespaces and hard gates, not separate `Forge.Windows` and `Forge.Linux` services
- `NetworkContainment` and `AutomatedRepair` are still not host-native execution backends
- the signer implementation is asymmetric Ed25519, but the signer class still carries legacy naming

## Practical Summary

What the app is today:

- a working single-node autonomous defense control loop
- a policy-guided Forge with exploit classification and matrix-constrained generation
- a platform-isolated signed feed/export path
- a mixed enforcer with real `BehaviorGuard` and `ServiceHardening` execution paths
- a dashboard with paged operational views and truthful inference/runtime summaries

What it is not yet:

- a full replay/symbolic validation platform
- a fully native endpoint enforcement agent across all artifact kinds
- a production-trained model stack fed by real incident corpora
- a fully externalized enterprise policy engine for the repair matrix
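The admission checks listed under "Enforcer As Implemented" can be read the same way as the rest of the control loop: an ordered, fail-closed chain in which the first failing check rejects the artifact. A minimal sketch — check names mirror that list, while the artifact shape, policy thresholds, and the `signature_ok` stand-in are illustrative assumptions:

```python
import time

# Illustrative sketch of the enforcer's fail-closed admission chain. Check
# names mirror the "Implemented admission checks" list; the artifact shape
# and policy values are assumptions.
POLICY = {"max_risk": 0.4,
          "capabilities": {"block_child_process", "restart_service"}}

def admit(artifact: dict, signature_ok: bool, host_platform: str):
    """Return (admitted, reason); the first failing check stops evaluation."""
    checks = [
        ("signature", lambda: signature_ok),
        ("expiry", lambda: artifact["expires_utc"] > time.time()),
        ("risk_budget", lambda: artifact["risk_score"] <= POLICY["max_risk"]),
        ("capability_allowlist",
         lambda: set(artifact["capabilities"]) <= POLICY["capabilities"]),
        ("platform_compatibility",
         lambda: artifact["platform"] == host_platform),
    ]
    for name, check in checks:
        if not check():
            return False, f"rejected:{name}"  # fail closed
    return True, "staged"

art = {"expires_utc": time.time() + 3600, "risk_score": 0.2,
       "capabilities": ["block_child_process"], "platform": "windows"}
print(admit(art, signature_ok=True, host_platform="windows"))
# -> (True, 'staged')
print(admit(art, signature_ok=True, host_platform="linux"))
# -> (False, 'rejected:platform_compatibility')
```

Even a validly signed, low-risk artifact is rejected at the node if its platform does not match the host, which is the local half of the platform isolation boundary described earlier.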