Deployment Model
Treat this page as the Harbor install starting point: run Harbor locally first, keep credentials local to the Harbor Node, then layer cloud or hub features on top.
Philosophy
Harbor should feel like an appliance:
- install
- open a local port
- configure connectors
- approve policy
- let agents work safely
For the current first public-host deployment target, see docs/BREAKWATERHARBOR_NET_DEPLOYMENT.md.
The repo also now includes:
- infra/env/breakwaterharbor.production.env.example
- scripts/redeploy-breakwaterharbor.sh
The public-service compose files now support loopback-only deployment directly through bind-prefix env vars such as:
- WEB_BIND_PREFIX
- DOCS_BIND_PREFIX
- ADMIN_BIND_PREFIX
- HUB_BIND_PREFIX
- CLOUD_API_BIND_PREFIX
- POSTGRES_BIND_PREFIX
- REDIS_BIND_PREFIX
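As a sketch, a loopback-only env fragment might look like the following. The variable names come from this page, but the `127.0.0.1:` value form is an assumption about how the compose files interpolate these prefixes into `ports:` mappings.

```shell
# Hypothetical loopback-only values for the bind-prefix vars named above.
# Assumption: compose interpolates these as "${WEB_BIND_PREFIX}8080:8080"-style
# port mappings, so each value is an address followed by ":".
cat > /tmp/harbor-bind.env <<'EOF'
WEB_BIND_PREFIX=127.0.0.1:
DOCS_BIND_PREFIX=127.0.0.1:
ADMIN_BIND_PREFIX=127.0.0.1:
HUB_BIND_PREFIX=127.0.0.1:
CLOUD_API_BIND_PREFIX=127.0.0.1:
POSTGRES_BIND_PREFIX=127.0.0.1:
REDIS_BIND_PREFIX=127.0.0.1:
EOF
# count how many services are pinned to loopback
LOOPBACK_COUNT=$(grep -c '=127\.0\.0\.1:$' /tmp/harbor-bind.env)
echo "$LOOPBACK_COUNT loopback-bound services"
```

Switching a value to `0.0.0.0:` (or a LAN address) would expose that one service without touching the compose files themselves.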
Default deployment types
1. Local edge node
- Raspberry Pi 4/5
- mini PC
- Jetson
- small Linux server
- Windows or Linux host running Docker
2. BYO VPS
Optional cloud-assist deployment for premium/fleet services.
3. Hybrid
Local Harbor Node + cloud coordination later.
Packaging recommendation
MVP default
Docker Compose first
Reason:
- reproducible
- cross-device
- easier updates
- easier support
- same shape on ARM64 and AMD64
Later
Optional native installer for constrained environments.
Current Docker stack split
Current compose grouping is:
- harbor-node: harbor-node-api, harbor-ui
- harbor-cloud: cloud-api, postgres, redis
- harbor-hub: dock (SQLite volume-backed Hub catalog data)
- harbor-web: web, admin, docs
Database and Redis guidance
Use Postgres and Redis where there is real multi-user server-side state, queueing, or cache pressure.
That means:
- Cloud should keep Postgres and Redis.
- Cloud premium state should persist in Postgres, including members, organizations, node enrollments, notifications, license overrides, support notes, and cloud sessions.
- Cloud premium activity history should also persist in Postgres as metadata-only event records. Do not store connector secrets, raw request bodies, message content, or hidden auth in this layer.
- Hub should stay on SQLite for now because it is a single-service metadata catalog with local durable storage.
- Harbor Node should stay on SQLite for now because it is the local-first runtime authority and its state belongs with the node.
- Website, Admin, and Docs should stay stateless for now and avoid a database or Redis until one of those apps truly needs dynamic server-side behavior.
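To make the metadata-only rule concrete, a hypothetical activity event record might carry only identifiers, a kind, and a timestamp. Every field name below is an assumption; the point is what is absent: no connector secrets, no raw request bodies, no message content.

```shell
# Hypothetical metadata-only cloud activity event (all field names are assumptions).
# Note what is deliberately NOT here: connector secrets, request bodies, messages.
ACTIVITY_EVENT=$(cat <<'EOF'
{
  "event_id": "evt_01",
  "org_id": "org_demo",
  "node_id": "node_main",
  "kind": "node.enrolled",
  "occurred_at": "2025-01-01T00:00:00Z"
}
EOF
)
echo "$ACTIVITY_EVENT"
```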
Placement rule:
- If a service runs natively, its SQLite file should live in that app's local data/ folder.
- If a service runs in Docker, its SQLite file should live behind that service's Docker volume mount, not as a separate native host process and not as a separate database container.
- SQLite-backed services like Hub and Harbor Node do not need standalone postgres-style sidecar containers. Their database is the file plus the mounted service volume.
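The placement rule can be sketched as a tiny helper. The paths here are illustrative assumptions, not the repo's actual layout.

```shell
# Hypothetical helper for the placement rule above (paths are illustrative).
sqlite_path() {
  service="$1"; mode="$2"
  if [ "$mode" = "native" ]; then
    # native run: the file lives in the app's local data/ folder
    echo "./apps/$service/data/$service.sqlite"
  else
    # docker run: the file lives behind the service's volume mount
    echo "/var/lib/$service/$service.sqlite"
  fi
}
sqlite_path hub native
sqlite_path harbor-node docker
```

Either way, the database travels with the service: back up the data/ folder or the Docker volume and the whole state comes along.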
Harbor Node hosting model
The Harbor UI should be reachable on the local network. The public site remains minimal.
Default model:
- local node UI on LAN port
- local node API on LAN or loopback policy
- optional remote access through user-chosen networking stack later
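The default exposure split can be sketched as bind addresses; the flags are illustrative, and the ports are the dev defaults used elsewhere in this doc.

```shell
# Sketch of the default exposure policy (flags illustrative; ports are the
# dev defaults listed later in this doc).
UI_BIND="0.0.0.0"      # node UI: reachable on the LAN
API_BIND="127.0.0.1"   # node API: loopback unless policy opens it to the LAN
echo "publish flags would look like: -p $UI_BIND:11820:11820 -p $API_BIND:11821:11821"
```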
Multi-node and Fleet
Each Harbor Node remains its own trust boundary:
- its own UI
- its own SQLite database volume
- its own local connector secrets
- its own approvals and audit history
Premium multi-node should layer on top of that model instead of replacing it.
Current development shape:
- Main Node: Harbor UI on 11820, Harbor Node API on 11821, defaults to site harbor-local-site
- Worker Node: Harbor UI on 11920, Harbor Node API on 11921, defaults to site harbor-local-site
- cloud API on 11825, Postgres on 5432, Redis on 6379
- primary node stack: infra/docker/compose.node.yml
- secondary node stack: infra/docker/compose.node.second.yml
- shared cloud stack: infra/docker/compose.cloud.yml
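The dev port layout above can be captured as shell variables for scripts and health checks. Note the pattern: the worker node's ports are offset +100 from the main node's.

```shell
# Dev port layout from this doc, as shell variables.
MAIN_UI_PORT=11820;  MAIN_API_PORT=11821    # Main Node
WORK_UI_PORT=11920;  WORK_API_PORT=11921    # Worker Node
CLOUD_API_PORT=11825                        # cloud API
POSTGRES_PORT=5432
REDIS_PORT=6379
# the worker node's ports sit +100 above the main node's
echo $((WORK_UI_PORT - MAIN_UI_PORT))
```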
Recommended behavior:
- each node can run fully on its own
- nodes may live on different subnets or sites
- Harbor Fleet coordinates enrollment, visibility, and later routing
- cloud does not become the default custody layer for connector credentials
- cloud restarts should not drop account sessions or Fleet enrollment inventory because those now live in Postgres rather than process memory
Role model:
- Main Node is the primary node for a local site or network
- Worker Node is an additional node with its own local secrets and action authority
- workers should still enroll to Fleet directly
- the Main Node should describe local site topology and later become the natural routing coordinator for multi-node execution
Helpful scripts:
- pnpm docker:node:up
- pnpm docker:node2:up
- pnpm docker:cloud:up
Cloud activity retention
Premium cloud activity history should be configurable by environment so operators can tune cost, privacy posture, and product policy without code changes.
Current levers:
- CLOUD_ACTIVITY_RETENTION_COMMUNITY_DAYS
- CLOUD_ACTIVITY_RETENTION_PRO_DAYS
- CLOUD_ACTIVITY_RETENTION_BUSINESS_DAYS
Recommended defaults:
- Community: 0
- Pro: 7
- Business: 90
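A small helper can mirror how these env vars and recommended defaults might resolve at runtime; the function name is hypothetical, but the variable names and defaults are the ones above.

```shell
# Hypothetical resolver for the retention env vars and recommended defaults above.
# Falls back to the recommended default when the env var is unset.
retention_days() {
  case "$1" in
    community) echo "${CLOUD_ACTIVITY_RETENTION_COMMUNITY_DAYS:-0}" ;;
    pro)       echo "${CLOUD_ACTIVITY_RETENTION_PRO_DAYS:-7}" ;;
    business)  echo "${CLOUD_ACTIVITY_RETENTION_BUSINESS_DAYS:-90}" ;;
    *)         echo 0 ;;
  esac
}
retention_days business
```

Setting the env var overrides the default, so operators can tune retention per deployment without code changes.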
Device classes to consider
- Raspberry Pi Zero: likely too constrained for a comfortable default target
- Raspberry Pi 4/5: good edge target
- Jetson Orin: strong edge target
- x86 mini PC: ideal early dev target
- Windows + Docker Desktop: acceptable dev target
- Linux server: strong dev and prod target