# Background

Some background first: a node can have multiple roles, including (but not limited to):

* Host (can generate events)
* DNS (can consume the events and act on them)
* Something else that I might come up with later (the architecture has to be expandable)

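A minimal sketch of the expandable role model in Go. Nothing here is final; the role names and the `Node` shape are my own placeholders:

```go
package main

import "fmt"

// Role identifies a capability a node advertises to the cluster.
// New roles can be added without touching the membership machinery.
type Role string

const (
	RoleHost Role = "host" // can generate events
	RoleDNS  Role = "dns"  // can consume events and act on them
)

// Node carries one or more roles.
type Node struct {
	ID    string
	Roles []Role
}

// HasRole reports whether the node performs the given role.
func (n Node) HasRole(r Role) bool {
	for _, have := range n.Roles {
		if have == r {
			return true
		}
	}
	return false
}

func main() {
	n := Node{ID: "n1", Roles: []Role{RoleHost, RoleDNS}}
	fmt.Println(n.HasRole(RoleDNS)) // true
}
```

Keeping `Role` as an open string (rather than a closed enum) is what makes the architecture expandable: unknown roles just pass through.
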
# Control plane (3+ nodes)

* Quorum
  * Consists of $\lfloor n/2 \rfloor + 1$ nodes (a strict majority)
  * The cluster is considered "degraded" if no quorum can be formed
* Stores an event log
  * **Only** the leader can append to the log (with quorum permission)
* Membership authority
  * No joins without quorum approval
  * Leaves are not propagated without quorum
* Manages the epoch (useful for GC)
  * A node $N$ with $N.epoch \ne cluster.epoch$ can **not** join the cluster directly, and has to re-join (bootstrap)
* Can (but doesn't have to) be a bootstrap point

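The two arithmetic rules above (majority quorum, epoch gate) are small enough to sketch directly. This is just the math from the bullets, not a real CP implementation:

```go
package main

import "fmt"

// quorum returns the minimum number of control-plane nodes that must
// agree: a strict majority, floor(n/2) + 1.
func quorum(n int) int {
	return n/2 + 1
}

// canJoin mirrors the epoch rule: a node whose epoch differs from the
// cluster epoch must re-bootstrap instead of joining directly.
func canJoin(nodeEpoch, clusterEpoch uint64) bool {
	return nodeEpoch == clusterEpoch
}

func main() {
	fmt.Println(quorum(3), quorum(5)) // 2 3
	fmt.Println(canJoin(7, 8))        // false: stale epoch, must re-bootstrap
}
```
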
# Membership

* Membership is managed through SWIM
* Each node only holds a small slice of the entire network's membership view

## Joining

Each node has an array of roles:

1. That it performs
2. That it requires to operate (can be moved out to the master, or into the shared type)
3. That it needs for bootstrapping (analogous to 2.)

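The three role arrays could be grouped like this (a sketch; the field names are mine, and whether 2. and 3. live here or in a shared type is still open, as noted above):

```go
package main

import "fmt"

// Role identifies a capability a node advertises to the cluster.
type Role string

// RoleSpec mirrors the three arrays above: what the node does, what it
// needs at runtime, and what it needs only while bootstrapping.
type RoleSpec struct {
	Performs      []Role // 1. roles this node performs
	Requires      []Role // 2. roles it requires to operate
	BootstrapDeps []Role // 3. roles it needs for bootstrapping
}

func main() {
	host := RoleSpec{
		Performs:      []Role{"host"},
		Requires:      []Role{"dns"},
		BootstrapDeps: []Role{"dns"},
	}
	fmt.Println(len(host.Performs), len(host.Requires)) // 1 1
}
```
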
A node can join via a master or via other nodes.

When a node requests to join, the responder makes a request to the CP and asks for permission to add this node:

* If the master allows it:
  1. The node gets a membership digest from the CP.
  2. The node *can* be brought up to speed using its neighbors from 1.
  3. A node-join event gets broadcast over SWIM gossiping.
* Otherwise, nothing happens.

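The join handshake above, sketched with placeholder interfaces (the `ControlPlane` interface and `handleJoin` name are assumptions, not an existing API):

```go
package main

import "fmt"

// ControlPlane is a stand-in for the CP's join-approval API.
type ControlPlane interface {
	ApproveJoin(nodeID string) bool
	MembershipDigest() []string
}

// handleJoin is what a responder does when another node asks to join:
// ask the CP for permission, and on approval hand back a digest the
// joiner can use to sync from its neighbors. On denial, nothing happens.
func handleJoin(cp ControlPlane, joiner string) ([]string, bool) {
	if !cp.ApproveJoin(joiner) {
		return nil, false
	}
	digest := cp.MembershipDigest()
	// (a real implementation would now gossip the join event over SWIM)
	return digest, true
}

// fakeCP approves everything; used only for this sketch.
type fakeCP struct{ members []string }

func (f fakeCP) ApproveJoin(string) bool    { return true }
func (f fakeCP) MembershipDigest() []string { return f.members }

func main() {
	digest, ok := handleJoin(fakeCP{members: []string{"n1", "n2"}}, "n3")
	fmt.Println(ok, len(digest)) // true 2
}
```
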
# Host node

## Bootstrap

On join, a host node requests the `dns` nodes (and other node types, such as `ns`, `nginx`, etc.). They should really be called something like `dns_processor`, and the internals (how the DNS is processed) should not be visible to the cluster, but that's a task for a future me.

When a new update occurs, it sends the update to *some* `dns` hosts.

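Sending an update to *some* `dns` hosts implies picking a subset of the known targets. A trivial sketch of that selection (a real version would randomize or load-balance; this one just takes a prefix, and the fan-out count is arbitrary):

```go
package main

import "fmt"

// pickSome returns up to k targets from the known dns nodes.
func pickSome(dnsNodes []string, k int) []string {
	if k > len(dnsNodes) {
		k = len(dnsNodes)
	}
	return dnsNodes[:k]
}

func main() {
	targets := pickSome([]string{"dns-a", "dns-b", "dns-c"}, 2)
	fmt.Println(targets) // [dns-a dns-b]
}
```
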
# DNS node

## Bootstrap

First, it gets all the available `hosts` from the CP.

Then it requests their configs and populates `map[hostName]seq` accordingly.

## Simple join (when other nodes exist)

It requests its config from other nodes, and that's it.

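The `map[hostName]seq` bookkeeping could look like this, with a newer-wins check on the sequence number (my assumption about how `seq` is meant to be used — it lets replayed or out-of-order updates be dropped):

```go
package main

import "fmt"

// configs tracks the highest config sequence number seen per host,
// i.e. the map[hostName]seq from the bootstrap step.
type configs map[string]uint64

// apply accepts an update only if its seq is newer than what we have.
func (c configs) apply(host string, seq uint64) bool {
	if seq <= c[host] {
		return false // stale or duplicate update, ignore
	}
	c[host] = seq
	return true
}

func main() {
	c := configs{}
	fmt.Println(c.apply("web-1", 1)) // true: first update
	fmt.Println(c.apply("web-1", 1)) // false: duplicate
	fmt.Println(c.apply("web-1", 3)) // true: newer seq
}
```
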
<!-- TODO: finish the TODO file lol -->

# Minor To-Do

- auth middleware lol
- move request logging out of the request handling into a middleware
- nginx role
- think about choosing the master for the keepalive message (should be somewhat load-balanced)
- hivemind lite should not just print `hivemind-lite` lol
- different transport (maybe something like a custom binary protocol)