* Add nodes by generating a provisioning key on the first management node with `microk8s add-node`
* Add all other _Controller_ nodes with `microk8s join 89.46.21.119:25000/12345678987654345678976543/1234565`
* Add all other _Worker_ nodes with `microk8s join 89.46.21.119:25000/12345678987654345678976543/1234565 --worker`
* Taint controller nodes so they won't get workloads: `microk8s.kubectl taint nodes --selector=node.kubernetes.io/microk8s-controlplane=microk8s-controlplane cp-node=true:NoExecute`
* Taint Postgres nodes so they won't get workloads: `microk8s.kubectl taint nodes --selector=sunet.se/role=cnpg pg-node=true:NoExecute` (a verification sketch follows this list)
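A quick way to confirm that the joins and taints took effect is sketched below. It assumes the taint keys from the list above (`cp-node` and `pg-node`); `<node-name>` is a placeholder for one of your controller or Postgres nodes.
```
# List all nodes to confirm they joined and are Ready.
microk8s.kubectl get nodes -o wide

# Show the taints applied to a given node (placeholder name).
microk8s.kubectl describe node <node-name> | grep -A 3 Taints
```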
# Setting up auth (satosa) and monitoring with thruk+naemon+loki+influxdb
* Get the shib-sp metadata with `curl https://monitor-test.rut.sunet.se/Shibboleth.sso/Metadata > internal-sto4-test-satosa-1.rut.sunet.se/overlay/etc/satosa/metadata/monitor.xml`
* Get the satosa metadata with `curl https://idp-proxy-test.rut.sunet.se/Saml2IDP/proxy.xml > internal-sto4-test-monitor-1.rut.sunet.se/overlay/opt/naemon_monitor/satosa.xml` (see the sketch after this list)
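The metadata exchange above can be sanity-checked before the files are committed to the overlays. The sketch below assumes the same URLs and overlay paths as the list above and uses `xmllint` (from libxml2, assumed to be installed on the host) to confirm that each download is well-formed XML.
```
# Fetch both metadata documents, failing on HTTP errors instead of writing an error page to disk.
curl -fsS https://monitor-test.rut.sunet.se/Shibboleth.sso/Metadata \
  -o internal-sto4-test-satosa-1.rut.sunet.se/overlay/etc/satosa/metadata/monitor.xml
curl -fsS https://idp-proxy-test.rut.sunet.se/Saml2IDP/proxy.xml \
  -o internal-sto4-test-monitor-1.rut.sunet.se/overlay/opt/naemon_monitor/satosa.xml

# Check that the downloaded files are well-formed XML before deploying them.
xmllint --noout internal-sto4-test-satosa-1.rut.sunet.se/overlay/etc/satosa/metadata/monitor.xml
xmllint --noout internal-sto4-test-monitor-1.rut.sunet.se/overlay/opt/naemon_monitor/satosa.xml
```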
### Calico problems
Calico can get into a bad state. Look for messages such as `Candidate IP leak handle` and `too old resource version` in the calico-kube-controllers pod. If these are found, Calico can be restarted with:
```
kubectl delete pod calico-node-???? -n kube-system
```
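To find which calico-node pod to delete (the `????` suffix above), list the pods together with the node each one runs on. This is only a sketch: the label selector `k8s-app=calico-node` is the usual Calico DaemonSet label and is an assumption about this deployment, and `<calico-node-pod-name>` is a placeholder.
```
# List calico-node pods and the node each one runs on (assumes the standard k8s-app=calico-node label).
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide

# Delete the pod on the affected node; the DaemonSet recreates it automatically.
kubectl delete pod <calico-node-pod-name> -n kube-system
```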