# On new install:
* `tofu apply` to create machines
* Change the hostname to the FQDN with `hostnamectl`; changing the hostname on a running cluster will break the cluster
* register dns with `knotctl add -z rut.sunet.se -n internal-sto4-test-k8sm-1.rut.sunet.se. -d 2001:6b0:6c::449 -r AAAA`
* `./prepare-iaas-debian ${each host}` (see the loop sketch after the secrets example below)
* `./add-host -b ${each host}`
* `./edit-secrets ${each controller host}`
```
---
+microk8s_secrets:
+ kube-system:
+ cloud-config:
+ - key: cloud.conf
+ value: >
+ ENC[PKCS7,MIID7gYJKoZIhvcNAQcDoIID3zCCA9sCAQAxggKSMIICjgIBAD
+ B2MF4xCzAJBgNVBAYTAlNFMQ4wDAYDVQQKDAVTVU5FVDEOMAwGA1UECwwFRV
+ lBTUwxLzAtBgNVBAMMJmludGVybmFsLXN0bzQtdGVzdC1rOHNtLTIucnV0Ln
```
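A minimal sketch of the per-host preparation described above, assuming root SSH access already works and that the host list is kept in a shell variable `HOSTS` (hypothetical):
```
# Hypothetical host list; substitute the real machine names.
HOSTS="internal-sto4-test-k8sc-0.rut.sunet.se internal-sto4-test-k8sw-0.rut.sunet.se"
for host in $HOSTS; do
  # Set the hostname to the FQDN before the cluster is formed (see the warning above).
  ssh "$host" hostnamectl set-hostname "$host"
  ./prepare-iaas-debian "$host"
  ./add-host -b "$host"
done
```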
* Add to cosmos-rules:
```
'^internal-sto4-test-k8sc-[0-9].rut.sunet.se$':
rut::infra_ca_rp:
sunet::microk8s::node:
channel: 1.31/stable
sunet::frontend::register_sites:
sites:
kubetest.rut.sunet.se:
frontends:
- se-fre-lb-1.sunet.se
- se-tug-lb-1.sunet.se
port: '30443'
'^internal-sto4-test-k8sw-[0-9].rut.sunet.se$':
rut::infra_ca_rp:
sunet::microk8s::node:
channel: 1.31/stable
'^internal-sto4-test-k8spg-[0-9].rut.sunet.se$':
rut::infra_ca_rp:
sunet::microk8s::node:
channel: 1.31/stable
```
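Once the puppet class has been applied, the channel can be double-checked on a node (an optional check, not in the original notes):
```
# The "Tracking" column should show the channel from cosmos-rules (1.31/stable).
ssh internal-sto4-test-k8sc-0.rut.sunet.se snap list microk8s
```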
* Add nodes by generating a join token on the first _Controller_ node with `microk8s add-node` (it prints the `microk8s join` command to run on the new node)
* Add all other _Controller_ nodes with `microk8s join 89.46.21.119:25000/12345678987654345678976543/1234565`
* Add all other _Worker_ nodes with `microk8s join 89.46.21.119:25000/12345678987654345678976543/1234565 --worker`
* Taint controller nodes so they won't get workloads: `microk8s.kubectl taint nodes --selector=node.kubernetes.io/microk8s-controlplane=microk8s-controlplane cp-node=true:NoExecute` (a taint check follows the node listing below)
* Taint Postgres nodes so they won't get workloads: `microk8s.kubectl taint nodes --selector=sunet.se/role=cnpg pg-node=true:NoExecute`
* `kubectl get nodes` should show something like:
```
NAME                                     STATUS     ROLES    AGE   VERSION
internal-sto4-test-k8sc-2.rut.sunet.se   NotReady   <none>   16d   v1.28.7
internal-sto4-test-k8sw-5.rut.sunet.se   Ready      <none>   15m   v1.28.7
internal-sto4-test-k8sw-1.rut.sunet.se   Ready      <none>   15m   v1.28.7
internal-sto4-test-k8sw-2.rut.sunet.se   Ready      <none>   14m   v1.28.7
internal-sto4-test-k8sc-3.rut.sunet.se   Ready      <none>   16d   v1.28.7
internal-sto4-test-k8sw-3.rut.sunet.se   Ready      <none>   18m   v1.28.7
internal-sto4-test-k8sw-4.rut.sunet.se   Ready      <none>   16m   v1.28.7
internal-sto4-test-k8sw-0.rut.sunet.se   Ready      <none>   21m   v1.28.7
internal-sto4-test-k8sc-1.rut.sunet.se   Ready      <none>   16d   v1.28.7
```
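To confirm that the taints from the steps above landed where expected, one way (not part of the original runbook) is:
```
# Controllers should show cp-node=true:NoExecute, Postgres nodes pg-node=true:NoExecute.
microk8s.kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```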
* Enable needed addons for rut: `microk8s enable ingress`, `microk8s enable cert-manager`, `microk8s enable community`, `microk8s enable cloudnative-pg`, `microk8s enable metrics-server` (sanity checks follow at the end of this list)
* `kubectl create namespace sunet-cnpg`
* `kubectl label nodes internal-sto4-test-k8spg-0.rut.sunet.se sunet.se/role=cnpg`
* `kubectl label nodes internal-sto4-test-k8spg-1.rut.sunet.se sunet.se/role=cnpg`
* `kubectl label nodes internal-sto4-test-k8spg-2.rut.sunet.se sunet.se/role=cnpg`
* Setup storage class: `rsync -a k8s internal-sto4-test-k8sc-0.rut.sunet.se: && ssh internal-sto4-test-k8sc-0.rut.sunet.se kubectl apply -f k8s`
* **Profit**
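A couple of sanity checks for the addon and labeling steps above (suggestions, not part of the original notes):
```
# The enabled addons should be listed under "enabled:" in the status output.
microk8s status
# The three Postgres nodes should carry the cnpg role label.
kubectl get nodes -l sunet.se/role=cnpg
```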
# Setting up auth (satosa) and monitoring with thruk+naemon+loki+influxdb
* Get shib-sp metadata with `curl https://monitor-test.rut.sunet.se/Shibboleth.sso/Metadata > internal-sto4-test-satosa-1.rut.sunet.se/overlay/etc/satosa/metadata/monitor.xml`
* Get satosa metadata with `curl https://idp-proxy-test.rut.sunet.se/Saml2IDP/proxy.xml > internal-sto4-test-monitor-1.rut.sunet.se/overlay/opt/naemon_monitor/satosa.xml`
* Publish backend metadata to SWAMID: `ssh internal-sto4-test-satosa-1.rut.sunet.se cat /etc/satosa/metadata/backend.xml | xmllint --format - > rut.xml`
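A quick well-formedness check on the fetched metadata (an optional step, not in the original notes):
```
# xmllint exits non-zero if a file is not well-formed XML.
xmllint --noout internal-sto4-test-satosa-1.rut.sunet.se/overlay/etc/satosa/metadata/monitor.xml
xmllint --noout internal-sto4-test-monitor-1.rut.sunet.se/overlay/opt/naemon_monitor/satosa.xml
```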
## Day 2 operations:
### Rolling upgrade:
On controllers: `kubectl drain internal-sto4-test-k8sc-0.rut.sunet.se --ignore-daemonsets`

On workers: `kubectl drain internal-sto4-test-k8sw-0.rut.sunet.se --force --ignore-daemonsets --delete-emptydir-data --disable-eviction`
After the upgrade, monitor that Calico has working access to the cluster and look for problems such as `Candidate IP leak handle` and `too old resource version` in the calico-kube-controllers pod. If these are found, Calico can be restarted with:
* `kubectl rollout restart deployment calico-kube-controllers -n kube-system`
* `kubectl rollout restart daemonset calico-node -n kube-system`
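To check whether the error patterns mentioned above are present (before or after the restart), the controller logs can be grepped like this (a suggestion, not from the original notes):
```
# Both messages appear in the calico-kube-controllers logs when the problem occurs.
kubectl logs -n kube-system deployment/calico-kube-controllers --tail=200 \
  | grep -Ei 'candidate ip leak|too old resource version'
```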