Compare commits

...

127 commits

Author SHA1 Message Date
1c22bfb722
add cert-manager stuff 2024-11-12 15:08:49 +01:00
46ade449bb
add rook instructions 2024-11-08 22:53:34 +01:00
ca2a3ef1c9
lb1.matrix.sunet.se added 2024-11-08 11:09:24 +01:00
5df6b70bbf
mgmt1.matrix.sunet.se added 2024-11-08 11:04:31 +01:00
46a7ccc30f
k8sw6.matrix.sunet.se added 2024-11-08 11:02:28 +01:00
d71a71f226
k8sw5.matrix.sunet.se added 2024-11-08 10:55:41 +01:00
22871236cb
k8sw4.matrix.sunet.se added 2024-11-08 10:55:17 +01:00
d19f81d0c4
add debian fix for puppet-module-puppetlabs-sshkeys-core 2024-11-08 10:25:34 +01:00
d6b200faad
delete old stuff + fix ssh key 2024-11-08 10:21:29 +01:00
1449221f43
new cluster added 2024-11-08 10:20:48 +01:00
487770e350
Change prod microk8s version to match env test version. 2024-11-08 09:11:12 +01:00
c19aaa5a97
Remove swap file 2024-11-08 00:04:12 +01:00
980adbf867
Add example output module for commands to add dns records. 2024-11-08 00:03:04 +01:00
c27a2195cc
Setup new prod deployment 2024-11-07 22:55:21 +01:00
1c72cff364
Remove old style cluster deployment 2024-11-07 22:22:48 +01:00
a375ea111f
add myself 2024-11-07 12:48:12 +01:00
57e1339000
clean out old keys 2024-11-07 12:47:00 +01:00
f8118ef52d
Add new mgmt vpn + ingress for https in dco 2024-11-07 12:38:58 +01:00
c65741753c
Add README.md about postgres password secret. 2024-11-07 11:03:20 +01:00
e61e7654dc
Add postgres deployment for element 2024-11-07 10:59:04 +01:00
2a103b0da4
Add ip secret to list 2024-11-06 15:09:58 +01:00
666b81af9c
Add matrix deploy role 2024-11-06 15:07:53 +01:00
132a2dd771
Add matrix.sunet.se ip to lb 2024-11-06 15:06:05 +01:00
035524863a
Add cluster role to get namespaces 2024-11-06 12:52:53 +01:00
4515bafe6c
Add registry example ingress 2024-11-06 12:38:48 +01:00
153a31ae27
Add script to automate creation of k8s users 2024-11-06 07:55:11 +01:00
d5c31c0d32
Add mgmt to lb rule for ingress ports 2024-11-06 07:47:20 +01:00
9df05afe20
lb1: add address 2024-11-05 22:47:25 +01:00
3014c551b8
lb1: Update secret 2024-11-05 22:39:57 +01:00
956deed67a
Update lb sg 2024-11-05 22:38:41 +01:00
888f20a67b
Add user for matrix installer 2024-10-31 10:04:53 +01:00
fd7edba2cb
mgmt1.matrix.test.sunet.se added 2024-10-31 09:24:29 +01:00
8025cb5cc8
Add management node rule 2024-10-31 09:17:53 +01:00
b90a4b9a36
Add management node 2024-10-31 09:16:51 +01:00
f691ae99e6
Open ingress port from lb to workers 2024-10-30 23:56:25 +01:00
9b343f32e7
Add tls secret 2024-10-30 16:05:06 +01:00
6393a8279d
Enable external access from lb to k8s 2024-10-30 15:17:58 +01:00
7c7b85cfbd
Create security group for k8s external access. 2024-10-30 14:56:05 +01:00
b0701f9b66
Make secret a secret 2024-10-30 13:56:20 +01:00
a3b86f45d4
Add test address 2024-10-30 13:41:42 +01:00
e1e4802def
Create lb cosmos rule 2024-10-30 13:35:19 +01:00
919bab0791
Add matrix puppet module 2024-10-30 13:31:34 +01:00
840af98c51
Open lb port to source ip during setup and hardening 2024-10-30 12:25:44 +01:00
b497844e59
lb1.matrix.test.sunet.se added 2024-10-29 09:10:20 +01:00
47d77c5fde
Add lb. 2024-10-29 08:52:57 +01:00
618b273ca8
Add example service 2024-10-29 08:52:08 +01:00
ad52a3c054
Make rook deployment fully multizone aware 2024-10-28 14:20:12 +01:00
1384c2df90
Updated rook deployment for failure zones 2024-10-25 13:18:48 +02:00
8622d20d51
Add security group port 2024-10-24 12:53:00 +02:00
fd5204fb47
Update k8s peers 2024-10-24 09:59:05 +02:00
5a1b44e7c0
Test more recent puppet sunet tag 2024-10-24 09:45:47 +02:00
4f63fa0f60
Readd microk8s node 2024-10-24 09:02:13 +02:00
a3215eae9b
Try to deploy with older version of puppet sunet 2024-10-24 08:19:39 +02:00
9a23e78011
remove microk8s 2024-10-23 16:28:09 +02:00
03839ba41b
Remove channel spec for microk8s. 2024-10-23 15:34:47 +02:00
8cf1b38f15
Downgrade microk8s version in test. 2024-10-23 15:14:16 +02:00
6edd4c82eb
Clean out k8sc1 secrets 2024-10-23 14:12:34 +02:00
f8e206c372
Add script to test connectivity between k8s nodes 2024-10-23 13:44:02 +02:00
eadff19277
Make variable naming more consistent and fix more typos 2024-10-23 11:15:33 +02:00
f59ab71fe6
Fix some typos and comment out some unused parts for now. 2024-10-23 10:07:08 +02:00
8f70f4a3ff
Refactor the remaining control nodes to new deployment model 2024-10-22 16:24:49 +02:00
d32825ec66
Update worker peers 2024-10-22 13:52:28 +02:00
f9b217e538
Update peer ip addresses 2024-10-22 13:41:34 +02:00
855613625e
Resolve IaC conflict by renaming old microk8s group 2024-10-22 09:21:56 +02:00
3f07468b64
Fix typo in comment 2024-10-22 08:17:26 +02:00
7280601637
Adjust security group rules for control node in sto4 2024-10-22 08:05:26 +02:00
d5663f23e1
Set correct number of controller replicas 2024-10-22 07:13:26 +02:00
d9a575f0e6
Add controller node definition to sto4 2024-10-22 07:09:07 +02:00
f8fc2ab777
Add k8sw1 to rook and remove k8sc3 from deployment. 2024-10-21 17:40:24 +02:00
5aef290639
Move and scale out DCO workers with new deployment method 2024-10-21 15:09:34 +02:00
5645cedf71
Remove k8sw1 from rook 2024-10-21 13:21:59 +02:00
19da9ecc8c
k8sw5.matrix.test.sunet.se added 2024-10-21 09:04:04 +02:00
c68c4d85d0
Make sto4 security group rules resource naming standard consistent with the sto3 one. 2024-10-21 08:15:24 +02:00
8d630a9d08
Prepare sto3 worker nodes. 2024-10-19 22:11:41 +02:00
ecd78cbf8b
Decrease legacy deployed worker count to 1 2024-10-18 22:03:00 +02:00
1f0ad12e63
k8sw6.matrix.test.sunet.se added 2024-10-18 17:03:24 +02:00
e73374e9d6
Add ssh security group 2024-10-18 17:02:29 +02:00
9872f8f923
Add security group rules to allow k8s traffic from sto4 to dco 2024-10-18 14:54:13 +02:00
b53cd52314
Prepare terraform manifests for two workers in sto4. 2024-10-18 13:59:55 +02:00
83c3d61b77
Add a list of used datacenters in deployment 2024-10-18 12:45:46 +02:00
282263bb05
Remove k8sw3.matrix.test.sunet.se to make the recreate flow work 2024-10-18 11:03:55 +02:00
0c4d7fe8c3
Remove worker k8sw4 in preparation to move it to sto4. 2024-10-18 10:16:00 +02:00
d102573cbd
Add placeholder for postgresql patroni nodes 2024-10-18 09:17:07 +02:00
1ac6a1f16a
Add rook-toolbox deployment file to manage ceph in cluster. 2024-10-18 09:14:40 +02:00
44d989698c
Refactor security group generation and prepare sto4 microk8s group 2024-10-18 00:25:01 +02:00
7b779b2c41
Prepare multicloud setup 2024-10-17 08:08:33 +02:00
bdb858df42
Add patch to enable calico wireguard mode for internode CNI traffic 2024-10-16 22:52:12 +02:00
ec943d607c
Fix microk8s security group rule resource name 2024-10-16 21:58:05 +02:00
f5f8c1983f
microk8s sg: Add udp port 51820 to allow calico wireguard internode CNI overlay vpn 2024-10-16 21:55:13 +02:00
fd3fe09901
Add failuredomain to rookfs data and metadata pool 2024-10-16 21:36:45 +02:00
9fed13b09c
Add storageclass and fs for rook cephfs 2024-10-16 13:42:48 +02:00
b5967c57af
Add rook manifest files. 2024-10-16 08:32:16 +02:00
09359e07af
Add new jocar key to validate cosmos modules apparmor and nagioscfg 2024-10-15 16:07:13 +02:00
fcfc9fc0ac
Delete expired gpg keys for mandersson on servers. 2024-10-15 16:03:15 +02:00
7842dd4ace
k8sw4.matrix.test.sunet.se added 2024-10-15 15:50:38 +02:00
7d34d653e6
Add worker 4 matrix-test to cosmos-rules peer list 2024-10-15 15:45:32 +02:00
e35138ce8c
Move workers to other ssh keypair. 2024-10-15 15:38:44 +02:00
da2d2004dd
Extend worker node count and create rook volume on workers. 2024-10-15 14:52:52 +02:00
71919802c8
Add new ssh key for mandersson 2024-10-15 13:56:48 +02:00
5b480f0088
Add new gpg key for mandersson. 2024-10-15 12:35:12 +02:00
1b6c4c5e98
Test 2024-10-04 07:02:54 +02:00
5a36c01199
Mandersson: add new ssh key. 2024-09-16 18:46:12 +02:00
5b5f8f73a9
Update README.md 2024-09-16 18:16:28 +02:00
a2ed169425
Just move prod lb. 2024-09-16 17:31:09 +02:00
11a54df526
Change frontend servers. 2024-09-16 17:25:03 +02:00
6355d795c0
Fix typo 2024-06-05 17:03:52 +02:00
4a7ca09ca3
Add mariah ssh key 2024-06-05 16:59:37 +02:00
b0f691e0b0
Add frontends to k8s. 2024-06-05 14:59:15 +02:00
b49db88a5b
Add appcred 2024-06-04 15:39:57 +02:00
6fdb194e66
Update matrix prod tls secrets 2024-06-04 15:29:30 +02:00
1fd938d438
Add tls secrets for matrix test 2024-06-04 15:15:09 +02:00
b7edd8a465
Remove some unnecessary whitespace 2024-06-04 13:57:44 +02:00
fffa63b827
Expose ingress externally and remove external kube api endpoint 2024-06-04 13:53:57 +02:00
0d594247b4
Add health ingress endpoint 2024-06-04 13:12:40 +02:00
d1a85199c6
Make kub[cw] a special nftables case 2024-05-31 12:57:38 +02:00
e87657f209
reenable nftables init 2024-05-31 12:39:35 +02:00
becbb55d1a
Disable nftables init 2024-05-31 11:51:48 +02:00
47382d698e
Add kano ssh key 2024-05-29 09:08:09 +02:00
5c57fb769c
Clean up more sunet server options 2024-05-28 22:15:44 +02:00
894e3117f0
Try without nftables 2024-05-28 21:59:25 +02:00
7901cd54d6
k8sw3.matrix.sunet.se added 2024-05-28 10:44:46 +02:00
a576a123de
k8sw2.matrix.sunet.se added 2024-05-28 10:38:32 +02:00
2aabf9afd9
k8sw1.matrix.sunet.se added 2024-05-28 10:35:01 +02:00
d529ab9ecc
k8sc3.matrix.sunet.se added 2024-05-28 10:29:02 +02:00
f219af697e
k8sc2.matrix.sunet.se added 2024-05-28 10:23:36 +02:00
58cb178906
k8sc1.matrix.sunet.se added 2024-05-28 10:17:30 +02:00
855f06db18
Add cosmos rules for matrix k8s prod. 2024-05-28 10:14:33 +02:00
111 changed files with 20764 additions and 1292 deletions

IaC-prod/dnsoutput.tf Normal file

@@ -0,0 +1,22 @@
output "control_ip_addr_dco" {
value = [ for node in resource.openstack_compute_instance_v2.controller-nodes-dco : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "worker_ip_addr_dco" {
value = [ for node in resource.openstack_compute_instance_v2.worker-nodes-dco : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "control_ip_addr_sto3" {
value = [ for node in resource.openstack_compute_instance_v2.controller-nodes-sto3 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "worker_ip_addr_sto3" {
value = [ for node in resource.openstack_compute_instance_v2.worker-nodes-sto3 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "control_ip_addr_sto4" {
value = [ for node in resource.openstack_compute_instance_v2.controller-nodes-sto4 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "worker_ip_addr_sto4" {
value = [ for node in resource.openstack_compute_instance_v2.worker-nodes-sto4 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
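Each of the outputs above renders a ready-to-paste pair of knotctl commands (one A record, one AAAA record) per node. A minimal sketch of what one rendered entry looks like, using hypothetical documentation-range addresses:

```python
# Render the knotctl command pair that one entry of the Terraform outputs
# above produces. The node dict stands in for one
# openstack_compute_instance_v2 resource; the addresses are samples.
node = {
    "name": "k8sc1.matrix.sunet.se",   # hypothetical node name
    "access_ip_v4": "192.0.2.10",      # sample IPv4 (documentation range)
    "access_ip_v6": "2001:db8::10",    # sample IPv6 (documentation range)
}

# Same template string as in the output blocks, with the HCL
# interpolations replaced by Python formatting.
cmd = (
    f"knotctl -z sunet.se --ttl 360 -r A -d {node['access_ip_v4']} -n {node['name']}\n"
    f"knotctl -z sunet.se --ttl 360 -r AAAA -d {node['access_ip_v6']} -n {node['name']}"
)
print(cmd)
```

Running `terraform output` after an apply prints one such command pair per provisioned node, so the DNS records can be published with a simple copy-paste.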


@@ -3,3 +3,22 @@ data "openstack_images_image_v2" "debian12image" {
name = "debian-12" # Name of image to be used
most_recent = true
}
data "openstack_images_image_v2" "debian12image-dco" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.dco
}
data "openstack_images_image_v2" "debian12image-sto4" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto4
}
data "openstack_images_image_v2" "debian12image-sto3" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto3
}

IaC-prod/k8snodes-dco.tf Normal file

@@ -0,0 +1,138 @@
#
# Global DCO definitions
#
locals {
dcodc = "dco"
dconodenrbase = index(var.datacenters, "dco")
dcoindexjump = length(var.datacenters)
}
#
# Control node resources DCO
#
resource "openstack_networking_port_v2" "kubecport-dco" {
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes control node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "controller-nodes-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-dco[count.index].id
}
}
#
# Worker node resources DCO
#
resource "openstack_networking_port_v2" "kubewport-dco" {
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group ID
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "worker-nodes-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-dco[count.index].id
}
}
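The node names above interleave numbering across datacenters: `count.index * local.dcoindexjump + 1 + local.dconodenrbase` gives each datacenter a disjoint sequence based on its position in `var.datacenters`. A sketch of the arithmetic, assuming the datacenter list is `["dco", "sto3", "sto4"]`:

```python
# Sketch of the interleaved node-numbering scheme used by the
# k8snodes-*.tf files. With three datacenters, dco gets nodes 1,4,7...,
# sto3 gets 2,5,8..., and sto4 gets 3,6,9..., so names never collide.
datacenters = ["dco", "sto3", "sto4"]  # assumed order of var.datacenters

def node_number(dc: str, index: int) -> int:
    base = datacenters.index(dc)   # like index(var.datacenters, dc)
    jump = len(datacenters)        # like length(var.datacenters)
    return index * jump + 1 + base

# Two worker replicas per datacenter yield six distinct worker names.
names = [f"k8sw{node_number(dc, i)}" for i in range(2) for dc in datacenters]
print(names)  # ['k8sw1', 'k8sw2', 'k8sw3', 'k8sw4', 'k8sw5', 'k8sw6']
```

This is why the same replica count variables (`controllerdcreplicas`, `workerdcreplicas`) can be reused verbatim in every per-datacenter file.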

IaC-prod/k8snodes-sto3.tf Normal file

@@ -0,0 +1,139 @@
#
# Global definitions sto3
#
locals {
sto3dc = "sto3"
sto3nodenrbase = index(var.datacenters, "sto3")
sto3indexjump = length(var.datacenters)
}
#
# Control node resources STO3
#
resource "openstack_networking_port_v2" "kubecport-sto3" {
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes control node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "controller-nodes-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto3[count.index].id
}
}
#
# Worker node resources STO3
#
resource "openstack_networking_port_v2" "kubewport-sto3" {
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "worker-nodes-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto3[count.index].id
}
}

IaC-prod/k8snodes-sto4.tf Normal file

@@ -0,0 +1,138 @@
#
# Global definitions for sto4
#
locals {
sto4dc = "sto4"
sto4nodenrbase = index(var.datacenters, "sto4")
sto4indexjump = length(var.datacenters)
}
#
# Controller node resources
#
resource "openstack_networking_port_v2" "kubecport-sto4" {
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes control node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "controller-nodes-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto4[count.index].id
}
}
#
# Worker node resources
#
resource "openstack_networking_port_v2" "kubewport-sto4" {
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "worker-nodes-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto4[count.index].id
}
}

IaC-prod/lb.tf Normal file

@@ -0,0 +1,48 @@
# Network port
resource "openstack_networking_port_v2" "lb1-port-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.lb-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "lb1volumeboot-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for lb1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "lb1-node-dco" {
name = "lb1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.lb-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.lb1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.lb1-port-dco.id
}
}


@@ -11,5 +11,23 @@ required_version = ">= 0.14.0"
# Configure the OpenStack Provider
provider "openstack" {
cloud = "${var.cloud_name}"
cloud = "${var.clouddco_name}"
}
# DCO Matrix Test
provider "openstack" {
cloud = "${var.clouddco_name}"
alias = "dco"
}
# STO3 Matrix test
provider "openstack" {
cloud = "${var.cloudsto3_name}"
alias = "sto3"
}
# STO4 Matrix test
provider "openstack" {
cloud = "${var.cloudsto4_name}"
alias = "sto4"
}

IaC-prod/mgmt.tf Normal file

@@ -0,0 +1,46 @@
# Network port
resource "openstack_networking_port_v2" "mgmt1-port-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "mgmt1volumeboot-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for mgmt1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "mgmt1-node-dco" {
name = "mgmt1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.mgmt1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.mgmt1-port-dco.id
}
}


@@ -1,3 +1,18 @@
data "openstack_networking_network_v2" "public" {
name = "public" # Name of network to use.
}
data "openstack_networking_network_v2" "public-dco" {
name = "public" # Name of network to use.
provider = openstack.dco
}
data "openstack_networking_network_v2" "public-sto4" {
name = "public" # Name of network to use.
provider = openstack.sto4
}
data "openstack_networking_network_v2" "public-sto3" {
name = "public" # Name of network to use.
provider = openstack.sto3
}


@@ -1,109 +0,0 @@
#
# Controller node resources
#
resource "openstack_networking_port_v2" "kubecport" {
name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# We create as many ports as there are instances created
count = var.controller_instance_count
network_id = data.openstack_networking_network_v2.public.id
# A list of security group ID
security_group_ids = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
data.openstack_networking_secgroup_v2.allegress.id,
resource.openstack_networking_secgroup_v2.microk8s.id
]
admin_state_up = "true"
}
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot" {
count = var.controller_instance_count # size of cluster
name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
description = "OS volume for kubernetes control node ${count.index + 1}"
size = 100
image_id = data.openstack_images_image_v2.debian12image.id
enable_online_resize = true # Allow us to resize volume while attached.
}
resource "openstack_compute_instance_v2" "controller-nodes" {
count = var.controller_instance_count
name = "${var.controller_name}${count.index+1}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keyname}"
security_groups = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
data.openstack_networking_secgroup_v2.allegress.name,
resource.openstack_networking_secgroup_v2.microk8s.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers.id
}
network {
port = resource.openstack_networking_port_v2.kubecport[count.index].id
}
}
#
# Worker node resources
#
resource "openstack_networking_port_v2" "kubewport" {
name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# Create one port per instance.
count = var.worker_instance_count
network_id = data.openstack_networking_network_v2.public.id
# A list of security group IDs.
security_group_ids = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
data.openstack_networking_secgroup_v2.allegress.id,
resource.openstack_networking_secgroup_v2.microk8s.id
]
admin_state_up = "true"
}
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot" {
count = var.worker_instance_count # one volume per worker node
name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
description = "OS volume for kubernetes worker node ${count.index + 1}"
size = 100
image_id = data.openstack_images_image_v2.debian12image.id
enable_online_resize = true # Allow us to resize volume while attached.
}
resource "openstack_compute_instance_v2" "worker-nodes" {
count = var.worker_instance_count
name = "${var.worker_name}${count.index+1}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keyname}"
security_groups = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
data.openstack_networking_secgroup_v2.allegress.name,
resource.openstack_networking_secgroup_v2.microk8s.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers.id
}
network {
port = resource.openstack_networking_port_v2.kubewport[count.index].id
}
}


@ -0,0 +1,177 @@
# Security groups dco
resource "openstack_networking_secgroup_v2" "microk8s-dco" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.dco
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-dco" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.dco
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
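The `keys(var.k8sports[count.index])[0]` indexing above implies that `k8sports` is a list of single-entry maps, each mapping a port number to its protocol. A hypothetical variable definition consistent with that shape (the specific port entries are examples, not taken from this repo):

```terraform
# Hypothetical definition matching the indexing used by the rules:
# keys(...)[0] yields the port, and indexing the map by that key
# yields the protocol.
variable "k8sports" {
  type = list(map(string))
  default = [
    { "16443" = "tcp" }, # example entry: microk8s apiserver
    { "10250" = "tcp" }, # example entry: kubelet
  ]
}
```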
#
# From STO3 to DCO
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
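The cross-region rules above create one rule per (port, remote node) pair by setting `count` to the product of both list lengths and decomposing `count.index` with `floor()` and `%`. A hypothetical illustration of the same arithmetic, expressed with `setproduct()` for comparison:

```terraform
# Illustration only: with count = length(var.k8sports) * length(nodes),
#   floor(count.index / length(nodes)) -> which port entry  (0 .. ports-1)
#   count.index % length(nodes)        -> which remote node (0 .. nodes-1)
# so every (port, node) pair gets exactly one rule. The equivalent
# cross product can be computed explicitly:
locals {
  example_pairs = setproduct(
    var.k8sports,
    openstack_compute_instance_v2.controller-nodes-sto3[*].access_ip_v4,
  )
}
```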
#
# From STO4 to DCO
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
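The `replace()` in the IPv6 rules strips the square brackets that OpenStack wraps around `access_ip_v6` before the `/128` prefix length is joined on. A worked sketch with a literal example address:

```terraform
# Hypothetical worked example of the IPv6 cleanup used above.
# OpenStack reports access_ip_v6 in bracketed form, e.g. "[2001:db8::10]".
locals {
  raw_v6     = "[2001:db8::10]"                          # example value
  cleaned_v6 = replace(local.raw_v6, "/[\\[\\]']/", "")  # "2001:db8::10"
  prefix_v6  = join("/", [local.cleaned_v6, "128"])      # "2001:db8::10/128"
}
```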
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-dco" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-dco" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}


@ -0,0 +1,125 @@
# Security group for external access to k8s control nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-control-dco" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.dco
}
# Security group for external access to k8s control nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto3" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto3
}
# Security group for external access to k8s control nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto4" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto4.id
}
# Security group for external access to k8s worker nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-dco" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.dco
}
# Security group for external access to k8s worker nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto3" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto3
}
# Security group for external access to k8s worker nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto4" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
}
# Rule dco: allow https from any source
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}


@ -0,0 +1,177 @@
# Security groups sto3
resource "openstack_networking_secgroup_v2" "microk8s-sto3" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto3
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto3" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto3
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From DCO to STO3
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From STO4 to STO3
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto3" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto3" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}


@ -0,0 +1,177 @@
# Security groups sto4
resource "openstack_networking_secgroup_v2" "microk8s-sto4" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto4
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto4" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto4
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# From DCO to STO4
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
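The IPv6 rules build `remote_ip_prefix` with `join("/", [replace(access_ip_v6, "/[\\[\\]']/", ""), "128"])`: the `replace()` uses a regex (the `/.../ `form) to strip square brackets and stray quotes that OpenStack may include around `access_ip_v6`, then appends the /128 prefix length. A minimal Python sketch of the same cleanup (the sample address is hypothetical):

```python
import re

def v6_prefix(access_ip_v6: str) -> str:
    """Mimic the Terraform replace()+join(): strip [ ] ' and append /128."""
    cleaned = re.sub(r"[\[\]']", "", access_ip_v6)
    return "/".join([cleaned, "128"])

# OpenStack can report the address wrapped in brackets, e.g. "[2001:db8::7]"
print(v6_prefix("[2001:db8::7]"))  # 2001:db8::7/128
```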
#
# From STO3 to STO4
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
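Each of these rule resources sets `count = length(var.k8sports) * length(<nodes>)` and then decomposes `count.index` back into a (port, node) pair: `floor(count.index / length(nodes))` picks the port entry (a one-key map, so `keys(...)[0]` is the port and indexing the map with that key yields the protocol), while `count.index % length(nodes)` picks the node. A sketch of that mapping with a trimmed port list and hypothetical node names:

```python
# Reproduce the Terraform index arithmetic:
# count = len(k8sports) * len(nodes); for each count.index,
# floor(index / len(nodes)) -> port entry, index % len(nodes) -> node.
k8sports = [{"16443": "tcp"}, {"10250": "tcp"}, {"4789": "udp"}]  # trimmed list
nodes = ["w1", "w2"]  # hypothetical worker nodes

rules = []
for index in range(len(k8sports) * len(nodes)):
    entry = k8sports[index // len(nodes)]   # floor(count.index / length(nodes))
    port = list(entry.keys())[0]            # keys(var.k8sports[...])[0]
    proto = entry[port]                     # var.k8sports[...][keys(...)[0]]
    node = nodes[index % len(nodes)]        # count.index % length(nodes)
    rules.append((port, proto, node))

# One rule per (port, node) combination.
assert len(rules) == 6
assert rules[0] == ("16443", "tcp", "w1")
```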
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto4" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto4" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}


@@ -0,0 +1,109 @@
# Security groups lb-frontend
resource "openstack_networking_secgroup_v2" "lb-dco" {
name = "lb-frontend"
description = "Ingress lb traffic to allow."
provider = openstack.dco
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
# From mgmt1
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule3_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule4_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "80"
port_range_max = "80"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule5_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule6_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule7_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8080"
port_range_max = "8080"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule8_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.184.88/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule9_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "130.242.121.23/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}


@@ -1,197 +0,0 @@
# Datasource of sunet ssh-from-jumphost security group.
data "openstack_networking_secgroup_v2" "sshfromjumphosts" {
name = "ssh-from-jumphost"
}
data "openstack_networking_secgroup_v2" "allegress" {
name = "allegress"
}
resource "openstack_networking_secgroup_v2" "microk8s" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule1" {
#We never know where Richard is, so allow from all of the known internet
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule2" {
#We never know where Richard is, so allow from all of the known internet
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_ip_prefix = "::/0"
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10250
port_range_max = 10250
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule4" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10250
port_range_max = 10250
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule5" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10255
port_range_max = 10255
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule6" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10255
port_range_max = 10255
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule7" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 25000
port_range_max = 25000
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule8" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 25000
port_range_max = 25000
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule9" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 12379
port_range_max = 12379
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule10" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 12379
port_range_max = 12379
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule11" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10257
port_range_max = 10257
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule12" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10257
port_range_max = 10257
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule13" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10259
port_range_max = 10259
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule14" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10259
port_range_max = 10259
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule15" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 19001
port_range_max = 19001
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule16" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 19001
port_range_max = 19001
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule17" {
direction = "ingress"
ethertype = "IPv4"
protocol = "udp"
port_range_min = 4789
port_range_max = 4789
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule18" {
direction = "ingress"
ethertype = "IPv6"
protocol = "udp"
port_range_min = 4789
port_range_max = 4789
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule19" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule20" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}


@@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-dco" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.dco
}
resource "openstack_compute_servergroup_v2" "controllers-dco" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.dco
}


@@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto3" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto3
}
resource "openstack_compute_servergroup_v2" "controllers-sto3" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto3
}


@@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto4" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
resource "openstack_compute_servergroup_v2" "controllers-sto4" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto4
}


@@ -3,16 +3,45 @@ variable "datacenter_name" {
default = "dco"
}
variable "keyname" {
type = string
default = "manderssonpub"
variable "datacenters" {
type = list(string)
default = [ "dco", "sto3", "sto4" ]
}
variable "worker_instance_count" {
default = "3"
# Cloud names in clouds.yaml file
variable "clouddco_name" {
type = string
default = "dco-matrixprod"
}
variable "controller_instance_count" {
default = "3"
variable "cloudsto3_name" {
type = string
default = "sto3-matrixprod"
}
variable "cloudsto4_name" {
type = string
default = "sto4-matrixprod"
}
variable "keyname" {
type = string
default = "pettai-7431497"
}
variable "keynameworkers" {
type = string
default = "pettai-7431497"
}
# Replicas per datacenter
variable "workerdcreplicas" {
default = "2"
}
# Replicas per datacenter
variable "controllerdcreplicas" {
default = "1"
}
variable "controller_instance_type" {
@@ -23,6 +52,14 @@ variable "worker_instance_type" {
default = "b2.c4r16"
}
variable "lb_instance_type" {
default = "b2.c2r4"
}
variable "mgmt_instance_type" {
default = "b2.c2r4"
}
variable "worker_name" {
default = "k8sw"
}
@@ -32,9 +69,30 @@ variable "controller_name" {
}
variable "dns_suffix" {
default = "matrix.test.sunet.se"
default = "matrix.sunet.se"
}
variable "cloud_name" {
default="dco-matrixtest"
variable "k8sports" {
default=[
{"16443" = "tcp"},
{"10250" = "tcp"},
{"10255" = "tcp"},
{"25000" = "tcp"},
{"12379" = "tcp"},
{"10257" = "tcp"},
{"10259" = "tcp"},
{"19001" = "tcp"},
{"4789" = "udp"},
{"51820" = "udp"}
]
}
variable "jumphostv4-ips" {
type = list(string)
default = []
}
variable "jumphostv6-ips" {
type = list(string)
default = []
}


@@ -3,3 +3,22 @@ data "openstack_images_image_v2" "debian12image" {
name = "debian-12" # Name of image to be used
most_recent = true
}
data "openstack_images_image_v2" "debian12image-dco" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.dco
}
data "openstack_images_image_v2" "debian12image-sto4" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto4
}
data "openstack_images_image_v2" "debian12image-sto3" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto3
}

IaC-test/k8snodes-dco.tf Normal file

@@ -0,0 +1,138 @@
#
# Global DCO definitions
#
locals {
dcodc = "dco"
dconodenrbase = index(var.datacenters, "dco")
dcoindexjump = length(var.datacenters)
}
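These locals implement an interleaved numbering scheme across datacenters: each DC gets a base offset from `index(var.datacenters, ...)` (0 for dco, 1 for sto3, 2 for sto4) and a stride of `length(var.datacenters)` (3), so the Nth replica in a DC is numbered `count.index * 3 + 1 + base`. A sketch of the resulting hostnames, assuming `workerdcreplicas = 2`:

```python
# Node-number scheme from the locals:
# number = count.index * len(datacenters) + 1 + index(datacenters, dc)
datacenters = ["dco", "sto3", "sto4"]

def node_number(dc: str, count_index: int) -> int:
    indexjump = len(datacenters)        # local.<dc>indexjump
    nodenrbase = datacenters.index(dc)  # local.<dc>nodenrbase
    return count_index * indexjump + 1 + nodenrbase

# With workerdcreplicas = 2 per DC, worker names interleave across DCs:
names = [f"k8sw{node_number(dc, i)}" for dc in datacenters for i in range(2)]
assert names == ["k8sw1", "k8sw4", "k8sw2", "k8sw5", "k8sw3", "k8sw6"]
```

This matches the hosts added in the commit log (k8sw4, k8sw5, k8sw6 are the second replicas in dco, sto3, and sto4 respectively).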
#
# Control node resources DCO
#
resource "openstack_networking_port_v2" "kubecport-dco" {
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes control node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "controller-nodes-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-dco[count.index].id
}
}
#
# Worker node resources DCO
#
resource "openstack_networking_port_v2" "kubewport-dco" {
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "worker-nodes-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-dco[count.index].id
}
}

IaC-test/k8snodes-sto3.tf Normal file

@@ -0,0 +1,139 @@
#
# Global definitions sto3
#
locals {
sto3dc = "sto3"
sto3nodenrbase = index(var.datacenters, "sto3")
sto3indexjump = length(var.datacenters)
}
#
# Control node resources STO3
#
resource "openstack_networking_port_v2" "kubecport-sto3" {
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes control node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "controller-nodes-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto3[count.index].id
}
}
#
# Worker node resources STO3
#
resource "openstack_networking_port_v2" "kubewport-sto3" {
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "worker-nodes-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto3[count.index].id
}
}

IaC-test/k8snodes-sto4.tf Normal file

@@ -0,0 +1,138 @@
#
# Global definitions for sto4
#
locals {
sto4dc = "sto4"
sto4nodenrbase = index(var.datacenters, "sto4")
sto4indexjump = length(var.datacenters)
}
#
# Controller node resources
#
resource "openstack_networking_port_v2" "kubecport-sto4" {
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes control node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "controller-nodes-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto4[count.index].id
}
}
#
# Worker node resources
#
resource "openstack_networking_port_v2" "kubewport-sto4" {
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "worker-nodes-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto4[count.index].id
}
}
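The node-numbering expressions above (`count.index * local.sto4indexjump + 1 + local.sto4nodenrbase`) depend on locals that are not part of this diff. A hypothetical sketch of their shape, with illustrative values only, assuming the three datacenters share one global numbering:

```hcl
# Hypothetical locals backing the sto4 naming scheme above.
# The real values live elsewhere in the repo; these are illustrative only.
locals {
  sto4dc         = "sto4" # datacenter label embedded in port/volume names
  sto4indexjump  = 3      # stride between node numbers within one DC
  sto4nodenrbase = 2      # per-DC offset, so sto4 replicas become nodes 3, 6, 9, ...
}
```

With a stride equal to the number of datacenters and offsets 0, 1 and 2 per DC, each replica index yields a globally unique node number.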

IaC-test/lb.tf (new file, 48 lines)
# Network port
resource "openstack_networking_port_v2" "lb1-port-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.lb-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "lb1volumeboot-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for lb1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "lb1-node-dco" {
name = "lb1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.lb-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.lb1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.lb1-port-dco.id
}
}

@@ -13,3 +13,21 @@ required_version = ">= 0.14.0"
provider "openstack" {
cloud = "${var.cloud_name}"
}
# DCO Matrix Test
provider "openstack" {
cloud = "${var.clouddco_name}"
alias = "dco"
}
# STO3 Matrix test
provider "openstack" {
cloud = "${var.cloudsto3_name}"
alias = "sto3"
}
# STO4 Matrix test
provider "openstack" {
cloud = "${var.cloudsto4_name}"
alias = "sto4"
}

IaC-test/mgmt.tf (new file, 46 lines)
# Network port
resource "openstack_networking_port_v2" "mgmt1-port-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "mgmt1volumeboot-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for mgmt1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "mgmt1-node-dco" {
name = "mgmt1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.mgmt1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.mgmt1-port-dco.id
}
}

@@ -1,3 +1,18 @@
data "openstack_networking_network_v2" "public" {
name = "public" # Name of network to use.
}
data "openstack_networking_network_v2" "public-dco" {
name = "public" # Name of network to use.
provider = openstack.dco
}
data "openstack_networking_network_v2" "public-sto4" {
name = "public" # Name of network to use.
provider = openstack.sto4
}
data "openstack_networking_network_v2" "public-sto3" {
name = "public" # Name of network to use.
provider = openstack.sto3
}

@@ -3,107 +3,121 @@
# Controller node resources
#
resource "openstack_networking_port_v2" "kubecport" {
name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# We create as many ports as there are instances created
count = var.controller_instance_count
network_id = data.openstack_networking_network_v2.public.id
# A list of security group IDs
security_group_ids = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
data.openstack_networking_secgroup_v2.allegress.id,
resource.openstack_networking_secgroup_v2.microk8s.id
]
admin_state_up = "true"
}
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot" {
count = var.controller_instance_count # size of cluster
name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
description = "OS volume for kubernetes control node ${count.index + 1}"
size = 100
image_id = data.openstack_images_image_v2.debian12image.id
enable_online_resize = true # Allow us to resize volume while attached.
}
resource "openstack_compute_instance_v2" "controller-nodes" {
count = var.controller_instance_count
name = "${var.controller_name}${count.index+1}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keyname}"
security_groups = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
data.openstack_networking_secgroup_v2.allegress.name,
resource.openstack_networking_secgroup_v2.microk8s.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers.id
}
network {
port = resource.openstack_networking_port_v2.kubecport[count.index].id
}
}
#resource "openstack_networking_port_v2" "kubecport" {
# name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# # We create as many ports as there are instances created
# count = var.controller_instance_count
# network_id = data.openstack_networking_network_v2.public.id
# # A list of security group ID
# security_group_ids = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
# data.openstack_networking_secgroup_v2.allegress.id,
# resource.openstack_networking_secgroup_v2.microk8s-old.id,
# resource.openstack_networking_secgroup_v2.microk8s-dco.id,
# resource.openstack_networking_secgroup_v2.https.id
# ]
# admin_state_up = "true"
#}
#
#resource "openstack_blockstorage_volume_v3" "kubecvolumeboot" {
# count = var.controller_instance_count # size of cluster
# name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
# description = "OS volume for kubernetes control node ${count.index + 1}"
# size = 100
# image_id = data.openstack_images_image_v2.debian12image.id
# enable_online_resize = true # Allow us to resize volume while attached.
#}
#
#resource "openstack_compute_instance_v2" "controller-nodes" {
# count = var.controller_instance_count
# name = "${var.controller_name}${count.index+1}.${var.dns_suffix}"
# flavor_name = "${var.controller_instance_type}"
# key_pair = "${var.keyname}"
# security_groups = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
# data.openstack_networking_secgroup_v2.allegress.name,
# resource.openstack_networking_secgroup_v2.microk8s-old.id,
# resource.openstack_networking_secgroup_v2.microk8s-dco.id,
# resource.openstack_networking_secgroup_v2.https.name
# ]
# block_device {
# uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot[count.index].id
# source_type = "volume"
# destination_type = "volume"
# boot_index = 0
# }
# scheduler_hints {
# group = openstack_compute_servergroup_v2.controllers.id
# }
# network {
# port = resource.openstack_networking_port_v2.kubecport[count.index].id
# }
#}
#
# Worker node resources
#
resource "openstack_networking_port_v2" "kubewport" {
name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# We create as many ports as there are instances created
count = var.worker_instance_count
network_id = data.openstack_networking_network_v2.public.id
# A list of security group IDs
security_group_ids = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
data.openstack_networking_secgroup_v2.allegress.id,
resource.openstack_networking_secgroup_v2.microk8s.id
]
admin_state_up = "true"
}
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot" {
count = var.worker_instance_count # size of cluster
name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
description = "OS volume for kubernetes worker node ${count.index + 1}"
size = 100
image_id = data.openstack_images_image_v2.debian12image.id
enable_online_resize = true # Allow us to resize volume while attached.
}
resource "openstack_compute_instance_v2" "worker-nodes" {
count = var.worker_instance_count
name = "${var.worker_name}${count.index+1}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keyname}"
security_groups = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
data.openstack_networking_secgroup_v2.allegress.name,
resource.openstack_networking_secgroup_v2.microk8s.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers.id
}
network {
port = resource.openstack_networking_port_v2.kubewport[count.index].id
}
}
#resource "openstack_networking_port_v2" "kubewport" {
# name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# # We create as many ports as there are instances created
# count = var.worker_instance_count
# network_id = data.openstack_networking_network_v2.public.id
# # A list of security group ID
# security_group_ids = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
# data.openstack_networking_secgroup_v2.allegress.id,
# resource.openstack_networking_secgroup_v2.microk8s-old.id
# ]
# admin_state_up = "true"
#}
#
#resource "openstack_blockstorage_volume_v3" "kubewvolumeboot" {
# count = var.worker_instance_count # size of cluster
# name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
# description = "OS volume for kubernetes worker node ${count.index + 1}"
# size = 100
# image_id = data.openstack_images_image_v2.debian12image.id
# enable_online_resize = true # Allow us to resize volume while attached.
#}
#
#resource "openstack_blockstorage_volume_v3" "kubewvolumerook" {
# count = var.worker_instance_count # size of cluster
# name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-rook-vol"
# description = "Rook storage volume for kubernetes worker node ${count.index + 1}"
# size = 100
# enable_online_resize = true # Allow us to resize volume while attached.
#}
#
#
#resource "openstack_compute_instance_v2" "worker-nodes" {
# count = var.worker_instance_count
# name = "${var.worker_name}${count.index+1}.${var.dns_suffix}"
# flavor_name = "${var.worker_instance_type}"
# key_pair = "${var.keynameworkers}"
# security_groups = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
# data.openstack_networking_secgroup_v2.allegress.name,
# resource.openstack_networking_secgroup_v2.microk8s-old.name
# ]
#
# block_device {
# uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot[count.index].id
# source_type = "volume"
# destination_type = "volume"
# boot_index = 0
# }
# block_device {
# uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook[count.index].id
# source_type = "volume"
# destination_type = "volume"
# boot_index = 1
# }
#
# scheduler_hints {
# group = openstack_compute_servergroup_v2.workers.id
# }
# network {
# port = resource.openstack_networking_port_v2.kubewport[count.index].id
# }
#}

IaC-test/pgnodes.tf (new file, 17 lines)
# PostgreSQL Patroni nodes
#resource "openstack_networking_port_v2" "pgport" {
# name = "pgdb1-${replace(var.dns_suffix,".","-")}-port"
# # We create as many ports as there are instances created
# network_id = data.openstack_networking_network_v2.public.id
# # A list of security group ID
# security_group_ids = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
# data.openstack_networking_secgroup_v2.allegress.id,
# ]
# admin_state_up = "true"
# provider = openstack.dco
#}

@@ -0,0 +1,137 @@
# Security groups for external access to k8s control nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-control-dco" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.dco
}
# Security groups for external access to k8s control nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto3" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto3
}
# Security groups for external access to k8s control nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto4" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto4.id
}
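The three rules above each open TCP 16443, the MicroK8s API server port, to a single /32. As a sketch of an alternative, the same dco rule can be driven by a list so that adding an allowed address does not require a new resource block; `k8s_api_allowed_v4` is a hypothetical variable, not present in this repo:

```hcl
# Hypothetical list of addresses allowed to reach the k8s API.
variable "k8s_api_allowed_v4" {
  type    = list(string)
  default = ["89.47.191.43"]
}

# Sketch: one rule per allowed address, same effect as the dco rule above.
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_v4_dco_list" {
  count             = length(var.k8s_api_allowed_v4)
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 16443
  port_range_max    = 16443
  provider          = openstack.dco
  remote_ip_prefix  = "${var.k8s_api_allowed_v4[count.index]}/32"
  security_group_id = openstack_networking_secgroup_v2.k8s-external-control-dco.id
}
```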
# Security groups for external access to k8s worker nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-dco" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.dco
}
# Security groups for external access to k8s worker nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto3" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto3
}
# Security groups for external access to k8s worker nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto4" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
}
# Rules dco: allow https (443) from anywhere
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}
# Rules dco: allow http (80) from anywhere
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule3_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "80"
port_range_max = "80"
provider = openstack.dco
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}

@@ -0,0 +1,177 @@
# Security groups dco
resource "openstack_networking_secgroup_v2" "microk8s-dco" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.dco
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-dco" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.dco
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
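The `keys(var.k8sports[count.index])[0]` pattern above reads one port/protocol pair per rule. The variable itself is not part of this diff; its assumed shape is a list of one-entry maps keyed by port number (ports shown are illustrative, except 16443, which appears in the external rules):

```hcl
# Assumed shape of var.k8sports (defined elsewhere; entries illustrative).
# Each element is a one-entry map: key = port, value = protocol, so
# keys(var.k8sports[i])[0] gives the port and
# var.k8sports[i][keys(var.k8sports[i])[0]] gives the protocol.
variable "k8sports" {
  type = list(map(string))
  default = [
    { "16443" = "tcp" }, # kube-apiserver
    { "10250" = "tcp" }, # kubelet (illustrative)
    { "4789"  = "udp" }, # vxlan overlay (illustrative)
  ]
}
```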
#
# From STO3 to DCO
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
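The cross-datacenter rules above create one security-group rule per (port, remote node) pair by flattening a two-dimensional loop into a single `count`. A worked sketch of the index arithmetic:

```hcl
# Sketch of the (port, node) cross product produced by the rules above,
# with P = length(var.k8sports) port entries and N remote nodes:
#   count            = P * N
#   port entry index = floor(count.index / N)  # advances once per N rules
#   remote node index = count.index % N        # cycles 0 .. N-1
# For P = 2 and N = 3 the rule index maps to (port, node) pairs:
#   0 -> (0,0)  1 -> (0,1)  2 -> (0,2)
#   3 -> (1,0)  4 -> (1,1)  5 -> (1,2)
```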
#
# From STO4 to DCO
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-dco" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-dco" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}

@@ -0,0 +1,177 @@
# Security groups sto3
resource "openstack_networking_secgroup_v2" "microk8s-sto3" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto3
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto3" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto3
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From DCO to STO3
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
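The IPv6 variants pass `access_ip_v6` through `replace()` because OpenStack can return the address wrapped in brackets (and sometimes quoted); the regex strips those characters before the `/128` prefix length is joined on. A sketch with a hypothetical address:

```hcl
# Hypothetical address illustrating the bracket-stripping step used in
# the v6 rules above.
locals {
  raw_v6      = "[2001:db8::1]"                          # as access_ip_v6 may be returned
  stripped_v6 = replace(local.raw_v6, "/[\\[\\]']/", "") # "2001:db8::1"
  v6_prefix   = join("/", [local.stripped_v6, "128"])    # "2001:db8::1/128"
}
```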
#
# From STO4 to STO3
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
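The `count` arithmetic above flattens a (port, node) product by hand: `floor(count.index / n_nodes)` selects the port entry and `count.index % n_nodes` selects the node. A hedged alternative sketch of the same pairing using `setproduct` and `for_each`, which avoids the repeated index arithmetic; the local and resource names here are illustrative, not part of the codebase:

```hcl
# Hypothetical restructuring: build the (port-map, node-ip) pairs once,
# keyed by "port-ip", then iterate with for_each.
locals {
  sto4_worker_v4_pairs = {
    for pair in setproduct(var.k8sports, openstack_compute_instance_v2.worker-nodes-sto4[*].access_ip_v4) :
    "${keys(pair[0])[0]}-${pair[1]}" => pair
  }
}
resource "openstack_networking_secgroup_rule_v2" "example_worker_rule_v4" {
  for_each          = local.sto4_worker_v4_pairs
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = each.value[0][keys(each.value[0])[0]]
  port_range_min    = keys(each.value[0])[0]
  port_range_max    = keys(each.value[0])[0]
  provider          = openstack.sto3
  remote_ip_prefix  = "${each.value[1]}/32"
  security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
```

A trade-off to note: switching an existing deployment from `count` to `for_each` changes resource addresses, so Terraform would plan to replace all of these rules unless state is moved first.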
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto3" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto3" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}
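The jump host rules iterate plain lists of addresses without prefix lengths (the `/32` and `/128` are appended in the rules themselves). A sketch of the assumed declarations; the defaults are placeholder documentation addresses, not real jump hosts:

```hcl
# Assumed shape of the jump host address lists used above.
variable "jumphostv4-ips" {
  type    = list(string)
  default = ["192.0.2.10"]   # placeholder example
}
variable "jumphostv6-ips" {
  type    = list(string)
  default = ["2001:db8::10"] # placeholder example
}
```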

# Security groups sto4
resource "openstack_networking_secgroup_v2" "microk8s-sto4" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto4
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto4" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto4
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# From DCO to STO4
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# From STO3 to STO4
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto4" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto4" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}

# Security groups lb-frontend
resource "openstack_networking_secgroup_v2" "lb-dco" {
name = "lb-frontend"
description = "Ingress lb traffic to allow."
provider = openstack.dco
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
# From mgmt1
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule3_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule4_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "80"
port_range_max = "80"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule5_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule6_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule7_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8080"
port_range_max = "8080"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule8_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.184.88/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule9_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "130.242.121.23/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
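The nine lb-frontend rules above repeat one shape per (source prefix, port) pair. A hedged sketch of the same data collapsed into a single map, which could drive one `for_each` rule instead; the local name is hypothetical, and the port lists mirror the rules as written:

```hcl
# Hypothetical restructuring: ports allowed per source prefix,
# mirroring the lb_ingress_rule* resources above.
locals {
  lb_ingress = {
    "87.251.31.118/32"  = [8443, 16443]
    "89.47.191.66/32"   = [80, 443, 8080, 8443, 16443] # mgmt1
    "89.47.184.88/32"   = [16443]
    "130.242.121.23/32" = [16443]
  }
}
```

Keeping the per-rule resources as-is does have one advantage: each rule has a stable, human-readable address in state, so individual sources can be removed without touching the others.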

data "openstack_networking_secgroup_v2" "allegress" {
name = "allegress"
}
resource "openstack_networking_secgroup_v2" "microk8s" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule1" {
# We never know where Richard is, so allow from all of the known internet
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule2" {
# We never know where Richard is, so allow from all of the known internet
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_ip_prefix = "::/0"
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10250
port_range_max = 10250
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule4" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10250
port_range_max = 10250
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule5" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10255
port_range_max = 10255
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule6" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10255
port_range_max = 10255
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule7" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 25000
port_range_max = 25000
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule8" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 25000
port_range_max = 25000
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule9" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 12379
port_range_max = 12379
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule10" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 12379
port_range_max = 12379
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule11" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10257
port_range_max = 10257
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule12" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10257
port_range_max = 10257
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule13" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10259
port_range_max = 10259
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule14" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10259
port_range_max = 10259
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule15" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 19001
port_range_max = 19001
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule16" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 19001
port_range_max = 19001
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule17" {
direction = "ingress"
ethertype = "IPv4"
protocol = "udp"
port_range_min = 4789
port_range_max = 4789
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule18" {
direction = "ingress"
ethertype = "IPv6"
protocol = "udp"
port_range_min = 4789
port_range_max = 4789
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule19" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule20" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
#resource "openstack_networking_secgroup_v2" "microk8s-old" {
# name = "microk8s-old"
# description = "Traffic to allow between microk8s hosts"
#}
#
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule1" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 16443
# port_range_max = 16443
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule2" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 16443
# port_range_max = 16443
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule3" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10250
# port_range_max = 10250
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule4" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10250
# port_range_max = 10250
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule5" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10255
# port_range_max = 10255
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule6" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10255
# port_range_max = 10255
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule7" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 25000
# port_range_max = 25000
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule8" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 25000
# port_range_max = 25000
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule9" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 12379
# port_range_max = 12379
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule10" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 12379
# port_range_max = 12379
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule11" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10257
# port_range_max = 10257
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule12" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10257
# port_range_max = 10257
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule13" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10259
# port_range_max = 10259
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule14" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10259
# port_range_max = 10259
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule15" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 19001
# port_range_max = 19001
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule16" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 19001
# port_range_max = 19001
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule17" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "udp"
# port_range_min = 4789
# port_range_max = 4789
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule18" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "udp"
# port_range_min = 4789
# port_range_max = 4789
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule19" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "udp"
# port_range_min = 51820
# port_range_max = 51820
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule20" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "udp"
# port_range_min = 51820
# port_range_max = 51820
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#
#resource "openstack_networking_secgroup_v2" "https" {
# name = "https"
# description = "Allow https to ingress controller"
#}
#
#resource "openstack_networking_secgroup_rule_v2" "https_rule1" {
# # External traffic
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 443
# port_range_max = 443
# remote_ip_prefix = "0.0.0.0/0"
# security_group_id = openstack_networking_secgroup_v2.https.id
#}

@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-dco" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.dco
}
resource "openstack_compute_servergroup_v2" "controllers-dco" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.dco
}

@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto3" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto3
}
resource "openstack_compute_servergroup_v2" "controllers-sto3" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto3
}

@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto4" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
resource "openstack_compute_servergroup_v2" "controllers-sto4" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
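The anti-affinity server groups above only take effect once instances reference them. A minimal sketch of pinning a worker to the sto4 group via scheduler hints (the instance name, image, and flavor shown here are assumptions, not taken from this repo):

```hcl
# Hypothetical worker instance placed into the sto4 anti-affinity group.
resource "openstack_compute_instance_v2" "k8sw-example" {
  provider    = openstack.sto4
  name        = "k8sw-example"
  image_name  = "debian-12"              # assumed image name
  flavor_name = var.worker_instance_type

  scheduler_hints {
    group = openstack_compute_servergroup_v2.workers-sto4.id
  }
}
```

With the `anti-affinity` policy, Nova will refuse to schedule two members of the group on the same hypervisor, so worker replicas within a datacenter land on distinct hosts.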

@ -3,16 +3,53 @@ variable "datacenter_name" {
default = "dco"
}
variable "datacenters" {
type = list(string)
default = [ "dco", "sto3", "sto4" ]
}
# Cloud names in clouds.yaml file
variable "clouddco_name" {
type = string
default = "dco-matrixtest"
}
variable "cloudsto3_name" {
type = string
default = "sto3-matrixtest"
}
variable "cloudsto4_name" {
type = string
default = "sto4-matrixtest"
}
variable "keyname" {
type = string
default = "manderssonpub"
default = "manderssonpub3"
}
variable "keynameworkers" {
type = string
default = "manderssonpub3"
}
variable "worker_instance_count" {
default = "3"
default = "0"
}
# Replicas per datacenter
variable "workerdcreplicas" {
default = "2"
}
# Replicas per datacenter
variable "controllerdcreplicas" {
default = "1"
}
variable "controller_instance_count" {
default = "3"
default = "1"
}
variable "controller_instance_type" {
@ -23,6 +60,14 @@ variable "worker_instance_type" {
default = "b2.c4r16"
}
variable "lb_instance_type" {
default = "b2.c2r4"
}
variable "mgmt_instance_type" {
default = "b2.c2r4"
}
variable "worker_name" {
default = "k8sw"
}
@ -38,3 +83,36 @@ variable "dns_suffix" {
variable "cloud_name" {
default="dco-matrixtest"
}
variable "cloud2_name" {
default="dco-matrixtest"
}
variable "cloud3_name" {
default="dco-matrixtest"
}
variable "k8sports" {
default=[
{"16443" = "tcp"},
{"10250" = "tcp"},
{"10255" = "tcp"},
{"25000" = "tcp"},
{"12379" = "tcp"},
{"10257" = "tcp"},
{"10259" = "tcp"},
{"19001" = "tcp"},
{"4789" = "udp"},
{"51820" = "udp"}
]
}
variable "jumphostv4-ips" {
type = list(string)
default = []
}
variable "jumphostv6-ips" {
type = list(string)
default = []
}
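The `k8sports` variable above can drive rule generation instead of the hand-written `microk8s_rule*` resources that were commented out earlier. A sketch, assuming a security group named `microk8s` (only `microk8s-old` and `https` appear in this diff): flatten the list of one-entry maps into a single map, then use `for_each`:

```hcl
# Collapse [{"16443" = "tcp"}, ...] into {"16443" = "tcp", ...}
locals {
  k8sports = merge(var.k8sports...)
}

# One IPv4 ingress rule per port; an IPv6 twin would differ only in ethertype.
resource "openstack_networking_secgroup_rule_v2" "k8s_ingress_v4" {
  for_each          = local.k8sports
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = each.value
  port_range_min    = tonumber(each.key)
  port_range_max    = tonumber(each.key)
  remote_group_id   = openstack_networking_secgroup_v2.microk8s.id
  security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
```

This replaces twelve near-identical resource blocks with one, and adding a port becomes a one-line change to the variable default.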

@ -1,2 +1,3 @@
### \[Matrix\] ops repo.
Files under IaC are infrastructure-as-code definitions for a microk8s cluster in OpenTofu format.
### \[Matrix\]

@ -1,95 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Comment: 3DB7 65E9 ADBF 28C9 068A 0225 19CB 2C58 E1F1 9B16
Comment: Magnus Andersson <mandersson@sunet.se>
xsFNBGTxo4wBEAC2zcQZ7AMqErGhnRwWf7DnrB0+tu7XCXgCCruMG1opjmy/nsOv
rVGam7eEYVz5x1jtcIH7ojY4PnFkHE2ggPBL6G6/2PmQvhrUWtuYtJRCdN6HchWk
J45scYu2D/X3PPu0Q5gASTtQZ6WT2H+lbJkzx/1xiFKahP1NhvMw2iyjX263in7w
z5yRgmJDa9de3maHEHE+MTxMrgVb3bPKKz/tbbwa0xOySAV/MxFhUSytL+A+jBEI
nZ+8TeiCbQiva0ouQAp5xmCBgCrKRV2CsXcMPZFc+vnKYjdO4jcvOlpBhe1ttclq
kWpXuvHl3iIxixRMCYK0BWu8yDsmIrZQmzvSszazifN++5svDC++y9hvFhWc45pr
HZueMeVNxmmoUNRwUgSgZ62u2ipMwZMA9dtgxY9C+6LvC1ruQKfi2Na6AaepQPlV
/Lkg5GFY9br360CA6mx8yb43l+uFhGwxEroXSQWQ8iPI9r6jViJ14DLrEwk7MQd5
7K2woUQrGqx4EWBq/TFsa8/xaJHuvtVI//REUwaiot+cwzx83EUK59XDJk7K18qd
BfhaK9UYCvqfYQC7CR4534LjjX5f9K4l605MNu+PsjNUcmjgp88rV/wv5RgFh5sm
YnMUaLPRRBpPWmsW+AKeBlvOI8e4ME7OBuWusCzDNzV1ad1sPcLOVxNp8wARAQAB
wsHPBB8BCgCDBYJk8aOMBYkB4oUAAwsJBwkQGcssWOHxmxZHFAAAAAAAHgAgc2Fs
dEBub3RhdGlvbnMuc2VxdW9pYS1wZ3Aub3JnRRvtE0ooJuJq34ScByjfZaqxkOSF
Xki9eGiIz61NPyYDFQoIApsDAh4BFiEEPbdl6a2/KMkGigIlGcssWOHxmxYAAHkv
EACb7uiDpYkeT3zmEyj0GezvFePmxaNuezP+axJkwI8nptomGyqGnTwPovnR68W4
AfVC/CXzzrYP3tNqZ7sF003GgbvW/5ho75kIX0wy/lBHyADrjoOsnpQS6N2sLud9
t3haihTyu0Cs+UELJKPY7Orkl0apPEGCPYkfn9wNxQac4NPD82K0WOkLi0opnfrZ
A+JI6lx1zgb4MIIxfs0PZ+27b8bc2JCl5WNmBIElNldypmFdTuIfgsLu+N9YH/Q6
lM0DOQPcgWbpTxGjzqFnMBqVv/xMNjWAUe1Gt3gY7Spa5JUigkirsQYrw/4ua5n6
l8pmkAZZ1FlUO3dWLDe5U4uaQe7vNL7sPEq8Eg0/KWUPCiVV+SYq3u7pmScMyT9f
i34kSy09e9JEqDeomVh4tKABSTYnO4b+29HQtOfOW1jdT2gk8HaOhivK47wIdLnb
T8LAxH06ImO0yBCIJz36y1mg9LB+y0X+YRIoTLoanNflj5uSBLILXlG4FKdeY+37
bTxFoLKVn55tFc/kT6RW1b+94ruFd5kAWBCBFUMSDve5OSFBJ3dnmKu884Rf7XZu
cn4aBTHwbP1MvBJQzJ6YWr+4tKuhHIOx3JLLCTfrFx5DmFOLejw65UGeUF4RNNvu
7X9VKp1TCzzUEK0aoAuvxIQe6ZczedqRckO/oXolHg/eOM0mTWFnbnVzIEFuZGVy
c3NvbiA8bWFuZGVyc3NvbkBzdW5ldC5zZT7CwdIEEwEKAIYFgmTxo4wFiQHihQAD
CwkHCRAZyyxY4fGbFkcUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBn
cC5vcmcqKwTUOwrRkE+tMtaZLV+DfMEpzLxgZo380Rl8boRH1QMVCggCmQECmwMC
HgEWIQQ9t2Xprb8oyQaKAiUZyyxY4fGbFgAAkngP/2Y7AjS0nt/+PDc9onDizg4t
tedqdtn/6npkW3kSOZOmH3IMQ9vRe36m0vhmPMbxX91MP9jdxr7xAjshOSP3eP6E
SiRf17RO60nafzzTeXD5ooUnG4GA+Fim/uOiX4OiIw98QdspugZwCks68YzLJXBA
z/MfwUV4Cv2Jxgx/n2ctxt9dm5mzSy0AfW2TYckNd/YUjrNSuJdf7Z5aVk/lhylg
FslhxKBcoArkBr2eCDW7S5Cwo1p8f0KW7CrOK4O772tDeqLk6CzrEEOOuuV1TzT4
2Ubc5MYMbpB9xVnA55a2N9OHLytnsEyXYBIHZzk1DkKJdC1VzxoFbwk6ZBAfOVcw
7nN27bJW0IQPKY5wo6y0rkit2Q/HGUAtvueMHLA1/J6bVxBaygPK4f2/li0fAp9W
YAQSp5qCtRDvgw6B3oDyUOhudZnvd7QWNAAbsGzIAeO8HjVlgcEdT4S82yNZTk/s
EG5DLe46b/DBUx46L3enKI+m0MUepzJk56WY+DdxaT0P7uWxa7iRz4h4ey9X90Vf
of0ik3GvjqaNdPwC6CTlIoT8rW5vBEVrt4zbzRSTcL4p13tO/BDorDrqA/1eLm1t
ReZzVDmCeaT9XreitsZKXBW0QR5mwcVl4r+dMBceqwgRWpzD/zQb8w2qQmqquGNh
5Z6I0FzOztMEQVcdpp4AzsFNBGTxo4wBEACrCG8QKLcZsqWciN196fFALKXpDDZZ
P4A9a8t1i5XqsxSZrriQmZVNs6lGGpuWcYcSqYdBUpaZTyaIO1yjRnN4flWf4VEF
aB/ABWZn7ZyIR7Ot5fEg87Cf+av0O0jFUxltJkZrMDvMLv9FGwJ23aByIf90dETp
e6w0oW0cFV8JnFqGdaDjgj/l5gLb0K0ktAaeFf9AKXQqBFJEVZsOzzgNdXIHWmBx
SV1xRZxeT8j+zRGgVapIxDBREwzY8k3G5iq5rwyfQQJLaLHP6uvEGwZ6eGb1lB8M
cjXx/RbLpiIDLXUZy02/yI3jCb2voMAjTto2GZnLRWOCvcWEo9xVrHHzP9P91W6l
wOXqorQVHN0m6YKNN2yRl6w6ashh+L3LxVxbpLeYC6qcHjziKVWbYCFTOivZSf1O
TTmd4ChnnxLqA8Jus4uP8I3w+sXobNdR6eMlLxM6rXh5OhjNdYogcmfkGfbQK6vw
m/A9lIYejWRxvmk2fkAG9uUJG69+S70nwp1KpJIVk3EqyaNnrcqLbTcr52M8KGZr
Mm4NtRAkkcZf4irs7uJRtaSZLMpEUKyYCY9Lnn/p70x/7cuhGoZ6YjFG74GTR5pU
HQH9qXyVnUJ+R5xJAAV6FuLyqUlM26sSR9hFbOXW+RK12c3qB5wOLsyWyGdZkZAG
mGWDK/XWVurK3QARAQABwsHEBBgBCgB4BYJk8aOMBYkB4oUACRAZyyxY4fGbFkcU
AAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmdX6lwHCIKO+5/k
ZISh2+NkcDaejjar1UgV8OzRAWmQwgKbDBYhBD23ZemtvyjJBooCJRnLLFjh8ZsW
AADwMxAAp3WXjnbP/S1hYpie/Sy5BaU9yoPI9BtJnzN8AUfQGo9nfFupXzQw6JG0
bJfXrhUswxJMhYE/iyqI2maRp9396dGAYGKIx11X/G4u5P0fQGZhSj9MV3+G0oyH
qBq9vgrYUH5fBUtAOIwGYUEucoAz/NDiH62MOFHtAbxKv7CZ57jdGSxnjDEvRlBT
1Pebum/NDsihwOmFJKBe8LWfh9Z6+V/0yR6cnRFchrz9dA3HVhECK/t6VrHMMfIa
hM52dQMX4IBp/vaqTUxtWqA/wujC1fdZSmcctTlP8/dkFSg6JEx97zdZa5m9sLIY
ip60chbSFSKo3cgk02cN1MV7Ys2drPqZYeMLG8wxnuvtcSX34gVjvNauEZUU0+9P
FUMf7baMs40kGVsstEodx4IoaVj6l1zTlWP+FlTnLvYao8p3PppikzPr11XPbeln
d9sWjxfzwYSdzxu4wEplGInrxhHZ0zy+ObutY+wDYlGWELyZw8mde6iQIxTBEIsX
nLfaJEO6EPvbFSUeE+2LYfOYUUD8LmKm/KHnUVAOwy48yOVccd+gnBoMVg9yR6bJ
/+YmwUEw0QrDIikOSl61e+APzEqoX0F1/eWvL37ksdqe8jadSfOP/rtltjnGI7dy
MmlM4LqRZyrnPRnc9WT4ODohyS+ZJ8sXc0HpyRJejES7BzrYUTDOwU0EZPGjjAEQ
ALNw5tV/CbqB6a9K0DjBxhgrlBU4nZUvlCTfL1hIp8kvhm10LgSU/RMFnLQtO/R+
4ildcdEgLPYL3X56xbfExrgtfYJudr+jIbszGfMgz8Hwatc1BhfXhPSfxfcgIgTz
hMcaAJ47YYnNv+rpKEkTr+HZWpzOi31mazxMjRdpet91liMaizI5teeXFHG3uAQw
WxY+PVGNN04YKeR1170/nZgqV/G1mVVjyK8VxalbQH6DE7IDCzviy7L5QqO7e8hf
RJNV6ROmK3G5cOzw10qQ+c9JTPmRWeTQiBMkgLcV4GiI9uatRz2PDWP8tqs1yUfl
l+5CcZ+xebCeyxRj3DfIeyAHS9xSJJNZhi8uAN76tLsaC/m6JeSYq0M81dtpqOEG
xPYym3A/bnJfSrKKf/ySsDXbNfJ6vg5jI+y6Z+9yZmzWJsZhmCEu1zAbYHjdME76
PRdHXWepCWdws98LQvx57SMgYGIP2iCftgMjB7GZDWNRwRTrIG3FziPZLugD7aHn
+8DUtqwx1VZyosv3yDhlf+mBwFuPY9RFhFwLnw4pV76C7IbgpEOA/Pif2ErDzK2a
w9bXirRduivj0aYFy0bYwoRytlwNWDjg+GN/nY4OmMwyTefES9VW8aqRLMngdnSz
MM2IbzQ0aw1HhrgtZjxx1rL4fvYbN/dtM6pqErbWvUWjABEBAAHCwcQEGAEKAHgF
gmTxo4wFiQHihQAJEBnLLFjh8ZsWRxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNl
cXVvaWEtcGdwLm9yZ9QVdGY2VE4v5B92uztaDtE7PIHo2i8W7fkiajqgKUJ2Apsg
FiEEPbdl6a2/KMkGigIlGcssWOHxmxYAAHQND/9I5EohbJ7qy0BFgGXFRZASal/O
osGWskRqaW3S5uqk/oEXCaOT6Z8tRYSRv8Im4v7f5NursYhfZanANbbqrwXJf3fW
r+qW/Ms1AWO7gRITuLqLSSWj4iHIl4Cz4/v9tQoQoMmbfLj/0qE5otWkmm0w5Ffg
AUcjW41PGu2PA3/hZerkkLwchFwXP13hActdk3ha/hCCskeVNDJD33tDTMqnz2ah
sC0/pNkuCk16p4HFAyQJYU0GLmKVPtlaLBBfe4Oo9ArgUcjz9zNdmXJGNu0aVKed
UoODwujT0jcQPXSEdVHjI/jwkA0m/PiLBW+wxvi58DQCnL03KIEPou8P9LnIRrrP
hR7Iebuuj6BKLAZrgWGXc15VXAiIhsirIJ/vDyHCm0IJNbuN+qTpUt12o/lvhVlD
deKu/TlnrSD8v6CU9kmELj6ChC6uihB6LLOT0XokahM3twvkabzFqlYgsw319VhA
4PzkYsRIoiMMzFHzPTJvvG+XMqu8k2lniLvg6UJZqP6T/MPpPqDitfcMkd1PBPms
nsvA+tsreS8BVFA0OE+7DGmytjjpmbwBq5EebVgINGpAk9AAgyT7GcXgzv4NqSmX
VxqlIQIfbAQOtwh16wuaJw7p1H/Qz4UaNra2KCcLXmfVp3Ks9RuyPi89rfOu9B4B
HEZm15ijrxpKeb8Ffg==
=Hq+H
-----END PGP PUBLIC KEY BLOCK-----

@ -1,350 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Comment: GPGTools - https://gpgtools.org
mQINBFpk+4kBEADeRoAvo+sbj6GFc7NOY0P1QasAiSTqYztHsfUk/dht8izxcd3G
ba3QBX4CZlKZkpk8cYbmztOO2EroJWjKg6SDQUfCh+QE/Hx+f2xfVax3118cEbRo
ClnfA1CD9+vMEvFRB39UrGckxHp0wxcUtSQXwxTSVhw1HxRVLPDj5xJ/W5H7GwEA
Rmrj4S0OIRU9gJUlqLW8k18c8Di1asr0y99Fuek5Mw/oRZhkx8+41dRB0lG2BNOZ
ncyDi9wSLi8sC4udCZuv0pVtTHR/onGrEYi71PpY7XYAL0YZnp6RB8ObdchMNZVH
1WKED/aK2MVJoAMXmpA7SVRaGw+dUvJBwtnKqSUT1lNgX6dTyKdxPKdvzmcb1htM
byx1mc1ruQy+sLQ+ZtWgRENvrPpYZ2vcQ3vSZ9YOXt5m75JzbfGNHyCAWhziwhVX
u+UqPPKU8k8GVato3tB6PcAugG5OCqGE9EmJG7a+VFThDjJeWAGBolWydBeIqk7B
NeM3zA/K0PwY/YlLc01bmiquCS5Z5MQwPHRGMKmuEtfJmSBOjv7CGnNFm4kkfRzr
5TR6J9oWlFZXs0cJcJWxrwutYszBbNUgRebij5U6ju7QnJYH5zqctIYafpqotcoR
I5VaZmeYhGBaZdf6ZW/bcDSQJYoi9wsfKU2Mcgb5GY3Aey6z9EYxNoqstQARAQAB
tCBFcmlrIEJlcmdzdHJvbSA8YmVycmFAbm9yZHUubmV0PokCVwQTAQoAQQIbAwUL
CQgHAwUVCgkICwUWAgMBAAIeAQIXgAIZARYhBB+yL7RJJlg7x0ABIkmkJRyW4KnU
BQJbBAtPBQkTaxLGAAoJEEmkJRyW4KnUBrIP/0AlLjA+Q2N9c7JYOzAfAECFwkEw
zeQHAOX4P3HM6ZvQlmBOdI3DqpmRC0OBee0jYJhGwzIpYvang5wwDGvj3bELfpRs
C7152f/aKfktMJILHts4Z8uIgIlYFy6SIqL/JjffBuYWesh/06/8kMe+QUJe6IL/
8lPZJ1ev1NqvSt9Np2M9GFiHrcxcarfdnPUZib+XZBkU+SGJ1LUeiUZFWdUImxci
sh4Gcd+yw/P05oZrp5YzNs1lj+8hbl+BPJo4zG76tCaLnRBQ7A0cKUj2EHSwgHV4
Ff8UMw/Rv44B4W5R05fmXiC0kY00e9cFzOml1QKJ9Ke1lBI7Kn0wRFIyIoZ9DdcT
JAVCLmZkhLz6aLBloQPqu4vjmi07C5Fl93GGRHGUIfS0PSmQ2OGzr5aZAh7NJrHh
SnxkocL55rtjycKyazBY+DdB2a5RZ8JD+UQLlznLlKuUFLdCx9ktbTo015Dp4P8R
m4UrDadKR0O452cgxJTlH2MNKatYm43j/N9q4S6KZB9Vfpu5SpwTQNwxkS0xl+W6
HqgekA6u82+OMmTqf6ULiDTbtj9kRpz349YQ/GKheEZZKCeFmDtxQ1zTENRuC5tq
wDYe8orM5+6wmsnSkSlVMHp48NNnIeVnxDs2HGa1+Pq1RnSoHdFrVeQT9+uu1QPA
LkKCskTxB0ngy34kiQIzBBABCgAdFiEENU1PghGRV/Ch1uInE6D159p8CZsFAlpq
+7sACgkQE6D159p8CZsTqRAA08a08z9Bu3kOqIU3+m8BLRCMX0Hyd6p0HbzcA0Aw
OgsMLLTQolMJdwb21kJleJRa/Q30J33YZoS1J0qfTY1xuLjaRMvJEXX5ujnWS7s8
X0VMx1SWG1Kj7/WFSuS1iNzTQjitaQrmixMFxwdflXK1CExWdeeA10MJD1mibaXh
yhysOf2fnTvvkBy+FHOTOkAs0d8NRnitWOyRkoYwFnrTFTYUy/HCfGm7EOm3C4Fl
pbeJUC9XSmqpdlTTtRqxMmDscx3Ou8lTs4m1bmu9jsAjssw79WokaBM845jtjcXQ
4vhvqaUuTnJ/dV/KCdnABlnzWdDqmfIks5kacCmZ+5oRMR3OqV2arM/f0R3Bb4VW
J1SvZJYENRlg4wPLWw1iXlR6kf0eXGpQ3qF5dnKjOU8QSmnJHuVh642/++9k3+71
xttRfzsA7yjQOVJCc6YrZ2OSTZ+hy5xGQ9gkODFrRRHmw4ozh4+OTv0dsZJunOEW
Ih5M+lI6zdjMfOUX80L4BhVZAZdBqG/g4VrLEUWdDPoF/9VSJuo/HaZyxEom8Z2i
9939mJ+gtKAPfS9RxqHS7bGb44L2goFQQeRa2g4otx7HEaS3rtWoytjzlsQI5mHK
uZXhYfCabjRHkMh/c/h3SGzSqgufMQ2fuqiR2JqgXW/j9krhLRzmlkg3GTuy/lp0
eyyJAjMEEAEKAB0WIQR8IIKN3HGIvXMXDmHKWPvCG180XAUCWobCvAAKCRDKWPvC
G180XM6MEADRshCqez3sQ48oeafWhxn+kYGIcm5W9LYCwCheom6AfW7yDJ+x+StI
9Oi07aQVuqenqwjkp+lLsjTSnmb9bqeMP9d4zp9QDPK+C+NBsqy6T5FgAnOYg9yh
KScNwTYlXpQGERBGJyTZesH4VibdeAmwn9OjqnntMVuDXPknS3gst+yRwJvKGDeF
Ht6z0MfLk0/qMoutg0qDQsSz3zGNOaLQPn1d3IHeHwv9DsNF+2ZSqPWBIAzHmbbS
0qZ1hUIt1KMCmteG07hRJkgw0FUC5STxwJVy2vwhUFhR4B6vsz6lTfRP2jMMH9Qy
hJdNQD/DfloYOU+Z/i5laY3LTl5VUNUoOdicWi7zfnHrfGIRRb7oDINwbz6BntzO
E92MAXWGBJ2/bSw2cCpiNJXjVHGWTGyF8GvvV9OsAztwgV1Nu46QEUVxJMz2e3Di
2vyZA+qjRnV3AN/Nz8iF2acI+kKJllg2plYFn55MshbRFTpLJYVroJfLYCyDqsh4
bOCetU08JkKdW1BMRD3IYNMxBI7e53qsyPYbLMXyO7jZYkLUZQHZ1qJ+K0IhbTrS
By7zJj787d/Dmdd0qJgj8mLX45os0xXFHJJFunQZekUNUvo0NCF7MsOrg7cGdltV
XKfRqzoc+4KOCtrj/eP4bO76ua3k6IWVZS7SXqTHvhmlk61HB7FX5YkCHAQQAQIA
BgUCWtdrrAAKCRBqlHSqmtx07PB2D/9Byz6k41mtK+0CsYCVYuW1oDOHLS48Ne+V
zbxtLrZWLysB6Fe9mZ6CA3cXsdJzF1j1q43/WwmNj3Eyia73swfAqIl3V29oCgCi
oW+y8omgGlSJdPCPXIH8hxdrKRfimpWhr18oonZTXMNudYOggQ5qsbjzND3GWVTU
UbDWsQHK1+HwGhNHKorJML42mLDY1su/emAzaTnhZ4hDy65UXDdp1GbMyNb3ysx0
Hym16isvre2oLpjcPoxEjbOPeW+VRTiGH8f5yH6WZDaTyG9v9+JycoyPC5DCvCTO
Use9rLhsM/pl9QKbnRXRaiOFL4ssSSGjxM8UW6eMbG5beaDMuGyZC676dW/jm1Bh
KiuGmR66GOv7xpDMI14JWFCSqlPTMxzw7A1lZG0BlrOrlFRzSuGdIfHNiJ1sQKy1
413H3Y7/Gzrv9/tUeXOXfYFBHERald9bPxvOmpqNBdS0+KxpJnHC4ilhlIArAYgp
HKHrWsPQdbIGcjA3qdd4CyhZjh9wQ5d8v4GCRzh/qRAWeTpGghGaRKe3QhNKrZ/v
Jx4Tu4QN/a/bShWrIwrpt/bYvTG8bq1zjtA46Va2sVcHAQUdZXzRIHZxK8o3ywKA
IDkwaZtlUAmlYYK7P9np0N9suaRRj2L+t4nmtC6BqP6+xy3paP/EXEaOe20VXK+H
XcUNOsu+ookCHAQTAQIABgUCWtehHgAKCRCi1Dq/l8s0RcNUD/0eYHM6pfQ4gqNx
0wpoPwjcQthU9AGr/nthGr5Pa1rDLor9BgfimKDUOfHSUaEJfaz/mEKSKjCNRAlf
1PFFidwgcwPDLFc39CtjPBKQo9lj73X9YB9WBCPl3b/Knr3IZdas27fca/vseY78
Vmj3ynBfGm2H+4tkDTfy84F6hLIJ2c5gitNPKJ+C1fuldSaHU4enab3tADqE9443
PiVPbUJSmxHW+iYEqs2ALyvIgEmFVPqfgyMXLISOjQboam8aFm8vdo7T8NcdXoZI
vhzwq79iEFBEU9GpblyJGktueVHrWXTsqAOq/voIs57joMwZ8yNTwVhlG34KxZR7
49e1Ha7RAQbRfuhC5tlQfE4iRX2uEwPw9ZkRMy2VFID7DrAPfUW9SvOZXFlGeoCU
x8yf+9b6A9XCShRvck17jvADQq9WmHoiJTJgyh55Q8WzqW/7zsSr0WwxjhSu3ADG
TXy94ZTSkvjM7TxrZ4W5aKApBgp9NoYj4i/iKOf1rA1CaddqAbiJGncygmWXI5AD
E5oeTV156T4dKItyymPDZtT5G5JdYO01Ho3fX1I5/u05FFTOhMeQidtnWghHKmif
b+GkHrV8p8lKPc8TuvbjPG7Wj+xViLCTnRuOuEIp/mmRyHicRlgDfswQcTdBrjTg
PGidlrlHeCEqzs6l7G+QtSCE01rLUokCMwQQAQgAHRYhBBl/zlwTROqiPn/AajJO
JO09WgAcBQJa2CxTAAoJEDJOJO09WgAcCpoP/3YqRdWBoZR9JVechCjGUqbwOXeK
/IPxtUclYKUnQoGQ7UvXt+b00i4Vq7N6UY22fcja1yKFlccqq5toOVmX4OVZasIX
SAITSSwiuY0yy5SOmNrU7deCaGwpiEvZP5U3NP17/h4dS360MwNOOUFgMXIvpPmu
+uHkXEN+VysxkGPfU6IkTGS/mjzJwu2/kHD4/gjPUiKGfTN1OfoJDSZh0qqRu/Xd
s22wUsKaJHKHCGmZqx9fZRDLlw1ulG+yYh4e7y27Gp7hDZfuKCMR0ANGRguT1Ahd
nhzxOhEz5gnwU/z6r/G6OWA+vtZQuq7hW8hVI0/1kXRJBhB+GQ1oZRRnG2ysfVal
LB+N1z6/W0A5r0Nr4yfLXH87fFg10jQ93EtOcsmTg2FRCs2OKWG0HjaGrDTUFjn8
kF5uFqRgcuYKYyzx3fqY1I89l+Kjj4JF/o7gF9xNy497xlJfN7SDUNAGWZpw02V5
yygondXpVGHMuP6dB3tRg3au3lJgJflEIh4DOadpkZHg5XLkj4zPAoFS7AWqRwEV
jDCnxy0kEPL/A/1uNJMMuRo307gJMG9diACBCtRrRARXaAoFnQwkNnkwCdtLL+mV
mEcd3Lsj3jDeJHjq2YimcMiXKLerIyrRj6rnWe7qG3fT+N/b59YW/OftKFIr7+m+
oznpZP3S1sMPMrz9iQIzBBABCgAdFiEEQyrFVeaoSJN1vFzSKpaP2Q4JqQEFAlrY
Pn0ACgkQKpaP2Q4JqQF0wg/+J0utkOBJV7pZueQt+8JUQMvud3x7nTraviyAMKyP
yPi9mF+GYxpqBmLpXP/Gwy5Mm1AIom5Rml8ZelSax7j09wlj3Q6isVSWtFCRruAl
nRs058Key3EIO+i9IkuBPs9QOZOT9djaGpX5B14OE+ErAqbhqs1SX9x2CHYK2+Gb
3If5uF2+tVKt37g0Z7az0GClv0YZFw7Sc1s9hXrUsSl9sh2Qki1fVtqYQtDR5Pix
vbE5zdfvG3eQjS2fxif2E1YJVxcrLmDzocSZV/lLaO/uWP0yqumK1u7i2y+7s6Ye
dUGjgEf7Lib1o8IuE1TgwXksvZDNhX+MU37LnAHckZd6RJGH8doyDoGIme0/HLV9
s77T/7tapkx5QSOnXPkrjP9/KSJS3XAohMZdAUsipHphT/ZNjRbhRA2VgfjTaeha
JnX8z+CcvUZUTofO/H35izk4nn2Bs1z0jhs+hymb2Z4QoBnuGhNLycP34vG/3nUO
Sp7QD7hXCJHCA3mpfxh0s9YX5kKP9KWCuBgNY2ZzU3GonFwELo99Enmv0NYrbWRq
8GDIhaw3dTriPtZf8GexWsE49e+oArpxO/bvH/K+8asnqapgXD7jkKgfMMP+iHZx
JpzDWNiJFpBmu4G9UDU9x1vtFL8gNckuRckhFrORuVyB1YyZQEjmkidKgxmdIXGx
fMuJAjMEEwEIAB0WIQRZis+gxN9b9JAAC16ajKfVlQ/z2AUCWtmWswAKCRCajKfV
lQ/z2HWYEACvBm+6+uBwF+TbNPyTNkzBLkt8fDio0npDXm9862ZA2hvSxG7CeDnh
WjsEtrIUCH2gQz/0FX90hPKa+xU8P7LihC6wcRwJ+n51tsThrK5LopblUgb/E2yn
YFW2OCxz/1VPfJQ3pYJuiCraUClzsOMZyRkPkR3I3iwpl+gD8L6ZXVKshuaCzFjZ
/i8MCgBtnmxMTov+gVDdo4xPPVxv0wpde7N0mgRztSdwtCC6onVyp9es9Li/n0TW
+/zJgokGCxRSRHABLwm1f4715Kbmkuf0l38N8HF3SAmFBrE9azTAim1XmYfVvSHG
EIOoQcQRS/6focvWRxe3RHjJMEvvTU8i/gqyhQPLEf6fQz4TMTSnGV+9ltxC/etg
4zhFwFx74SLlW87WDi4KIfaeNnTSKZ54hQ8nDVbKBv+AelJuvGjVAbZUEcYEU+jx
zmXC/wxynQgCIN1FtpLe+g1EmbIiK/nhBILmlegHMOKGhLvC+0XVQTHki4dRB1FD
ca4aks7HM/G9PU7HH80r0vtE9EZi/f02hYJStCerTtmPyZPNqf5ulq2KcgWEI52l
EQEB9rHprd/cP5czwvKiDz2Oc3CELVbXmVGJhLWj/uJoH2FpZN6sPo9l5cqM2FKM
u7Cx78sGTZ7UpcDCEhff+EvKJTskOyILiFqGpNAQW4uFmSNEO+KEkYkBOQQTAQgA
IxYhBDHDr3PuPeAtcUmD+VgDNU0h/p8IBQJa2dCYBYMHhh+AAAoJEFgDNU0h/p8I
MrUH/2npDWMIOChTblB7lxqG+BBt9JgP95gpJT6FQGp9c4Y4GahiicCGm9v2z1+p
jid9vHnsb787LkBtt65z6ijIeIAhj0D21sbS6zVidRn85GUWvbimlnoks0reF1tG
7SBRUcfmMh5tr3nb0N3NYrs4EKGVCv6CMQ1FHAbdxUQ2Annu9rXJUHZEYc8KRD2Z
GK9h8ZXxWEMiqWlU9HlOLN1HdJyg6z0ymBHVmdlQtRDJULEKC/Cln8caBAxYu+vc
fX8XvIhJs8qMM8IBodkJwDUxDngbOTAuVwN+BarBb+mqgC4LNhhsGQDsMumjXfLA
H8a3KmF9tULRLRSJX5S7PXhttQaJAhwEEwECAAYFAlrgSisACgkQPWr7Q098QA/u
/g/+PXizHzsLtbpxH9Get87H5OeH5DcObvLwVE8sKDYaZ20d1cMd3KAW7gq2aBWt
Zp7jUmjjFjaEVkz3Of5OO8iU5teqFgo0Wkt8uSJNdNUyaqwfwsEXKUoC7QhSDPwI
MCUnL0SbBFkb6kEqKSq8VR6UWnrHpH2XMuD9M6aL9DPGVuEivKHPwXXXIaOe1U42
tEQk4TBrD0Ok/WlUH1U9yzYRH2HhEszKePpFGfZycgvmBxYVCmHSarDV2a+DwDN2
yrT7zfYMmwpTv3NaURZRs+4g3D3+AbW2kiPI1yddgtoRG80M/gkKwdIUxPCw9/+K
dwfq1tvre5Donu3ks2qMilWA2IghuTl1R1zNI93Hi8G1nYvlu9KtajzjVKtiYfZB
D2HsNllqsRhbaS5LX9xTOnQpb2TXeUMDu3v/FEOob4pNqOVOrPwAscagbnsvShvr
pWeCO7+X+9hvoaZOSqimGf7UOC9oLtOgJPXyLZVw+KGWIIBA2+tul49jJfmOCn84
nxyo0F7oO/9LtjbXWRrFDIS+pUR7Zf2dsBQJytdXQ6AesyHuvHg76zWLJJkPCR82
QCaDxUN2P5ZltmhnYekqLXpx4rppQOMyhUEwRWhnUhW7MXlOo7dYPFJo0g1zNYMj
w5J4M16z2CvQ8KE0RUAhViXVFDVGBVfr+npNvzROQBHYuj2IRgQTEQIABgUCWvK6
TwAKCRBYqdDzyTDbFsW8AKCGkFUnZJ7qMAY15kYsODXwBVdzKQCeJcsKFg9X8ny4
H1FPp+LlhH6xVKGJATMEEwEIAB0WIQRBkFEu1zqayxIDQdcEDFEAfBqYLQUCWvlW
VAAKCRAEDFEAfBqYLZGFB/0Z4nly7EJCcyrAbimNzuAZmfkVohgQ0vvrW/NVn6YA
aLQHQoI/feAJCL6stqC3KxY3pTVcKUFZk/gBn3BT6FDptC9GPZ9mcnWEiUc1Czw2
DK/lRobhMbrNzoQcEgsa/++uIrvJAuhW2zg9N5F6p1edRkBt6gyR2X65GeopNcm2
vUP/vjmx7Lbr5JD+vhq8CuepO075T7y+aciDewijPj7I9m2zeuQmGBhlKiq37gV1
WnKZ40NBm/7/9mRq95AXq7pdHOtGjqqjDtlobI8Nf+3gZgH9c1+1C82N2MhDEu3z
CC2Lqp4U2wrsEZLzq7hhgur6r3dxgCiqQLOBl4GBfaMaiQIcBBMBAgAGBQJa8W7Z
AAoJEKJ/4hHFKvzW/JIQAIh+YOtAVhWkj9iHlVe+me7S8A4DtCDK3RWJjuAIx2JC
F9VbXxhqShP3V43LCbHVXaJEZ2eoT3iLx6qqHiSn03T33OQFbgscITkw79aF0Mn2
M6MTmt9MXVTJ6Hb0X03sHu8bEYeyd3HR52eOOnl6qrhEE+7ZXU5QpuTkk2GOGE0G
B/Th03AI46iami5nlLi5tsaOQR6CNtL9bBz0KALLHEAL1dHvZUTZ4tqiHAgEROH6
BxWQ4HUi1uwAGUXC0Dn2tJ8/Z9xmFwZNRqRPYgcd62PXKXYr6O6Gt8cfbW579XJ4
oBPh4QbQ65Oy9V/qV8UGP+xlufNnqNXv0nYDhuJcNL40pUHGX+50X9EVAp1Ocsba
bd+84P0oYmI4Wfc/06VmGqgCP2bzPH6m0dmjemx0O9JXDa4ydhBwrzUg8RcooKE6
xiLDOjdefR8iWsHCEFabM/oNv1xV/UR4H2DscOj20X421+U4HOpxCBsW54hKSPVC
Wx27f4UZ565ZsmJl7w9B+6CJBB9fbI9Cc/tnFDIHkCWVr6dXCcGtMsP+fTo7GARb
Z21XvtP8twzXnNZNXo7WB6sX9MdTXfyx1TGxv9smN3egLi7jwbG7VuzxVSP9OQR4
tl1qYpGCkiiwLQ12ppoHavOzkdSd76WTp4oojx1G2Z2jgOHSmeDaLhXdVrFm1yjD
iQIzBBABCAAdFiEEP38RmoiukSTBIo6eSKTSzT93sqMFAlrq1wIACgkQSKTSzT93
sqMAYw//XPwLyCnPRIo9jdjtz62wvBX8wmi98iqDQgBB03ArJXD5oM06jT//htaS
tOAaWbQzfQEDgB0mLJPeIOFYYP6sJSrIhOhR0/KQGYkMzAIpR4cIHN/Lm/FSapi6
2J/60FX4wr55UZK2yVHrZxvkrUBjK+maYK8LQfjKIAkxTFJcvyDPeI7oF0emqibn
sGuW/43zDQyaci/oYRa5mB86Zxo7MdrJymxLk2UXzWCIX0AAcXCai2DQMwjhMuyU
v7YGJaCRMfx/vcnhtFcFMxw5ZJ8TGsRG0KGlHhjIB9He0tFuNFXwsbcqtnuMmrY0
sijGqWyGa4//SNv1bzv7+APr4VMN5T/V8C70qFHg55pnpeR1tczLy2xqfbDHK1of
WPInCB/H/ooca1U0B5PK5mQIigA6VeBHgdkyWd5AOaOsXaldywEgamcsrEGYljLd
YxbRhBmxh/RsO5BZTCVauCG8J0kVcul9+Qdh6OixQn+UOdLqEOLONqdK04SM3ceZ
WKpHg6PhnYOe+kGER74tCKy+R6bz5ig3HDks0vPraY/mlAoCSJ9i0QZdvZz/f7kq
ISqgrdT7Oa7L7A0MXQbkOAbMo3AOyEDFgk7jH7hsIUntymvBLGxPtBsXV6Q7LSQO
wHr3t6EFuY5ukBpbxJfr8dQbWLZTNuCRn/k63XdRpGr0035Ys4e0H0VyaWsgQmVy
Z3N0cm9tIDxiZXJyYUBzdW5ldC5zZT6JAlQEEwEKAD4CGwMFCwkIBwMFFQoJCAsF
FgIDAQACHgECF4AWIQQfsi+0SSZYO8dAASJJpCUcluCp1AUCWwQLVAUJE2sSxgAK
CRBJpCUcluCp1D/WD/4xwPTlXKxH12D3vP/2wmkqbf2iudU+TlmcEERIEin2aBw5
+GzxY9l8rNi8HWz5c7xA67Gss81I6RJJ0yXvuMwrfRPE8/FRFLXfgsf0twbxUI9x
krxXrqMho+zIXAXHA4K8ClKxJ62sp/IvxEZ8uuM/BAsoNOGWdDjxUWJutmgcwqKW
t+6v16fz0R8/7W21F8J0aru3XTrygPsddx09LsUN5E0aenUrYq9XN6VnobQpserM
Pd5lDc6wxrmlPPVNH+TP2Ne4QUKFhPvvNKqtafc1QBdpnttOHQ+xokN+KFDp5kht
YRrXJnzSVRQgE4qvmddPxn0ihTC7Rb+JVinZUBEFfUDKK+cLJmsU4o3N6OeRpjbO
qw7nlqOfqAS2iM4kabeqwUZ72fCSRk2cVkTJx+P30eztdLHISZBldFMV0iXyL1L7
uNUxeA9zbKvy2XDsJqiuVh0v6saJunKs9Lbi7i+6R7strE2ixYqLP+gyOkJSNAhW
EqCZf95rN86ymZkGHB7qDRjJP3KjPqtpqQnZIRFJsRlaVrTDu2v5XxqBVoITh/tk
2tP7N0DPb5rDbl5e0ABGgagEAXeW6loWootI9Jy08Fh4MCWwKHwQhF+xtZVtKpP8
N3AKf/rwNeK4320LhmqUJ+UtzaEuFN/EhK51uH8d948bA27GawVaKUOjHy4D5IkC
MwQQAQoAHRYhBDVNT4IRkVfwodbiJxOg9efafAmbBQJaavu4AAoJEBOg9efafAmb
yqAP/2rH578p7e7bvmsoPFBbdhzHhij0GVBPXt3nOvHrXLrQi+7uY7iNJoLW4MV2
tSvHU6OTISzbmgura7GbtGsMRRx5U3Q8fvY6W1VKoiPhMBLGBLAFsp25e3C/CMYX
2s8ymMnXlF4OiycdNDBQBkn7E5qhMLflkdVus2u5IGFKluNQJIi5Sm7Qseq/26Da
pmiuSq+DgLYiOs6dpjxWbLLSocdVA0GAC1lJhCF3usjDAEe9Vs9VY3NG/IVMpOR8
1kWeOWMJuQIzF0iCKyDbeoYIL075QnnrCXVeh60c/O9Op7t4J5VqLNINxeB2nRkp
1sa7+Y8w8WDz4Rcx3No4lqusCKFTDRGBjjtzIDNGz9j+dslx2DCQlYWnO78JyM+U
aBuKwuE5x/e8b68Vx0q1mhbEF3Kx3fsklmxmsmKPJo4BmPTD3m7Ah/sAEKHCQxjl
qPbsc5bBgJESG6w2ex/EbbA5kDPfpN3ILSCd4l8Tb+YnX9hDmbjVT1Z1BwIf+sUm
8h3cZkOt3VdJkF+/TeA7++G5wHb3MSgreMe9t87C7/mR6PsTcCS5DV6EJdpJoE4o
DuDvX9/ttacOM8SDGPc+snI1DgwvsBZzF+Vy+NuKyh74eImJDN4EOE1kKbiiqD+R
XQEuom8ypLHgPE5eKUoTlu+q+/Kugs0L5boNOnMyFVqtSFbHiQIzBBABCgAdFiEE
fCCCjdxxiL1zFw5hylj7whtfNFwFAlqGwrMACgkQylj7whtfNFxOxA//WzvhwdID
eLPJQSnHW8hLwA8R23aqOPL7A4uyDp40QWpt43fXGFKrOnHRvsOvp7VBq+kCRXbA
eTV4pWQVwu2mKFnCpdHWS61FYegQGMwsBNHscsV5/OByctQ9grPrcQXyJmDv75vb
PJGlnM/WYMEbuDlHTtiDY9l6aiTaQyxu5sPSe0FdEfT3+kIliy0xXBctUBaTrC8+
nTIfnSIfXunXdujeHVpIWYOiGxuG9wCsA1UeznIqVj1eC9k2aAVYw7anJ2bGC4Wn
CqJcTGWgZ+3z6PxzI3+c5OvqpVCdoeKTZSWpV/S+oqZPf8cAd46uZJmaJ+TExA2P
qHzDYLm7xouHQldC7K+fJMEsPwa80erlKJAz9l5GEAuIJi/FZnn2Zd0nkYlUJjXR
4t4Py3Rk5yMfowiyplyBS99DbmdI0STuIkAbrIP/TAxLTVr7XRgkX4McbawRK+7w
By0v9f6mKWAVw9HgAwEQUB2Ytr2qv6WHev+4g0zdvYejLeh3wXSIVPG8YO2M1jwu
RB3JFTSzSkQNPBDwDDS+Sn60Hv1jyiW0MoSnwuGEkDkBl1Ec5UmotExig8iaor/c
tAabWrUxbcn732j+zPDu+FetdsIxj2UcJFUwxcolqv2mGBPcJvNjJbOBAhd3c44j
qG98R3Srsd8ic6Hc6tDNzqhO4gO9K9lsmjKJAhwEEAECAAYFAlrXa7MACgkQapR0
qprcdOzJvA/6AhNvUVyFvtaLy0B+1qhK6Oo1xz9r4ctW+582q53b+SzYcV+kIAiB
fdkFFQO+ipeIxp9YAkLRroCoYcm7hgJWFV3YP5ZyhL63dWR548znWyZRn5/AW3Q3
sSkkzscTsdM84MPFKoywXhL5QqMgoBZ+oNVgWALzwzHfCPBVHv++iD1/DA8PUovK
eP0t6JmY1Kjm7TI/NL1hM9P/RjfPXjFVRscrQ9mDBKw0ZDxrZ1FflWfgujOMdzYe
VtUet/tX8VpQ/zLAOYwxp/R15m9OfMfnIR3qpXai1EAEhnmwGXXtGCmFthHBoqlr
0Pa5ZlzH4m4khpF3vCaATnw9LiA32xAaZtGuIVhur68QTp3pmz4gJsmIpSIe5vGE
qJPslNKP1zOCyS4VibX8k5vVKxqu0ln4uOD8dPosYsyUjoDOICFVlTEpKUgtK5Sz
ym1ZSeB71vvciMC7haaonyCnpVgsX1LG4hP9kY0dloXAuxl+z8rgphN99W22fPkL
2GmR5nBpg/3/A5a73xFajzEWQPQ4aowk+LkgrXkNnqCt46gNq5s0vfg/STVsTsEd
LbhKZZy9kZ73kuwgwrX3DzAfe28pJnWheazFEn8ClcVJQGVj60DqsTF0MRH7FKDY
XInzZWMR926aPGIQnpPSb4GKEI4STJNTf/YfnUF2OfW1FqjjGAkcIiaJAhwEEwEC
AAYFAlrXoR4ACgkQotQ6v5fLNEUa+w//csuigTGtWof0aZ8YpR502opz332kxNdQ
NxRwxdMbISBZheapP50P/DRmIRqXJ+O9qRzs8Nlofhm/Rp4S4xi+Q7j9IfmJmMY2
8e0ccpPxFLJl4YFhSNUJinuXvByj2yTlRExnIwb9toJu5zBfVK7F64etx1nAYU4j
Ri2D8Ns+jv86wZBaeVdwUkxaoq+p4+tjr3f4XNOGPhzaaqv59c3LxPm8otpUaVox
NoSlE+RpWlJ4+a6tfATRQhGg5fHKc6A0ygOPgWiMgySTqn6wUV9YAH/gp6yBB53d
x110nNXZJVCIH/LOKm+OVoU8dRETOIkG0f2YJPF7/l75owZ3J8AhpwRpQbBDlQjz
dtxrewqiyFhUD+3jq4mIxag3gB+Kkf8+bA8i7Br0Io5RGTkAD/eePNtCt/ONRKsb
So7GxgdQaWsyJffhHgBBnf/Zy3iHnxJiyaQ8ju7WiITmG1jZdAUlQ95uNEatHrjU
01Hpwp9kGz8BVcxeQxKiihXgp/XBpRnN0cMCqH5xH7V6nw1EcVdXvrq0dU6B8qKk
D8IOBbTYZ+j5OAZO4knBJ/ZUYEVUET7ZVUD1coQ32Wa+MbbR9+O+8n3T73m/uBxy
MHqpY6EZqwm7uzMkWJo1x3mBDOFHT95N5detTR1KfKAPyK+yM7cjhXb7pXcml4Mi
109Zy53NVFuJAjMEEAEIAB0WIQQZf85cE0Tqoj5/wGoyTiTtPVoAHAUCWtgsUwAK
CRAyTiTtPVoAHPXbD/47v1b9gNCTWTdR5mDF4Xgj4u3wc22YFXbcyDNyGrd4S2tP
HkvGhT7zWfYs+T9zdtE5n7ZauObJN8NS6RyIkAwIxVVdfuJZLu3OCKjflCSGhwTf
CS0Es8cFUqL6yGYZ2Fwq9tAwHLH8N0Bc2geY6dH9TlcI8RkeEocnj+ESKRAuwPF/
9ancDpvI1/ZyYCduZeDbesXBHY2wV4fiWYxn2Np4DR7USqoEBHDgyQMg4trGDNcL
Omym1JwCXEE+RK1Q2r7MiBbLUqR3tdCx3oUCypwbeFd8JTTZ23QBgLyi3BKoIZC+
hgpFmDogYkHc+cHjpsYtycI0pRoZSTRKOs6Kayp9DBLhkkMnvMGjBCDqjPc/tmZH
Q5gFvMQuagUT5gSEjNIyoTPBvuoWPsCZsAZgZ237DlIacwVoWc5zwMXDfFmYyJWt
ZTiSpyTGPtRvBEzte/DiF1SqmL1G3TAvBq/ljzQmnBMaBY3oWIXuuR089zz/ZZhd
pYg7c9eizM2G5ETYH/eM85M+b7QPBihOTButa709NuiE8WCszp4vXZ12PpLArBFc
MKODMrnv4lrhEwzjq6qT8lk8rOVwj7q2YYqRGodaYPHm0MZIalZwlIBxqKRZPWpd
ak2FT1eXIcS0Dfpt45hHBtdKuiVjIQzqCYHTh2rb+apjozzVvRzHm7eAENw5F4kC
MwQQAQoAHRYhBEMqxVXmqEiTdbxc0iqWj9kOCakBBQJa2D59AAoJECqWj9kOCakB
R5QP/iCvOUwkm5gYA1NRoI8p0YtKV6jVVr012o9dms16pBSHwSPR5VtVCe1CoV1C
16QiImyh9ncECTbh5kFYyLpZ2anAQWlbbeEmznwSZ1X67kJawnL/yZ3Aq+35oh+3
mzjO++B7J+uvhN44Go1qWKTO9iWfWlK74UEZYwdPgRwkPP+MPGfvna9Z/yh/Fbsx
cuclk2oyevt0xIypNc9SHc6LxR2Nj5yIay6RURS7IQ3xtM4fejR5C3pJVYRxVgFI
XpeAAg5+2HvsOXE+IqqiIPVOJiuM/KBQ4I3R0i0mlWbFzzct8VFXbRmpBI0HCwb8
3FWX8QjLspMpyWTzZAHu98eyf8WRO1KqYhK51wiw02IkJC1MouXkcZHxpWJPNDsl
Sr6dC8AlB11TfQ+s1eB0NVUYcvlK5lohbYL14rXz075ASYDsPxwMju0EebR6YDcS
tq7CHGCCqs4Vm29cLcI0qvn2Z38HGnw29yTtfM6yThEBGPUq0IJ/xCJbpHQP4xlK
jpROVmwFqxOdgIewxAoLpd8GOOIFaVDknCfqB6Z+m/GbPeDy53/Ux1aYlRyD6wYX
jI1CrtaGGAlBtSS26Xm6yO5+Y5Zuz0XNQeafv8Q630wmZX8Y6LXVPhLN2s2TyUV0
1ET85otSLX8veUg1VpkL73S8g9+MnVgn0RTcJJY2VH4DuZHpiQIzBBMBCAAdFiEE
WYrPoMTfW/SQAAtemoyn1ZUP89gFAlrZlrMACgkQmoyn1ZUP89ipog//dZIRIsW6
kamerPl+ZwKdOHsWGQ1BkJF63j79Qtykfqkt2eiQBYQdqs3wvyuDjMWyLi6fWAoF
aSiLLdZtoAeBL2EvLV1SeZSgSyqLwYJCHEdnK2TcS6kJCV3h36ZZ0qlRS+ebfyjB
01eV/jNQpZUUvZPzMaDGQT6YfZFo1mr9wsRyYVKo1WPfBTrBHaki0hR0cOL9uvGz
N0Og58ikQNioVAzqGf4rfUlAZS555FDCdQkNTpPPO5bxr4IXIQ9Zsw2RvO5U6e4p
5qVEO9zNsA0iwEyg4rLtpm+ZXVwn6DkU5rFwfn9aqZqme/cRLVwWFGHSgLiRPIJ7
Th1Cs4L7rGhaYP92HHDZK2u5wPvnNUzbSDiWUgYyV9jTpjBIORS4zFD4AdbVFeMM
ENOfQqYrlOrnHhbrLUtFltM8VUhJb25DnuZ3GG5uLS1cakxKgCAP9JC1ssMQZiB4
/OhNyo9zSLp+8Hd+KKsQwu5abl2Bv6CzsS+rJomCjeYL6P95gCJOS0+PjZZ6nyTf
uOMZegygyiKXqq0VHnxS/UL9M0vjdT786LtBODWiaJU+GPWNxa8LNSo890IqH4RJ
RjkhF2/wWhYtrSt+Mhlqv2BErts2d00Lf1RK2G31mFXMNzuR8/PkRs9bBHHg+hur
KCClEZetocS1QsBhlHxO6pWS+A8F9Sv8DxeJATkEEwEIACMWIQQxw69z7j3gLXFJ
g/lYAzVNIf6fCAUCWtnQnwWDB4YfgAAKCRBYAzVNIf6fCFjXB/9+t3dUvOCOCBOT
Z+WvNbMzka2zAmJTpqCdN8U4b4NwlKrvCdqNXHCtpSSQqlJa01O8liUpaoyCq0ZX
uORYhjBtpNeDhZqtEOVrwc9LOfp6bwN1X/tElITgPAsXHOIe42RH+hVRvT5kwb1t
WvxCSTlXYkkjGSkn47vnUSLhhhPKa2MIcR5xNMkvAXn3seTX1JBTcduRDkjeo89E
8dEk7RuE1zdzoJ63sKv4FNL97lcS+U07IqzO9xN+45qgaIx05uskVBMqrBcWXr6I
7azilkK4lil0atodzWRYM/buef3WMTUhrCauNZla8BYJTw8PBirIpwIulcQ9SyfL
nalHLyROiQIcBBMBAgAGBQJa4EorAAoJED1q+0NPfEAPSxoP/R3i9I+V/GNszB0V
JXWUVFBS2luxRV0Yhdj5duErVv80NOY33s2Z1irdavNw21KgB/YTUGCZqrSzEtDi
vp9NwAUnWstQ1XKBtdxzajDjOU4YZKKwo08N+EH9YjCyvJrGrzd37jcp7abq9Gf1
KNQXIrJ3UQoSsWYA8h8x0GSzNc+bIBoPJ/ccHOC7YHjxku6Q6lg0lyZB4lXYz8Xq
WPTyuZ1A/qzMx8OaLO6EoF6i+VwmfNzvH2eiRxErfEpieC8ZJRTagq2m4JgrbGfG
suHQUc0bqB/LcZi+fde5UkONvVMIn1IwsObBO0qYB9musuNIYF2nKr7F3j8PZTfK
D2kj6+SoN8iN+EtHM4B6rGPGhwR0DndPKyIk1dxp1RPw31KEytImnyoFdreNLSdl
hD6KL6XfPclZoz7e/+Nhs45FDqaTCrPetXZHbZchNp/lGIHxpNHzI7rks6AJh64X
uhNLjGnzIshIhOsm+oBxLzEznpWPF/DaD8dDxbXD+6NMr2yUBMyY0Mu+bxLpaSII
Fm/NFKygl5ETQSGJx1s/KXkd0z5dALgoyvEyr9O600z92nXZ0ZcRXdHaLTBOkjCp
SJFkCm/phxHMAHC9pZS99bBGAFNlqSx6ACpjB6gDrPs5lQtitR+OMQPXNdiOv9HP
WC0XrXgMCrllUS2hX7tT4P7dpbHwiEYEExECAAYFAlryulMACgkQWKnQ88kw2xYN
lQCfRa2coErcne1WA3okQha/V0Ut1xYAmwUZ7Wr08ZH37oCNCjjEDNxpQ5tciQEz
BBMBCAAdFiEEQZBRLtc6mssSA0HXBAxRAHwamC0FAlr5VlUACgkQBAxRAHwamC1D
Mgf7BLmT5BrrY+2qs96UUnWE/fBKQLToJLjTHJQuNouTQjt4ELS/UBCqK8R/DK1W
VKSuaOlv4Rwgfyti0Ft+YcnsMxulTfNgnsMEVMB+Nj8qNClP426RStNxj27m0CR5
CBvgf/KOvinAsOwXbetl2VEbJCU8J4Dp385nRfuMN51OmTBYp0d3HSkSksPU1MbT
dC9EuD9cu/n6ju5Rf04MUQHjl79q/YNPxJ1mc1UTOzzUVxS4g3epDImmsgwXO4eM
K/RaaDHz/WYlMKfqjefONTeRfcRtf5MgfEjQ6F3aHnSZiSEHPef+Y3LMWfgmJccV
9Ua/XJGUwg8hu81F5Kev8aNjeIkCHAQTAQIABgUCWvFu4gAKCRCif+IRxSr81j+Y
EADSX5NU5AQsE7NkeIckQ/LuDH8W3OaY+IQRk3k5RTpFq0rnwfXdnwalLa0p0SMv
k+IDYiXlhcjqq0B1pPBTVZzNftEpkEP766BNgjz/uL2Ual04DoLf53Dn0D5ySpLU
Dd2XzeeEJAY54Zkjre2kLkkXCooxaZpkzuktKDVXp4R+rynDjWK7fMeJpZl7+BXh
MDqQXK6S5MQJbQ6eNfJwz9ONu256DLVVvmAYB7RbBFItTgste8a5gBMnrlo+ydC1
ud5nOrzQJjbt/tU00392iL3XDstaMLC2+eNAjxFKq8KSEEbaPMTkFgV330Tjd6Lu
fJc7c+fTz5HCuHoyakRQ0yiUpVkXjpqmjSz99mhw6rC5/++bQLCHPHoJtIdjvfb+
uzxIO8v3r5SfcGUZghTOzgKBE66QQYtXm0pv2r5B/OMi7Y3cpQEyh3ABntXzB+DP
pu592DLDQblwBTBCYmZ6Tr8cCubGaOd6WY9YMOqUucGeKJRYhdZGJ47LRSjjILBw
WeSAbUpmm6nqFUdRGDt6OhNeWqr4R5u77JCfwpVTVVUPtR75ypdZYpft/qNkTGMq
JwLjIIeiNVNln3FEpIA7y+j7ub4bFqsvmpYy9DR2s2tTVs1P6xA8Yl5I3D9q/302
k8YsqZagPA/gfh1jVpwu1+XrjfaGiTAM9kidxKKmGjnGTIkCMwQQAQgAHRYhBD9/
EZqIrpEkwSKOnkik0s0/d7KjBQJa6tcCAAoJEEik0s0/d7Kj8kMP/286fEObmNvo
9RK7K0v5K3qXxDycoCSScieTjSH6z4CX2k4agyMB16+OTa+u8cutPyZ1OCmLG/at
eAnmFDkSsdfRqw6qdeO/FL9eFgRkZfT/3uLIkjdr2Sz7kirkvTOUgj28RC6+9HR7
cH0eAKZbcIJvu8TLcfhktftve0JGAnWs0ag+m6N7mjNO4ROrfb8JX9xrKRgPNUjr
Beyd8e8sptBSs63yDDXXXPcZCe5Wa9N4Cv0fxumyK3KmSfME4KozLzy+BtSBdQ/g
/ZiJrzhGSNfpDlunz9/ffVxk+p7VbxDkAP+KxpnuGvSWQzdTtFO9/eSynwwPWVFS
0aFwQsyvEqqeWD0zk5VJaeRfws7CYVJuD7EoHSBocHknmd4zsF17eeH7XGpstCIv
LvFtKLuwau0CmGkYyy563GYJYA1sQkLTjeiQ5ebdrNMUIBRhigrQfERS2yt6ht7X
LvfFRuMUqOhbku0Y3vXQ7+uoNum9pxnEHvPp6GQwd0HyfUREBQls/QQqRG4XB4l8
H3/TJDqi1R3os6D0naXhvQcKX2gbGrogD5tNqKrCeMAZmJJSswmhijhEQFzKgTxD
NRjCvY7UpyTwcbLQUsLwCmBEnlXBqYYfH3rsDWt9Waqv91Yap2tz44DS70U7XKsL
PTTLlFZyeAV4ijxTsXnXF+UHrbwBlcsguQINBFpk+4kBEAC7yqRbae/se4eUF14+
pePi+ZRNyTsdhc+vW2pRDQwtgSi6Rgs/cnTQnjKbxWhvmGxM6lKm+N59J+N9aGOM
Sf+BGWabxfN/HIf4+slaI+89ErOnKUtXcXnhxPxLGfa1OpEeFSMihjNFoCTCEngt
k+D2ng41GOKb/qh3IItbLasqev6Iy6A88L9UZt9Orznzd5vctR6nk+nJwnsbVE65
HQuT7jiMr9oWKIqZpsB3va7kGwzcC8j1EGIFHxdT509exAaoAEco/aN7ywLD3JWP
CYXEj1Xnq1GZHkh/2W9gHDQzLKEXdoJ0hOnmGHhRdC/MHZZYrTSkvb1byBJIamC7
HTK4rkU3jDJZwXeCUpdxuxlh4KHv0OzvMwJ5r/yS4j5KhbUhTcJ41gSDIy3bH9/W
guAKOY9TYLitBZBcOUNnA6Oc3iTZDex0Al2Iz0kYAJK4NJxJ6QIe7VwgXV/HwdBU
hy9tGbfSYXzPDusfjv8Q5qtz42S1RPkP4w0ochb1KdD76i5N/HAknHvTbC626IQ+
HAELyG94exy0mhjjzIVLdUfz05BtKirZPMqaNFpAeLJbJ4pW7iU3IHlb2Hj1oBKO
fkZ/6T/sIqwigJZj4aFsrl1IcoAW8y2XHOMpQ20/PGf0PA0kbJ8jfmAn/2Qnpz09
g656aSbiawkbgz8OR6+WtX6X9wARAQABiQI8BBgBCgAmAhsMFiEEH7IvtEkmWDvH
QAEiSaQlHJbgqdQFAlsEC2sFCRNrEuIACgkQSaQlHJbgqdRSCg/+NtXKzjOKS+VD
uW2u4lRcl8wm8QVIwFoLl0cEiLeCBTNnYqrvgaaLTcI3XL1N6wFvWHcIRGOoVX/7
3pJUYfDlN8Xjgf9vDMEhAdELW/lYq/mgC9O0dPoNRuLlWhMMzKvfHIz0ntQ/TBUD
ocl4Cz26NfeGCDBPoi1VOW+C5Wer65r10DT4C9WhDjSJXUS1p3jjzUw41LTBKNqw
svmevMotadzSe2t8b8wGGFanYonXDJlMAZdaZj8kAjMnOWT50ws7GhxGmEr3HrLg
J6B9Kcq21O3Oxl2Hb+QyYTXmaNaDs2vboe11z+/y8COMlKc+yHZVMsfoqgowhVEX
f0A/nPMCqHDS+WrHza8SmKtdt9ECgI4nzTZycwBmNk9JrQs1M+PTezNB82RAxXYs
WmPYJBKenJOoMLVZLHqydpHNR6c0pPPr3cOg4SZWEdX3nIZBDCilx1rRPPGOyS4m
0ncKvpNsUqK9a37XrD5VBtqMJS4QcLcPwYDLzJoF1geB4ZBh48d9eOq/0uDgLIMs
QQH0qnHyzAjTgTY8gU+w3fUVRlVvlf2tW90jtMtP6vvzx1rTh7uEW1BospGhly44
PgjdLYK0aaD3QEal6SJBMN2h85UnJGivgm867+2icMCrCEV97vHeGBoBHpxw59Ok
BnCrYW7FDz6A4xjXuY2RsynF+XpPNa+5Ag0EWmT82gEQANLhyooD5T8reN9yqakS
OBnGoe5NqWbDwuObbrFFI/jiq8SV8MzDMOC4MRgRKSeR/gulj40r+fL5s8bs9QFa
3WbK3galHerfQ9F+TPCJaVWc0GkH2M2P3yH7ANTugqnp+JENzPF3ybypht2c8Rue
M8QIkcCUG1HTf4G2JYky6HppkcCgWgpQjoXVJ1foyPfxPDqwTITWPR9mgpYu89+e
CV2eePZS+F7EcXGZPji3KGSISThDOpu0tBBTiLtS5fBmcrO0u1g7GNy9KVSwixvp
HiCPsxeodjIH5o4Bg4/5OBlzoLsLfgdKQyQ8HHCP/VtmGwZIhVrAVT08lN3dI9cF
osfYtpzLlE8C2suEhX0yGJHPnfel8qC1Ly5px5MDIAkWExssHPN/g9VEMsE0taZw
hdPmkcHm8rhz1cxBA9mb2J2lBP6pOp+l2csLLIr5g2e2gV9KdNq4ognNWD+khWWS
tKVp1x1Cy2CiJ64pYGzkFxFPFu89L9XG8Gjt0XwoYM01gn20K3qb3yrSvjFh3exy
qfE51DuqUntAXxv3KmgITjs6/OCHd+qXjJKfWh7bEnNmywtQZHsfqC3p8Ob7juP9
R0HuxlV3q72BVXsfGBPtWPnNeMHCszAQLoaoeoyznXTDddDye8BoBJvhcyI1fayL
tOxD5nkXoW/ddk0ym+JlNBnNABEBAAGJAjwEGAEKACYCGyAWIQQfsi+0SSZYO8dA
ASJJpCUcluCp1AUCWwQLjwUJE2sRtQAKCRBJpCUcluCp1HPlD/9MZaFi/ndpzRU2
8U8ePpD1uzSaG8voOK9lWmVX1lTboliBZ11iqZC/nSVi2gGmlr1snFJ3Rp85C1Bu
syWC17FtcSyJ0n9ZncfkA49RFJmYGoxd0ZhDWBRGbNugTMjEzopSKv/OQsmm+UBd
IPdYAw8S4grfpDStTNFFAvs3Rg3c9WlLysGcr6swnsu2VmRhkQk8Ct5DD1sCA/mV
d5+ib6P0UizD0GwvQyG2oUwDTwa53QU5ItJ7S7YkCWbsv73uvuU+se/JK9T16MJm
PwOTuTQjXh09yvUU5Eqy+AZjTewZiV1ouLIZG2IPbBJeXyMbH7ZfcocjzCN1nlaS
oMewhlaiSKLrS8JVKhIjGUhFgJWQZcgacBludhkiILmMsRaQak68pYAToS/RTjTc
lUYo2xKeSLYH03NmxWa5D4WVXQUeVLmHFkI7wVPXXHsvBnXn+J+cG8mawDewNIv/
TsVpMgqv/KX9LcKIrA09tmrQSPxcsxdJSl2Ar7UAkx/so5fFgC99C7UXGqcUFR6K
7WpOuoDNii2BJLm+nNerbNOKAFdAbaotFavkjHVxOAIYwUH3wopDaQJ/Bf9SmGer
0OALs9cStcG/MVrR7R7mf7YH/piTO1CjCKWluklHmEXAc09X9nD9TxxdO86DdNjq
j12T4eep8d/jFKethdMgvtVy+k8V4g==
=FFHM
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGL/dDsBEADK7apJAq+EJ+0I2qmHDl3vZsS9z/e2x4tYCLDLXSDEnDvsmeZw
3/Is555uDGgLYmY91K86l7ef/p7zRR2wbs3nguSo6KTbMiasi6HjvW+PcdW8SqVg
cskocFX7rL44p09+Huesi7JbQZDN46nH+gn39u6Tk8Qpm3gZP9JwptoAZWcJzujW
pY0Ni78KKbSFJjjL2YTisUlGButlFOYEwmvjVJ3+v6DIVeahqxvdtggy7sQZfRDc
u48SMsre/xqHvnLEcAChqK3etIyWtee/CNcn2cOJoVlCZsK/a+btAWC3shL/g64C
lFBjZU/EztXkwk9HPyxFoin985jINzdiWm8nJZeUxv+2KT/XNOLTU2nNSBtbz35L
dhju/P3oRBmfYDzsppDskwlYMaMubWTlntn6GR6u+f89I7xeXsdUeKzprRGo9yO4
Lm/5FsTmwUlkWsyjYt5KrLtCAZCoukc/1TOp8ZixgADO+fCPGixpYjlLSY2SiL/+
zLCHXPJzNi4a4vrO3GrV+zHdnNGdJ4p+VWXACwhnFyLkOW4T/nF+f04BS+J9yFMw
80bk8tC3Ok4HFWFG1XHaVoafpSoy04Zi5gHAAI+xcEPE4te2Gqb/CRZik0RdUzCg
fJFXMJycS6YwuXwUrlgB17mBXHDt7Yg4hiMGziQa+pnDmrgQQwg5WQs4PwARAQAB
tB9Kb2hhbiBXYXNzYmVyZyA8am9jYXJAc3VuZXQuc2U+iQJUBBMBCgA+AhsDBQsJ
CAcDBRUKCQgLBRYCAwEAAh4FAheAFiEEEzdr+JK1hxGBohjpvk7C7q3ywxsFAmb6
s+gFCQXlraQACgkQvk7C7q3ywxs+5BAAlXIVK4vapth7VX56SBE2y/HIRcrGjDlx
N4iLaU16yfma3YYEh2LpWQz7yMTJj+3YUaMuMcozL72BiNAQbK9lKOAqoScaScEL
DKLDg0ngnELKuo4ACFYTiKQ8Iy+Z15WHD2WV/Sf8CM2wuWrvEcYn377Y3qkOapwA
9DcU8D24UjgB/zpPiNVIMJNmwZLljKgzdqA4jVjWQW0UGl1NWM7ynJdPA6H4EShv
3ZAbyy47DO8inSdtCU172LZpONbVjone32tOGaT2yqNE89bRTFcN6/5LhJABaG4F
DALPzSCZjCJ0cYWVNJAmrYmLm+WSuYagAvR4eR9/FClZiSFQ1k/hUakYYtpSxNFh
MrYnps5xr63uFGJ4atVytA779dqy0Y6wwsUCnxq7gZxpAyq8afPdf3km14kb14ud
9wfIZgYPa9j0LRX4AZRPkrRx7021vhCkgjLKRn0zP7FYyBeKJvv4GsuFFg+wCGHc
oorRf8xuC0sxanwmRRkO/3iITNUVK6wYOyJzFiYAnNHZgHsepS7D7nrZf+27Rjp4
eag86shxmhuvSH4yLvQ9L4FK0Oa/Myi3VWo7ckqLhrFe82zENRQ7MzVbZmBkHAdT
NZxZFNOXwsMpsYat6Xb2erJ3ai0XGTqZ1ckdChS2M6FHohk/0H78LfWrZ5DteApr
1PhC9KUrttG5Ag0EYv90OwEQAMsHA8GGcQksqV55xg0TfGqSWtHVhXlA5GK40LLd
8gUn7/0d8ymlo3ZOJWjG9NIpnNQ9mLRW3TsDYC/8ToJ5tlwIS9/fRoCfc7TMFWLC
GrxpyhxrJVzgxZVE9qlKjafKOg/7ojXN8zolNlcUHWH3ehj2Fl9bwsavFDlFphK4
plS5xUUqkjZIq3e40YNSNL4Swt6HWMwQ0taPWVTwcaX5ruN8jV26kFGA4EbacvAy
ezyXucx4dBZSaPhqIHWIKvGrWiNcPfkTxP4v+c6OAm9fXO8ybBVN7kOZKogRgLIM
xsgE0siSt6nKH5k+zJwIhN4l/yaI1I6M/fIVJsLlikED52FdRfpSunh5yrskZ0gg
cPXyyZ7pPF7f/jjIpNBeD4xror/Ejk7lm0WSbUhfiFpQ7sr4lhyq3cpJg5S0P3Hy
WPTc81+8J6iEbdDImwDt/+aG5huBsyYYSWJwa/SKuFdWMZMemG1wundhbgzMvJPZ
RyqvKjKH303AStqU1R7ROvxyGF5tJxlQWky990X1+DUo+YDmrgWgf9/owqWE8t89
OmL7kNfXqD5sgzAKL7fluOfrohBim6KlyN40CWeiT7/xqd4NKZsDiKFqNLZhFTJB
W1uHerqLj4p6T5wOv66yKcaAuHNq8XyP9ypiYZhLHPNc4mh2jUjSlbso4Xn1eRJ0
QOxzABEBAAGJAjwEGAEKACYCGwwWIQQTN2v4krWHEYGiGOm+TsLurfLDGwUCZNoz
TAUJA/s5JQAKCRC+TsLurfLDG+ywEACFvXSt22vx1p3ss8Zk+bZK7M2q46ld6FfN
kxotGZFMGvLojs4Oqa04Wt2fLaZiYWgFtBfMkkYlbAJBFk5C5Q/5ikQkSxIs/Uow
vGQ2F4zphFliqvUNUcuRkqHjCOc61jKygs/V1UaWkY1gjAu8XmqwSt+rGmKhh5Ob
MFlRcgErD9e9KerCHuRmL7Tw12onhfuG5gK60DE0shrxkvZm5xPbjzysin32Pc9+
sK09PDIn6nFv8kfYBYcpfeFxaj18cMZ5lqf3WNwRYJk4Znu7eZTsUiIgzZ5BBrpq
OBX3LoOrb89s7PLSNfg+dzKQBj3rBCEvkklzZHcFH5u02DxepvyGnd6FljQbnjGU
J0OPjn4jcpdFHpGCG5Z//01qlr8+xx7kQiXFv+ENwrAbsKI1RW243oi7qwR0h6+6
RsNn/EESXAszeJNKDxAoLh477bM0FsZn5BpG6pDhNJMVQ7M55r/AE4xD1QaoyXtc
HYHedGVYhofLw3vyv6hsPNJiS/s9LwXf/jMNAaM+p5gFbnKRL00/0ix0zYf6x6vc
VbQhYLD0yTw3Boy6k9rHrfLNQdwkYWpk/JY1ruEGSMjbhiyvo6CH7EhLPI59Sj5S
OrEXCYTUCVVU7PUvs7QVcx5GgzBtmuBg47Ep2w9RfYhdG/5hTF45zVhstPNB9jId
/diBWsGA9bkCDQRi/3S9ARAAzZJjGVo7GtqGXyLCvvMDtMPDIDGtWllDmV2oYI+y
YPggUsWN3lzWvAUaE/YLxxFknU/TegCGNCMQog7NCmZgeAlf5Od9nDALOattk/VN
YyxD/BQOs11aMhBr4WP7+WkaaAjhMGaRwkadOMRIhLcOMplwNCZyOv9mKfptsHYZ
MmAY67/8QnqHiIY7TB2lUJTMJfyy0kWmg92EXPYPFpp57WabM9gSAzBi4SPEBf63
hfpfARTQMh0G7hYZH0IJja2tyrAKjSMFdmzGUY3vk5083hEYxsXjP5DoWARLpVXX
8VDlRRN6Q80xtVPLK1XYnOPfj5X4aBSUSzaPkwE5F2ybhygiQOJIw7+xcN9mO3eq
axGoah//FXQLI5r9muugQAY/+WJf6aepgwgPl8uuuwLIJOqiih+TQHqh0kMe7Ovk
4IrV1DGZ+v0nuvpdVneN4lSvjefMStjDAPJQDWkmHXUyeJPMBRBXWI42sRqjzhBN
s7ShQGU6eZwYvI0cS2dZny12ca7vBz1Zgkj8cfv+G2Xt88jDxxqm/HxHP57jZHZ6
IKOKKV19sOuJgIhSWX7VkPpOVYoE9ZfQ1DdCO8Du6USNPEqPFvP24lJqGqPtA4CN
5F1vTwpRHM9EHG7zgkHlZTRNlKMApoimfzrYJt1UxsvcpCO4mG8PuVGbOGdAKfvn
RwEAEQEAAYkCPAQYAQoAJgIbIBYhBBM3a/iStYcRgaIY6b5Owu6t8sMbBQJk2jNd
BQkD+zizAAoJEL5Owu6t8sMbhPMP/3SrFn1VTnVJW685T9bKDSOJDakBae4m1YAp
CWLPQ+S2/MF/7d38MgpgM6c7gai9khfxgr0JvUR+gD1Y9GGCXU9cIWP0cYmLBhpu
b5PEnbyZI6DUc0bKyAbfnAVWJv+toj71fLt+Lo7V9n38DafnKAg3YtxOxif9Th9t
vRKQsa5q9VPj0OOFfj/PyubZgMRqZn7uamAOMRhtKX3x41K0At61QGzhecmw/6Xl
APiJ+lbsjvX/Cgn/mKpKIw65q6ehABo1T0Ls+eVQRef+RDmfIGO4D8RUu3G+jt+I
wOfY49vKxhi6rTuC18HyxPrs7uwjpxUj7CDM/LKt/tXQoffc63F9GtREdzmJHSE0
9UZC2gcwYt4mziB9b7qbsrjubWC3b9ivcFdvWPmWgPcIgFJYroYKfn/DITnfjrlR
8EbmyoCVmqgJ9hTvLU/0z6bW0sEIkpS5ameCau0X933G6TacAEQvdy7WonzUrPsy
/GUZ3AFJqe7Eftvt0D2PTvPXetVnG6LJIp/OikEoNy7TDkFAS9yvB+KXaJ+iOg9x
BCFc0lA0rh2PvbQQdyg2YPStg3o43hlKAl/RsyYCAvIUFWggHnrk/pLbBMxVnv+T
/tm3SFDgtag4o/tI295NpFiroDu8zhPJTv2F2GxGPZmNawjw8hyqy2lF8oH9tD6N
mhxC7iIM
=P0dT
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,118 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBFJK9qIBCACypED81H1N4YmhMJrb4uOtTDzo+lFZDVVOcK11+NhTFl+AZZFn
WH/7UPn+q5ZZBd/IhONfb5QGw5FzTyBWHsbAteXgCvHAIyumwhQzhZnow6myyC6/
MwDhomT5rb3MkCKCyQMNfj/yMgL6ZRsXVhlGOLMmOekRfKe2wiC5BhRaQQwPZPwg
FS5D0Tro8Xfxjk98u8rNpQXi9walRAffRY+byhkPiBj0sVA2RXK9Dx2DL3EY0xx0
7r6Qhs2XkbXNDDCHRuChhHSHwWC16VS9x7Nhfg2EwKqmMGRNREikjwzDl/aHKz+F
XTLONdmc83sRyklqgH90f3na6s/RT5XTb08xABEBAAG0IExlaWYgSm9oYW5zc29u
IDxsZWlmakBub3JkdS5uZXQ+iQFVBBMBCAA/AhsDBgsJCAcDAgYVCAIJCgsEFgID
AQIeAQIXgBYhBIqgkoGkEvxr5Qru5Nc61kMK1HjWBQJjPBkgBQkS0lX+AAoJENc6
1kMK1HjWwG0H/jMMu5r/CafcnFftGP8rAErrx2hEyFApV2ZSDk2fA8H/jhd7e8GV
QWX8Gql5x2IhcWqfwGLPn8MJFufnzbDNP2UL9gZrKfq3mWzBjf4w1PEu8IhePVrO
dChylfhG3WNZIKQYFNFMdisi9vegj0huLhtDopn3teW2kl+baJWQZdcmYO+ByhJ4
2pQF4WP1KQLe+XtOoDJlmSpyU8a8628Tktetj/pyvk9ZwtMopT+RCM8mDdO76HnY
lIGzFFKiMt3bMd8GMT4H8ZcqBT3a4SQsmz58CaYXuQRWOlBpA1eIKj+g2rJeZdTO
FPmqc3k3Oigi38EgqHgfGsBB5kRr9Mlc/oyJAT0EEwECACgFAlJK90YCGwMFCQHh
M4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJENc61kMK1HjWzCkH9iPY5J5f
vEDt/mhHQCIzxBUaovooraxD7DuMMzYB1UrJPiw2SmdnCt6LaI23BQiia6ewB4AT
79HLLBqDqBNY3djZ8pp0v6rHsb9h3Dc8a+o/RdldmIKmq691V6nQlhO9rRbRmrpW
2ZkznRtDScja1C9gfVLjf+egiaVDhiqoTfAbz5JDUXubDARlR1pDuLFHjebjZT+k
ve0ZD5TWJhI1cSh1wEQGtSh13QFXO98legODlwYDk5FFuZp+NsPgCP7bBt1JBe9D
SrPq6OL2DZyuh+clt33UoqB2mr6DKLn2JJgqwUOo78j3HrLRkNtdPnqmU5YeSjwa
WwZrX9brJdb9QYkBPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AF
AlQuS2UFCQPEiDEACgkQ1zrWQwrUeNYt1Qf9F489tYvbRpKMdehExF7T2omKyrn4
RjRckKBxLpbQ/F+dqApO8kMyTCokYLEDonHLh2dUEsDyflJhq8RGhs6dnWpnFRLW
0A9sc8JqiJLJDBDSAHRudq/6Y9B4s5LYFs7bFgdSuh68W7nQjxD4lEmymYpgWLw1
9mJ1v99HaMx7mkJ5fEZN2krFb8bYvYj8LpA6nGvkxp1zoqLv2Pj5gTdiL2z4ns79
+ZFWFAgj34FJSPNf5WLhUmRerIOU5Dnuk/DMsLyHw/mjoidsGfcPK7imTB7ZiSKJ
apBSOWow1rX1iR9JB3yu/z4e2/FR1fCnmDX2tO44bIQihjQl7I/NGfp4l4kBPgQT
AQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAlYSe5UFCQlrH3AACgkQ
1zrWQwrUeNbEjAf/VQ8uRtxiyneECB14wjar6q3Gb4m8xblg5rKsrIDcWsV4Sz8r
d73Xfs9YpQ09402khDhuO/9go5znzNwyF114hm7WnqhvnbcpD84+boviQ5tncSa3
LYiyCqYLdkh3OZB6vFQDj2OmHEpiUcnqyxqDCYKAo4rSjNBHVAmz8xSv2OM9HHI2
B2qLVkYbpyp8iJM3/6Gn6G/49daYYt286UQnKSjf+BYv219xDRBGuKhXAwinueJf
6DEoDhtPRs6ZIesFCkMZRprrQWtoopyxikc7xC3PfLuPD5HcVFy7+N4mAwImv+lX
b5E7+te4MJGEPhIpqFh/RR9DCjf5hq1UnmUHULQdTGVpZiBKb2hhbnNzb24gPGxl
aWZqQG1udC5zZT6JAVUEEwEIAD8CGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheA
FiEEiqCSgaQS/GvlCu7k1zrWQwrUeNYFAmM8GSAFCRLSVf4ACgkQ1zrWQwrUeNaI
vQf+JmHKXWcH81yjnPWyrMUa6lr1HoAtnQkYsBJFNoDZE0PE+y3pat0shphsm1wn
4VTkirQkqViGhVA44xHiRyU5InSPmlmIpeq/tZmG9X6891qAlg2QsFHDhxTfGe4V
Nr/58QDmBUpWVgStaI9WbcIcO86JbH1BuUqQyXG6VUNiLVtJ47cwbFGlTOzbaPiv
vBtg8/BgoLVkG6onGJX0QoMC6t0FItOf4KSdWQ/IapjB4WDLRLoYwQsKd5Rcjtl4
3tOsp7cOiqV+SdOTfPG/zK+aL3wexPEPsCLjwUoqvQTdREa+xa7Krp6ohmIzbE1G
XWK3lVJSoop2DtrnzEOKLBpuR4kBPgQTAQIAKAUCUkr3UgIbAwUJAeEzgAYLCQgH
AwIGFQgCCQoLBBYCAwECHgECF4AACgkQ1zrWQwrUeNas4Qf9EInrx4bBN1PWVXWo
UL6b0OdnO8PQLigazE+PTn8+CCUq8snGEYJdJNGET9ltWGxQnryoS1IVBTy6WDkZ
rGsW+zzp2WNbCViAvXtWWYbFax041StZdGcOtw0EkIcxuzVUrel37xRdNhzYuP7T
Qj0d0GuqdTLBza/MCmp9AYh4oa4mJYZjGTAXGy4cPQR0D99BoCz8BBCj/LeprKaW
cNAaU9/PwhYP/AY8uoaizSzwDfNHTB+2a3pi85HUnpxYL/DIXx+gXIceVQiFGPkT
ZAlq+j25ZfYeU5m8Ee3YHoBbaYThKxoyHTZhN39liNMiOKI6kaPWLJmYnetmSBqk
wRX4fIkBPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAlQuS10F
CQPEiDEACgkQ1zrWQwrUeNbTQgf9Gv6mC00g4bLeVYrjfa/gYz+jfBNJVA7qnIDc
0B5CLZLd8AUEEJgvZN1yqaBun6j9d2YdCki0iAoJeLUCCptnrIkO2JC7394ibgLR
9HsyHVwVH+xj8EFLvS3W7F1Us+7HnctgetCcqafOTygSF4UxT8XGsBE9qTnJ2NXj
uPJeeZ5U8jaxhhShgjLsIFJoKAS0PG7Hw3QWXrXtaXUGuMEDT+EeSI+wUKaNtvRy
6pSCYF/ZSIx+gfQcqNWCK52jvvHvAPShvPVg2JA9vlnT/4tAomAK8gvkN0gXmn9/
ybTGBHa3UBBA55/b3W1YhHibB/GZtWxGxwuodEyzKPZ5dk2goYkBPgQTAQIAKAIb
AwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAlYSe5oFCQlrH3AACgkQ1zrWQwrU
eNa9tgf9EZ58HkUlQpx231qm21yS4TlW2E5pVjE4CBp8YZsn4d26YXOlgUDkTU2o
CKcQu33yyXASPclqmEnbzHnlUm7+gujs36TmBh59Ec3VDJCrCjZNpLFjccEtpb3A
LCLmsJf5o67JTl9pvle+evpKzXqYNzRdELxH01Gtfecsyeu1S/VAbSPMT79qqWA0
iyROzABX0Hr4mUQZjdW4/Yp0UrWR7yZT/2zcK/9nYw1Sa1q4rxRHW3bXKeAPjupv
xsHkLV+AyAcEqHy+IxKgXvJuteLqEblYi04tFMMem595EsJZvWOl5l5aH7vIzHKh
YfOPlRPzgzhIIa2vjrsQL9WPDWlT3rQfTGVpZiBKb2hhbnNzb24gPGxlaWZqQHN1
bmV0LnNlPokBVQQTAQgAPwIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSK
oJKBpBL8a+UK7uTXOtZDCtR41gUCYzwZIAUJEtJV/gAKCRDXOtZDCtR41gDuB/45
icHrbHcO4P8xSnOJTvfsRmg/KIh6JQkUOXIwJOW3isjvVKShR8m9LEUwf1thY3bc
MVl2ukeHXgGrelOGfTLII8rceOki0ddNnDovdhL6vKSuWfiDq//WMde+7YBU8Izl
EBMU6x9sLMS+aTrIRmP/rZWNYIdTOmG4JQMKy7bkQ0AM2QejWkPXZnlba10WnF6j
cDwWPgTkAt2srHQrGyn6JLF3W9qOhAIFwz7kG/zi5G01EaFNJJ6CsayPrXjaCFZ0
GzubB7yAjdKu/MqIJQlYseoOrR1PYJsyMZq6QCuxygOhGnbCquif4to0+6Y/ACfK
ZcVhfP3vRGE1YJLj20SviQE+BBMBAgAoBQJSSvaiAhsDBQkB4TOABgsJCAcDAgYV
CAIJCgsEFgIDAQIeAQIXgAAKCRDXOtZDCtR41vdKB/9pw81jWa7jBZ0Ujn/eQ0wQ
YmfLvNp5fcHjEIUtmcUWtVeWwRnLb+lwZURRIY5rNmx3IwiQL0TgHw3y72ByDe3/
N6O4lytSKpWA1rtfS/8Evc6I5WEvod3w6cMO1vPxqNjUZCC2gLvbp69LULFrMN7T
UBTIx1qmTT4deZ7UXUg3/Rsep2lPQPL2Gz2JpwH/ckT0WPlPoTp1+fKloDIxsTcW
M3ow21fx2YQuXXTOt9ZjEDS+NiUeRrNEmKi9uHyMoOeIZfW7i3L0NknlXbAiycrY
VTm1r0/uSQSr46jBja7nqMVbr0OawAgzwzBmWDINGQyd0I9zFR/FWmJRxm7T3Gek
iQE+BBMBAgAoAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCVC5LYQUJA8SI
MQAKCRDXOtZDCtR41vGaCACOrTXM3VKyMdMZTX6873zb030UezvbtkUyYC9jybb1
t+8OBiM2s5OFbE8AGwkEGdYI0behwNnPq0FzRarMGhIQDHTqjfg5qhEMnKUGuhG9
lzWLZsVEQwNqfAJU6eOsDXXMvt4foLvYjsMYPTDm6i90FqDSyslr5j2bqgzP21hn
xXiaCzCpplRfQo+AyVhlw5F25fmnESNsG+HCA7wsVdATg858SUFfgPe0N1fIP0MV
R1LTtDWdTLU2G6QFNkokkDAX69FT85/TqBXAJ/46/R8pLKld/GJM8BgNP9YUi2se
Ur1cf4OTqFUxWHs8JcAGYty9R2f3M0UrjpgW6RblRWs7iQE+BBMBAgAoAhsDBgsJ
CAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCVhJ7ngUJCWsfcAAKCRDXOtZDCtR41pr3
B/9o5p8/9bgvq6MoUCIw1nuvuRszXYTiAhurNewBTXikPFrIcoHAYEsue58E1TyT
z6p4KXzPJDJOHFVBx/tod9ddHeCwACRjnBg4fayGiRPpjzL/OJYq0ZMKYofjEDI1
u7YP88XYV64Cq+aYgYSsIraCOi3NKC6AVkD9xBI68zco/lkfTBVML1YqPS1hmppc
vl2aHH9ttVaPT2KdPL/qBdKNR1pw330GEjn1+EXb4B63L1P5PRM4GTa/8rPIoUgD
sVmQgvaLvZMJ1dOJnLxDkBUBhdlvs4j+lApLvEQl2mDT3qUall1ClgYOZq1tA64l
W4Ma5t6Kio7eJRPbSDFl45xruQENBFJK9qIBCAC+k1tFOeDS4gMxEgRkfiVLHFem
wJWQiGZHYhtDgjh6w6mB8G3WZ+/gD2CMp5DgHFRC1sW2iMj3gOzrfyxzd9AmWbhX
YceR6EFkTc6OVsaIb+eHH/Zo3DKyB1Dq9CA5fjjnEQzti+KKSZYWzB0Fkt7qrfOS
6YM1zMjEUxUUwsl1qirx5DuByWLDX7ULU7H/xmPVhHUVZO8XEaFV2m+ICx8Y6B98
KMeJ0Qz8b8wp2g7vWEkwS2R6IjF0kMrRxnxUvwA6EUiZuFphhuY/lWCJusLl1olg
OE+BKMEUStJWEi0s+pd8FL1vOLeNKbIUFro0+oZr9byABpkPNjMxKV36uj1dABEB
AAGJATwEGAEIACYCGwwWIQSKoJKBpBL8a+UK7uTXOtZDCtR41gUCYzwZNAUJEtJW
EgAKCRDXOtZDCtR41kLZCACHHrLT9XpqJr/tbIUwrPKlqXQbgHQUHKLHokTFF2mZ
Jhejr9olv2Sf2SiU9ie/P5DdnBOxG8NJ1VT/7pYIPzsD8UoGu8c9eGXKW93yohSc
33eUFooDTE2Pt94wbEDbq9En6GzMNmIwvWLLAy3gkC77SyNmJLf8ZuSCQ47DpUK5
79Ym2p4ASsxV9afQ7bAwpKB4VGSMG26kFviIRf+e9HEROYIfz1W9eC2z/JjNjNiw
aHQjuNzi3covFF868ydaKc2AG9N3kzMOWjiEAh1nsmnDCmQr8ZAMfh9HvOYtAe0+
8bdc9AYZbAQLt3hv707nLJU6+3M5kt0xJcdvBEBamQMVuQENBFJK920BCADVvB4g
dJ6EWRmx8xUSxrhoUNnWxEf8ZwAqhzC1+7XBY/hSd/cbEotLB9gxgqt0CLW56VU4
FPLTw8snD8tgsyZN6KH1Da7UXno8oMk8tJdwLQM0Ggx3aWuztItkDfBc3Lfvq5T0
7YfphqJO7rcSGbS4QQdflXuOM9JLi6NStVao0ia4aE6Tj68pVVb3++XYvqvbU6Nt
EICvkTxEY93YpnRSfeAi64hsbaqSTN4kpeltzoSD1Rikz2aQFtFXE03ZC48HtGGh
dMFA/Ade6KWBDaXxHGARVQ9/UccfhaR2XSjVxSZ8FBNOzNsH4k9cQIb2ndkEOXZX
njF5ZjdI4ZU0F+t7ABEBAAGJATwEGAEIACYCGyAWIQSKoJKBpBL8a+UK7uTXOtZD
CtR41gUCYzwZNAUJEtJVRwAKCRDXOtZDCtR41g35B/9+8i+Rb1rTANbAGdRypX/N
H9sF0ErvYPFRKuROUk0VJqc9X7yyhG4ExN2tZkdIM9/hbYxX2efPP4WAJdN2BDzo
6PJsIRso9va5dGnCp2Bg9D36wmfo8RUZ6eYSY7vpCMPFsso42pE4q+IHRQyoErm6
Bm7hGX8zgg4Y6n3dcfi8w3tcG6vjS3Tyc+iRvETrjC3RiNmwItYwiPUGDzoX1sCQ
bSooGcNEFM2yvu4015ymBamn3jzFpHj5D5gVHGHZPF0oZtmn75slLsTcgnyfnPOj
RblmB7O7i2szAarIoc5n56zRkJABnAYDVss2MoYcvAjtfTVBgYIM8ha18eM5xKDC
=PbM5
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,58 +0,0 @@
Leif Johansson :
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.11 (GNU/Linux)
mQGiBD7DfnwRBADpIpOw6bXfx2Yo3vac/j5WzVcWNZKuiYc4uuFnBYxH8zTA5cdw
ytuOYNtecX1yrPgmObfPVU0EFktdBMFgLE5TNRUMeJZTmAl3QYDm8N32SeSUEb6G
PFsUTGgxsCW3GVAoq6DBopKqhR9HT0+crQakbc7XkS4FjeBWiXjuNf/IqwCgyoa2
Qfq8UdjbcH+DRGzPnRTeqzEEALIEsCzDp4HQqXqqNLCoExbgmCrEHvnqFmilCHJV
nyuY8LXmcpq2uwJaiIdsTqLeQ8WrMxWgmZc6F9QSdLP6MVZT3v+5OqOZMUDsu4nG
om3HH+tG238vMSEF+klGdrI0wdscrY+28Oshjhqj4FZxCwdNU9RTU8xQ9IoObiEo
1yOHBADK9a5GhkLT+d2cb48orETGtG7i//HOnstouw/TmEUXreZPtT6wpIdN9Jf3
W80GA6A34VEGA/I+/5e+9nFvINpLvEF2ghJBH+sWwQ8EXpo0M/yir9oGeJI7gpOH
Rj5Mq9uqFG0wcamInuWgbMP1cefjXusHbHyDFKr7ydWSsZHqXrQfTGVpZiBKb2hh
bnNzb24gPGxlaWZqQGl0LnN1LnNlPohXBBMRAgAXBQI+w358BQsHCgMEAxUDAgMW
AgECF4AACgkQ8Jx8FtbMZnfN2gCfZZFbqsv6Tx2P2aqo8uq9aoAozNIAn3iKFa+U
5NKW0LqLE9yfsGdg+6kEiEYEEBECAAYFAj7DgisACgkQGrAM3ynKb7+xEgCcCF6J
/+KfznDzjM9pstb52uAIRrsAnA3WKCpHVN1aqkCXPQj5m5bhcwUViEYEEBECAAYF
Aj7DhUkACgkQvW0872wn7f6G1ACfZSg9ynS00kDncG60cn+5OknZYQgAniUJ/j3u
FYpFNHvM0H5v53uIH0vciEYEEBECAAYFAj7J6rkACgkQP5nw96jJnKNrrwCdHRo3
MYh4U7nduh77lss2X2oxEB4AoImHbwUg779PnXCiKXrCuU9qaBBkiEYEEhECAAYF
Aj9Qqa8ACgkQ02/pMZDM1cVqyACgoC4wtpXlMSRzG9wAH0FfLglZdbQAnRfQfAdq
XaItNTG59LNSC3wXElTbiEYEExECAAYFAj7E1soACgkQvpNrbHcR9Bl7mQCfUF1+
KXNMsYoHp3dmSnxeY9smvMoAnjUumD3ak4HYCuEqufOj/dsvnNboiEYEExECAAYF
AkG22CAACgkQ/I12czyGJg8ZEwCgorz/TjX+e4bUvmEf7Jta2zrLcpgAni9Q3GN1
V5xPr+j4gLpHCZEJRMntiEYEEBECAAYFAkQgevIACgkQVCc8X9UG5FLXwwCfdhhh
ZY2Veuq2783JYKoDepP7p8gAnjRyYsV/KZQKDczoD9vAKE5ver+GiEYEEBECAAYF
AkUZrXcACgkQfwlELCNtlACgOgCeKP4u2/dzmbfYNIjiKtayAem7/+8AoI+s37rc
BWZFpD8wh5JR3cCCyxZ1iQEcBBIBAgAGBQJEts3PAAoJENxm1CNJffh4CNEH/3Hp
AukVW3lB1a+jNEFKvGrjU7RcZQeJYjTNtK+Yk5zKyR/z+z3OmZYXfj9oi8dn6qWL
gMkoTO8m+c8e5vbTosbCodZaWUBdEL1YkjKMxtnAnVeSNC1lEksT3nkaFtNy8163
IHQrlpYTyGYb9/53lTtTlKYjxN0PfQCsUsnh9wTZR5Erw8Yup1EdUBaV3ghU0lzf
HC9lPERpa4/4wf3PCr6PSXYM/O4Gpkya1RPPlNQmg1VXaMospyqZVzgvBVi2T60i
0/bfuztt91um3gYeEMM/4un1+yCnYZUTIbTXsRZ8Y/XDXsAHrkEep/LbDu++v771
mYV7x0Td7T3hZM2tbiW0HUxlaWYgSm9oYW5zc29uIDxsZWlmakBtbnQuc2U+iGAE
ExECACAFAkpwvMMCGyMGCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRDwnHwW1sxm
d8fDAJ4piDExTh+Cy36fG0J0j1pugUMgOgCgpOEuOvvj6arUzNMCGfpLmGgbZMu0
H0xlaWYgSm9oYW5zc29uIDxsZWlmakBzdW5ldC5zZT6IYAQTEQIAIAUCSnC81wIb
IwYLCQgHAwIEFQIIAwQWAgMBAh4BAheAAAoJEPCcfBbWzGZ30GIAnRfhfh/++wBd
qrUhS7YbAmRi4QOrAKCs1XjJ5XbNBh+PMjslPEhXNvZyp7QgTGVpZiBKb2hhbnNz
b24gPGxlaWZqQG5vcmR1Lm5ldD6IYAQTEQIAIAUCSnC85QIbIwYLCQgHAwIEFQII
AwQWAgMBAh4BAheAAAoJEPCcfBbWzGZ3J1YAnAiLOGwjJvm08O0aQEwlq9jExHJ/
AKDDmnmS6GxoYOQCdWa87weOd+5sk7kCDQQ+w36FEAgAr1zK1qIIXmoeEqFulgFi
17FRpSibNwwge9bkG2+IO7MOm4Ih+f4CRkqaP5U5diiWb4nyQc/Yqzf3TTSE+CH0
ghvDCwfZHrzUsVl9t57S2RFKaQhDUUw3lz0TgKN66z1IRnQEARuz9PFd96pIhLaJ
BOn0e55Cu5qqJVwGpst3+I3jqT/cxjymRxPz2O6R9k/ZOOiOGROZYAjNHKcdoeBr
7OaIHcPRCi1R8MBKE4HOK1SwaVvs26Fd2enixIOBmyFTkrue3VgaAd3zrJauD0qa
/u5y2kGEyFFJwNsKnoX0aCmNNIG+aKvnSCWfba8bmYOAsbxS2lo4MKmuDM0rrVyL
hwADBQf/VzM77aviZ3Ir7qXj0uV/62wyrg8/5flXl8XjuATewD+hTaux1lg5LgPU
9cokMHYHrTsnp79nhEB9qOpsQLX+npae7a27x3zyqLP0V7neyKy1ycuBI9KU9B3i
vgSMRlKR91GcmUpRnKiSnxPYNtq018mY72YYHCpfAh0OOUA88bxbYIuF5cv9dYyO
BhNEkI8xB1VOWev1CPkPb0DwDABHdOBq9e0hT3OUOaat2JPwCEHU2NTGsYFuZRys
q8xnxFgHd00+h2OJZ50UYVpBjDxaCj5gvHHFFnmfCLD5VqjEJGi4k2znZHg67i2p
w0f5BSq8fsfdUML35LzL/aaZPMzlg4hGBBgRAgAGBQI+w36FAAoJEPCcfBbWzGZ3
djcAnAxF3084vKlsRNGcyj/rn5lA4Q+nAKCnjZYXsnFG51wbu8OI88aj3LJE5w==
=TBju
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGbVuGcBEADW0CPivAYNuZPU7EJBv0L+dLdeLSd5yo4wsHg0wIKyY2gIHSGB
7Dtm5kiCTV/MR0R+VSZ5MZcCkmUNwUHO/3okn5hhBhL+27b+blxwmLtL5P9XXoTi
TZvrv7Vt//wrZGYml04Dz+o6khLqYvzr3GZk4erETqAPAxLmiFguo26LbiUnY0Qp
tpp+Skr6C7884mq9SglvDwOCboqpLyYMxi1SYldwHw9DmZ9Tt52D7MHtyqh8pUDA
1lQtJm7tMQCvCP4xKlDjkM4ijD2ou686HHLAWm5gH0kT6OJV7SnmvsJrcxhfy/VK
P+AZMjhi2dy1zyYv/8PXLrTkRvs4+WPrBBpJ33LegtpW/C+DPKT4TlZPpWtqM5e0
YN4fSsvxP+4jkol2Tiw5X+XtAn4lMgTejamqKj+wAye5kqA0uTYNdbTkBv5bqRHu
vGqRXb8zHcM10U4oo4d7DufxEPfjq2Nwfw1OGmAe03HMKGGD0l492J1TEcqXMg4U
sywyoohzBr3womw2Vm9O9zn44e8YiheP5TtidYVaZDpoUyaT28J3Vm3HdI4l0gFS
oW9nRQ6Sb4+ShtNKxDBgcbaysf+W/GlUHpEWXgMa/mAk4CxiaK6FNFSk1negLznD
6kLhxkK4XGh08fvMCdQDwgtDwCXc+4G4IquyHyt4Cg2/JdgP+G2Zv6qmZwARAQAB
tC1NYWdudXMgQW5kZXJzc29uIChVU0JBKSA8bWFuZGVyc3NvbkBzdW5ldC5zZT6J
AlcEEwEIAEEWIQQOflRXU56cXkw9l7IffIlrNLKBZAUCZtW4ZwIbAwUJA8JnAAUL
CQgHAgIiAgYVCgkICwIEFgIDAQIeBwIXgAAKCRAffIlrNLKBZCbGEAC2YtJ3vAny
0qD87NBPz3oIFHl8PLVrU0MnThu+qqraGxiYL1hk6jFCuwpcHKJQyeqMDQta1nmj
n4yWxJnd5RikFh60xeUkzTsWksSOHl3e4rUlaWOnrCjZ1CkbvnI7xt9fwICDC4Zg
+mjuYfyLxY/PH9X2PNWIRXzGnFAmftoTuqedbXTZWTJ5SkOsq7BtMDJDpbICmjwm
tLFtp7yWNuVcu5aVJ01Rho0Ki+J3hRuCtUJGusSJEAIBGvYNL6qpgkXYH/IdYJOu
eOMjNTUeE9RNY8uSNJZ8tXTfim87xCw67SOW46qmOvFYS+KGsnSCOFfScILxwnQ0
6bR/HPNiifH80edi8M4X+JITCCFGsF71OfXpM3JVuQyWlYRePGzqiskgsoC+W8ns
n2ukWdou0sJFBC7hUrazXn0jxGsp42/BBD3TohT2uZnpFdnfs5rDCMmdDnZtva47
DCaHo9TzWNsHBN8VnippGsOYRwAh9GEfkzP5xTYgpYCHtLwy7Xop6pwyyiOhXRz0
G3aks7ibJ2jG4m+FFynN6eEOihcrm0loBX0L1f/8+u3lgca4CVDjGqqSnMsSVPbn
GrIB44/CWLhL9A2iOoNmxTsP14MLXbMSNDX9LvlXKVWVKB/fIulhcAD1hxjjNeFM
VQyWSuGNIeO0fzq/LSGOjYKlrZSA5kRqsbkCDQRm1bhnARAAs2fVZXnkd+8U/61C
kJfCAxO+exVU/RJnWmMFGJE9NQwnKIgskllrjbzaus8Ohgga58Wm61HeCP68W+Pt
/THlylTDSGPlN0Cz4IeaaUEpynWRLlFMQVO+iaBHODyrVcs0wFoyZLfaOlD+zYLU
faFkZE6s7gqiZudlzIqXX2zNn58VZ3STxUEzP4MTaEEGOKGz6Ykr5bGIgN9DswpA
KFmPMQKIDcVupu0Nmj+NVb+fxYTTu8kY3NEK6ct+5eyUzRfqwdxeihKbH/aEknZk
TaYJjp+Exahm29+G4zEDJxLkJPOcfYeFpJ3tGmGyxfZI/xi1aoGSZn0T1wChyDcc
MDu/Cn9ZOKuxYUQyrhmFi05UqqOZsvRqZuyowpmNMP948GgtcU53k/UeSWeHupNO
4ssgKUs3TcBMrFIFx/brIR9uCXBkkcK7K5tJmZbCWfLBXKxxoEFVG18GJqKnb0bS
pfWcZSHNJQdxlKcILH56QduTD7xuORVHhkTurHn5oD/gEaK0wvCQ+MWnkBxZtaTt
6w/UG0Qgt7U06GxAjuyFRUBni0Ljd40JjXxpqsQbVEf+5Illk1k5jX9XtPaXLEPs
2zor2rUhKYOg7fMtVPHJ1P8KyDZbhLHTOcu1B/spTngBlxSXUU2GqQ69H/GFQpzT
NPkVfMNKpnpj5dapjJoyjMPs0KEAEQEAAYkCPAQYAQgAJhYhBA5+VFdTnpxeTD2X
sh98iWs0soFkBQJm1bhnAhsgBQkDwmcAAAoJEB98iWs0soFk4tMQAL5Fz4kzkjWw
eHZrwLsZLsenhf5tTyTCBeZDteLAGd8LP4Foz9OHC0NHOEHui8uP03s2WDOI+d/5
dM0yoLtKD1DqB6mRDqpxjem3vGV2Ypjstd+52QFsM0GXzsIa21k24SEdluwivg0N
hf2p8nyfa3IRKCQRBAJVV+1z4L4/tYRp8WBcWdxtp/gNVm4GmJUDNwEokdNN5O92
cA6gSdgAmIQb3hGIPjivOE7t0szVJ9+0kfXivuToVExprgb6Wtse2jN2DlM8ALvK
KWvlNBvrcev2hmZGy0IfPA5kQehZeFASXwa/yCDwzW9EzDJOcR8UU01/Kgsz2CNm
oel2pjvfaw7Fdb9vWmNwO2poC1JmZUEXIBPPApkXxaqsclFpTk8tGnEQlFauaIFm
HcwOBCe3gtogapz1MKg876oBqYxmUbxgoDAjquyyOllzhCvrJ2aDhSAHgoRDCbDv
8zmIGDtueojP34UfoD7x+oDIK+0eHlzseicQL/EHLH7jQpBFdGnwp65gbZzHiXfV
pvD/6dAe2Xa7ffPmoPU/aSxVGahY9NzFjUmlI8SoknIyQY4OjBzq/FOXKuHR2IWc
Ph6clodkz87xU1hJBN2PvICZBd5Wd0Yb7xKfQNuR05CMolTkthKZuyCCMKkWGTYc
LYRZ6TH7ZRyeyeKha/lF2F5FvAHe78Y4uQINBGbVuGcBEADJ0kbyEV26b8qeksMq
T7YDYVSClepOG566JM9SShkmllPb13TZLpNlx49VQljkZLji7ObkNLLtANcdw30b
/vwRX7pJ9AuXj/DyUAOW3lexCpHW8UHaYx7oY+c3d7PFu5wBzQ/NLvXu9E0NK1rg
MjTqRBiG8bL70HrGJGxcW9yC5+ZCM4A70NJjBGXA4rltWwUEpe7xryjj/pWldtME
wt7naMBDUGpFyImRVZ6zijOKsIdKRAp3P0ta0GvvqIgpaCqYs6rwsIFhHhIaMoEQ
Ex16V2anNCAwm0g57h12zBF7Og635KzbTP6rMFm9TLFPVTeE2Qk3noduN3Syr6eO
BKACqiNNMx20n5q1Kjy1lP2JPz/ayHVs9i3Yu1uCFZcSl38/CpXfH6B0QjTaN1Fg
fnT3nscQbjJDtFuxNvmUzFQUxjlESNWbafr94h6L5FLQ0Fe3aFFm8heVmsKb8pXz
77MdLIqVoP+jLQLmED7x1JFdF5GXo35CRDGCgP5Mplc3reC07FUYGUMkxdio+Md7
1Jgs5VlnHQlOYUYvhgB9DAO1g7f0XQJX2QcIvwpZzQ4Rm607xraWbzI9vSJaMNzH
w8UbXlqTAOmBBhcvn3Apb1vbKJjvgtsee+r4xrBZBZ8n9EiE2O1K9ESzttLkvKIo
2zXRrVTfvSp7bvgo48hkCxAAXwARAQABiQI7BBgBCAAmFiEEDn5UV1OenF5MPZey
H3yJazSygWQFAmbVuGcCGwwFCQPCZwAACgkQH3yJazSygWRejA/3Tw1YNHl43fzg
IEJ5JBfEqKyrl1nOCeqaD3uRzpM/27RokdgrxiLtZsKIZ1x6rmKHP4qr/g8S8Azi
ODLTFrA+iwL754QqTkocettdOdTRMXlAejpeq3RFVq7KmW6FGQ9Z5AQOhjop30uC
vSDc4vz0Dc6OtK2t7nFedx8xhubfWfYmO6ce7hB2uM1ujCfLVNFNIN7SoMkMtvdw
d7K1jYdQsG99OAP9EDtI+iOk0X4AzgWRK//1Gy9niaH1T/Sp8lEzF3r9DOv4VSpQ
W8WvSJic5AZvXdFUUzvWs/Ujd1FH6MXEfam2oOCnUdN8DfifPZn572j4iiW4Em05
8MegNZjLYaCntWeHfOobETEy+ZxgzGiNEmNwKoK4wbOPJJHYMxySIf3W38kcLLcA
QLfyhVG0PX/7XCLfu57ccY+IKvAQfxix++b/m7D19Kix836UtT+xPpgzzkT6v5SW
BSkPxodOot9Bnkj+XfUmop7VGrb6yUogWB6e+WY5g+Vny4jc8hgn5jcQj7ntM6qO
svCQd4vDUJ75T+X8LSbPP4gF+CUQEtEylg6BwsXFVbpBJax2lgbMBtHSuv3vnTho
uzlaUSqaW6D9mcTYq7RgA+FaTGJuOiQ0oLHyuirc1sHNZN/PwRgf7utKqAhVM0oM
lg9EJPYxbeglKcRcZ+wAuiDSL/mrsg==
=cGQE
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,130 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGH4/cgBEADIPhFLAnngQbOpTG6gbarEKQGJ078N/cUeISaQDX9PaMyZ9IzU
yvn0JdR17rTofN8XAAhfWcX5qL09XWE67Sa2M5XFbr/5SPaQpE7Altxjl5QVpk7B
umlwgU0T8Z5nX0B0vpPXzZ5fV7RhvPz0muKbFd2TcqAEasFb55VEDjz+ngfPIw8i
EKWhhu2Ll6OTL6bzGdZ5++Ip7wSkCRorRiifyHHM+P7EtVmBx+HyeqClDjAXzvSA
cAw/G1RdSj02qlavYx1mkCTshUWEft9UNeb6QE0bn4alX+uLYRkEZOkdSBd6Eqgt
4o0Y91vg12INY6WOtBkT/7UhPSZDy+mfa94lfchE035V/Fj/DMeoJekKO0DYIrzi
IRHxzEjgXRdjQ3HOnz/2qPFf1lPMoBZiZniWbvYbAoG9GiRrdLh/pgJpc5JDu/b0
FNYzur6fCWhPnXu2kbMltTMvhFyPv8eK+eTP3HlyNTmX89SnSYFXfDQxtIGO6AVt
ertXnOXBEC4FN0aoaAUrLE3KnU7k63O5Z22dyUuyQSbrKvONe0RgwLX2IaFcv/Dk
36OZgHf5xizY0xThV6geq8HpMECmailEcfYYDJAHI1H4fdWwMU9MjHtvJ4cu8mld
oKDIJdiozFG4NGx5t9Zurc5gImLZoPsWaNknSPn1fHCYAiSm2oIzMV6ibwARAQAB
iQKJBB8BCgB9BYJh+P3IAwsJBwkQnnxhalzbE89HFAAAAAAAHgAgc2FsdEBub3Rh
dGlvbnMuc2VxdW9pYS1wZ3Aub3JnmaI/ED4jxIu79r+Fu8PQ75LRTzHK5dmAViO0
uUe6boYDFQoIApsBAh4BFiEEbK1wndVaMkeGgIupnnxhalzbE88AAAKhEACYzH+U
eorxsx+aYR3qJvwZOY8uiPqUs4zPaufoPJAZFLztNAjLvYcPYj34+v+KkvqPvB2+
svsa9v3J4cwnfc2VnAYsZh14vZzAJXWfkXUHX0xJv60pYwy+QQDEJxyOcSYwpXSN
/Eeq+1qmsS0nb64zdkJCAG6z/MPjl7yOeUjCM3e+LctKxIkiXsSqfGWMFjb6MAKP
CGfpnCuu88GdSHVlNf5cVkTtJhh2NdcSlLN/hDQDn0vUo6Y8pshiB9OYuKEVE08U
fKxLiJz3F/YSRGBnO4FKFk+meVB0x55tK5hg6fQDGlrP0rPOOZ1lHVlfOgjWIiPS
/JGOnF6AGtz7yV1AxdmZsd7H6a+M1FX6gjcgehrndI34wmrrfa/4yMnKZr4udFv2
X//3ivSLF0/8alOh9F+CBkJzxUkvhlKssrcWn4paKYRFWcw9G1374QZ3cGyAonCO
Yrigcxrsi16b16czHBqswtDzK1bjbNwIVffanJBAg79CnNuxX3qJSGBCWl8bLZkK
/PFuRWDMu4VNJKRI3YEU4pWAwtDb+szxwJmSrh4pu0oz88iqpMXr9+RQD7i+Bagi
DNiOsVph6uDdh7QihZmAHF2encFfG/l6bv0mJdvP0Cyk7sBO9pZ19MiBRoSMe4HL
QFcl9h0kps63WLgcq/QpFEB1gYZ+ExFXyQuSxrQgRnJlZHJpayBQZXR0YWkgPHBl
dHRhaUBzdW5ldC5zZT6JAowEEwEKAIAFgmH4/cgDCwkHCRCefGFqXNsTz0cUAAAA
AAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmcixOeEe9R3WK/oCzqo
qmu/eK4cCeGDcqQ+CRyITkpHbwMVCggCmQECmwECHgEWIQRsrXCd1VoyR4aAi6me
fGFqXNsTzwAAEmUP/0wDLtk2BW7qevVTACaGwdl9rmMhLxzgCXzGbkuohdcFqDRm
dtPj+soYVoYOPUBQyLmyk+XliS32zkaaNdt4NgCk7YzK16Me2SNxS8KtoBotgO4A
0b65sCxHn3BfluCaWwY0Lkiu6+/vqjqI4FJyoKnM55C4c0pQr+vYtSaGDA15eD0A
RWiWxRa1CRLqpJcoSn0L8U3W9CC/YkvuFjCqqhZ0xDbf4rnpgTpZoZd3CHHdTiQu
pECue3NOrSsd5pLL03No7sdiZfi5aH4/VOI+0pWGMfL6XWKqln9637EfTwPyR8db
KDlabCNg7ikCJQ3z3cagZSVpijA8sT9xpZ76nsp31vrYAViKzH+t/i7UuIJbqYQp
4TqJ3eTAS9k1XgUp7i4j1TqJAAIZ6N5M4XKy5zblJmHNwWVrDEVbCS0HW59Cz4S6
t0+pT98Z+Onr+WrYQnNodt1SCNo4bMvvMrISlYf4Thx+69MXa88OajpsEYXHIlwc
RResVvBVaA9fwjC5lCYBWPrDlwwREx9amqzxLoVW48A5pZlfF/kmfb0UPoL5ItJ1
J6iDqSkc95c43++X1rmrOeFa2Kz/SMlxFUO+2r0xR6Ap1rCyT5rKRkDLl3xTK8zC
0KotXRgZI9GkQC5aH/OiNedAzqdJqKDOyLDvmvljgrLnrioDrSgX15V5tqG0uQIN
BGH4/cgBEACwdio4gvADawt8HLtVvAWLY5vb32yb7KeFT9B5lWOvLnkg2KQ0hU1a
EsHk1TIom+SA1nJ6cuLtixVfY/XF3iQH8vsxHf29nhBMRoC+PX85bNjinDv4XNPj
gY0DVTrsx9q9MKc2ohBRZE95xhCr97Uyyb/JFuo5GrcQVIJMi+aGU+5DNm3/VhhB
BWFdDEDW2OtvJsfySbmHm1JoPt50c5dBoq+O4R0nvAvapg6Ct98yFGJmIpFSedxH
XH0YzGSG8ru7nOifeX6knARxSRdC3XtciRQQDHHQA78oKSGCQkyBpRS2VIhF6Kuj
3wmN8nTvrvmQDyu3N8eK76psVUrIYdGD2syPV1JQ/6wKmQLbDKDibf2hJ5tCEIvM
P1ipxkKn0OFPKkgK+KmyO6K6r2bpSzpSIm7P0RBrbjWIEqxUmiLEEu4Lo0HwoNhT
EmWBxMtNWuXo44+buBpxwqCXY0l9LBWkr3S3S5ldc/buz7+BAQfZL6WjR21u7wPJ
pEcV1ze54xuixitSXCkxnyoUc+6GCcNBKnJPXd07q3EUPdKzPXBLxb5I15q6i13c
yMlgXqdjTxHXTW1ohyW2+rsUdwFfTcYlidHH0XPRvBym1YtPhV6iLSpE5Xhh8ilQ
mLMrWqx7LvKuwVX2chx2jKv2CwhBPkIlShJoxi8+g3P05gpfTM9rFwARAQABiQUC
BBgBCgL2BYJh+P3IBYkFo5qACRCefGFqXNsTz0cUAAAAAAAeACBzYWx0QG5vdGF0
aW9ucy5zZXF1b2lhLXBncC5vcmeNP0xFxrxhSaN/0u4mIUg30aag1kp/BKBtxhuT
3XlkugKbAsG8oAQZAQoAbwWCYfj9yAkQzfLDgemnUb1HFAAAAAAAHgAgc2FsdEBu
b3RhdGlvbnMuc2VxdW9pYS1wZ3Aub3JnJPr+uwSdB9jKBP14qnNQvxQpV7dypr/K
45bf6+Ia9fIWIQRjpa++/6ZXGLDFAb7N8sOB6adRvQAATc4P/1QINlCt3o0AwkKr
+JIUTaVxWTvqDIRfUTlQWz5JhvIOXM2FE2qUP8ArG7kw1/LYheMKM2ftqh6EHodG
5yT9N2rucbVfLAzLxMuXjJoCEBMzcOm8Uf8YK37OoaFhQ97r4eoe5TpSgJlKS1Lm
GRJVDC9L7nbUkCOfUZoksBPmVyL4mywELbTJi7nSe+iPm7yaO5AYUgXoxFZbnRxu
kaf7ngxRdFvnqydlf/idUBeqfAwlJwqu5ctjLlqgag+KVyCf9lt0yIFf2sn+Tz/7
6McYjXBk/7dmeEqXmxduetRmAAVMmsKBGqoeKwskZGHPB9psalDe2xLvnK4ABuw2
4aH7MmyE/OnxAQPBjzR0/8K0m5AAzm+C3tUzgJ0RPwEHree2h4WOcUP2yTCq7xwI
fxvmLBbhEjpBxb0mr3zTYsYtFW044YE/iE6kh9KX3jRzomvL4firkkNJffBhQJJJ
tAXb2nSmnunKH2R6C4V2XYUnBuuCyOYitqmoL0a8h2Qg3u1NCC5xtuhQaS/c2QkT
+icb7FK5XlWOSbf+WwbRTBwqrdR2dsuUkCAlyRxnYfsqPiWiTVn2l3biZOyN0sEB
3I70j8LxS+Lvqibbk/ecESc9vZUgWROHV3buRoW/4+238AF8MAFhsHwA2Dksc5X4
B7N6KPUp4PEGzu5D20JmvIiym02nFiEEbK1wndVaMkeGgIupnnxhalzbE88AAKSm
D/4/Ssb+JGa/H9UXUZpoxe5mmyfAmL6QtxtlysvusY+AUMPBxJTXonczm11J0xW8
I8iVXan7OVodMruuMFfrOWUbybk+uuHECfVQ9woKev2XL3AoxRtSwZfXuunaJ4f2
WtPnx9CIyu/OBjS8R+E+PFsK1u6txPmrh5FW4iieXBlpkexgPpChlg4HRiIitUpv
dT6ba2nyEd7q6jKnza8PCwkihyMjGiaGKRnmcd40SsyXfbg0hbdbHqjV/KhbdgJr
pXku4NUNHW/HTc8R/VVteC8NRGVQ5LOrKOtqg6FZ1vIQdQrjjPb/aiE+Lye7/SV5
RwhlXGIRMLvgM15ESPgpCP9PoT5Ga+G3/8uSzlHrKQNmFZ/Ni/QR0xzehENkoF/y
Jvy0kFfuvfXA68H0f/MJtVQ+CMWQf1dsDi7PAaDRxRk7gxsWKl7aYhuz/B9CNkoS
1O9QE8hUJHM3c+b5XhXCdw6G6QbhEsztE1A08XUN3Zk26TbD9gnZkwKaB+Gnk9IN
dSVzZDnFkUfWiLDi3URC5rn+Hrf2/mBBGtwOJJMi5+SwRJShJvR2XO3SephOxQLE
UNOMtJ7FBZk1jNyv5F1lbG4rAOGlv8wPHm8kFijSTEWvLY2EUgNzIkC8VryhV9Um
xDIWGRBE7m5J/pG8OIsq21xSDu2H+lvwiTt0GY62RnzPabkCDQRh+P3IARAAuUFF
AoWst9HmwefnNEIsi4Nk5pMygof0jMDZFqQ5mPOzi6krbwgTUZu4Il0w5pfKJt2K
88dQOC+kSEagdpOAEp3q1xVPcd1GduYqIlRghHzS1flfQBhC2PZOrByFn695zNZT
TPTxe38jsQBGHGJeC+Mg2thejZJo2XHaLYM5gF0CFXdUivCz1x9dkx+fcPHMmVIz
W+DS5+KJR/N8wh2Uw/VF6aWZikrbqZXrSx9aqdZpRPnyJ4OILhCXV2JCWUlS8l+k
5eEiQwi8zqtXlp0mOjJEV1HvWzAduvKqXa3ArUHR8WsKZpWkzLl3Zy0Pbxt+4u4D
uaWG9bFEpb78qSCMraKHs/ZQKuwdHQgeBy20x7GsN7grNKXT+xVIG/MQVlGGm1iM
O3MxSYs42mYidp3Dj8wamOAD69LWozPyLyCF+cpEH+S3E2C62480kiXEXEXhKTfj
WzbzggHy9CNa2TLFhcD+G7CxAYxwNyEuf7BPIr9hjNm3HBkczwYoKnhB2EJMarnW
1LmSClmnqPyJrfR9SDEdBw2vF67tc/wcDv3ctPnv0H9x6EqhxQx7IE1D96Bi4+Cm
s7JwyfqX4fQlBeBT2c1MiXfwyEugaxKVX/h9olfuBEi+ZoGryyTdM83Q1klMvI1l
e4ffykqzHA2H14CUDbnLCe3S5a01WSyCUa72bwUAEQEAAYkChAQYAQoAeAWCYfj9
yAWJBaOagAkQnnxhalzbE89HFAAAAAAAHgAgc2FsdEBub3RhdGlvbnMuc2VxdW9p
YS1wZ3Aub3JnhjfNA1exO+rxxBgIipULe5IoQ7FSuQWtKdXBteFLyUQCmwwWIQRs
rXCd1VoyR4aAi6mefGFqXNsTzwAA+4MP/j492+RXBje+9UU0Ez2yIll4j4JqL8nP
A8PibBSpzq4y/F3OPrzTCG1nmgvFGKNKcyGpKncG7bEt5Yh8II8UEwjuiwfMhanq
IBfnhxYtQWjKKySgDi7TssQUoQA0DLWLP7QPnAZ1hMWQpd2v8LVQBlwH6hIauspp
NEMiBrZw0BCitM46YhNfCkGfcD742j9P33ju03GQtEct7lxWUCQf09s9ZxUhc1u7
/izQgyMoS/lmokdupUjrCCV3Au/u8xgUXka9jX8tz8AWHPvx/IpVdd5LZm/THs+P
vfeStUk6zqA3viVmCfSD8nX51vrs64bNragVWcvi034shAobgysJeKNmVdN0zZyG
zO8G6B0aDv5/D/sO8vyBTAxxtoiSW+plvsQaOM4uhGBoqbqXff2GYDViy/RVsjqW
x574UJgJC5H7q+uUkHaN+buRxmpLgrRmaurYCSwnkturraNgmPjevosNh/kr8JmS
z5Q1NWaWSZvPPp2Jh7TeVxeCiCZZAcq/SgkMMO43raPRG8iCrJy1/u+BoI9sz+qM
LWa+8ACRDdXJ7fRnqkorlrtPUwRXruiS8subfhHaVYd2a3+r/7lw8TRfRuoKKDT2
4TBr499a2QBmqL9bzlkctB96Rgo+JG8SS8iEAqTk9x0SqgTXVgjtCBrXV7DwUTR/
DbdskhnT1dQeuQINBGH4/cgBEADnIQ2mZJT13YuBUOLM4Xlkp1165nlKvSC3oNE2
Z47sKmcgwgKwPJssd1WsmkKDOsoxsvS6FJiAbmCQe/EdwT4dolRpVjczpp9p+w6w
jtTXsWPsSUDbT0ZD8IOmOr24F8Z0WY/ho1Bmm3LwCMbW30KROpZn9VWyzGT6QTGw
iZF/lyItsdGcYC2qgaXJpI0sEc5W1WK4ozpTu7z3BtzpyjOvVAQirF7Dp2yU3dLB
93vj+/BYnB5F/1cmTWfu6lGRtO60E0j9DSH20AqTGfsJI4fPM7tbJnT2Fhj+MS8b
Hf6iEnh2QwlUSUdMlJAxXVu1XcLiSbbHXV4Mh7gCuGB0p0rMGiBg9W/t+D2dYsBQ
xuXq8fT4iqlaHaUwoVYtsDTMIg3c17mcYni5VRk2d49qpva6zR0zU3v0X2YtvHWl
CCYBmjWSS/8X8FUgHVOaCEAOjTU89TvG9uvxXoqO64Wznx7sjywkaWuwmNck2K3x
lhccw5iy+K1xxalKgcel6nMxdoBuW2RFRAYCCAT8IH+ONzLOcGj/+sRJx+bl18qY
WcZGcYA9IbfJCNXuQHX4uRLjtml+zNac3Kefmw1jyBRUUkWbdcAsW3kvf3+CcP62
URCk+eFMywnGk8N6UX9akSxgMKTR3IHuqZLHtzbgUxgeRHCLUid9GwsqDmu3fC8f
LRK7sQARAQABiQKEBBgBCgB4BYJh+P3IBYkFo5qACRCefGFqXNsTz0cUAAAAAAAe
ACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmct+zTDHUZ6Bm3R8Ppvd/bN
JZM1ZIMgVssw3CAr4P+FsgKbIBYhBGytcJ3VWjJHhoCLqZ58YWpc2xPPAAAk3w/+
MI9j1sls78BhTKrkUfI4S+DrQHQ5Fa5n7mBPaAVGUj/rT8UM1YZejWUvB7Eu6qlf
e1+Ukl0E7WEqyLr1P6WOh+kW+k199gWqM/e5NcZQdRD5G99eXtC1iVdnhmZyJP73
EWp5HSFdejNpnJ2S2LqPHtFSJt1hoQScdKYHOegIJwehfBOPD/3c4zQyXZ/8EZHV
IjdvmS4QVcymuEiu5W0gZq6Uq5YfPtf3uWEfwXtjanXJCK3jYWVUVu99M73p1Xj5
q9aRKdIWHu/2NMliZCtwYYbwVFLuDAN86+zXslP1As+AcxPbMvpjkt+AivhS9g6U
uvov6ou4tjUsfHZKiKU4K2u96QkaprM0oPD73GX+fb5VnnaMh7Jh9Tkptx4VgSuZ
WNHenrqhPG+GhaOdksSLFyrcn5jQcTx2BDaOFzeWtX5R4Iu1tVAqdEDJhcK+36Ee
VoM+E7eppzA6bWRx46gKK2RMkqLbeJJJSn9kR/cxDPlkBomRGHXEKLf4ljXWG0Us
G5iBKU9bU1oVH4eathSJ/8qURDiEaXmGa8Q/Ojo8mq/uwdDWzniCAr7TLHfSWQMb
1pmd3mk2SjsOIogMeHJxNK5g1QXHkFGbHcqGx2R3G2aHKcUgYfwRJc0m2gpp7Ywx
u6jTlXM0VVxcL3hfAAom3jb0NC/f0+wqcER+OZQ5UGw=
=jlA+
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,14 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mDMEZeHASRYJKwYBBAHaRw8BAQdApEqGI538OjNii+FRA+EAonLpQlQ4o1SvhvSu
rsn44J60LlJpY2hhcmQgTGFnZXJzdHLDtm0gPHJpY2hhcmQubGFnZXJzdHJvbUB2
ci5zZT6ImQQTFgoAQRYhBPdL6RIzabMyVv7tXvt67Plk7vvXBQJl4cBJAhsDBQkF
o5qABQsJCAcCAiICBhUKCQgLAgQWAgMBAh4HAheAAAoJEPt67Plk7vvXMKABAJH9
knLFrVFNuuA8aeaUoHFl9wHorgkJpwa1imqglSESAP96ufweJeO/6F8WhrM4Qiao
5YABXIhBkHZ7/WN1OH61CLg4BGXhwEkSCisGAQQBl1UBBQEBB0CSj8gnBWVtxSwj
6iGrX18HauPeS/xsChXaQkiN0BJ2HwMBCAeIfgQYFgoAJhYhBPdL6RIzabMyVv7t
Xvt67Plk7vvXBQJl4cBJAhsMBQkFo5qAAAoJEPt67Plk7vvXuKIA/jXcWEDmj6QV
6EzKqc/V1FZqtIMOTlmTmhqC/dSNMn2QAPkBeUSlnpXkL/3cJgFRqcfD3aqxLccI
cIKwgEeMLbs4Ag==
=XLZu
-----END PGP PUBLIC KEY BLOCK-----


@@ -5,12 +5,36 @@ ssh_authorized_keys:
name: mandersson+3DB765E9ADBF28C9068A@sunet.se
type: ssh-rsa
user: root
mandersson+0E7E545753:
ensure: present
key: AAAAB3NzaC1yc2EAAAADAQABAAACAQCzZ9VleeR37xT/rUKQl8IDE757FVT9EmdaYwUYkT01DCcoiCySWWuNvNq6zw6GCBrnxabrUd4I/rxb4+39MeXKVMNIY+U3QLPgh5ppQSnKdZEuUUxBU76JoEc4PKtVyzTAWjJkt9o6UP7NgtR9oWRkTqzuCqJm52XMipdfbM2fnxVndJPFQTM/gxNoQQY4obPpiSvlsYiA30OzCkAoWY8xAogNxW6m7Q2aP41Vv5/FhNO7yRjc0Qrpy37l7JTNF+rB3F6KEpsf9oSSdmRNpgmOn4TFqGbb34bjMQMnEuQk85x9h4Wkne0aYbLF9kj/GLVqgZJmfRPXAKHINxwwO78Kf1k4q7FhRDKuGYWLTlSqo5my9Gpm7KjCmY0w/3jwaC1xTneT9R5JZ4e6k07iyyApSzdNwEysUgXH9ushH24JcGSRwrsrm0mZlsJZ8sFcrHGgQVUbXwYmoqdvRtKl9ZxlIc0lB3GUpwgsfnpB25MPvG45FUeGRO6sefmgP+ARorTC8JD4xaeQHFm1pO3rD9QbRCC3tTTobECO7IVFQGeLQuN3jQmNfGmqxBtUR/7kiWWTWTmNf1e09pcsQ+zbOivatSEpg6Dt8y1U8cnU/wrINluEsdM5y7UH+ylOeAGXFJdRTYapDr0f8YVCnNM0+RV8w0qmemPl1qmMmjKMw+zQoQ==
name: mandersson+0E7E545753@sunet.se
type: ssh-rsa
user: root
mandersson+B4D855B5:
ensure: present
key: AAAAB3NzaC1yc2EAAAADAQABAAACAQDcxRqdDF5X9R8keP5EThzsMhKn4PLXEdPgbWDE9unrHCD+qVpSJpZhucVMieb/C/VSdyCn7Xx7cd5N6HO6ANFoMQ0w7c2HHyHTXnDiy6pbtRs18+UV/xDaokfUlrw8uRkgK7ZNyIz3GpBQGjjS+pWbpiLfbVybeQZb3zbV1l68aPP4a+51synM10M+whB64NH+iRuOjDLE50zVdNBIdXZ0wDCNljf2nDxwNLezVd6uOwDECF04IzEp8AmQavO9Y8548BOm3p2ndNCXdg6tK+KOmsMD8BWxNAMOPYEQniyFh4dDQe+kFQsM4X//diU+Y+D70r2VWmtWHaWmBYutJmgzplBGJ/Vwy+MFfCtqWx4Z/oqUJALZelUrUbmYXnnwnVVLw3WIV7I31zaX1seBkDkyo5h9Kf4PK5dleNUKk9G/YrjsuC/OrP2n7wh4O9vAfoLaOh4reH2VbH1UyKmNLC2K9hEHsTSvfyvVaGevz6e19wKGyipG+fPNIL6a1jZbIpGpZIg2LsnaiKxq1gmlOVrIz4Re8XVpsMEAC6TW2Mg/D+WJmf1N3b6h7Zli8XYZ4pHo42KUElqauwpaGvDtsW1Rt+mm6lvwcvZbp0hVmeKLeksoa8pgQHy/AASqAFk7guympoSeut/s2fqHAPDmz9s6aMM7q6x9VFeWsi66H9Lmzw==
name: mandersson+B4D855B5@sunet.se
type: ssh-rsa
user: root
pettai+07431497:
ensure: present
key: AAAAB3NzaC1yc2EAAAADAQABAAACAQDnIQ2mZJT13YuBUOLM4Xlkp1165nlKvSC3oNE2Z47sKmcgwgKwPJssd1WsmkKDOsoxsvS6FJiAbmCQe/EdwT4dolRpVjczpp9p+w6wjtTXsWPsSUDbT0ZD8IOmOr24F8Z0WY/ho1Bmm3LwCMbW30KROpZn9VWyzGT6QTGwiZF/lyItsdGcYC2qgaXJpI0sEc5W1WK4ozpTu7z3BtzpyjOvVAQirF7Dp2yU3dLB93vj+/BYnB5F/1cmTWfu6lGRtO60E0j9DSH20AqTGfsJI4fPM7tbJnT2Fhj+MS8bHf6iEnh2QwlUSUdMlJAxXVu1XcLiSbbHXV4Mh7gCuGB0p0rMGiBg9W/t+D2dYsBQxuXq8fT4iqlaHaUwoVYtsDTMIg3c17mcYni5VRk2d49qpva6zR0zU3v0X2YtvHWlCCYBmjWSS/8X8FUgHVOaCEAOjTU89TvG9uvxXoqO64Wznx7sjywkaWuwmNck2K3xlhccw5iy+K1xxalKgcel6nMxdoBuW2RFRAYCCAT8IH+ONzLOcGj/+sRJx+bl18qYWcZGcYA9IbfJCNXuQHX4uRLjtml+zNac3Kefmw1jyBRUUkWbdcAsW3kvf3+CcP62URCk+eFMywnGk8N6UX9akSxgMKTR3IHuqZLHtzbgUxgeRHCLUid9GwsqDmu3fC8fLRK7sQ==
name: pettai+07431497@sunet.se
type: ssh-rsa
user: root
kano+0DA0A7A5708FE257:
ensure: present
key: AAAAB3NzaC1yc2EAAAADAQABAAACAQC5Llc/yl585Uj1CcJPcImWKFRkLOL1OhHhIHcVgj90eqoYz0vtmaw+MzlAj7DgwdtXb1WRAjjoulLZhEkHQ6iL9VePMJFqxN+YKvl+YZnJuOIAoH0CvS8Ej0TzZV2wuhchrWo5YrhVqi9PfFEt5xSHq/B0EFl797R6bFF75g0OE0EdJxtd1UmKQLJxtn/6gZoa7Z4ZuZqm8lL8cpBdm4qWFUGaz8CpCVwuGK9mdoszU/74tWkEcKnYD2DEIC0B/lZ9BeluRgw3Qf1Grf8G9D44OjbB+QkuiO34ru2hVKjTrfCnDEq+pfPzoNXVVUIlAxvoOqjCAnKZv080cJq3fYwjMkMTfU4JaH9y+Byidft1wcgV0T2aayUBMEuF6FbblUhLfhi5C04IfnCWYarquNfLkGy1LnVcejDG17o77Vz8oLlJ8kThMPdOt8hbOZjrdO7y9+Olk0QPYme8AW0sQTthM4+5mlQ3bHIX40QRoA6xm4+gPISqZQhdEmHR9iialCsx4KV2qpBkeNsvnBuC54Ltwmr5/nNSpKkfPJ8t7wKe42DPhxvg1Tb+GV6YIhDYJaHzbT1OVLO9X9YsjKGxtF6kxo46+0rOx3FDfYfG77qKKc3XmDaJLUcwVHO+PlBAWnfvMuWzSLWFduOHvm9gb49jsxw4rAB8iYLO8YHv4eqkhw==
name: kano@sunet.se
type: ssh-rsa
user: root
mariah+CA747E57:
ensure: present
key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDLQfL3uYsqjzkKOxn9nhjDHeWdWQ5SRwcPzq7gINcwJ7omA5c7wJ4RKDqBPihJ9tp2rgM6DKKGxtSyjO6LFhkGNa86uub2PLS0ar+aRobPZ6sOeASqHbO3S1mmvZZWTQ30AFjtY98jjlvfKEI5Xu1+UKyQJqK+/UBVKlPaW6GMSYLr9Z5Uu4XS/sBPdL/ZtR95zDO9OKY8OtTufQi8Zy3pl4Q3xcOsSLZrKiEKMYDCLPlxytHD8FDDYLsgiuPlbF8/uVYYrt/LHHMkD552xC+EjA7Qde1jDU6iHTpttn7j/3FKoxvM8BXUG+QpbqGUESjAlAz/PMNCUZ0kVYh9eeXr
name: mariah+CA747E57@nordu.net
type: ssh-rsa
user: root
mgmt_addresses:
- 130.242.125.68
- 2001:6b0:8:4::68


@ -1,30 +1,77 @@
# Note that the matching is done with re.match()
.+:
sunet::server:
disable_all_local_users: true
disable_ipv6_privacy: true
encrypted_swap: false
nftables_init: true
fail2ban: true
ssh_allow_from_anywhere: false
unattended_upgrades: true
'^k8sc[1-9].matrix.test.sunet.se$':
sunet::microk8s::node:
channel: 1.30/stable
channel: 1.31/stable
peers:
- 89.47.191.81 k8sc1
- 89.47.191.52 k8sc2
- 89.47.190.141 k8sc3
- 89.47.190.223 k8sw1
- 89.47.190.80 k8sw2
- 89.47.191.178 k8sw3
- 89.47.191.38 k8sc1
- 89.45.237.236 k8sc2
- 89.46.20.227 k8sc3
- 89.47.191.230 k8sw1
- 89.45.236.152 k8sw2
- 89.46.20.18 k8sw3
- 89.47.190.78 k8sw4
- 89.45.236.6 k8sw5
- 89.46.21.195 k8sw6
traefik: false
sunet::frontend::register_sites:
sites:
'kube-matrixtest.matrix.test.sunet.se':
frontends:
- 'se-fre-lb-1.sunet.se'
- 'se-tug-lb-1.sunet.se'
port: '443'
'^k8sw[1-9].matrix.test.sunet.se$':
sunet::microk8s::node:
channel: 1.30/stable
channel: 1.31/stable
peers:
- 89.47.191.81 k8sc1
- 89.47.191.52 k8sc2
- 89.47.190.141 k8sc3
- 89.47.190.223 k8sw1
- 89.47.190.80 k8sw2
- 89.47.191.178 k8sw3
- 89.47.191.38 k8sc1
- 89.45.237.236 k8sc2
- 89.46.20.227 k8sc3
- 89.47.191.230 k8sw1
- 89.45.236.152 k8sw2
- 89.46.20.18 k8sw3
- 89.47.190.78 k8sw4
- 89.45.236.6 k8sw5
- 89.46.21.195 k8sw6
traefik: false
'^lb[1-9]\.matrix\.test\.sunet\.se$':
matrix::lb:
'^mgmt[1-9]\.matrix\.test\.sunet\.se$':
matrix::podmanhost:
rootless: true
rlusers:
- matrixinstaller
'^k8sc[1-9].matrix.sunet.se$':
sunet::microk8s::node:
channel: 1.31/stable
peers:
- 89.47.190.119 k8sc1
- 89.45.237.43 k8sc2
- 89.46.21.148 k8sc3
- 89.47.190.103 k8sw1
- 89.45.237.161 k8sw2
- 89.46.20.60 k8sw3
- 89.47.190.237 k8sw4
- 89.45.236.55 k8sw5
- 89.46.20.191 k8sw6
traefik: false
'^k8sw[1-9].matrix.sunet.se$':
sunet::microk8s::node:
channel: 1.31/stable
peers:
- 89.47.190.119 k8sc1
- 89.45.237.43 k8sc2
- 89.46.21.148 k8sc3
- 89.47.190.103 k8sw1
- 89.45.237.161 k8sw2
- 89.46.20.60 k8sw3
- 89.47.190.237 k8sw4
- 89.45.236.55 k8sw5
- 89.46.20.191 k8sw6
traefik: false
'^lb[1-9]\.matrix\.sunet\.se$':
matrix::lb:


@ -54,3 +54,11 @@ if is_hash($ssh_authorized_keys) {
# proto => "tcp"
# }
#}
if $::facts['hostname'] =~ /^k8s[wc]/ {
  warning('Setting nftables to installed but disabled')
  ensure_resource('class', 'sunet::nftables::init', { enabled => false })
} else {
  warning('Enabling nftables')
  ensure_resource('class', 'sunet::nftables::init', {})
}


@ -113,12 +113,18 @@ def main():
"upgrade": "yes",
"tag": "sunet-2*",
},
"matrix": {
"repo": "https://platform.sunet.se/matrix/matrix-puppet.git",
"upgrade": "yes",
"tag": "stable-2*",
}
}
# When/if we want we can do stuff to modules here
if host_info:
if host_info["environment"] == "test":
modules["sunet"]["tag"] = "testing-2*"
modules["matrix"]["tag"] = "testing-2*"
# if host_info["fqdn"] == "k8sw1.matrix.test.sunet.se":
# modules["sunet"]["tag"] = "mandersson-test*"
# Build list of expected file content


@ -20,8 +20,9 @@ if ! test -f "${stamp}" -a -f /usr/bin/puppet; then
puppet-module-puppetlabs-apt \
puppet-module-puppetlabs-concat \
puppet-module-puppetlabs-cron-core \
puppet-module-puppetlabs-sshkeys-core \
puppet-module-puppetlabs-stdlib \
puppet-module-puppetlabs-vcsrepo
puppet-module-puppetlabs-vcsrepo
fi


@ -0,0 +1,3 @@
### Patch the felixconfiguration/default resource to enable WireGuard in the Calico CNI.
kubectl patch felixconfiguration default --type='merge' --patch-file calico-wireguard-patch.yaml
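To confirm the patch took effect, the field can be read back afterwards (a hedged check; the exact output shape depends on the Calico version in use):

```shell
# Should print "true" once the patch has been applied
kubectl get felixconfiguration default -o jsonpath='{.spec.wireguardEnabled}'
```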


@ -0,0 +1,2 @@
spec:
  wireguardEnabled: true


@ -0,0 +1,6 @@
# install cert-manager addon
microk8s enable cert-manager
microk8s enable ingress dns
# init the clusterissuer
kubectl apply -f clusterissuer.yaml
kubectl get clusterissuer -o wide
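Before pointing real Ingress objects at the issuer, it can be sanity-checked (a hedged check; the name letsencrypt comes from clusterissuer.yaml above):

```shell
# The ClusterIssuer reports Ready=True once it has registered with the ACME server
kubectl wait --for=condition=Ready clusterissuer/letsencrypt --timeout=120s
kubectl describe clusterissuer letsencrypt
```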


@ -0,0 +1,16 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: someemailaddress+element@sunet.se
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: lets-encrypt-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: public


@ -0,0 +1,9 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-read-namespaces
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
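A ClusterRole grants nothing until it is bound to a subject. A minimal sketch, where the binding name is hypothetical and the user is illustrative (taken from the rlusers list elsewhere in this changeset):

```shell
# Hypothetical binding; replace the user with the actual k8s user name
kubectl create clusterrolebinding read-namespaces-binding \
  --clusterrole=clusterrole-read-namespaces --user=matrixinstaller
# Verify the grant
kubectl auth can-i list namespaces --as=matrixinstaller
```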


@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: matrix-registry
  namespace: matrix-registry
  labels:
    k8s-app: matrix-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: matrix-registry
  template:
    metadata:
      labels:
        k8s-app: matrix-registry
        kubernetes.io/cluster-service: "true"
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            k8s-app: matrix-registry
      containers:
      - name: registry
        image: registry:2
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 200m
            memory: 300Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_HTTP_SECRET
          valueFrom:
            secretKeyRef:
              name: matrix-registry-secret
              key: http-secret
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: registry
        readinessProbe:
          httpGet:
            path: /
            port: registry
      volumes:
      - name: image-store
        persistentVolumeClaim:
          claimName: cephfs-pvc
          readOnly: false


@ -0,0 +1,31 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  name: matrix-registry-ingress
  namespace: matrix-registry
spec:
  defaultBackend:
    service:
      name: matrix-registry-service
      port:
        number: 5000
  ingressClassName: nginx
  rules:
  - host: registry.matrix.test.sunet.se
    http:
      paths:
      - backend:
          service:
            name: matrix-registry-service
            port:
              number: 5000
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - registry.matrix.test.sunet.se
    secretName: tls-secret


@ -0,0 +1,7 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: matrix-registry
  labels:
    name: matrix-registry-namespace


@ -0,0 +1,13 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: matrix-registry
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs


@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Service
metadata:
  name: matrix-registry-service
  namespace: matrix-registry
spec:
  selector:
    k8s-app: matrix-registry
  ports:
  - name: httpregistry
    protocol: TCP
    port: 5000
    targetPort: registry


@ -0,0 +1,26 @@
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: health-node
  namespace: health
  creationTimestamp:
  labels:
    app: health-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: health-node
  template:
    metadata:
      creationTimestamp:
      labels:
        app: health-node
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.10
        resources: {}
  strategy: {}
status: {}


@ -0,0 +1,32 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-ingress
  namespace: health
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  defaultBackend:
    service:
      name: health-node
      port:
        number: 8443
  tls:
  - hosts:
    - kube-matrixtest.matrix.test.sunet.se
    secretName: tls-secret
  rules:
  - host: kube-matrixtest.matrix.test.sunet.se
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: health-node
            port:
              number: 8080


@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: health
spec:
  finalizers:
  - kubernetes


@ -0,0 +1,25 @@
---
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: health-node
    name: health-node
    namespace: health
  spec:
    ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      app: health-node
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


@ -0,0 +1,7 @@
resources:
- health-deployment.yml
- health-ingress.yml
- health-namespace.yml
- health-service.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization


@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: health-ingress
namespace: health
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: health-node
port:
number: 8443
ingressClassName: nginx
# tls:
# - hosts:
# - kube-matrix.matrix.sunet.se
# secretName: tls-secret
rules:
- host: kube-matrix.matrix.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: health-node
port:
number: 8080


@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: health-ingress.yml


@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: health-ingress
namespace: health
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: health-node
port:
number: 8443
ingressClassName: nginx
# tls:
# - hosts:
# - kube-matrixtest.matrix.test.sunet.se
# secretName: tls-secret
rules:
- host: "kube.matrix.test.sunet.se"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: health-node
port:
number: 8080


@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: health-ingress.yml


@ -0,0 +1,43 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: btsystem-registry
  name: btsystemregistry-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - pods
  - configmaps
  - events
  - limitranges
  - persistentvolumeclaims
  - podtemplates
  - replicationcontrollers
  - resourcequotas
  - secrets
  - services
  - controllerrevisions
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  - localsubjectaccessreviews
  - horizontalpodautoscalers
  - cronjobs
  - jobs
  - leases
  - networkpolicies
  - networksets
  - endpointslices
  - events
  - ingresses
  - networkpolicies
  - objectbucketclaims
  - poddisruptionbudgets
  - rolebindings
  - roles
  - csistoragecapacities
  verbs:
  - get
  - watch
  - list


@ -0,0 +1,7 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: matrix
  labels:
    name: matrix

k8s/postgres/README.md (7 lines, new file)

@ -0,0 +1,7 @@
### Postgres password
To create the postgres password secret and deploy postgres, run the following commands (replace xxXxXxX with the actual password):
kubectl apply -f postgres-namespace.yaml
kubectl apply -f postgres-pvc.yaml
kubectl create secret generic postgres-secret --from-literal=postgres-password=xxXxXxX -n postgres
kubectl apply -f postgres-deployment.yaml
kubectl apply -f postgres-service.yaml
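Once the pod is running, a throwaway client pod gives a quick end-to-end check (a sketch; the image matches the deployment and the DNS name follows from postgres-service.yaml):

```shell
# pg_isready exits 0 when the server accepts connections
kubectl -n postgres run pg-client --rm -it --image=postgres:17.0-bookworm -- \
  pg_isready -h postgres.postgres.svc.cluster.local -p 5432
```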


@ -0,0 +1,75 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: postgres
  labels:
    k8s-app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: postgres
  template:
    metadata:
      labels:
        k8s-app: postgres
    spec:
      containers:
      - name: postgresql
        image: postgres:17.0-bookworm
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: postgres-password
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: sharemem
          mountPath: /dev/shm
        ports:
        - containerPort: 5432
          name: postgres
          protocol: TCP
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
          limits:
            memory: "4Gi"
            cpu: "2000m"
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - exec pg_isready -U postgres -h 127.0.0.1 -p 5432
          failureThreshold: 6
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - exec pg_isready -U postgres -h 127.0.0.1 -p 5432
          failureThreshold: 6
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 2
          timeoutSeconds: 5
      volumes:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: postgres-pvc
          readOnly: false
      - emptyDir:
          medium: Memory
          sizeLimit: 1Gi
        name: sharemem


@ -0,0 +1,7 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: postgres
  labels:
    usage: postgres-btsystem


@ -0,0 +1,13 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: postgres
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: rook-cephfs


@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
spec:
  selector:
    k8s-app: postgres
  ports:
  - name: postgres
    protocol: TCP
    port: 5432
    targetPort: 5432

k8s/rook/README.md (74 lines, new file)

@ -0,0 +1,74 @@
### Rook deployment
In operator.yaml, change ROOK_CSI_KUBELET_DIR_PATH to "/var/snap/microk8s/common/var/lib/kubelet"
# initialize the rook operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-6668b75686-l4zlh 1/1 Running 0 60s
# initialize the rook cluster
kubectl create -f cluster-multizone.yaml
It takes a while before the multizone cluster is initialized
(should be around 47 pods...)
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-6xhjh 2/2 Running 1 (3m42s ago) 4m16s
csi-cephfsplugin-cgmqs 2/2 Running 0 4m16s
csi-cephfsplugin-hs2rx 2/2 Running 1 (3m43s ago) 4m16s
csi-cephfsplugin-km7k6 2/2 Running 0 4m16s
csi-cephfsplugin-ms8c2 2/2 Running 1 (3m42s ago) 4m16s
csi-cephfsplugin-provisioner-dc97f9d65-6tvkn 5/5 Running 2 (3m35s ago) 4m15s
csi-cephfsplugin-provisioner-dc97f9d65-bwdkn 5/5 Running 0 4m15s
csi-cephfsplugin-wlks6 2/2 Running 0 4m16s
csi-rbdplugin-ckgnc 2/2 Running 0 4m18s
csi-rbdplugin-hmfhc 2/2 Running 1 (3m42s ago) 4m18s
csi-rbdplugin-mclsz 2/2 Running 0 4m18s
csi-rbdplugin-nt7rk 2/2 Running 1 (3m42s ago) 4m18s
csi-rbdplugin-provisioner-7f5767b9d5-gvbkr 5/5 Running 0 4m17s
csi-rbdplugin-provisioner-7f5767b9d5-n5mwc 5/5 Running 0 4m17s
csi-rbdplugin-rzk9v 2/2 Running 1 (3m44s ago) 4m18s
csi-rbdplugin-z9dmh 2/2 Running 0 4m18s
rook-ceph-crashcollector-k8sw1-5fd979dcf9-w9g2x 1/1 Running 0 119s
rook-ceph-crashcollector-k8sw2-68f48b45b-dwld5 1/1 Running 0 109s
rook-ceph-crashcollector-k8sw3-7f5d749cbf-kxswk 1/1 Running 0 96s
rook-ceph-crashcollector-k8sw4-84fd486bb6-pfkgm 1/1 Running 0 2m3s
rook-ceph-crashcollector-k8sw5-58c7b74b4c-pdf2j 1/1 Running 0 110s
rook-ceph-crashcollector-k8sw6-578ffc7cfb-bpzgl 1/1 Running 0 2m27s
rook-ceph-exporter-k8sw1-66746d6cf-pljkx 1/1 Running 0 119s
rook-ceph-exporter-k8sw2-6cc5d955d4-k7xx5 1/1 Running 0 104s
rook-ceph-exporter-k8sw3-5d6f7d49b9-rvvbd 1/1 Running 0 96s
rook-ceph-exporter-k8sw4-5bf54d5b86-cn6v7 1/1 Running 0 118s
rook-ceph-exporter-k8sw5-547898b8d7-l7cmc 1/1 Running 0 110s
rook-ceph-exporter-k8sw6-596f7d956d-n426q 1/1 Running 0 2m27s
rook-ceph-mgr-a-6cfc895565-h9qfg 2/2 Running 0 2m37s
rook-ceph-mgr-b-85fc4df4b5-fv6z9 2/2 Running 0 2m37s
rook-ceph-mon-a-868c8f5cff-2tk7l 1/1 Running 0 4m10s
rook-ceph-mon-b-6f9776cf9b-w4dtq 1/1 Running 0 3m12s
rook-ceph-mon-c-8457f5cc77-8mbpj 1/1 Running 0 2m57s
rook-ceph-operator-6668b75686-l4zlh 1/1 Running 0 7m36s
rook-ceph-osd-0-79d7b6c764-shwtd 1/1 Running 0 2m4s
rook-ceph-osd-1-65d99447b5-bnhln 1/1 Running 0 119s
rook-ceph-osd-2-69dbd98748-5vrwn 1/1 Running 0 114s
rook-ceph-osd-3-596b58cf7d-j2qgj 1/1 Running 0 115s
rook-ceph-osd-4-858bc8df6d-wrlsx 1/1 Running 0 2m
rook-ceph-osd-5-7f6fbfd96-65gpl 1/1 Running 0 96s
rook-ceph-osd-prepare-k8sw1-5pgh9 0/1 Completed 0 2m14s
rook-ceph-osd-prepare-k8sw2-6sdrc 0/1 Completed 0 2m14s
rook-ceph-osd-prepare-k8sw3-mfzsh 0/1 Completed 0 2m13s
rook-ceph-osd-prepare-k8sw4-dn8gn 0/1 Completed 0 2m13s
rook-ceph-osd-prepare-k8sw5-lj5tj 0/1 Completed 0 2m13s
rook-ceph-osd-prepare-k8sw6-8hw4k 0/1 Completed 0 2m12s
# init rook toolbox
kubectl create -f toolbox.yaml
# jump into toolbox
kubectl -n rook-ceph exec -it rook-ceph-tools-5f4464f87-zbd5p -- /bin/bash
# init rook filesystem & storageclass
kubectl create -f filesystem.yaml
kubectl create -f storageclass.yaml
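To verify the storage setup end to end, check cluster health from inside the toolbox and confirm the storageclass is registered (a hedged check; rook-cephfs is assumed to be the name defined in storageclass.yaml):

```shell
# Inside the toolbox pod: overall health should report HEALTH_OK
ceph status
# From a management host: the storageclass should exist and be able to bind PVCs
kubectl get storageclass rook-cephfs
```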


@ -0,0 +1,130 @@
#################################################################################################################
# Define the settings for the rook-ceph cluster with common settings for a production cluster.
# Selected nodes with selected raw devices will be used for the Ceph cluster. At least three nodes are required
# in this example. See the documentation for more details on storage settings available.
# For example, to create the cluster:
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl create -f cluster-multizone.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
    failureDomainLabel: topology.kubernetes.io/zone
    zones:
    - name: dco
    - name: sto3
    - name: sto4
  mgr:
    count: 2
    allowMultiplePerNode: false
    modules:
    - name: rook
      enabled: true
    - name: pg_autoscaler
      enabled: true
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.4
    allowUnsupported: false
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  waitTimeoutForHealthyOSDInMinutes: 10
  dashboard:
    enabled: true
    ssl: true
  storage:
    useAllNodes: false
    nodes:
    - name: k8sw1
    - name: k8sw2
    - name: k8sw3
    - name: k8sw4
    - name: k8sw5
    - name: k8sw6
    useAllDevices: false
    devices:
    - name: "/dev/rookvg/rookvol1"
    - name: "/dev/rookvg/rookvol2"
    - name: "/dev/rookvg/rookvol3"
    deviceFilter: ""
  placement:
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - dco
              - sto3
              - sto4
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - dco
              - sto3
              - sto4
  priorityClassNames:
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
  disruptionManagement:
    managePodBudgets: true
  csi:
    readAffinity:
      # Enable read affinity to enable clients to optimize reads from an OSD in the same topology.
      # Enabling the read affinity may cause the OSDs to consume some extra memory.
      # For more details see this doc:
      # https://rook.io/docs/rook/latest/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-read-affinity-for-rbd-volumes
      enabled: false
    # cephfs driver specific settings.
    cephfs:
      # Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
      # kernelMountOptions: ""
      # Set CephFS Fuse mount options to use https://docs.ceph.com/en/quincy/man/8/ceph-fuse/#options.
      # fuseMountOptions: ""
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe timing or threshold values. Works for all mon,mgr,osd daemons.
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
    # Change pod startup probe timing or threshold values. Works for all mon,mgr,osd daemons.
    startupProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false

k8s/rook/common.yaml (1284 lines, new file; diff suppressed because it is too large)

k8s/rook/crds.yaml (14044 lines, new file; diff suppressed because it is too large)

k8s/rook/filesystem.yaml (20 lines, new file)

@ -0,0 +1,20 @@
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: rookfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: zone
    replicated:
      size: 3
  dataPools:
  - name: replicated
    failureDomain: zone
    replicated:
      size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true


@ -0,0 +1,3 @@
nbd
rbd
ceph

k8s/rook/operator.yaml (691 lines, new file)

@ -0,0 +1,691 @@
#################################################################################################################
# The deployment for the rook operator
# Contains the common settings for most Kubernetes deployments.
# For example, to create the rook-ceph cluster:
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl create -f cluster.yaml
#
# Also see other operator sample files for variations of operator.yaml:
# - operator-openshift.yaml: Common settings for running in OpenShift
###############################################################################################################
# Rook Ceph Operator Config ConfigMap
# Use this ConfigMap to override Rook-Ceph Operator configurations.
# NOTE! Precedence will be given to this config if the same Env Var config also exists in the
# Operator Deployment.
# To move a configuration(s) from the Operator Deployment to this ConfigMap, add the config
# here. It is recommended to then remove it from the Deployment to eliminate any future confusion.
kind: ConfigMap
apiVersion: v1
metadata:
name: rook-ceph-operator-config
# should be in the namespace of the operator
namespace: rook-ceph # namespace:operator
data:
# The logging level for the operator: ERROR | WARNING | INFO | DEBUG
ROOK_LOG_LEVEL: "INFO"
# The address for the operator's controller-runtime metrics. 0 is disabled. :8080 serves metrics on port 8080.
ROOK_OPERATOR_METRICS_BIND_ADDRESS: "0"
# Allow using loop devices for osds in test clusters.
ROOK_CEPH_ALLOW_LOOP_DEVICES: "false"
# Enable CSI Operator
ROOK_USE_CSI_OPERATOR: "false"
# Enable the CSI driver.
# To run the non-default version of the CSI driver, see the override-able image properties in operator.yaml
ROOK_CSI_ENABLE_CEPHFS: "true"
# Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
ROOK_CSI_ENABLE_RBD: "true"
# Enable the CSI NFS driver. To start another version of the CSI driver, see image properties below.
ROOK_CSI_ENABLE_NFS: "false"
# Disable the CSI driver.
ROOK_CSI_DISABLE_DRIVER: "false"
# Set to true to enable Ceph CSI pvc encryption support.
CSI_ENABLE_ENCRYPTION: "false"
# Set to true to enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
# in some network configurations where the SDN does not provide access to an external cluster or
# there is significant drop in read/write performance.
# CSI_ENABLE_HOST_NETWORK: "true"
# Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network
# without needing hosts to the network. Holder pods are being removed. See issue for details:
# https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true".
CSI_DISABLE_HOLDER_PODS: "true"
# Set to true to enable adding volume metadata on the CephFS subvolume and RBD images.
# Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images.
# Hence enable metadata is false by default.
# CSI_ENABLE_METADATA: "true"
# cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases
# like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster.
# CSI_CLUSTER_NAME: "my-prod-cluster"
# Set logging level for cephCSI containers maintained by the cephCSI.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
# CSI_LOG_LEVEL: "0"
# Set logging level for Kubernetes-csi sidecar containers.
# Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity.
# CSI_SIDECAR_LOG_LEVEL: "0"
# csi driver name prefix for cephfs, rbd and nfs. if not specified, default
# will be the namespace name where rook-ceph operator is deployed.
# search for `# csi-provisioner-name` in the storageclass and
# volumesnashotclass and update the name accordingly.
# CSI_DRIVER_NAME_PREFIX: "rook-ceph"
# Set replicas for csi provisioner deployment.
CSI_PROVISIONER_REPLICAS: "2"
# OMAP generator will generate the omap mapping between the PV name and the RBD image.
# CSI_ENABLE_OMAP_GENERATOR need to be enabled when we are using rbd mirroring feature.
# By default OMAP generator sidecar is deployed with CSI provisioner pod, to disable
# it set it to false.
# CSI_ENABLE_OMAP_GENERATOR: "false"
# set to false to disable deployment of snapshotter container in CephFS provisioner pod.
CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
# set to false to disable deployment of snapshotter container in NFS provisioner pod.
CSI_ENABLE_NFS_SNAPSHOTTER: "true"
# set to false to disable deployment of snapshotter container in RBD provisioner pod.
CSI_ENABLE_RBD_SNAPSHOTTER: "true"
# set to false to disable volume group snapshot feature. This feature is
# enabled by default as long as the necessary CRDs are available in the cluster.
CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: "true"
# Enable cephfs kernel driver instead of ceph-fuse.
# If you disable the kernel client, your application may be disrupted during upgrade.
# See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
# NOTE! cephfs quota is not supported in kernel version < 4.17
CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
# (Optional) policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_RBD_FSGROUPPOLICY: "File"
# (Optional) policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_CEPHFS_FSGROUPPOLICY: "File"
# (Optional) policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_NFS_FSGROUPPOLICY: "File"
# (Optional) Allow starting unsupported ceph-csi image
ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
# (Optional) control the host mount of /etc/selinux for csi plugin pods.
CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: "false"
# The default version of CSI supported by Rook will be started. To change the version
# of the CSI driver to something other than what is officially supported, change
# these images to the desired release of the CSI driver.
# ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:3.12.2"
# ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1"
# ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1"
# ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1"
# ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1"
# ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1"
# To indicate the image pull policy to be applied to all the containers in the csi driver pods.
# ROOK_CSI_IMAGE_PULL_POLICY: "IfNotPresent"
# (Optional) set user created priorityclassName for csi plugin pods.
CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"
# (Optional) set user created priorityclassName for csi provisioner pods.
CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"
# CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy.
# Default value is 1.
# CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE: "1"
# CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# A maxUnavailable parameter of CSI RBD plugin daemonset update strategy.
# Default value is 1.
# CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE: "1"
# CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_NFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
ROOK_CSI_KUBELET_DIR_PATH: "/var/snap/microk8s/common/var/lib/kubelet"
# Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
# ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"
# Labels to add to the CSI RBD Deployments and DaemonSets Pods.
# ROOK_CSI_RBD_POD_LABELS: "key1=value1,key2=value2"
# Labels to add to the CSI NFS Deployments and DaemonSets Pods.
# ROOK_CSI_NFS_POD_LABELS: "key1=value1,key2=value2"
# (Optional) CephCSI CephFS plugin Volumes
# CSI_CEPHFS_PLUGIN_VOLUME: |
# - name: lib-modules
# hostPath:
# path: /run/current-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# (Optional) CephCSI CephFS plugin Volume mounts
# CSI_CEPHFS_PLUGIN_VOLUME_MOUNT: |
# - name: host-nix
# mountPath: /nix
# readOnly: true
# (Optional) CephCSI RBD plugin Volumes
# CSI_RBD_PLUGIN_VOLUME: |
# - name: lib-modules
# hostPath:
# path: /run/current-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# (Optional) CephCSI RBD plugin Volume mounts
# CSI_RBD_PLUGIN_VOLUME_MOUNT: |
# - name: host-nix
# mountPath: /nix
# readOnly: true
# (Optional) CephCSI provisioner NodeAffinity (applied to both CephFS and RBD provisioner).
# CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
# (Optional) CephCSI provisioner tolerations list(applied to both CephFS and RBD provisioner).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_PROVISIONER_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/control-plane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) CephCSI plugin NodeAffinity (applied to both CephFS and RBD plugin).
# CSI_PLUGIN_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
# (Optional) CephCSI plugin tolerations list (applied to both the CephFS and RBD plugins).
# Put here the list of taints you want to tolerate, in YAML format.
# CSI plugins need to be started on all nodes where clients need to mount the storage.
# CSI_PLUGIN_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/control-plane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) CephCSI RBD provisioner NodeAffinity (if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_RBD_PROVISIONER_NODE_AFFINITY: "role=rbd-node"
# (Optional) CephCSI RBD provisioner tolerations list (if specified, overrides CSI_PROVISIONER_TOLERATIONS).
# Put here the list of taints you want to tolerate, in YAML format.
# The CSI provisioner is best started on the same nodes as the other Ceph daemons.
# CSI_RBD_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/rbd
# operator: Exists
# (Optional) CephCSI RBD plugin NodeAffinity (if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_RBD_PLUGIN_NODE_AFFINITY: "role=rbd-node"
# (Optional) CephCSI RBD plugin tolerations list (if specified, overrides CSI_PLUGIN_TOLERATIONS).
# Put here the list of taints you want to tolerate, in YAML format.
# CSI plugins need to be started on all nodes where clients need to mount the storage.
# CSI_RBD_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/rbd
# operator: Exists
# (Optional) CephCSI CephFS provisioner NodeAffinity (if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_CEPHFS_PROVISIONER_NODE_AFFINITY: "role=cephfs-node"
# (Optional) CephCSI CephFS provisioner tolerations list (if specified, overrides CSI_PROVISIONER_TOLERATIONS).
# Put here the list of taints you want to tolerate, in YAML format.
# The CSI provisioner is best started on the same nodes as the other Ceph daemons.
# CSI_CEPHFS_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/cephfs
# operator: Exists
# (Optional) CephCSI CephFS plugin NodeAffinity (if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_CEPHFS_PLUGIN_NODE_AFFINITY: "role=cephfs-node"
# NOTE: Support for defining NodeAffinity for operators other than "In" and "Exists" requires the user to input a
# valid v1.NodeAffinity JSON or YAML string. For example, the following is valid YAML v1.NodeAffinity:
# CSI_CEPHFS_PLUGIN_NODE_AFFINITY: |
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: myKey
# operator: DoesNotExist
# (Optional) CephCSI CephFS plugin tolerations list (if specified, overrides CSI_PLUGIN_TOLERATIONS).
# Put here the list of taints you want to tolerate, in YAML format.
# CSI plugins need to be started on all nodes where clients need to mount the storage.
# CSI_CEPHFS_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/cephfs
# operator: Exists
# (Optional) CephCSI NFS provisioner NodeAffinity (overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_NFS_PROVISIONER_NODE_AFFINITY: "role=nfs-node"
# (Optional) CephCSI NFS provisioner tolerations list (overrides CSI_PROVISIONER_TOLERATIONS).
# Put here the list of taints you want to tolerate, in YAML format.
# The CSI provisioner is best started on the same nodes as the other Ceph daemons.
# CSI_NFS_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/nfs
# operator: Exists
# (Optional) CephCSI NFS plugin NodeAffinity (overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_NFS_PLUGIN_NODE_AFFINITY: "role=nfs-node"
# (Optional) CephCSI NFS plugin tolerations list (overrides CSI_PLUGIN_TOLERATIONS).
# Put here the list of taints you want to tolerate, in YAML format.
# CSI plugins need to be started on all nodes where clients need to mount the storage.
# CSI_NFS_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/nfs
# operator: Exists
# (Optional) Ceph CSI RBD provisioner resource requirements. Put here the list of resource
# requests and limits you want to apply to the provisioner pod.
# CSI_RBD_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-resizer
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-snapshotter
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-rbdplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : csi-omap-generator
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) Ceph CSI RBD plugin resource requirements. Put here the list of resource
# requests and limits you want to apply to the plugin pod.
# CSI_RBD_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# - name : csi-rbdplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) Ceph CSI CephFS provisioner resource requirements. Put here the list of resource
# requests and limits you want to apply to the provisioner pod.
# CSI_CEPHFS_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-resizer
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-snapshotter
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-cephfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) Ceph CSI CephFS plugin resource requirements. Put here the list of resource
# requests and limits you want to apply to the plugin pod.
# CSI_CEPHFS_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# - name : csi-cephfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) Ceph CSI NFS provisioner resource requirements. Put here the list of resource
# requests and limits you want to apply to the provisioner pod.
# CSI_NFS_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-nfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# (Optional) Ceph CSI NFS plugin resource requirements. Put here the list of resource
# requests and limits you want to apply to the plugin pod.
# CSI_NFS_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# - name : csi-nfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# Set to true to enable the Ceph CSI liveness container.
CSI_ENABLE_LIVENESS: "false"
# Configure the CSI CephFS liveness metrics port
# CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
# Configure CSI RBD liveness metrics port
# CSI_RBD_LIVENESS_METRICS_PORT: "9080"
# CSIADDONS_PORT: "9070"
# Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options
# Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR
# CSI_CEPHFS_KERNEL_MOUNT_OPTIONS: "ms_mode=secure"
# (Optional) Duration in seconds that non-leader candidates will wait to force acquire leadership. Defaults to 137 seconds.
# CSI_LEADER_ELECTION_LEASE_DURATION: "137s"
# (Optional) Deadline in seconds that the acting leader will retry refreshing leadership before giving up. Defaults to 107 seconds.
# CSI_LEADER_ELECTION_RENEW_DEADLINE: "107s"
# (Optional) Retry Period in seconds the LeaderElector clients should wait between tries of actions. Defaults to 26 seconds.
# CSI_LEADER_ELECTION_RETRY_PERIOD: "26s"
# Whether the OBC provisioner should watch the Ceph cluster namespace; if not set, the default provisioner value is used.
ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
# Custom prefix value for the OBC provisioner instead of the Ceph cluster namespace; do not set this on an existing cluster.
# ROOK_OBC_PROVISIONER_NAME_PREFIX: "custom-prefix"
# Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
# This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
ROOK_ENABLE_DISCOVERY_DAEMON: "false"
# The timeout value (in seconds) of Ceph commands. It should be >= 1. If this variable is not set or is an invalid value, it defaults to 15.
ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: "15"
# Enable the csi addons sidecar.
CSI_ENABLE_CSIADDONS: "false"
# Enable watch for faster recovery from rbd rwo node loss
ROOK_WATCH_FOR_NODE_FAILURE: "true"
# ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.9.1"
# The CSI GRPC timeout value (in seconds). It should be >= 120. If this variable is not set or is an invalid value, it defaults to 150.
CSI_GRPC_TIMEOUT_SECONDS: "150"
# Enable topology based provisioning.
CSI_ENABLE_TOPOLOGY: "false"
# Domain labels define which node labels to use as domains
# for CSI nodeplugins to advertise their domains
# NOTE: the value here serves as an example and needs to be
# updated with node labels that define domains of interest
# CSI_TOPOLOGY_DOMAIN_LABELS: "kubernetes.io/hostname,topology.kubernetes.io/zone,topology.rook.io/rack"
# Whether to skip any attach operation altogether for CephCSI PVCs.
# See more details [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If set to false, volume attachments are skipped, which makes the creation of pods using CephCSI PVCs fast.
# **WARNING** Using this for RWO volumes is highly discouraged: for RBD PVCs it can cause data corruption,
# and csi-addons operations like ReclaimSpace and PVC key rotation will also not be supported if set to false,
# since there will be no VolumeAttachments to determine which node the PVC is mounted on.
# Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
CSI_CEPHFS_ATTACH_REQUIRED: "true"
CSI_RBD_ATTACH_REQUIRED: "true"
CSI_NFS_ATTACH_REQUIRED: "true"
# (Optional) Rook Discover tolerations list. Put here the list of taints you want to tolerate, in YAML format.
# DISCOVER_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/control-plane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) Rook Discover priority class name to set on the pod(s)
# DISCOVER_PRIORITY_CLASS_NAME: "<PriorityClassName>"
# (Optional) Discover Agent NodeAffinity.
# DISCOVER_AGENT_NODE_AFFINITY: |
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: myKey
# operator: DoesNotExist
# (Optional) Discover Agent Pod Labels.
# DISCOVER_AGENT_POD_LABELS: "key1=value1,key2=value2"
# Disable automatic orchestration when new devices are discovered
ROOK_DISABLE_DEVICE_HOTPLUG: "false"
# The duration between discovering devices in the rook-discover daemonset.
ROOK_DISCOVER_DEVICES_INTERVAL: "60m"
# DISCOVER_DAEMON_RESOURCES: |
# - name: DISCOVER_DAEMON_RESOURCES
# resources:
# limits:
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 128Mi
# (Optional) Burst to use while communicating with the kubernetes apiserver.
# CSI_KUBE_API_BURST: "10"
# (Optional) QPS to use while communicating with the kubernetes apiserver.
# CSI_KUBE_API_QPS: "5.0"
# Whether to run all Rook pods on the host network, for example in environments where a CNI is not enabled.
ROOK_ENFORCE_HOST_NETWORK: "false"
# RevisionHistoryLimit value for all deployments created by rook.
# ROOK_REVISION_HISTORY_LIMIT: "3"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-operator
namespace: rook-ceph # namespace:operator
labels:
operator: rook
storage-backend: ceph
app.kubernetes.io/name: rook-ceph
app.kubernetes.io/instance: rook-ceph
app.kubernetes.io/component: rook-ceph-operator
app.kubernetes.io/part-of: rook-ceph-operator
spec:
selector:
matchLabels:
app: rook-ceph-operator
strategy:
type: Recreate
replicas: 1
template:
metadata:
labels:
app: rook-ceph-operator
spec:
tolerations:
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 5
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
image: docker.io/rook/ceph:v1.15.3
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
volumeMounts:
- mountPath: /var/lib/rook
name: rook-config
- mountPath: /etc/ceph
name: default-config-dir
env:
# If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
# If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
- name: ROOK_CURRENT_NAMESPACE_ONLY
value: "false"
# Whether to start pods as privileged when they mount a host path; this includes the Ceph mon and osd pods and the CSI provisioners (if log rotation is on).
# Set this to true if SELinux is enabled (e.g. OpenShift) to work around the anyuid issues.
# For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
- name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
value: "false"
# Provide customised regexes as comma-separated values. For example, the regex for an rbd-based volume is "(?i)rbd[0-9]+".
# In the case of more than one regex, separate them with commas.
# The default regex is "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+".
# To blacklist an additional disk, append its regex after a comma.
# If the value is empty, the default regex is used.
- name: DISCOVER_DAEMON_UDEV_BLACKLIST
value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Time to wait until the node controller will move Rook pods to other
# nodes after detecting an unreachable node.
# Pods affected by this setting are:
# mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
# The value used in this variable replaces the default value of 300 secs
# added automatically by k8s as Toleration for
# <node.kubernetes.io/unreachable>
# The total amount of time to reschedule Rook pods in healthy nodes
# before detecting a <not ready node> condition will be the sum of:
# --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
# --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
- name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
value: "5"
# The name of the node to pass with the downward API
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# The pod name to pass with the downward API
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# The pod namespace to pass with the downward API
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# Recommended resource requests and limits, if desired
#resources:
# limits:
# memory: 512Mi
# requests:
# cpu: 200m
# memory: 128Mi
# Uncomment it to run lib bucket provisioner in multithreaded mode
#- name: LIB_BUCKET_PROVISIONER_THREADS
# value: "5"
# Uncomment it to run rook operator on the host network
#hostNetwork: true
volumes:
- name: rook-config
emptyDir: {}
- name: default-config-dir
emptyDir: {}
# OLM: END OPERATOR DEPLOYMENT
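With the operator deployed, storage is described declaratively through Rook CRs. As a minimal sketch (the replica counts are assumptions, but the filesystem and pool names match what the `rook-cephfs` StorageClass in this repo expects, `fsName: rookfs` and `pool: rookfs-replicated`), a CephFilesystem could look like:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: rookfs
  namespace: rook-ceph # namespace:cluster
spec:
  metadataPool:
    replicated:
      size: 3 # placeholder replica count
  dataPools:
    # Rook names the resulting Ceph pool "<fsName>-<poolName>",
    # i.e. "rookfs-replicated" here.
    - name: replicated
      replicated:
        size: 3 # placeholder replica count
  metadataServer:
    activeCount: 1
    activeStandby: true
```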

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
# clusterID is the namespace where the rook cluster is running
# If you change this namespace, also change the namespace below where the secret namespaces are defined
clusterID: rook-ceph
# CephFS filesystem name into which the volume shall be created
fsName: rookfs
# Ceph pool into which the volume shall be created
# Required for provisionVolume: "true"
pool: rookfs-replicated
# The secrets contain Ceph admin credentials. These are generated automatically by the operator
# in the same namespace as the cluster.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
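A PVC consuming this StorageClass would then look like the sketch below (the claim name and size are arbitrary examples, not taken from this repo):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc # arbitrary example name
spec:
  accessModes:
    - ReadWriteMany # CephFS supports shared RWX mounts
  resources:
    requests:
      storage: 1Gi # arbitrary example size
  storageClassName: rook-cephfs
```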

k8s/rook/toolbox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-tools
namespace: rook-ceph # namespace:cluster
labels:
app: rook-ceph-tools
spec:
replicas: 1
selector:
matchLabels:
app: rook-ceph-tools
template:
metadata:
labels:
app: rook-ceph-tools
spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
image: quay.io/ceph/ceph:v17.2.6
command:
- /bin/bash
- -c
- |
# Replicate the script from toolbox.sh inline so the ceph image
# can be run directly, instead of requiring the rook toolbox
CEPH_CONFIG="/etc/ceph/ceph.conf"
MON_CONFIG="/etc/rook/mon-endpoints"
KEYRING_FILE="/etc/ceph/keyring"
# create a ceph config file in its default location so ceph/rados tools can be used
# without specifying any arguments
write_endpoints() {
endpoints=$(cat ${MON_CONFIG})
# filter out the mon names
# external cluster can have numbers or hyphens in mon names, handling them in regex
# shellcheck disable=SC2001
mon_endpoints=$(echo "${endpoints}"| sed 's/[a-z0-9_-]\+=//g')
DATE=$(date)
echo "$DATE writing mon endpoints to ${CEPH_CONFIG}: ${endpoints}"
cat <<EOF > ${CEPH_CONFIG}
[global]
mon_host = ${mon_endpoints}
[client.admin]
keyring = ${KEYRING_FILE}
EOF
}
# watch the endpoints config file and update if the mon endpoints ever change
watch_endpoints() {
# get the timestamp for the target of the soft link
real_path=$(realpath ${MON_CONFIG})
initial_time=$(stat -c %Z "${real_path}")
while true; do
real_path=$(realpath ${MON_CONFIG})
latest_time=$(stat -c %Z "${real_path}")
if [[ "${latest_time}" != "${initial_time}" ]]; then
write_endpoints
initial_time=${latest_time}
fi
sleep 10
done
}
# read the secret from an env var (for backward compatibility), or from the secret file
ceph_secret=${ROOK_CEPH_SECRET}
if [[ "$ceph_secret" == "" ]]; then
ceph_secret=$(cat /var/lib/rook-ceph-mon/secret.keyring)
fi
# create the keyring file
cat <<EOF > ${KEYRING_FILE}
[${ROOK_CEPH_USERNAME}]
key = ${ceph_secret}
EOF
# write the initial config file
write_endpoints
# continuously update the mon endpoints if they fail over
watch_endpoints
imagePullPolicy: IfNotPresent
tty: true
securityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
env:
- name: ROOK_CEPH_USERNAME
valueFrom:
secretKeyRef:
name: rook-ceph-mon
key: ceph-username
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: mon-endpoint-volume
mountPath: /etc/rook
- name: ceph-admin-secret
mountPath: /var/lib/rook-ceph-mon
readOnly: true
volumes:
- name: ceph-admin-secret
secret:
secretName: rook-ceph-mon
optional: false
items:
- key: ceph-secret
path: secret.keyring
- name: mon-endpoint-volume
configMap:
name: rook-ceph-mon-endpoints
items:
- key: data
path: mon-endpoints
- name: ceph-config
emptyDir: {}
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 5
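Once this toolbox deployment is applied, Ceph commands can be run from inside the pod. A typical health check, assuming the deployment name and namespace used in this manifest:

```shell
# Wait for the toolbox to be ready, then run Ceph commands inside it.
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls
```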

../README

---
tls-secret: &tls
- key: tls.crt
value: >
ENC[PKCS7,MIIIogYJKoZIhvcNAQcDoIIIkzCCCI8CAQAxggKGMIICggIBAD
BqMFIxCzAJBgNVBAYTAlNFMQ4wDAYDVQQKDAVTVU5FVDEOMAwGA1UECwwFRV
lBTUwxIzAhBgNVBAMMGms4c2MxLm1hdHJpeC50ZXN0LnN1bmV0LnNlAhRQzt
oh/v7OPqz1zgArYCaRp/w78jANBgkqhkiG9w0BAQEFAASCAgBtXy5wM9p2Yn
cI5GR83HTUv3af9JgAiE3X0SmrzzNQF08GUxM9M6trERrLX24iVgLUxMV74S
geKz+uQs708wcGvWAne6V+Es2/WfuKsSObLaaNSDhidnXVBMP1DjlSyIFChp
s6HIsSUdOV1gZDl2dtez4mYSx0vKweQEGk3cK3Ic6xmnNVZNXCZTIBuzsawo
ZJjBjOumadbypgzvPL/nGITiQ2XPmHhae0bzEGCHV6iheRKuTZdTBm5DRwm9
GDU5jF8rSdcyQu9QJ51Pa75OdLL9gF+S3u8IxN88/gMMhDayL5+7pSxDCK8q
9zys8482LrWlHHXo0a9ThkbXvazp3Ek+k6pCA9MtPZ4VKWfzXrmFNOlbeoYN
eflfSz/HTtTEF2Zd4eBOgrnJT172n/J8hZ+bTAa6qAMBSW9QOPrJlWWK65Hu
CXNpS9mMZf8I+8U9WzHmLCGphe1CS2MGcCZgQZEefSWVg05iWbUl18mZw6YC
gvzDmkSmsHs1D/Me3kIrYcQgGDWpZyiqpknw2ABfmv6HLA911fclYtccnav1
830FGFGdI187RqOip4ArZiAOvd4kH6AIcjJb7Q/891IVv4gqVZOMpdsjmRW0
1LRDG387bDJSkyd4WDIME6f76uBONaRdlmi8pUjU9GVJ/bikWCS9hRFZ+/LZ
XSe1ZfcnYN6jCCBf4GCSqGSIb3DQEHATAdBglghkgBZQMEASoEEBHo3uwa1g
F2l3OMAgw+pPaAggXQiOCm6Zj8jbmYe6JQOI8MN0b/UIEK+xGHJ6CX+WjFnE
tWlbetkx/fuF0A/3dZwEvZONwnSjHMorTv1IK42fND7aAdMDy1jTv55GCBhI
Uq9T1UEfPc7vigiqYAvxfgsInfHtxANoFp6c8BJ4rThqHl8UhGRcjwEqI2Re
RWFNba/dR+l/hUs6V0OGp/IFto7oUBf4C1rmwIIu7JNzifQu8P9R4KYxMIXT
Y5PpVQbEY//S1eG+X9ReK57bUn3sOe4N8zUFdkMdR38Iu3HkJPhgNFCmL9iH
Lbu/w/5q6Xs8r6viI7TGBuFjROZqAC6JoUHBSn5qHW0nL9K7YTNN67mFAHiB
hqxg4BlP64epTnx5mMyT+NLat9Aie5kPGRHJBXEUjPiMx8WNhpZvY5cfWMep
LK4UIUAiVhhkJ8emCv3KEBcoQE+0RoGcnu0pN7YrSHaj0Jj6T83T665ENd5K
mzyPe/uBkTBKE9qyBvvqdn5U/gQo/yg5iyO4YMozN75TGNCBdupsfPs24lxY
6mbZPGYlDAbdkRdOm9WfbXvyjeX5LtEbdsKeqRFe53vqeA/XIW9RUP95QrPO
Y3RrNv3XX2AbJq55h0BmctAgpq6LYZSjyWkWMAtgTY2p1p0Rdoqb9CPrcSVS
pXlSQpgxUG5m4r0MTm0l8dukDIwpT4o0sXTnGDtMlLoSZJ7kmfOLCGvHRk2t
mymbnXsWEhtTHGmkwJXpiIEOs2FDaH2uVwHxP8DA7pkOMM/c9nbKLTyH1rmP
jRWfTxB6Q9KlGF5acHDbYQsoMVD15//4O/bwaj6Xes7bqfMMc05otShT61b/
5KzltMuM6nr3/n53Vc/gMSt3aDnyMP2pOQC6K7UbXQ8gSKRnSh20DBsPU0zt
sHYMSYTjx39uJTGj1vhW+aXbe9CS0GuxysCbwXwZBDq4kdgWEICNInf4yGG0
a7v3txXf/DlyL/wAQQlvgHV5h+UQL2IKg3AdmQflLUMqr2l7ja69I8Tkq+He
nieKmEg7b78pvJp3guJ1a9PsaWI+QeYNjaKJ0MeNufLhh1Ki6IGdvPR/lJ5U
JagkCEBZeBcMxrapJ4YxP91Jta0/EAiy7sRIa6JhdSIT+m4b9+LgxoSyGySS
xs81TNZA6nxZsG4U5OyFO4POWPNXwAzGGpfRLBch4PJX8Nzs1djMtHTxJxCC
0xBC6FdgkzwdjD+272yUgaKqlAAjYazd4hNF0yzYo0lR04OMIeRBKnm9e69f
UiY0eyocrdd8f7NQQtnYo0FyiujP/zGCvr0EpHGHo1lt+8IKJMJ+ETf7xc2Z
EbpKevY1EAoHnR3KZ/2hv2MqlxN6zRPmSApAl5LdXhioPLMG74f7CtEl0OE6
gRT0XEPFvvei3epH+6tTBhJPxGKQorZgH/vAbNk4g5USbhx7ijlolXQ4OPtB
0ThTuTM93TDzkFfmhleQTHFJk4l5gQjW/7QS5Y48Ja3qxWg6L/5hc2S3bfbW
gwkHpCSwDwmCU1/d7kDkOCkX4ae/amJ+pIrKwA3sfz1Ze6bYxF4EmwQ8s0yf
vCRqDwpXFyRWyya8fUbXgGj+PNfr0d6j4G+mbwVj/efAhu2tTDFPjPQdKQ6O
8eCmGsr1g0klPeN0HCwqYeNw1v3+og57JphaMWSIIlGa7rxoH5yvm+geaBe2
RMVyepVFN+4Y5JRiOFYmAnu/+Dv6PST9apYBDyFBkcCPMJf+PQ4TrpiDmZYM
dqTYHnluotVuE/b3btnTeNwXz+9I2hIiNBkNCOpaMJUbrnb9g3H3wAGOB+s4
UFoOjZNxUujEjdFKIE0420k+++blZJpop/wSDBaCWbkiA4R47yQBNNJxkub9
cZkZRCZSykgUfgTW4lKbOMWqtC+SFP3wCORWNrg1H4BhWyKKPBVKN4fFzsge
/YM1UTZcsAZ25Vca/MOZTkU0B2zosFbqgDLY3ZF8j0LuKNqm0eD61gaCoxAt
xu9ya8QJYksph1U8emCPvL]
- key: tls.key
value: >
ENC[PKCS7,MIID8gYJKoZIhvcNAQcDoIID4zCCA98CAQAxggKGMIICggIBAD
BqMFIxCzAJBgNVBAYTAlNFMQ4wDAYDVQQKDAVTVU5FVDEOMAwGA1UECwwFRV
lBTUwxIzAhBgNVBAMMGms4c2MxLm1hdHJpeC50ZXN0LnN1bmV0LnNlAhRQzt
oh/v7OPqz1zgArYCaRp/w78jANBgkqhkiG9w0BAQEFAASCAgCIMvfO6Ht52g
YV4eR28PMz1HsgzjGl/kD/4hmHTwEprFbRyO5ahYcbZfVt/vt7eFQI3Vi1AZ
Dc8BZQrBbTJ4g4PMhPa4C3/a43cz9s6E8QMXzHIKtyz/FjVlCsXrrIqNnJin
BW/aP0McsnrYfn69b59dl1EMKO5H72bwtg9JClrCpjElF5W1GfOajEE76vqY
GPqiXXYtMQ5VaFm9ljelItGmqINnipZnKrfGyIOAlBO7ZplBADXGjR3RiQqx
bU4zdD384rjnSYFyJRnuzpFH/Aq2fIzMotvnT/w7kNVQFhd4/RSUqyGLiHrf
Rm2LmNjsWd1vNPT6bhIzTTb1fOt5ujSpQgmJQWVgeA+acCx+OHHkh5bQjOOH
LSadIWwyQyCCy6fyB9z9SdEnlwoJvKiXEr1lZybCYhFNiQyOkc228fpNbfRN
MMH+MPvF5ioz1NXYbjZMbHFTbJQVXNJA/lMheuXxUDY5yQAoKCzFjWlz7g25
35Opqyca9VEYZknqfpJEtcay5cSeJubGrgQNVzPHnj0UoA6THunDasaj4ZfZ
cUbXM1lizTwyIcWOD1VIGKJPacaSof0qcxCs8aGiWRGf7fo/Fuz95ihpy6Xx
d+pr9dg5lB0oJle3cQM91YvmNUziUSYUk25rsEc4DLhL8aHS1zhWtmtHS8Ns
CGetG4THrgZzCCAU4GCSqGSIb3DQEHATAdBglghkgBZQMEASoEEMKcjNq7MM
4GH0x34Ud0jYKAggEggyT8nSd3Un+NPXBPOb4IE/3ZyYPk45mIe5SJ9uW4zv
p7Ab69yMKGqzSvCNtrS7+doFcL2Cz/O9a+BmQwKRvATBvnFlKDOzuFBiCvMl
+6Uu+gcq4iTgx2uiqqkrYE/mA0NkFXAhOLkyJRP/1aBm+fi3c1h29/js5r89
hiv+pCs63tsPn3NQrPg/j6WIBhry/tJqz7PW9wjtXbwJtctefvCdR6c1zTGk
iPNOYSler/CTcImTltTDJHbDb4tXu5xQhw6apEfosVtjBBVgGsGgEFmuKS/J
3Ax/ZtxS4SwkTeR/awq7hNuwVdBGohatqo0166psT1TcM/TP+Td8nok2H+mj
CScc0fFxnJjnOmx0q7X72d4XHzyQqNo6hxokMB0rTC]
microk8s_secrets:
kube-system:
cloud-config:
- key: cloud.conf
value: >
ENC[PKCS7,MIIEIgYJKoZIhvcNAQcDoIIEEzCCBA8CAQAxggKGMIICggIBAD
BqMFIxCzAJBgNVBAYTAlNFMQ4wDAYDVQQKDAVTVU5FVDEOMAwGA1UECwwFRV
lBTUwxIzAhBgNVBAMMGms4c2MxLm1hdHJpeC50ZXN0LnN1bmV0LnNlAhQp55
2CB8LYM07W//pThAdlxeWN7DANBgkqhkiG9w0BAQEFAASCAgC8vaLYmNEA83
i2ishSaP0YSqF96w8WD0mowFQ9wD7FJdHGU1CtbiXA3cHRR/Rd84tKW9W7Ft
OjDhZrZcdmOuZPD8i65ssvcwDvLFVY4nXrQdaoWICXqGvljwFc+6MPYo0HA/
1VPpBW0grMYwWA/o53h4bAret2G+IJw381qNNnLnSKRcT4AXOjty564voGVx
QuqWU2uetYAH8howmwz4c50n4U4gZdVE6Lrq6d4k3p2+KSD2p3jF0W0ORpsO
j0eJ9YJIl2LoX28r1eSjxufA00Rp5zyoczAku4+BzZ3L6oSj+Juh9cr/IRUG
b2hpqaNPWG8cJNDIP+UZaVOPEwbwio+CmO+kC4O2WALbYYxMZQ/vmGcdykCA
PftwH+AQa8PNA26tbv1zADJdnG5fxuT9xrPBshSJm7Ea+aaJAMF4V3NkINKJ
5+myKGo/wH3o/oDl+9YdxgrkUZiUlgGXFFiSFNsGRLEjkLyEcRKfzeacnedp
Z4rCuYPbhq6INeAmX3h4SiQ4LqUeDP3lP0m078TtMeJtuW1ra4TEnRBbdTp2
dHbuKWY5tlB3suYxjNbsxNnY1YQJJuJcldWfoY+O/D7nOHeH6fo3pUCT9du6
zBzQUiRmeZwpMeQ67wPvtn+7jmOWsteldwBjCawQrmlcjtFUI8FOAXVWDXQg
XBXi9KVrIzADCCAX4GCSqGSIb3DQEHATAdBglghkgBZQMEASoEEABvFq397Q
BUPoE0TqjtJCOAggFQdW+HgrVMXnbdN3Kh7rORUj3QVV9tbgM1bTIklmmw9Z
XBqOep0nbcPc2elEHMU2hTRco6/a0Iduz3tX0ITR5rwLpmIhR5HZ93nJtuS8
NVic26LFOnnd9EUxrP6sV24/527ZAjmHD7PmudkNcsJZ/mgs2o1/b4/KG6Fn
pNUiEsxgNbNumC87VSfyJy1KZ26jjKR5S2rf/1LISNItNBDqaQxE9RrhqlHd
uFFIBSzP96XxW4FUj+//A4Z9k4BZUhfYoAwwefRJA8gQMaI/WZ3jSeHKaQXQ
6kbQ8vdyyT7Ij/BNVhzHx2D0hL0uNnTTZh/Y3J0iNDg6osq2ovEoQFhklZ9k
Hr67kEECCk+jfJUKqmenKrPJRALw7lYhiCCfg2YyqzjjlthV7ciavru2S43Y
NSGC14Dc7+IWulxKeDDiFW7mxd2aPJd+XURo3N7kYV1ItF]
argocd:
tls-secret: *tls
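The ENC[PKCS7,…] values above are hiera-eyaml blobs. Assuming the standard hiera-eyaml tooling is used (the key path below is a placeholder, not the repo's actual key location), a new encrypted value can be produced with:

```shell
# Encrypt a file for inclusion in the secrets YAML.
# The public-key path is a placeholder; use the cluster's actual PKCS7 key.
eyaml encrypt -f cloud.conf \
  --pkcs7-public-key=/etc/hiera/eyaml/public_key.pkcs7.pem
```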

The system documentation is in the docs directory of the multiverse repository.
