Compare commits


142 commits

Author SHA1 Message Date
1c22bfb722
add cert-manager stuff 2024-11-12 15:08:49 +01:00
46ade449bb
add rook instructions 2024-11-08 22:53:34 +01:00
ca2a3ef1c9
lb1.matrix.sunet.se added 2024-11-08 11:09:24 +01:00
5df6b70bbf
mgmt1.matrix.sunet.se added 2024-11-08 11:04:31 +01:00
46a7ccc30f
k8sw6.matrix.sunet.se added 2024-11-08 11:02:28 +01:00
d71a71f226
k8sw5.matrix.sunet.se added 2024-11-08 10:55:41 +01:00
22871236cb
k8sw4.matrix.sunet.se added 2024-11-08 10:55:17 +01:00
d19f81d0c4
add debian fix for puppet-module-puppetlabs-sshkeys-core 2024-11-08 10:25:34 +01:00
d6b200faad
delete old stuff + fix ssh key 2024-11-08 10:21:29 +01:00
1449221f43
new cluster added 2024-11-08 10:20:48 +01:00
487770e350
Change prod microk8s version to match env test version. 2024-11-08 09:11:12 +01:00
c19aaa5a97
Remove swap file 2024-11-08 00:04:12 +01:00
980adbf867
Add example output module for commands to add dns records. 2024-11-08 00:03:04 +01:00
c27a2195cc
Setup new prod deployment 2024-11-07 22:55:21 +01:00
1c72cff364
Remove old style cluster deployment 2024-11-07 22:22:48 +01:00
a375ea111f
add myself 2024-11-07 12:48:12 +01:00
57e1339000
clean out old keys 2024-11-07 12:47:00 +01:00
f8118ef52d
Add new mgmt vpn + ingress for https in dco 2024-11-07 12:38:58 +01:00
c65741753c
Add README.md about postgres password secret. 2024-11-07 11:03:20 +01:00
e61e7654dc
Add postgres deployment for element 2024-11-07 10:59:04 +01:00
2a103b0da4
Add ip secret to list 2024-11-06 15:09:58 +01:00
666b81af9c
Add matrix deploy role 2024-11-06 15:07:53 +01:00
132a2dd771
Add matrix.sunet.se ip to lb 2024-11-06 15:06:05 +01:00
035524863a
Add cluster role to get namespaces 2024-11-06 12:52:53 +01:00
4515bafe6c
Add registry example ingress 2024-11-06 12:38:48 +01:00
153a31ae27
Add script to automate creation of k8s users 2024-11-06 07:55:11 +01:00
d5c31c0d32
Add mgmt to lb rule for ingress ports 2024-11-06 07:47:20 +01:00
9df05afe20
lb1: add address 2024-11-05 22:47:25 +01:00
3014c551b8
lb1: Update secret 2024-11-05 22:39:57 +01:00
956deed67a
Update lb sg 2024-11-05 22:38:41 +01:00
888f20a67b
Add user for matrix installer 2024-10-31 10:04:53 +01:00
fd7edba2cb
mgmt1.matrix.test.sunet.se added 2024-10-31 09:24:29 +01:00
8025cb5cc8
Add management node rule 2024-10-31 09:17:53 +01:00
b90a4b9a36
Add management node 2024-10-31 09:16:51 +01:00
f691ae99e6
Open ingress port from lb to workers 2024-10-30 23:56:25 +01:00
9b343f32e7
Add tls secret 2024-10-30 16:05:06 +01:00
6393a8279d
Enable external access from lb to k8s 2024-10-30 15:17:58 +01:00
7c7b85cfbd
Create security group for k8s external access. 2024-10-30 14:56:05 +01:00
b0701f9b66
Make secret a secret 2024-10-30 13:56:20 +01:00
a3b86f45d4
Add test address 2024-10-30 13:41:42 +01:00
e1e4802def
Create lb cosmos rule 2024-10-30 13:35:19 +01:00
919bab0791
Add matrix puppet module 2024-10-30 13:31:34 +01:00
840af98c51
Open lb port to source ip during setup and hardening 2024-10-30 12:25:44 +01:00
b497844e59
lb1.matrix.test.sunet.se added 2024-10-29 09:10:20 +01:00
47d77c5fde
Add lb. 2024-10-29 08:52:57 +01:00
618b273ca8
Add example service 2024-10-29 08:52:08 +01:00
ad52a3c054
Make rook deployment fully multizone aware 2024-10-28 14:20:12 +01:00
1384c2df90
Updated rook deployment for failure zones 2024-10-25 13:18:48 +02:00
8622d20d51
Add security group port 2024-10-24 12:53:00 +02:00
fd5204fb47
Update k8s peers 2024-10-24 09:59:05 +02:00
5a1b44e7c0
Test more recent puppet sunet tag 2024-10-24 09:45:47 +02:00
4f63fa0f60
Readd microk8s node 2024-10-24 09:02:13 +02:00
a3215eae9b
Try to deploy with older version of puppet sunet 2024-10-24 08:19:39 +02:00
9a23e78011
remove microk8s 2024-10-23 16:28:09 +02:00
03839ba41b
Remove channel spec for microk8s. 2024-10-23 15:34:47 +02:00
8cf1b38f15
Downgrade microk8s version in test. 2024-10-23 15:14:16 +02:00
6edd4c82eb
Clean out k8sc1 secrets 2024-10-23 14:12:34 +02:00
f8e206c372
Add script to test connectivity between k8s nodes 2024-10-23 13:44:02 +02:00
eadff19277
Make variable naming more consistent and fix more typos 2024-10-23 11:15:33 +02:00
f59ab71fe6
Fix some typos and comment out some unused parts for now. 2024-10-23 10:07:08 +02:00
8f70f4a3ff
Refactor the remaining control nodes to new deployment model 2024-10-22 16:24:49 +02:00
d32825ec66
Update worker peers 2024-10-22 13:52:28 +02:00
f9b217e538
Update peer ip addresses 2024-10-22 13:41:34 +02:00
855613625e
Resolve IaC conflict by renaming old microk8s group 2024-10-22 09:21:56 +02:00
3f07468b64
Fix typo in comment 2024-10-22 08:17:26 +02:00
7280601637
Adjust security group rules for control node in sto4 2024-10-22 08:05:26 +02:00
d5663f23e1
Set correct number of controller replicas 2024-10-22 07:13:26 +02:00
d9a575f0e6
Add controller node definition to sto4 2024-10-22 07:09:07 +02:00
f8fc2ab777
Add k8sw1 to rook and remove k8sc3 from deployment. 2024-10-21 17:40:24 +02:00
5aef290639
Move and scale out DCO workers with new deployment method 2024-10-21 15:09:34 +02:00
5645cedf71
Remove k8sw1 from rook 2024-10-21 13:21:59 +02:00
19da9ecc8c
k8sw5.matrix.test.sunet.se added 2024-10-21 09:04:04 +02:00
c68c4d85d0
Make sto4 security group rules resource naming standard consistent with the sto3 one. 2024-10-21 08:15:24 +02:00
8d630a9d08
Prepare sto3 worker nodes. 2024-10-19 22:11:41 +02:00
ecd78cbf8b
Decrease legacy deployed worker count to 1 2024-10-18 22:03:00 +02:00
1f0ad12e63
k8sw6.matrix.test.sunet.se added 2024-10-18 17:03:24 +02:00
e73374e9d6
Add ssh security group 2024-10-18 17:02:29 +02:00
9872f8f923
Add security group rules to allow k8s traffic from sto4 to dco 2024-10-18 14:54:13 +02:00
b53cd52314
Prepare terraform manifests for two workers in sto4. 2024-10-18 13:59:55 +02:00
83c3d61b77
Add a list of used datacenters in deployment 2024-10-18 12:45:46 +02:00
282263bb05
Remove k8sw3.matrix.test.sunet.se to make the recreate flow work 2024-10-18 11:03:55 +02:00
0c4d7fe8c3
Remove worker k8sw4 in preparation to move it to sto4. 2024-10-18 10:16:00 +02:00
d102573cbd
Add placeholder for postgresql patroni nodes 2024-10-18 09:17:07 +02:00
1ac6a1f16a
Add rook-toolbox deployment file to manage ceph in cluster. 2024-10-18 09:14:40 +02:00
44d989698c
Refactor security group generation and prepare sto4 microk8s group 2024-10-18 00:25:01 +02:00
7b779b2c41
Prepare multicloud setup 2024-10-17 08:08:33 +02:00
bdb858df42
Add patch to enable calico wireguard mode for internode CNI traffic 2024-10-16 22:52:12 +02:00
ec943d607c
Fix microk8s security group rule resource name 2024-10-16 21:58:05 +02:00
f5f8c1983f
microk8s sg: Add udp port 51820 to allow calico wireguard internode CNI overlay vpn 2024-10-16 21:55:13 +02:00
fd3fe09901
Add failuredomain to rookfs data and metadata pool 2024-10-16 21:36:45 +02:00
9fed13b09c
Add storageclass and fs for rook cephfs 2024-10-16 13:42:48 +02:00
b5967c57af
Add rook manifest files. 2024-10-16 08:32:16 +02:00
09359e07af
Add new jocar key to validate cosmos modules apparmor and nagioscfg 2024-10-15 16:07:13 +02:00
fcfc9fc0ac
Delete expired gpg keys for mandersson on servers. 2024-10-15 16:03:15 +02:00
7842dd4ace
k8sw4.matrix.test.sunet.se added 2024-10-15 15:50:38 +02:00
7d34d653e6
Add worker 4 matrix-test to cosmos-rules peer list 2024-10-15 15:45:32 +02:00
e35138ce8c
Move workers to other ssh keypair. 2024-10-15 15:38:44 +02:00
da2d2004dd
Extend worker node count and create rook volume on workers. 2024-10-15 14:52:52 +02:00
71919802c8
Add new ssh key for mandersson 2024-10-15 13:56:48 +02:00
5b480f0088
Add new gpg key for mandersson. 2024-10-15 12:35:12 +02:00
1b6c4c5e98
Test 2024-10-04 07:02:54 +02:00
5a36c01199
Mandersson: add new ssh key. 2024-09-16 18:46:12 +02:00
5b5f8f73a9
Update README.md 2024-09-16 18:16:28 +02:00
a2ed169425
Just move prod lb. 2024-09-16 17:31:09 +02:00
11a54df526
Change frontend servers. 2024-09-16 17:25:03 +02:00
6355d795c0
Fix typo 2024-06-05 17:03:52 +02:00
4a7ca09ca3
Add mariah ssh key 2024-06-05 16:59:37 +02:00
b0f691e0b0
Add frontends to k8s. 2024-06-05 14:59:15 +02:00
b49db88a5b
Add appcred 2024-06-04 15:39:57 +02:00
6fdb194e66
Update matrix prod tls secrets 2024-06-04 15:29:30 +02:00
1fd938d438
Add tls secrets for matrix test 2024-06-04 15:15:09 +02:00
b7edd8a465
Remove some unnecessary whitespace 2024-06-04 13:57:44 +02:00
fffa63b827
Expose ingress externally and remove external kube api endpoint 2024-06-04 13:53:57 +02:00
0d594247b4
Add health ingress endpoint 2024-06-04 13:12:40 +02:00
d1a85199c6
Make kub[cw] a special nftables case 2024-05-31 12:57:38 +02:00
e87657f209
Reenable nftables init 2024-05-31 12:39:35 +02:00
becbb55d1a
Disable nftables init 2024-05-31 11:51:48 +02:00
47382d698e
Add kano ssh key 2024-05-29 09:08:09 +02:00
5c57fb769c
Clean up more sunet server options 2024-05-28 22:15:44 +02:00
894e3117f0
Try without nftables 2024-05-28 21:59:25 +02:00
7901cd54d6
k8sw3.matrix.sunet.se added 2024-05-28 10:44:46 +02:00
a576a123de
k8sw2.matrix.sunet.se added 2024-05-28 10:38:32 +02:00
2aabf9afd9
k8sw1.matrix.sunet.se added 2024-05-28 10:35:01 +02:00
d529ab9ecc
k8sc3.matrix.sunet.se added 2024-05-28 10:29:02 +02:00
f219af697e
k8sc2.matrix.sunet.se added 2024-05-28 10:23:36 +02:00
58cb178906
k8sc1.matrix.sunet.se added 2024-05-28 10:17:30 +02:00
855f06db18
Add cosmos rules for matrix k8s prod. 2024-05-28 10:14:33 +02:00
22718bb91d
Make test and prod IaC two separate directories 2024-05-28 09:01:42 +02:00
ee78b4942f
Upgrade microk8s version and remove traefik reference 2024-05-28 08:15:49 +02:00
72e0294d5e
Add authorized keys 2024-05-27 22:34:42 +02:00
f8e6ba97ac
Add clouds config secret. 2024-05-27 21:41:33 +02:00
7869398fae
Configure k8s peers 2024-05-27 16:22:23 +02:00
14b5ef2910
Restore initial cosmos rule range 2024-05-27 15:37:34 +02:00
d35b0a5696
Add general server config 2024-05-27 15:35:57 +02:00
5c4da9100f
Test server rule 2024-05-27 15:12:33 +02:00
8a6e1c1c59
Test server rule 2024-05-27 15:10:27 +02:00
9b3c4ae686
Add mgmt addresses config 2024-05-27 14:53:00 +02:00
06e1e0c57f
Add some more trust 2024-05-27 11:11:36 +02:00
219d351660
Fix setup_cosmos_modules 2024-05-27 11:02:16 +02:00
7e172b5f25
Fix copy/paste error 2024-05-27 10:53:17 +02:00
2dc16fa1b7
Add basic k8s cosmos rules. 2024-05-27 10:50:39 +02:00
17aa029d3c
Add matrix version of setup_cosmos_modules 2024-05-27 10:47:18 +02:00
119 changed files with 21685 additions and 361 deletions

12
.gitignore vendored

@@ -1,5 +1,9 @@
 *.pyc
-/IaC/.terraform*
-/IaC/.terraform*/**
-/IaC/terraform.tfstate*
-/IaC/*.tfvars
+/IaC-test/.terraform*
+/IaC-test/.terraform*/**
+/IaC-test/terraform.tfstate*
+/IaC-test/*.tfvars
+/IaC-prod/.terraform*
+/IaC-prod/.terraform*/**
+/IaC-prod/terraform.tfstate*
+/IaC-prod/*.tfvars

22
IaC-prod/dnsoutput.tf Normal file

@@ -0,0 +1,22 @@
output "control_ip_addr_dco" {
value = [ for node in resource.openstack_compute_instance_v2.controller-nodes-dco : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "worker_ip_addr_dco" {
value = [ for node in resource.openstack_compute_instance_v2.worker-nodes-dco : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "control_ip_addr_sto3" {
value = [ for node in resource.openstack_compute_instance_v2.controller-nodes-sto3 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "worker_ip_addr_sto3" {
value = [ for node in resource.openstack_compute_instance_v2.worker-nodes-sto3 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "control_ip_addr_sto4" {
value = [ for node in resource.openstack_compute_instance_v2.controller-nodes-sto4 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
output "worker_ip_addr_sto4" {
value = [ for node in resource.openstack_compute_instance_v2.worker-nodes-sto4 : "knotctl -z sunet.se --ttl 360 -r A -d ${node.access_ip_v4} -n ${node.name}\nknotctl -z sunet.se --ttl 360 -r AAAA -d ${node.access_ip_v6} -n ${node.name}" ]
}
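Each output renders a ready-to-paste pair of knotctl commands (A and AAAA) per node. A sketch of roughly what terraform output control_ip_addr_dco might print, with a hypothetical node name and documentation-range addresses standing in for the real ones:

[
  <<-EOT
  knotctl -z sunet.se --ttl 360 -r A -d 192.0.2.10 -n k8sc1.matrix.sunet.se
  knotctl -z sunet.se --ttl 360 -r AAAA -d 2001:db8::10 -n k8sc1.matrix.sunet.se
  EOT,
]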

24
IaC-prod/images.tf Normal file

@@ -0,0 +1,24 @@
# Default OS version
data "openstack_images_image_v2" "debian12image" {
name = "debian-12" # Name of image to be used
most_recent = true
}
data "openstack_images_image_v2" "debian12image-dco" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.dco
}
data "openstack_images_image_v2" "debian12image-sto4" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto4
}
data "openstack_images_image_v2" "debian12image-sto3" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto3
}

138
IaC-prod/k8snodes-dco.tf Normal file

@@ -0,0 +1,138 @@
#
# Global DCO definitions
#
locals {
dcodc = "dco"
dconodenrbase = index(var.datacenters, "dco")
dcoindexjump = length(var.datacenters)
}
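The locals above drive a node-numbering scheme that interleaves numbers across sites, so each datacenter owns a disjoint sequence and names never collide. A worked sketch, assuming var.datacenters = ["dco", "sto3", "sto4"] and var.controller_name = "k8sc" (neither variable definition is part of this diff):

# dconodenrbase = index(["dco", "sto3", "sto4"], "dco") = 0
# dcoindexjump  = length(["dco", "sto3", "sto4"])       = 3
# node number   = count.index * 3 + 1 + 0
#   dco  (base 0): k8sc1, k8sc4, k8sc7, ...
#   sto3 (base 1): k8sc2, k8sc5, k8sc8, ...
#   sto4 (base 2): k8sc3, k8sc6, k8sc9, ...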
#
# Control node resources DCO
#
resource "openstack_networking_port_v2" "kubecport-dco" {
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "controller-nodes-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-dco[count.index].id
}
}
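The scheduler_hints block references a server group whose definition is not part of this diff; a plausible definition keeps the controllers on separate hypervisors via an anti-affinity policy, along these lines (the name and policy values are assumptions, only the resource label is known from the reference above):

resource "openstack_compute_servergroup_v2" "controllers-dco" {
  name     = "controllers-dco"   # assumed; only the .id reference appears in this diff
  policies = ["anti-affinity"]   # spread group members across compute hosts
  provider = openstack.dco
}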
#
# Worker node resources DCO
#
resource "openstack_networking_port_v2" "kubewport-dco" {
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "worker-nodes-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-dco[count.index].id
}
}

139
IaC-prod/k8snodes-sto3.tf Normal file

@@ -0,0 +1,139 @@
#
# Global definitions sto3
#
locals {
sto3dc = "sto3"
sto3nodenrbase = index(var.datacenters, "sto3")
sto3indexjump = length(var.datacenters)
}
#
# Control node resources STO3
#
resource "openstack_networking_port_v2" "kubecport-sto3" {
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "controller-nodes-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto3[count.index].id
}
}
#
# Worker node resources STO3
#
resource "openstack_networking_port_v2" "kubewport-sto3" {
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "worker-nodes-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto3[count.index].id
}
}

138
IaC-prod/k8snodes-sto4.tf Normal file

@@ -0,0 +1,138 @@
#
# Global definitions for sto4
#
locals {
sto4dc = "sto4"
sto4nodenrbase = index(var.datacenters, "sto4")
sto4indexjump = length(var.datacenters)
}
#
# Control node resources STO4
#
resource "openstack_networking_port_v2" "kubecport-sto4" {
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "controller-nodes-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto4[count.index].id
}
}
#
# Worker node resources STO4
#
resource "openstack_networking_port_v2" "kubewport-sto4" {
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "worker-nodes-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto4[count.index].id
}
}

48
IaC-prod/lb.tf Normal file

@@ -0,0 +1,48 @@
# Network port
resource "openstack_networking_port_v2" "lb1-port-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.lb-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "lb1volumeboot-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for lb1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "lb1-node-dco" {
name = "lb1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.lb-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.lb1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.lb1-port-dco.id
}
}

33
IaC-prod/main.tf Normal file

@@ -0,0 +1,33 @@
# Define required providers
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "~> 1.53.0"
}
}
}
# Configure the OpenStack Provider
provider "openstack" {
cloud = "${var.clouddco_name}"
}
# DCO Matrix prod
provider "openstack" {
cloud = "${var.clouddco_name}"
alias = "dco"
}
# STO3 Matrix prod
provider "openstack" {
cloud = "${var.cloudsto3_name}"
alias = "sto3"
}
# STO4 Matrix prod
provider "openstack" {
cloud = "${var.cloudsto4_name}"
alias = "sto4"
}
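The cloud arguments are looked up in the standard OpenStack client configuration (clouds.yaml), so var.clouddco_name, var.cloudsto3_name and var.cloudsto4_name are expected to name entries there. A minimal sketch with a hypothetical entry name and placeholder values, assuming application credentials are used (the "Add appcred" commit suggests as much):

clouds:
  dco-matrix-prod:            # hypothetical entry name for var.clouddco_name
    auth_type: v3applicationcredential
    auth:
      auth_url: https://dco.example.sunet.se:5000/v3   # placeholder URL
      application_credential_id: "<id>"
      application_credential_secret: "<secret>"
    region_name: dco          # assumed region name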

46
IaC-prod/mgmt.tf Normal file

@@ -0,0 +1,46 @@
# Network port
resource "openstack_networking_port_v2" "mgmt1-port-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "mgmt1volumeboot-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for mgmt1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "mgmt1-node-dco" {
name = "mgmt1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.mgmt1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.mgmt1-port-dco.id
}
}

18
IaC-prod/network.tf Normal file

@@ -0,0 +1,18 @@
data "openstack_networking_network_v2" "public" {
name = "public" # Name of network to use.
}
data "openstack_networking_network_v2" "public-dco" {
name = "public" # Name of network to use.
provider = openstack.dco
}
data "openstack_networking_network_v2" "public-sto4" {
name = "public" # Name of network to use.
provider = openstack.sto4
}
data "openstack_networking_network_v2" "public-sto3" {
name = "public" # Name of network to use.
provider = openstack.sto3
}


@@ -0,0 +1,177 @@
# Security groups dco
resource "openstack_networking_secgroup_v2" "microk8s-dco" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.dco
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-dco" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.dco
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
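The two rules above derive both the port and the protocol from var.k8sports, whose definition is not included in this diff; the lookups only work if it is a list of one-entry maps keyed by port number. An assumed sketch of its shape, with illustrative ports (16443 and 51820 do appear elsewhere in this change; 10250 is the standard kubelet port):

variable "k8sports" {
  type = list(map(string))
  default = [
    { "16443" = "tcp" },   # kube API
    { "10250" = "tcp" },   # kubelet (illustrative; the actual list is not shown)
    { "51820" = "udp" },   # calico wireguard overlay
  ]
}
# keys(var.k8sports[i])[0] yields the port;
# var.k8sports[i][port] yields its protocol.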
#
# From STO3 to DCO
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
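The cross-site rules enumerate the full port x node cross product: count is length(ports) times length(nodes), floor(count.index / nodes) walks the ports, and count.index % nodes walks the nodes; the replace() call strips the square brackets OpenStack wraps around access_ip_v6 before the /128 suffix is joined on. A worked sketch with 3 ports and 2 sto3 controllers, giving 6 rules:

# count.index : 0  1  2  3  4  5
# port index  : 0  0  1  1  2  2   # floor(count.index / 2)
# node index  : 0  1  0  1  0  1   # count.index % 2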
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
#
# From STO4 to DCO
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-dco" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-dco" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}
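The ssh rules iterate over flat lists of jump-host addresses; the variable definitions are not in this diff. An assumed shape, with documentation-range placeholders standing in for the real Sunet jump hosts:

variable "jumphostv4-ips" {
  type    = list(string)
  default = ["192.0.2.1"]       # placeholder
}
variable "jumphostv6-ips" {
  type    = list(string)
  default = ["2001:db8::1"]     # placeholder
}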


@@ -0,0 +1,125 @@
# Security groups for external access to k8s control nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-control-dco" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.dco
}
# Security groups for external access to k8s control nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto3" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto3
}
# Security groups for external access to k8s control nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto4" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto4.id
}
# Security groups for external access to k8s worker nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-dco" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.dco
}
# Security groups for external access to k8s worker nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto3" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto3
}
# Security groups for external access to k8s worker nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto4" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}


@@ -0,0 +1,177 @@
# Security groups sto3
resource "openstack_networking_secgroup_v2" "microk8s-sto3" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto3
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto3" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto3
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From DCO to STO3
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From STO4 to STO3
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto3" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto3" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}

@ -0,0 +1,177 @@
# Security groups sto4
resource "openstack_networking_secgroup_v2" "microk8s-sto4" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto4
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto4" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto4
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# DCO to STO4
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# From STO3 to STO4
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto4" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto4" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}

@ -0,0 +1,109 @@
# Security groups lb-frontend
resource "openstack_networking_secgroup_v2" "lb-dco" {
name = "lb-frontend"
description = "Ingress lb traffic to allow."
provider = openstack.dco
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
# From mgmt1
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule3_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule4_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "80"
port_range_max = "80"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule5_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule6_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule7_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8080"
port_range_max = "8080"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule8_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.184.88/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule9_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "130.242.121.23/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
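# Summary of the lb-frontend ingress rules above:
# 87.251.31.118/32 -> 8443, 16443
# 89.47.191.66/32 -> 80, 443, 8080, 8443, 16443 (mgmt1)
# 89.47.184.88/32 -> 16443
# 130.242.121.23/32 -> 16443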

@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-dco" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.dco
}
resource "openstack_compute_servergroup_v2" "controllers-dco" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.dco
}

@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto3" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto3
}
resource "openstack_compute_servergroup_v2" "controllers-sto3" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto3
}

@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto4" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
resource "openstack_compute_servergroup_v2" "controllers-sto4" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
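# The anti-affinity policy asks the Nova scheduler to place members of a
# server group on different hypervisors. Instances join a group through
# scheduler_hints, as the k8s node definitions in this repo do:
#
# scheduler_hints {
# group = openstack_compute_servergroup_v2.workers-sto4.id
# }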

IaC-prod/vars.tf Normal file (+98 lines)

@ -0,0 +1,98 @@
variable "datacenter_name" {
type = string
default = "dco"
}
variable "datacenters" {
type = list(string)
default = [ "dco", "sto3", "sto4" ]
}
# Cloud names in clouds.yaml file
variable "clouddco_name" {
type = string
default = "dco-matrixprod"
}
variable "cloudsto3_name" {
type = string
default = "sto3-matrixprod"
}
variable "cloudsto4_name" {
type = string
default = "sto4-matrixprod"
}
variable "keyname" {
type = string
default = "pettai-7431497"
}
variable "keynameworkers" {
type = string
default = "pettai-7431497"
}
# Replicas per datacenter
variable "workerdcreplicas" {
default = "2"
}
# Replicas per datacenter
variable "controllerdcreplicas" {
default = "1"
}
variable "controller_instance_type" {
default = "b2.c2r4"
}
variable "worker_instance_type" {
default = "b2.c4r16"
}
variable "lb_instance_type" {
default = "b2.c2r4"
}
variable "mgmt_instance_type" {
default = "b2.c2r4"
}
variable "worker_name" {
default = "k8sw"
}
variable "controller_name" {
default = "k8sc"
}
variable "dns_suffix" {
default = "matrix.sunet.se"
}
variable "k8sports" {
default = [
{"16443" = "tcp"},
{"10250" = "tcp"},
{"10255" = "tcp"},
{"25000" = "tcp"},
{"12379" = "tcp"},
{"10257" = "tcp"},
{"10259" = "tcp"},
{"19001" = "tcp"},
{"4789" = "udp"},
{"51820" = "udp"}
]
}
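# Each k8sports entry is a single-key map: the key is the port and the value
# is the protocol, so for entry i:
# keys(var.k8sports[i])[0] => port, e.g. "16443"
# var.k8sports[i][keys(var.k8sports[i])[0]] => protocol, e.g. "tcp"
# A hypothetical output for sanity-checking the mapping (illustration only):
#
# output "k8sports_flat" {
# value = [for p in var.k8sports : format("%s/%s", keys(p)[0], p[keys(p)[0]])]
# }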
variable "jumphostv4-ips" {
type = list(string)
default = []
}
variable "jumphostv6-ips" {
type = list(string)
default = []
}
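# With the default empty lists the jump-host rules get count = 0 and nothing
# is created; populate them in e.g. terraform.tfvars (hypothetical addresses):
#
# jumphostv4-ips = ["192.0.2.10"]
# jumphostv6-ips = ["2001:db8::10"]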

IaC-test/images.tf Normal file (+24 lines)

@ -0,0 +1,24 @@
# Default os version
data "openstack_images_image_v2" "debian12image" {
name = "debian-12" # Name of image to be used
most_recent = true
}
data "openstack_images_image_v2" "debian12image-dco" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.dco
}
data "openstack_images_image_v2" "debian12image-sto4" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto4
}
data "openstack_images_image_v2" "debian12image-sto3" {
name = "debian-12" # Name of image to be used
most_recent = true
provider = openstack.sto3
}

IaC-test/k8snodes-dco.tf Normal file (+138 lines)

@ -0,0 +1,138 @@
#
# Global DCO definitions
#
locals {
dcodc = "dco"
dconodenrbase = index(var.datacenters, "dco")
dcoindexjump = length(var.datacenters)
}
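# Node numbering interleaves across datacenters: the index jump equals the
# number of datacenters and the base offset is the DC's position in
# var.datacenters, so dco hosts are numbered 1, 4, 7, ..., sto3 hosts
# 2, 5, 8, ... and sto4 hosts 3, 6, 9, ... For example, count.index 0 in dco
# yields "${var.controller_name}1.${var.dns_suffix}".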
#
# Control node resources DCO
#
resource "openstack_networking_port_v2" "kubecport-dco" {
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "controller-nodes-dco" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-dco[count.index].id
}
}
#
# Worker node resources DCO
#
resource "openstack_networking_port_v2" "kubewport-dco" {
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group ID
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.microk8s-dco.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}-${replace(var.dns_suffix,".","-")}-${local.dcodc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.dcoindexjump + 1 + local.dconodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "worker-nodes-dco" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.dcoindexjump + 1 + local.dconodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-dco.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-dco[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-dco.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-dco[count.index].id
}
}
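# The worker boots from the OS volume (boot_index = 0) and attaches the rook
# volume (boot_index = 1) as a second disk; the rook volume has no image_id,
# so it comes up blank for Rook/Ceph to consume.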

IaC-test/k8snodes-sto3.tf Normal file (+139 lines)

@ -0,0 +1,139 @@
#
# Global definitions sto3
#
locals {
sto3dc = "sto3"
sto3nodenrbase = index(var.datacenters, "sto3")
sto3indexjump = length(var.datacenters)
}
#
# Control node resources STO3
#
resource "openstack_networking_port_v2" "kubecport-sto3" {
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "controller-nodes-sto3" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto3[count.index].id
}
}
#
# Worker node resources STO3
#
resource "openstack_networking_port_v2" "kubewport-sto3" {
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto3.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id,
resource.openstack_networking_secgroup_v2.microk8s-sto3.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
]
admin_state_up = "true"
provider = openstack.sto3
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto3.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto3dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto3
}
resource "openstack_compute_instance_v2" "worker-nodes-sto3" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto3indexjump + 1 + local.sto3nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto3
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto3.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto3.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto3[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto3.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto3[count.index].id
}
}

IaC-test/k8snodes-sto4.tf Normal file (+138 lines)

@ -0,0 +1,138 @@
#
# Global definitions for sto4
#
locals {
sto4dc = "sto4"
sto4nodenrbase = index(var.datacenters, "sto4")
sto4indexjump = length(var.datacenters)
}
#
# Controller node resources
#
resource "openstack_networking_port_v2" "kubecport-sto4" {
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.controllerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "controller-nodes-sto4" {
count = var.controllerdcreplicas # Replicas per datacenter
name = "${var.controller_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-control-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubecport-sto4[count.index].id
}
}
#
# Worker node resources
#
resource "openstack_networking_port_v2" "kubewport-sto4" {
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-port"
# We create as many ports as there are instances created
count = var.workerdcreplicas
network_id = data.openstack_networking_network_v2.public-sto4.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id,
resource.openstack_networking_secgroup_v2.microk8s-sto4.id,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
]
admin_state_up = "true"
provider = openstack.sto4
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-vol"
description = "OS volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
image_id = data.openstack_images_image_v2.debian12image-sto4.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_blockstorage_volume_v3" "kubewvolumerook-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}-${replace(var.dns_suffix,".","-")}-${local.sto4dc}-rook-vol"
description = "Rook storage volume for kubernetes worker node ${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}"
size = 100
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.sto4
}
resource "openstack_compute_instance_v2" "worker-nodes-sto4" {
count = var.workerdcreplicas # Replicas per datacenter
name = "${var.worker_name}${count.index * local.sto4indexjump + 1 + local.sto4nodenrbase}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.sto4
security_groups = [
resource.openstack_networking_secgroup_v2.microk8s-sto4.name,
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.name,
resource.openstack_networking_secgroup_v2.k8s-external-worker-sto4.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook-sto4[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 1
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers-sto4.id
}
network {
port = resource.openstack_networking_port_v2.kubewport-sto4[count.index].id
}
}

IaC-test/lb.tf Normal file (+48 lines)

@ -0,0 +1,48 @@
# Network port
resource "openstack_networking_port_v2" "lb1-port-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id,
resource.openstack_networking_secgroup_v2.lb-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "lb1volumeboot-dco" {
name = "lb1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for lb1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "lb1-node-dco" {
name = "lb1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name,
resource.openstack_networking_secgroup_v2.lb-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.lb1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.lb1-port-dco.id
}
}

@ -13,3 +13,21 @@ required_version = ">= 0.14.0"
provider "openstack" {
cloud = "${var.cloud_name}"
}
# DCO Matrix Test
provider "openstack" {
cloud = "${var.clouddco_name}"
alias = "dco"
}
# STO3 Matrix test
provider "openstack" {
cloud = "${var.cloudsto3_name}"
alias = "sto3"
}
# STO4 Matrix test
provider "openstack" {
cloud = "${var.cloudsto4_name}"
alias = "sto4"
}
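# Each alias maps to a named cloud in clouds.yaml and is selected per
# resource with "provider = openstack.<alias>". Outline of a matching
# clouds.yaml (placeholders, not actual cloud names or credentials):
#
# clouds:
#   <clouddco_name>: { auth: { ... } }
#   <cloudsto3_name>: { auth: { ... } }
#   <cloudsto4_name>: { auth: { ... } }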

IaC-test/mgmt.tf Normal file (+46 lines)

@ -0,0 +1,46 @@
# Network port
resource "openstack_networking_port_v2" "mgmt1-port-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-port"
network_id = data.openstack_networking_network_v2.public-dco.id
# A list of security group IDs
security_group_ids = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
]
admin_state_up = "true"
provider = openstack.dco
}
# Boot volume for node
resource "openstack_blockstorage_volume_v3" "mgmt1volumeboot-dco" {
name = "mgmt1-${replace(var.dns_suffix,".","-")}-${local.dcodc}-vol"
description = "OS volume for mgmt1.matrix.test.sunet.se"
size = 50
image_id = data.openstack_images_image_v2.debian12image-dco.id
enable_online_resize = true # Allow us to resize volume while attached.
provider = openstack.dco
}
resource "openstack_compute_instance_v2" "mgmt1-node-dco" {
name = "mgmt1.${var.dns_suffix}"
flavor_name = "${var.lb_instance_type}"
key_pair = "${var.keynameworkers}"
provider = openstack.dco
security_groups = [
resource.openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.mgmt1volumeboot-dco.id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
network {
port = resource.openstack_networking_port_v2.mgmt1-port-dco.id
}
}
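# Note: flavor_name reuses var.lb_instance_type here; if the vars file also
# defines a mgmt_instance_type (as IaC-prod/vars.tf does), that was probably
# the intended flavor for mgmt1.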

IaC-test/network.tf Normal file (+18 lines)

@ -0,0 +1,18 @@
data "openstack_networking_network_v2" "public" {
name = "public" # Name of network to use.
}
data "openstack_networking_network_v2" "public-dco" {
name = "public" # Name of network to use.
provider = openstack.dco
}
data "openstack_networking_network_v2" "public-sto4" {
name = "public" # Name of network to use.
provider = openstack.sto4
}
data "openstack_networking_network_v2" "public-sto3" {
name = "public" # Name of network to use.
provider = openstack.sto3
}

IaC-test/nodes.tf Normal file (+123 lines)

@ -0,0 +1,123 @@
#
# Controller node resources
#
#resource "openstack_networking_port_v2" "kubecport" {
# name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# # We create as many ports as there are instances created
# count = var.controller_instance_count
# network_id = data.openstack_networking_network_v2.public.id
# # A list of security group ID
# security_group_ids = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
# data.openstack_networking_secgroup_v2.allegress.id,
# resource.openstack_networking_secgroup_v2.microk8s-old.id,
# resource.openstack_networking_secgroup_v2.microk8s-dco.id,
# resource.openstack_networking_secgroup_v2.https.id
# ]
# admin_state_up = "true"
#}
#
#resource "openstack_blockstorage_volume_v3" "kubecvolumeboot" {
# count = var.controller_instance_count # size of cluster
# name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
# description = "OS volume for kubernetes control node ${count.index + 1}"
# size = 100
# image_id = data.openstack_images_image_v2.debian12image.id
# enable_online_resize = true # Allow us to resize volume while attached.
#}
#
#resource "openstack_compute_instance_v2" "controller-nodes" {
# count = var.controller_instance_count
# name = "${var.controller_name}${count.index+1}.${var.dns_suffix}"
# flavor_name = "${var.controller_instance_type}"
# key_pair = "${var.keyname}"
# security_groups = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
# data.openstack_networking_secgroup_v2.allegress.name,
# resource.openstack_networking_secgroup_v2.microk8s-old.id,
# resource.openstack_networking_secgroup_v2.microk8s-dco.id,
# resource.openstack_networking_secgroup_v2.https.name
# ]
# block_device {
# uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot[count.index].id
# source_type = "volume"
# destination_type = "volume"
# boot_index = 0
# }
# scheduler_hints {
# group = openstack_compute_servergroup_v2.controllers.id
# }
# network {
# port = resource.openstack_networking_port_v2.kubecport[count.index].id
# }
#}
#
##
## Worker node resources
##
#
#resource "openstack_networking_port_v2" "kubewport" {
# name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# # We create as many ports as there are instances created
# count = var.worker_instance_count
# network_id = data.openstack_networking_network_v2.public.id
# # A list of security group ID
# security_group_ids = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
# data.openstack_networking_secgroup_v2.allegress.id,
# resource.openstack_networking_secgroup_v2.microk8s-old.id
# ]
# admin_state_up = "true"
#}
#
#resource "openstack_blockstorage_volume_v3" "kubewvolumeboot" {
# count = var.worker_instance_count # size of cluster
# name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
# description = "OS volume for kubernetes worker node ${count.index + 1}"
# size = 100
# image_id = data.openstack_images_image_v2.debian12image.id
# enable_online_resize = true # Allow us to resize volume while attached.
#}
#
#resource "openstack_blockstorage_volume_v3" "kubewvolumerook" {
# count = var.worker_instance_count # size of cluster
# name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-rook-vol"
# description = "Rook storage volume for kubernetes worker node ${count.index + 1}"
# size = 100
# enable_online_resize = true # Allow us to resize volume while attached.
#}
#
#
#resource "openstack_compute_instance_v2" "worker-nodes" {
# count = var.worker_instance_count
# name = "${var.worker_name}${count.index+1}.${var.dns_suffix}"
# flavor_name = "${var.worker_instance_type}"
# key_pair = "${var.keynameworkers}"
# security_groups = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
# data.openstack_networking_secgroup_v2.allegress.name,
# resource.openstack_networking_secgroup_v2.microk8s-old.name
# ]
#
# block_device {
# uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot[count.index].id
# source_type = "volume"
# destination_type = "volume"
# boot_index = 0
# }
# block_device {
# uuid = resource.openstack_blockstorage_volume_v3.kubewvolumerook[count.index].id
# source_type = "volume"
# destination_type = "volume"
# boot_index = 1
# }
#
# scheduler_hints {
# group = openstack_compute_servergroup_v2.workers.id
# }
# network {
# port = resource.openstack_networking_port_v2.kubewport[count.index].id
# }
#}

IaC-test/pgnodes.tf Normal file (+17 lines)

@ -0,0 +1,17 @@
# Postgresql patroni nodes
#resource "openstack_networking_port_v2" "pgport" {
# name = "pgdb1-${replace(var.dns_suffix,".","-")}-port"
# # We create as many ports as there are instances created
# network_id = data.openstack_networking_network_v2.public.id
# # A list of security group ID
# security_group_ids = [
# data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
# data.openstack_networking_secgroup_v2.allegress.id,
# ]
# admin_state_up = "true"
# provider = openstack.DCO
#}

@ -0,0 +1,137 @@
# Security group for external access to k8s control nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-control-dco" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.dco
}
# Security group for external access to k8s control nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto3" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto3
}
# Security group for external access to k8s control nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-control-sto4" {
name = "k8s-external"
description = "External ingress traffic to k8s control nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_control_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-control-sto4.id
}
# Security group for external access to k8s worker nodes in dco.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-dco" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.dco
}
# Security group for external access to k8s worker nodes in sto3.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto3" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto3
}
# Security group for external access to k8s worker nodes in sto4.
resource "openstack_networking_secgroup_v2" "k8s-external-worker-sto4" {
name = "k8s-external-worker"
description = "External ingress traffic to k8s worker nodes."
provider = openstack.sto4
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}
# Rules sto3
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto3
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto3.id
}
# Rules sto4
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule1_v4_sto4" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.sto4
remote_ip_prefix = "89.47.191.43/32"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-sto4.id
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}
# Rules dco
resource "openstack_networking_secgroup_rule_v2" "k8s_external_ingress_worker_rule3_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "80"
port_range_max = "80"
provider = openstack.dco
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.k8s-external-worker-dco.id
}

@ -0,0 +1,177 @@
# Security groups dco
resource "openstack_networking_secgroup_v2" "microk8s-dco" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.dco
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-dco" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.dco
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_dco" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.dco
remote_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
#
# From STO3 to DCO
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
#
# From STO4 to DCO
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_dco" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.dco
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-dco.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-dco" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-dco" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.dco
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-dco.id
}

@ -0,0 +1,177 @@
# Security groups sto3
resource "openstack_networking_secgroup_v2" "microk8s-sto3" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto3
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto3" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto3
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto3" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto3
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From DCO to STO3
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# From STO4 to STO3
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto4_to_sto3" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto4)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto4))])[0]
provider = openstack.sto3
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto4[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto4)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto3.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto3" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto3" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto3
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto3.id
}
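# Note on the IPv6 handling above: the instance attribute access_ip_v6
# can come back wrapped in brackets (and occasionally quotes), so
# replace() strips "[", "]" and "'" before the /128 host prefix is
# appended. A minimal sketch with a documentation-range address:
locals {
  sketch_raw_v6  = "[2001:db8::1]" # hypothetical raw access_ip_v6 value
  sketch_v6_cidr = join("/", [replace(local.sketch_raw_v6, "/[\\[\\]']/", ""), "128"])
  # => "2001:db8::1/128"
}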


@@ -0,0 +1,177 @@
# Security groups sto4
resource "openstack_networking_secgroup_v2" "microk8s-sto4" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
provider = openstack.sto4
}
resource "openstack_networking_secgroup_v2" "ssh-from-jump-hosts-sto4" {
name = "ssh-from-jumphosts"
description = "Allow ssh traffic from sunet jumphosts."
provider = openstack.sto4
}
#
# Security group rules for microk8s
#
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v4_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule_v6_sto4" {
count = length(var.k8sports)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[count.index][keys(var.k8sports[count.index])[0]]
port_range_min = keys(var.k8sports[count.index])[0]
port_range_max = keys(var.k8sports[count.index])[0]
provider = openstack.sto4
remote_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# DCO to STO4
#
# Controllers
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Workers
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_dco_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-dco)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-dco))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-dco[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-dco)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# From STO3 to STO4
#
# Control nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_controller_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.controller-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.controller-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.controller-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.controller-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
# Worker nodes
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v4_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv4"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/", [ resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v4, "32" ])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_worker_rule_v6_sto3_to_sto4" {
count = length(var.k8sports) * length(resource.openstack_compute_instance_v2.worker-nodes-sto3)
direction = "ingress"
ethertype = "IPv6"
protocol = var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))][keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]]
port_range_min = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
port_range_max = keys(var.k8sports[floor(count.index/length(resource.openstack_compute_instance_v2.worker-nodes-sto3))])[0]
provider = openstack.sto4
remote_ip_prefix = join("/",[ replace(resource.openstack_compute_instance_v2.worker-nodes-sto3[count.index % length(resource.openstack_compute_instance_v2.worker-nodes-sto3)].access_ip_v6, "/[\\[\\]']/",""), "128"])
security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}
#
# Security group rules for ssh-from-jump-hosts
#
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v4rules-sto4" {
count = length(var.jumphostv4-ips)
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv4-ips[count.index]}/32"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}
resource "openstack_networking_secgroup_rule_v2" "ssh-from-jumphosts-v6rules-sto4" {
count = length(var.jumphostv6-ips)
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = "22"
port_range_max = "22"
provider = openstack.sto4
remote_ip_prefix = "${var.jumphostv6-ips[count.index]}/128"
security_group_id = openstack_networking_secgroup_v2.ssh-from-jump-hosts-sto4.id
}
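# The intra-site rules can reference the security group itself via
# remote_group_id, but groups exist per region, so the cross-site rules
# above fall back to /32 and /128 host prefixes. A hedged sketch of an
# alternative shape for the same pattern, using for_each over
# setproduct() instead of count arithmetic; the resource name and
# local.sketch_ips are hypothetical, not part of this deployment:
locals {
  sketch_ips   = ["192.0.2.10", "192.0.2.11"] # hypothetical remote node IPs
  sketch_pairs = flatten([for m in var.k8sports : [for p, proto in m : { port = p, proto = proto }]])
  sketch_set = { for pair in setproduct(local.sketch_pairs, local.sketch_ips) :
    "${pair[0].port}-${pair[1]}" => { port = pair[0].port, proto = pair[0].proto, ip = pair[1] }
  }
}
resource "openstack_networking_secgroup_rule_v2" "sketch_cross_site" {
  for_each          = local.sketch_set
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = each.value.proto
  port_range_min    = each.value.port
  port_range_max    = each.value.port
  provider          = openstack.sto4
  remote_ip_prefix  = "${each.value.ip}/32"
  security_group_id = openstack_networking_secgroup_v2.microk8s-sto4.id
}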


@@ -0,0 +1,109 @@
# Security groups lb-frontend
resource "openstack_networking_secgroup_v2" "lb-dco" {
name = "lb-frontend"
description = "Ingress lb traffic to allow."
provider = openstack.dco
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule2_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "87.251.31.118/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
# From mgmt1
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule3_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule4_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "80"
port_range_max = "80"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule5_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "443"
port_range_max = "443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule6_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8443"
port_range_max = "8443"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule7_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "8080"
port_range_max = "8080"
provider = openstack.dco
remote_ip_prefix = "89.47.191.66/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule8_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "89.47.184.88/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
resource "openstack_networking_secgroup_rule_v2" "lb_ingress_rule9_v4_dco" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = "16443"
port_range_max = "16443"
provider = openstack.dco
remote_ip_prefix = "130.242.121.23/32"
security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}
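# Rules 3-7 above repeat the same mgmt1 prefix (89.47.191.66/32) across
# five ports. A hedged sketch of how they could be collapsed with
# for_each; the port list is an assumption read off the rules above and
# the resource name is hypothetical:
locals {
  sketch_mgmt_ports = ["80", "443", "8080", "8443", "16443"]
}
resource "openstack_networking_secgroup_rule_v2" "sketch_lb_ingress_mgmt1" {
  for_each          = toset(local.sketch_mgmt_ports)
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = each.value
  port_range_max    = each.value
  provider          = openstack.dco
  remote_ip_prefix  = "89.47.191.66/32"
  security_group_id = openstack_networking_secgroup_v2.lb-dco.id
}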

IaC-test/securitygroups.tf Normal file

@@ -0,0 +1,212 @@
# Data source for the Sunet ssh-from-jumphost security group.
data "openstack_networking_secgroup_v2" "sshfromjumphosts" {
name = "ssh-from-jumphost"
}
data "openstack_networking_secgroup_v2" "allegress" {
name = "allegress"
}
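# A hedged sketch of how these shared data sources are consumed by the
# node resources (hypothetical port; assumes a "public" network data
# source like the one used by the node definitions in this repo):
#resource "openstack_networking_port_v2" "sketch_port" {
#  name       = "sketch-port"
#  network_id = data.openstack_networking_network_v2.public.id
#  security_group_ids = [
#    data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
#    data.openstack_networking_secgroup_v2.allegress.id,
#  ]
#}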
#resource "openstack_networking_secgroup_v2" "microk8s-old" {
# name = "microk8s-old"
# description = "Traffic to allow between microk8s hosts"
#}
#
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule1" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 16443
# port_range_max = 16443
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule2" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 16443
# port_range_max = 16443
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule3" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10250
# port_range_max = 10250
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule4" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10250
# port_range_max = 10250
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule5" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10255
# port_range_max = 10255
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule6" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10255
# port_range_max = 10255
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule7" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 25000
# port_range_max = 25000
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule8" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 25000
# port_range_max = 25000
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule9" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 12379
# port_range_max = 12379
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule10" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 12379
# port_range_max = 12379
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule11" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10257
# port_range_max = 10257
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule12" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10257
# port_range_max = 10257
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule13" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 10259
# port_range_max = 10259
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule14" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 10259
# port_range_max = 10259
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule15" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 19001
# port_range_max = 19001
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule16" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "tcp"
# port_range_min = 19001
# port_range_max = 19001
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule17" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "udp"
# port_range_min = 4789
# port_range_max = 4789
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule18" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "udp"
# port_range_min = 4789
# port_range_max = 4789
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule19" {
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "udp"
# port_range_min = 51820
# port_range_max = 51820
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#resource "openstack_networking_secgroup_rule_v2" "microk8s_rule20" {
# direction = "ingress"
# ethertype = "IPv6"
# protocol = "udp"
# port_range_min = 51820
# port_range_max = 51820
# remote_group_id = openstack_networking_secgroup_v2.microk8s-old.id
# security_group_id = openstack_networking_secgroup_v2.microk8s-old.id
#}
#
#resource "openstack_networking_secgroup_v2" "https" {
# name = "https"
# description = "Allow https to ingress controller"
#}
#
#resource "openstack_networking_secgroup_rule_v2" "https_rule1" {
# # External traffic
# direction = "ingress"
# ethertype = "IPv4"
# protocol = "tcp"
# port_range_min = 443
# port_range_max = 443
# remote_ip_prefix = "0.0.0.0/0"
# security_group_id = openstack_networking_secgroup_v2.https.id
#}


@@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-dco" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.dco
}
resource "openstack_compute_servergroup_v2" "controllers-dco" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.dco
}


@@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto3" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto3
}
resource "openstack_compute_servergroup_v2" "controllers-sto3" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto3
}


@@ -0,0 +1,11 @@
resource "openstack_compute_servergroup_v2" "workers-sto4" {
name = "workers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
resource "openstack_compute_servergroup_v2" "controllers-sto4" {
name = "controllers"
policies = ["anti-affinity"]
provider = openstack.sto4
}
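# The anti-affinity policy asks the Nova scheduler to place group
# members on different hypervisors. A hedged sketch of how an instance
# opts in via scheduler_hints (hypothetical name; boot source and
# network omitted for brevity, a real instance also needs those):
resource "openstack_compute_instance_v2" "sketch_worker" {
  name        = "sketch-worker.matrix.test.sunet.se"
  flavor_name = var.worker_instance_type
  key_pair    = var.keyname
  provider    = openstack.sto4
  scheduler_hints {
    group = openstack_compute_servergroup_v2.workers-sto4.id
  }
}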

IaC-test/servergroups.tf Normal file

@@ -0,0 +1,9 @@
resource "openstack_compute_servergroup_v2" "workers" {
name = "workers"
policies = ["anti-affinity"]
}
resource "openstack_compute_servergroup_v2" "controllers" {
name = "controllers"
policies = ["anti-affinity"]
}

IaC-test/vars.tf Normal file

@@ -0,0 +1,118 @@
variable "datacenter_name" {
type = string
default = "dco"
}
variable "datacenters" {
type = list(string)
default = [ "dco", "sto3", "sto4" ]
}
# Cloud names in clouds.yaml file
variable "clouddco_name" {
type = string
default = "dco-matrixtest"
}
variable "cloudsto3_name" {
type = string
default = "sto3-matrixtest"
}
variable "cloudsto4_name" {
type = string
default = "sto4-matrixtest"
}
variable "keyname" {
type = string
default = "manderssonpub3"
}
variable "keynameworkers" {
type = string
default = "manderssonpub3"
}
variable "worker_instance_count" {
default = "0"
}
# Replicas per datacenter
variable "workerdcreplicas" {
default = "2"
}
# Replicas per datacenter
variable "controllerdcreplicas" {
default = "1"
}
variable "controller_instance_count" {
default = "1"
}
variable "controller_instance_type" {
default = "b2.c2r4"
}
variable "worker_instance_type" {
default = "b2.c4r16"
}
variable "lb_instance_type" {
default = "b2.c2r4"
}
variable "mgmt_instance_type" {
default = "b2.c2r4"
}
variable "worker_name" {
default = "k8sw"
}
variable "controller_name" {
default = "k8sc"
}
variable "dns_suffix" {
default = "matrix.test.sunet.se"
}
variable "cloud_name" {
default = "dco-matrixtest"
}
variable "cloud2_name" {
default = "dco-matrixtest"
}
variable "cloud3_name" {
default = "dco-matrixtest"
}
variable "k8sports" {
default=[
{"16443" = "tcp"},
{"10250" = "tcp"},
{"10255" = "tcp"},
{"25000" = "tcp"},
{"12379" = "tcp"},
{"10257" = "tcp"},
{"10259" = "tcp"},
{"19001" = "tcp"},
{"4789" = "udp"},
{"51820" = "udp"}
]
}
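# k8sports is a list of single-entry maps: each key is a port (as a
# string) and its value is the protocol. The rule resources read it with
# keys(); a sketch of the lookup for the first entry (hypothetical
# output, just to illustrate the access pattern):
output "sketch_first_k8sport" {
  value = {
    port     = keys(var.k8sports[0])[0]                  # => "16443"
    protocol = var.k8sports[0][keys(var.k8sports[0])[0]] # => "tcp"
  }
}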
variable "jumphostv4-ips" {
type = list(string)
default = []
}
variable "jumphostv6-ips" {
type = list(string)
default = []
}
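# Both jumphost lists default to empty and are expected to be supplied
# at deploy time; a hypothetical terraform.tfvars entry (documentation-
# range addresses, not the real jumphosts):
#   jumphostv4-ips = ["192.0.2.10", "192.0.2.11"]
#   jumphostv6-ips = ["2001:db8::10"]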


@@ -1,5 +0,0 @@
# Default os version
data "openstack_images_image_v2" "debian12image" {
name = "debian-12" # Name of image to be used
most_recent = true
}


@@ -1,3 +0,0 @@
data "openstack_networking_network_v2" "public" {
name = "public" # Name of network to use.
}


@@ -1,109 +0,0 @@
#
# Controller node resources
#
resource "openstack_networking_port_v2" "kubecport" {
name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# We create as many ports as there are instances created
count = var.controller_instance_count
network_id = data.openstack_networking_network_v2.public.id
# A list of security group IDs
security_group_ids = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
data.openstack_networking_secgroup_v2.allegress.id,
resource.openstack_networking_secgroup_v2.microk8s.id
]
admin_state_up = "true"
}
resource "openstack_blockstorage_volume_v3" "kubecvolumeboot" {
count = var.controller_instance_count # size of cluster
name = "${var.controller_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
description = "OS volume for kubernetes control node ${count.index + 1}"
size = 100
image_id = data.openstack_images_image_v2.debian12image.id
enable_online_resize = true # Allow us to resize volume while attached.
}
resource "openstack_compute_instance_v2" "controller-nodes" {
count = var.controller_instance_count
name = "${var.controller_name}${count.index+1}.${var.dns_suffix}"
flavor_name = "${var.controller_instance_type}"
key_pair = "${var.keyname}"
security_groups = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
data.openstack_networking_secgroup_v2.allegress.name,
resource.openstack_networking_secgroup_v2.microk8s.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubecvolumeboot[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.controllers.id
}
network {
port = resource.openstack_networking_port_v2.kubecport[count.index].id
}
}
#
# Worker node resources
#
resource "openstack_networking_port_v2" "kubewport" {
name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-port"
# We create as many ports as there are instances created
count = var.worker_instance_count
network_id = data.openstack_networking_network_v2.public.id
# A list of security group IDs
security_group_ids = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.id,
data.openstack_networking_secgroup_v2.allegress.id,
resource.openstack_networking_secgroup_v2.microk8s.id
]
admin_state_up = "true"
}
resource "openstack_blockstorage_volume_v3" "kubewvolumeboot" {
count = var.worker_instance_count # size of cluster
name = "${var.worker_name}${count.index+1}-${replace(var.dns_suffix,".","-")}-vol"
description = "OS volume for kubernetes worker node ${count.index + 1}"
size = 100
image_id = data.openstack_images_image_v2.debian12image.id
enable_online_resize = true # Allow us to resize volume while attached.
}
resource "openstack_compute_instance_v2" "worker-nodes" {
count = var.worker_instance_count
name = "${var.worker_name}${count.index+1}.${var.dns_suffix}"
flavor_name = "${var.worker_instance_type}"
key_pair = "${var.keyname}"
security_groups = [
data.openstack_networking_secgroup_v2.sshfromjumphosts.name,
data.openstack_networking_secgroup_v2.allegress.name,
resource.openstack_networking_secgroup_v2.microk8s.name
]
block_device {
uuid = resource.openstack_blockstorage_volume_v3.kubewvolumeboot[count.index].id
source_type = "volume"
destination_type = "volume"
boot_index = 0
}
scheduler_hints {
group = openstack_compute_servergroup_v2.workers.id
}
network {
port = resource.openstack_networking_port_v2.kubewport[count.index].id
}
}


@@ -1,197 +0,0 @@
# Datasource of sunet ssh-from-jumphost security group.
data "openstack_networking_secgroup_v2" "sshfromjumphosts" {
name = "ssh-from-jumphost"
}
data "openstack_networking_secgroup_v2" "allegress" {
name = "allegress"
}
resource "openstack_networking_secgroup_v2" "microk8s" {
name = "microk8s"
description = "Traffic to allow between microk8s hosts"
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule1" {
# We never know where Richard is, so allow from all of the known internet
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule2" {
# We never know where Richard is, so allow from all of the known internet
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_ip_prefix = "::/0"
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule3" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10250
port_range_max = 10250
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule4" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10250
port_range_max = 10250
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule5" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10255
port_range_max = 10255
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule6" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10255
port_range_max = 10255
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule7" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 25000
port_range_max = 25000
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule8" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 25000
port_range_max = 25000
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule9" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 12379
port_range_max = 12379
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule10" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 12379
port_range_max = 12379
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule11" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10257
port_range_max = 10257
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule12" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10257
port_range_max = 10257
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule13" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 10259
port_range_max = 10259
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule14" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 10259
port_range_max = 10259
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule15" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 19001
port_range_max = 19001
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule16" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 19001
port_range_max = 19001
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule17" {
direction = "ingress"
ethertype = "IPv4"
protocol = "udp"
port_range_min = 4789
port_range_max = 4789
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule18" {
direction = "ingress"
ethertype = "IPv6"
protocol = "udp"
port_range_min = 4789
port_range_max = 4789
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule19" {
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}
resource "openstack_networking_secgroup_rule_v2" "microk8s_rule20" {
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 16443
port_range_max = 16443
remote_group_id = openstack_networking_secgroup_v2.microk8s.id
security_group_id = openstack_networking_secgroup_v2.microk8s.id
}


@@ -1,40 +0,0 @@
variable "datacenter_name" {
type = string
default = "dco"
}
variable "keyname" {
type = string
default = "manderssonpub"
}
variable "worker_instance_count" {
default = "3"
}
variable "controller_instance_count" {
default = "3"
}
variable "controller_instance_type" {
default = "b2.c2r4"
}
variable "worker_instance_type" {
default = "b2.c4r16"
}
variable "worker_name" {
default = "k8sw"
}
variable "controller_name" {
default = "k8sc"
}
variable "dns_suffix" {
default = "matrix.test.sunet.se"
}
variable "cloud_name" {
default="dco-matrixtest"
}


@@ -1,2 +1,3 @@
### \[Matrix\] ops repo.
Files in IaC are infrastructure-as-code definitions for a microk8s cluster in OpenTofu format.
### \[Matrix\]


@@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGL/dDsBEADK7apJAq+EJ+0I2qmHDl3vZsS9z/e2x4tYCLDLXSDEnDvsmeZw
3/Is555uDGgLYmY91K86l7ef/p7zRR2wbs3nguSo6KTbMiasi6HjvW+PcdW8SqVg
cskocFX7rL44p09+Huesi7JbQZDN46nH+gn39u6Tk8Qpm3gZP9JwptoAZWcJzujW
pY0Ni78KKbSFJjjL2YTisUlGButlFOYEwmvjVJ3+v6DIVeahqxvdtggy7sQZfRDc
u48SMsre/xqHvnLEcAChqK3etIyWtee/CNcn2cOJoVlCZsK/a+btAWC3shL/g64C
lFBjZU/EztXkwk9HPyxFoin985jINzdiWm8nJZeUxv+2KT/XNOLTU2nNSBtbz35L
dhju/P3oRBmfYDzsppDskwlYMaMubWTlntn6GR6u+f89I7xeXsdUeKzprRGo9yO4
Lm/5FsTmwUlkWsyjYt5KrLtCAZCoukc/1TOp8ZixgADO+fCPGixpYjlLSY2SiL/+
zLCHXPJzNi4a4vrO3GrV+zHdnNGdJ4p+VWXACwhnFyLkOW4T/nF+f04BS+J9yFMw
80bk8tC3Ok4HFWFG1XHaVoafpSoy04Zi5gHAAI+xcEPE4te2Gqb/CRZik0RdUzCg
fJFXMJycS6YwuXwUrlgB17mBXHDt7Yg4hiMGziQa+pnDmrgQQwg5WQs4PwARAQAB
tB9Kb2hhbiBXYXNzYmVyZyA8am9jYXJAc3VuZXQuc2U+iQJUBBMBCgA+AhsDBQsJ
CAcDBRUKCQgLBRYCAwEAAh4FAheAFiEEEzdr+JK1hxGBohjpvk7C7q3ywxsFAmb6
s+gFCQXlraQACgkQvk7C7q3ywxs+5BAAlXIVK4vapth7VX56SBE2y/HIRcrGjDlx
N4iLaU16yfma3YYEh2LpWQz7yMTJj+3YUaMuMcozL72BiNAQbK9lKOAqoScaScEL
DKLDg0ngnELKuo4ACFYTiKQ8Iy+Z15WHD2WV/Sf8CM2wuWrvEcYn377Y3qkOapwA
9DcU8D24UjgB/zpPiNVIMJNmwZLljKgzdqA4jVjWQW0UGl1NWM7ynJdPA6H4EShv
3ZAbyy47DO8inSdtCU172LZpONbVjone32tOGaT2yqNE89bRTFcN6/5LhJABaG4F
DALPzSCZjCJ0cYWVNJAmrYmLm+WSuYagAvR4eR9/FClZiSFQ1k/hUakYYtpSxNFh
MrYnps5xr63uFGJ4atVytA779dqy0Y6wwsUCnxq7gZxpAyq8afPdf3km14kb14ud
9wfIZgYPa9j0LRX4AZRPkrRx7021vhCkgjLKRn0zP7FYyBeKJvv4GsuFFg+wCGHc
oorRf8xuC0sxanwmRRkO/3iITNUVK6wYOyJzFiYAnNHZgHsepS7D7nrZf+27Rjp4
eag86shxmhuvSH4yLvQ9L4FK0Oa/Myi3VWo7ckqLhrFe82zENRQ7MzVbZmBkHAdT
NZxZFNOXwsMpsYat6Xb2erJ3ai0XGTqZ1ckdChS2M6FHohk/0H78LfWrZ5DteApr
1PhC9KUrttG5Ag0EYv90OwEQAMsHA8GGcQksqV55xg0TfGqSWtHVhXlA5GK40LLd
8gUn7/0d8ymlo3ZOJWjG9NIpnNQ9mLRW3TsDYC/8ToJ5tlwIS9/fRoCfc7TMFWLC
GrxpyhxrJVzgxZVE9qlKjafKOg/7ojXN8zolNlcUHWH3ehj2Fl9bwsavFDlFphK4
plS5xUUqkjZIq3e40YNSNL4Swt6HWMwQ0taPWVTwcaX5ruN8jV26kFGA4EbacvAy
ezyXucx4dBZSaPhqIHWIKvGrWiNcPfkTxP4v+c6OAm9fXO8ybBVN7kOZKogRgLIM
xsgE0siSt6nKH5k+zJwIhN4l/yaI1I6M/fIVJsLlikED52FdRfpSunh5yrskZ0gg
cPXyyZ7pPF7f/jjIpNBeD4xror/Ejk7lm0WSbUhfiFpQ7sr4lhyq3cpJg5S0P3Hy
WPTc81+8J6iEbdDImwDt/+aG5huBsyYYSWJwa/SKuFdWMZMemG1wundhbgzMvJPZ
RyqvKjKH303AStqU1R7ROvxyGF5tJxlQWky990X1+DUo+YDmrgWgf9/owqWE8t89
OmL7kNfXqD5sgzAKL7fluOfrohBim6KlyN40CWeiT7/xqd4NKZsDiKFqNLZhFTJB
W1uHerqLj4p6T5wOv66yKcaAuHNq8XyP9ypiYZhLHPNc4mh2jUjSlbso4Xn1eRJ0
QOxzABEBAAGJAjwEGAEKACYCGwwWIQQTN2v4krWHEYGiGOm+TsLurfLDGwUCZNoz
TAUJA/s5JQAKCRC+TsLurfLDG+ywEACFvXSt22vx1p3ss8Zk+bZK7M2q46ld6FfN
kxotGZFMGvLojs4Oqa04Wt2fLaZiYWgFtBfMkkYlbAJBFk5C5Q/5ikQkSxIs/Uow
vGQ2F4zphFliqvUNUcuRkqHjCOc61jKygs/V1UaWkY1gjAu8XmqwSt+rGmKhh5Ob
MFlRcgErD9e9KerCHuRmL7Tw12onhfuG5gK60DE0shrxkvZm5xPbjzysin32Pc9+
sK09PDIn6nFv8kfYBYcpfeFxaj18cMZ5lqf3WNwRYJk4Znu7eZTsUiIgzZ5BBrpq
OBX3LoOrb89s7PLSNfg+dzKQBj3rBCEvkklzZHcFH5u02DxepvyGnd6FljQbnjGU
J0OPjn4jcpdFHpGCG5Z//01qlr8+xx7kQiXFv+ENwrAbsKI1RW243oi7qwR0h6+6
RsNn/EESXAszeJNKDxAoLh477bM0FsZn5BpG6pDhNJMVQ7M55r/AE4xD1QaoyXtc
HYHedGVYhofLw3vyv6hsPNJiS/s9LwXf/jMNAaM+p5gFbnKRL00/0ix0zYf6x6vc
VbQhYLD0yTw3Boy6k9rHrfLNQdwkYWpk/JY1ruEGSMjbhiyvo6CH7EhLPI59Sj5S
OrEXCYTUCVVU7PUvs7QVcx5GgzBtmuBg47Ep2w9RfYhdG/5hTF45zVhstPNB9jId
/diBWsGA9bkCDQRi/3S9ARAAzZJjGVo7GtqGXyLCvvMDtMPDIDGtWllDmV2oYI+y
YPggUsWN3lzWvAUaE/YLxxFknU/TegCGNCMQog7NCmZgeAlf5Od9nDALOattk/VN
YyxD/BQOs11aMhBr4WP7+WkaaAjhMGaRwkadOMRIhLcOMplwNCZyOv9mKfptsHYZ
MmAY67/8QnqHiIY7TB2lUJTMJfyy0kWmg92EXPYPFpp57WabM9gSAzBi4SPEBf63
hfpfARTQMh0G7hYZH0IJja2tyrAKjSMFdmzGUY3vk5083hEYxsXjP5DoWARLpVXX
8VDlRRN6Q80xtVPLK1XYnOPfj5X4aBSUSzaPkwE5F2ybhygiQOJIw7+xcN9mO3eq
axGoah//FXQLI5r9muugQAY/+WJf6aepgwgPl8uuuwLIJOqiih+TQHqh0kMe7Ovk
4IrV1DGZ+v0nuvpdVneN4lSvjefMStjDAPJQDWkmHXUyeJPMBRBXWI42sRqjzhBN
s7ShQGU6eZwYvI0cS2dZny12ca7vBz1Zgkj8cfv+G2Xt88jDxxqm/HxHP57jZHZ6
IKOKKV19sOuJgIhSWX7VkPpOVYoE9ZfQ1DdCO8Du6USNPEqPFvP24lJqGqPtA4CN
5F1vTwpRHM9EHG7zgkHlZTRNlKMApoimfzrYJt1UxsvcpCO4mG8PuVGbOGdAKfvn
RwEAEQEAAYkCPAQYAQoAJgIbIBYhBBM3a/iStYcRgaIY6b5Owu6t8sMbBQJk2jNd
BQkD+zizAAoJEL5Owu6t8sMbhPMP/3SrFn1VTnVJW685T9bKDSOJDakBae4m1YAp
CWLPQ+S2/MF/7d38MgpgM6c7gai9khfxgr0JvUR+gD1Y9GGCXU9cIWP0cYmLBhpu
b5PEnbyZI6DUc0bKyAbfnAVWJv+toj71fLt+Lo7V9n38DafnKAg3YtxOxif9Th9t
vRKQsa5q9VPj0OOFfj/PyubZgMRqZn7uamAOMRhtKX3x41K0At61QGzhecmw/6Xl
APiJ+lbsjvX/Cgn/mKpKIw65q6ehABo1T0Ls+eVQRef+RDmfIGO4D8RUu3G+jt+I
wOfY49vKxhi6rTuC18HyxPrs7uwjpxUj7CDM/LKt/tXQoffc63F9GtREdzmJHSE0
9UZC2gcwYt4mziB9b7qbsrjubWC3b9ivcFdvWPmWgPcIgFJYroYKfn/DITnfjrlR
8EbmyoCVmqgJ9hTvLU/0z6bW0sEIkpS5ameCau0X933G6TacAEQvdy7WonzUrPsy
/GUZ3AFJqe7Eftvt0D2PTvPXetVnG6LJIp/OikEoNy7TDkFAS9yvB+KXaJ+iOg9x
BCFc0lA0rh2PvbQQdyg2YPStg3o43hlKAl/RsyYCAvIUFWggHnrk/pLbBMxVnv+T
/tm3SFDgtag4o/tI295NpFiroDu8zhPJTv2F2GxGPZmNawjw8hyqy2lF8oH9tD6N
mhxC7iIM
=P0dT
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGL/dDsBEADK7apJAq+EJ+0I2qmHDl3vZsS9z/e2x4tYCLDLXSDEnDvsmeZw
3/Is555uDGgLYmY91K86l7ef/p7zRR2wbs3nguSo6KTbMiasi6HjvW+PcdW8SqVg
cskocFX7rL44p09+Huesi7JbQZDN46nH+gn39u6Tk8Qpm3gZP9JwptoAZWcJzujW
pY0Ni78KKbSFJjjL2YTisUlGButlFOYEwmvjVJ3+v6DIVeahqxvdtggy7sQZfRDc
u48SMsre/xqHvnLEcAChqK3etIyWtee/CNcn2cOJoVlCZsK/a+btAWC3shL/g64C
lFBjZU/EztXkwk9HPyxFoin985jINzdiWm8nJZeUxv+2KT/XNOLTU2nNSBtbz35L
dhju/P3oRBmfYDzsppDskwlYMaMubWTlntn6GR6u+f89I7xeXsdUeKzprRGo9yO4
Lm/5FsTmwUlkWsyjYt5KrLtCAZCoukc/1TOp8ZixgADO+fCPGixpYjlLSY2SiL/+
zLCHXPJzNi4a4vrO3GrV+zHdnNGdJ4p+VWXACwhnFyLkOW4T/nF+f04BS+J9yFMw
80bk8tC3Ok4HFWFG1XHaVoafpSoy04Zi5gHAAI+xcEPE4te2Gqb/CRZik0RdUzCg
fJFXMJycS6YwuXwUrlgB17mBXHDt7Yg4hiMGziQa+pnDmrgQQwg5WQs4PwARAQAB
tB9Kb2hhbiBXYXNzYmVyZyA8am9jYXJAc3VuZXQuc2U+iQJUBBMBCgA+AhsDBQsJ
CAcDBRUKCQgLBRYCAwEAAh4FAheAFiEEEzdr+JK1hxGBohjpvk7C7q3ywxsFAmTT
lSYFCQP7M8UACgkQvk7C7q3ywxu24hAAj/FXBwFd4jxT3NWQFKyuLM/Q1NRnSbg3
vvPggtyybCMQ8Jgak+H6dTXfA4/ltqeEJopqJgpbVkLOC5fatbzyOcFSR20oYbw7
horamRhLhycS0PA/eYPE0THeGX7izgU7ds8vNNL+UwsgtUsS4e8iW5wfumi6oFq4
oFyF/fxrBns7ECF5K0eErw56agpCW4dnjM3qs9EggQmzBz8UibuNsFhhDpw950BG
WzXdKY2w1QRkpm1ZAQgknl8ouYKvFB5efuSeypyx6cHNZg5njna8pimL2zRReDBy
Z6dl/i2fHvWqq8vK+OUJbpozPdpRzUsRPlIhRaP4ekOhGDZnX5sDIbgN8d2ivGhW
XQUOqyBJhmyh1LGZw2A3/2OoFsNNZdSQVYDgYfhLlkfNXJRPO8ivMvpNpjIt3vji
iSov/RvZe39/KvnH1S03A1bXm+RsilWnJYrnY9DpOwwyJoVpJNfT/XoGjnH7b4PA
KsSg8lkbix7PocGLuxH9W2iC9gDqZNROVtbk2AUbRXYFpAC7O4xGEOWr6766Zr8U
bkQCO6RI66M+TGVVP22A0j0H0ViaTYw2x1vwYQI7s6ZiQ3wIp5uJZRtyOThz44Ol
sXhlX8R2SI+2T0c7tKyrJB5kVuLlma3NflMy+vZmwU7E4xTYYNBHA5A+8eqCRC0d
jrfF3bZThEi5Ag0EYv90OwEQAMsHA8GGcQksqV55xg0TfGqSWtHVhXlA5GK40LLd
8gUn7/0d8ymlo3ZOJWjG9NIpnNQ9mLRW3TsDYC/8ToJ5tlwIS9/fRoCfc7TMFWLC
GrxpyhxrJVzgxZVE9qlKjafKOg/7ojXN8zolNlcUHWH3ehj2Fl9bwsavFDlFphK4
plS5xUUqkjZIq3e40YNSNL4Swt6HWMwQ0taPWVTwcaX5ruN8jV26kFGA4EbacvAy
ezyXucx4dBZSaPhqIHWIKvGrWiNcPfkTxP4v+c6OAm9fXO8ybBVN7kOZKogRgLIM
xsgE0siSt6nKH5k+zJwIhN4l/yaI1I6M/fIVJsLlikED52FdRfpSunh5yrskZ0gg
cPXyyZ7pPF7f/jjIpNBeD4xror/Ejk7lm0WSbUhfiFpQ7sr4lhyq3cpJg5S0P3Hy
WPTc81+8J6iEbdDImwDt/+aG5huBsyYYSWJwa/SKuFdWMZMemG1wundhbgzMvJPZ
RyqvKjKH303AStqU1R7ROvxyGF5tJxlQWky990X1+DUo+YDmrgWgf9/owqWE8t89
OmL7kNfXqD5sgzAKL7fluOfrohBim6KlyN40CWeiT7/xqd4NKZsDiKFqNLZhFTJB
W1uHerqLj4p6T5wOv66yKcaAuHNq8XyP9ypiYZhLHPNc4mh2jUjSlbso4Xn1eRJ0
QOxzABEBAAGJAjwEGAEKACYCGwwWIQQTN2v4krWHEYGiGOm+TsLurfLDGwUCZNoz
TAUJA/s5JQAKCRC+TsLurfLDG+ywEACFvXSt22vx1p3ss8Zk+bZK7M2q46ld6FfN
kxotGZFMGvLojs4Oqa04Wt2fLaZiYWgFtBfMkkYlbAJBFk5C5Q/5ikQkSxIs/Uow
vGQ2F4zphFliqvUNUcuRkqHjCOc61jKygs/V1UaWkY1gjAu8XmqwSt+rGmKhh5Ob
MFlRcgErD9e9KerCHuRmL7Tw12onhfuG5gK60DE0shrxkvZm5xPbjzysin32Pc9+
sK09PDIn6nFv8kfYBYcpfeFxaj18cMZ5lqf3WNwRYJk4Znu7eZTsUiIgzZ5BBrpq
OBX3LoOrb89s7PLSNfg+dzKQBj3rBCEvkklzZHcFH5u02DxepvyGnd6FljQbnjGU
J0OPjn4jcpdFHpGCG5Z//01qlr8+xx7kQiXFv+ENwrAbsKI1RW243oi7qwR0h6+6
RsNn/EESXAszeJNKDxAoLh477bM0FsZn5BpG6pDhNJMVQ7M55r/AE4xD1QaoyXtc
HYHedGVYhofLw3vyv6hsPNJiS/s9LwXf/jMNAaM+p5gFbnKRL00/0ix0zYf6x6vc
VbQhYLD0yTw3Boy6k9rHrfLNQdwkYWpk/JY1ruEGSMjbhiyvo6CH7EhLPI59Sj5S
OrEXCYTUCVVU7PUvs7QVcx5GgzBtmuBg47Ep2w9RfYhdG/5hTF45zVhstPNB9jId
/diBWsGA9bkCDQRi/3S9ARAAzZJjGVo7GtqGXyLCvvMDtMPDIDGtWllDmV2oYI+y
YPggUsWN3lzWvAUaE/YLxxFknU/TegCGNCMQog7NCmZgeAlf5Od9nDALOattk/VN
YyxD/BQOs11aMhBr4WP7+WkaaAjhMGaRwkadOMRIhLcOMplwNCZyOv9mKfptsHYZ
MmAY67/8QnqHiIY7TB2lUJTMJfyy0kWmg92EXPYPFpp57WabM9gSAzBi4SPEBf63
hfpfARTQMh0G7hYZH0IJja2tyrAKjSMFdmzGUY3vk5083hEYxsXjP5DoWARLpVXX
8VDlRRN6Q80xtVPLK1XYnOPfj5X4aBSUSzaPkwE5F2ybhygiQOJIw7+xcN9mO3eq
axGoah//FXQLI5r9muugQAY/+WJf6aepgwgPl8uuuwLIJOqiih+TQHqh0kMe7Ovk
4IrV1DGZ+v0nuvpdVneN4lSvjefMStjDAPJQDWkmHXUyeJPMBRBXWI42sRqjzhBN
s7ShQGU6eZwYvI0cS2dZny12ca7vBz1Zgkj8cfv+G2Xt88jDxxqm/HxHP57jZHZ6
IKOKKV19sOuJgIhSWX7VkPpOVYoE9ZfQ1DdCO8Du6USNPEqPFvP24lJqGqPtA4CN
5F1vTwpRHM9EHG7zgkHlZTRNlKMApoimfzrYJt1UxsvcpCO4mG8PuVGbOGdAKfvn
RwEAEQEAAYkCPAQYAQoAJgIbIBYhBBM3a/iStYcRgaIY6b5Owu6t8sMbBQJk2jNd
BQkD+zizAAoJEL5Owu6t8sMbhPMP/3SrFn1VTnVJW685T9bKDSOJDakBae4m1YAp
CWLPQ+S2/MF/7d38MgpgM6c7gai9khfxgr0JvUR+gD1Y9GGCXU9cIWP0cYmLBhpu
b5PEnbyZI6DUc0bKyAbfnAVWJv+toj71fLt+Lo7V9n38DafnKAg3YtxOxif9Th9t
vRKQsa5q9VPj0OOFfj/PyubZgMRqZn7uamAOMRhtKX3x41K0At61QGzhecmw/6Xl
APiJ+lbsjvX/Cgn/mKpKIw65q6ehABo1T0Ls+eVQRef+RDmfIGO4D8RUu3G+jt+I
wOfY49vKxhi6rTuC18HyxPrs7uwjpxUj7CDM/LKt/tXQoffc63F9GtREdzmJHSE0
9UZC2gcwYt4mziB9b7qbsrjubWC3b9ivcFdvWPmWgPcIgFJYroYKfn/DITnfjrlR
8EbmyoCVmqgJ9hTvLU/0z6bW0sEIkpS5ameCau0X933G6TacAEQvdy7WonzUrPsy
/GUZ3AFJqe7Eftvt0D2PTvPXetVnG6LJIp/OikEoNy7TDkFAS9yvB+KXaJ+iOg9x
BCFc0lA0rh2PvbQQdyg2YPStg3o43hlKAl/RsyYCAvIUFWggHnrk/pLbBMxVnv+T
/tm3SFDgtag4o/tI295NpFiroDu8zhPJTv2F2GxGPZmNawjw8hyqy2lF8oH9tD6N
mhxC7iIM
=w3OF
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,86 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGDHZkgBEADTRYoZqk3uXBusvXTxT9bheOKzAvgOD9MVzn2+nQ79sUtvdosB
FHmr737cutILHl6dzn7B6R6FPvLnoDIaoSpIdBUePLyvNg2/XjQOVfb5ONyXxXIf
iDWLtHNa5aGmKXjGFagY+1LEEh6v4cDZnu/KSiOc5KhDQsiMohe0zR39KPraE1bu
IylESf7VZb/HYqmXQqwae41vgIIZ3HkDfnDYfqWHsFBsF9nrCBqgJRQjQlh9eusd
7hGsY5ZdXawvF2vDXx917asr6b+deNb072+bvM6GqnKg68Q4rhGN+y7eO4Jzm9To
yhSggOig+dllwDzVT1Dx39jdSaHVGeQVmouym5jT3HkS9VKE9uKef/Oylf6Pjom5
Z5XbrWd+mPZgZed61yxFCT4Gs53cqt02Ce5vDYU4aJhwiDPG9zlO9kQNf6P/veik
Ni50gdnboC7Tb5Vhaud3s9CTSUPfJbv509X+anuJG+yFpbYxrKgIKHIvnT4O9XYR
OwpaCc+VI2scXyfR+5qorya6aHguop9WsAk2xLpM2gxsDi4E07HURkOb7M+DAhEJ
U3eHREaJOWcVBgArrKoMFbvfYmMZKxCJByJ9qQPhhqstmOzMseEUZlcTasiegYg5
4P9KDW5QbEbnTBuA+ClS8dxU+XHp6KfDrAd2XQFT9CF7V/6VXhxYFSaFvwARAQAB
tBxNaWNrZSBOb3JkaW4gPGthbm9Ac3VuZXQuc2U+iQJUBBMBCgA+AhsDBQsJCAcD
BRUKCQgLBRYCAwEAAh4BAheAFiEEIpL7dwHsMfazpY3ODaCnpXCP4lcFAmY7fRUF
CQk2fc0ACgkQDaCnpXCP4ldfTQ/+MnbkICnnGvEyTqv3Z6UN0InVhKxnGh0y6/q/
10RR0bN6gmn79CV1BMbuIIpwTBMe47oQiVuvF+Qypf1AJoc5HSqF1V+yeDN+a0yf
cj3CXTQ1Hr/zCBhUCy2jZQLYyL2oL961XPnXsrMV7gkjupAXPzG7u9CrTX5gfi5w
RzmMGLJfYtN0h9DU8ShyUk1YFJKHlLCZZBgjwT6ikX69Kndl+PDTlW53hhsgwoAL
vIJKH0dKL+LVh/AYY2hmkwwvbH8tocFAQnPEXS4v9loug4wRfVikwtro3/xlIHyG
1oMzgB1cPJBoO/wIpe0WIypcQnqZSCnm9n/QZhRtHhZC1UNvb1RlfqJaL3ygXsM6
L9BTyS23/lKBRApHuqkr8VfP0oi28Vygmg61Xokwh1oFMfd/2Vn2MJtRnPLVtimg
JLEIfuuhz5rvXjIE+2oJCDHK7aoHPukdhTdmxjYPhbXHHm1hqm5vgbVUNHTsIKWi
P94FDgFet8yxjGnnlvb3sfbOFRtaoEIiBaokBIVgX5ktjyBILplaWVEXUR4rOjpK
DM0p3RICJFrkdVxxD9oG5OxQ6dGjxBpce4293VqBmNJILPyXPI1IZyA6gs42suu9
cfqW90iOemO+rJ5y6kHrPTcYoxZuyBJHuqBSfvUO3pSM0JGonvrWOoPfN/tCl9JY
vGsyQxOJAjMEEAEIAB0WIQST2nAGwucEPowz7RumwVJzjQPH0QUCYMxZ+QAKCRCm
wVJzjQPH0Z0OD/0Z0aWXOwISYaws9RFbCdRvEADZKEnYZ+rgv30VLvrmcvk8VNrR
4m2bLmnKzMEJrXu9jegbhRNwq6kgFyb/j1P62RnriWaSXytVgLTIO6fh/qNkj0S5
Z2tB0+/Ndml7qpFM1iXNiF0g57q2LAHUyasLiOH0Kj9Q+og/EfMYtjRI027S4DCd
ZgFKbautW+TyxnmuZkHpRMNg8RJYkp4S0GsjmPOR17C6B+Btwp2hhuhg6QRqk8+b
REp+lvKGc+2AoHeaoe+/2qnDUFeNZQHCW4MADXi82EQgz874sXV8vAJzn2372Kaf
uTcr65/qUX0aDmT0LUEFAO2TQm0ho/ysVd0SpJ4jyof8WSw5gSUdgFUI2KnGZUYj
UzhZ6A5VjOKqRanamQ/Ja7ms+d+86KT0INX5QFz0F62CencEkPOoGDTsUAWrLLBR
jOo1FxhXIrCtWjdY98TnGrQ0f7CgyncLJ6ST5Gt9zz2+GTfmRhwa88civD+5Kb1W
jGnzjTP+CGqC1s7RRjsU9iLNAOsEzpGwL7vtRwVcWLSqBlVtK3j4BcUalsErvJAx
T0S99EomPwJGKSWSbHw6SOYz7lxihxNJFdJrnLtH3+LZEL8XfJN275pN8INF3wJU
AUB0CO18I94Ep0wXjkyqxNjf0PiCe8yrRyeafPZDufVD8TlV40Y9qiZQYLkCDQRg
x2ZIARAA762zqX4J1+rk/JGDYH6guh3UMM4M8e6X0WHYxo/9V2dNW/N1fyk1GQR3
PcrPmUALsrBb3bQvW76orZmSnoP2rt5b6uR+y14/WrG2r98Pa9dYnKDO9YeozcTh
ZRj/1gcRwVHAhNLIwE5B5c5h65c4wR5/ZUrhYp7X4/ugC7m8CL17df/2sK+m33Ey
IuyGzVkNMn9lxWH2V6k1As6HsPA028XsINpOXAPqGDGqAAWkU9bVx0GYCtF6BUpK
E+0N/OG7MzfOvBxmrSP6NyoWx4XMKzRwqb57QnPnCbMVifi82o+n6G/00GlqDAJ9
cHL/d+XA4XPmpL5bDYPPOdD+oU2xVNBgdcs6+wYs1vm3QdsPDNB+KRMCWbMJLxJZ
CTtkDadqYjhTOd6fLQTpAshaA9RjSKvEZqnwr63lNDYcX03trAoD060N8HQxyVMG
qx/YEcHJili3iIkiZwdCrQ9NISrEuAtIQzDnirTuxPx0Cjl7gL4yY3mEvJkQ2tWW
3hm3vPe4AeUar+ai6t+RqLGoehu24ImYhrJjhOb3YYH2ZkgzS7bF9i5+xmbqFxfT
W/dznKUBaFFvisA4JG52pT8VnYWyBo2Q6GjaL5m/azTN3CZtp5VAwUuQyRLasL8h
io0e6x7G9rcrxpB2mEg/s/PVcCmGD1iTTrfgGIUz5hTxdCDivsEAEQEAAYkCPAQY
AQoAJgIbDBYhBCKS+3cB7DH2s6WNzg2gp6Vwj+JXBQJmO31MBQkJNn4EAAoJEA2g
p6Vwj+JX7OoP/2FoNVz+KT/QcGFm6xF6TKFMeVxuaBqilsa94xQSKZ55BaepBL00
pEXM+38eIU2ogNhLE8m8T2BtarFphzhALkGEI/XVtqfqzatMt1TLlTTkWwmRO9lF
2zkq9e2TCLebgOuZfNEK9bpIc+/+dRsUcaicf4e6xAEzP8IFeTxuzD1FussMNC4f
c9RrZxA9BGisrXyNAezuiAtgHg6j3AKQhTG95NUacZIJyWcaOMGMBs2HcZ/ranpF
xnfpTqUlEuY6jk97k71beORy3mvH9U2MmadaoOjuSeXRBe1IgNTkCa8AR+rkmW7d
utkgTNe5SYjuxptz/Pqzi2i0MpJev6p3AT+x5dCET5TLVy1vJe06y/eSgmYvpGw9
ITDdgHhT3OL8V3k08L7gXZXAyAAchSYgipAOL0qxosYyyxq4TGDu/3wzhRf6nZ64
Hu1tXv3iSniqx5HAq0WCl4e4YPjOD7yZvCrauctgBf2Sus/lUwoCDaIlWJjw0BN/
TkKhPsuD267ORioHpONCfjhxxjZYDsmShATIohq2nqEl8+/a+Rtsv5QisfPhr3js
RSdSOSgOGZgDR0VSaX/NZUcolIOhq+db3IoDhwkM9YxGFX7TGLiASNJ1mYE8O3nZ
YS+AOdohIL0NQ0OLgnWco0/TC4ziNxF0b9pb8+xlwImoRMrFQVkNu6UjuQINBGDH
ZnkBEAC5Llc/yl585Uj1CcJPcImWKFRkLOL1OhHhIHcVgj90eqoYz0vtmaw+MzlA
j7DgwdtXb1WRAjjoulLZhEkHQ6iL9VePMJFqxN+YKvl+YZnJuOIAoH0CvS8Ej0Tz
ZV2wuhchrWo5YrhVqi9PfFEt5xSHq/B0EFl797R6bFF75g0OE0EdJxtd1UmKQLJx
tn/6gZoa7Z4ZuZqm8lL8cpBdm4qWFUGaz8CpCVwuGK9mdoszU/74tWkEcKnYD2DE
IC0B/lZ9BeluRgw3Qf1Grf8G9D44OjbB+QkuiO34ru2hVKjTrfCnDEq+pfPzoNXV
VUIlAxvoOqjCAnKZv080cJq3fYwjMkMTfU4JaH9y+Byidft1wcgV0T2aayUBMEuF
6FbblUhLfhi5C04IfnCWYarquNfLkGy1LnVcejDG17o77Vz8oLlJ8kThMPdOt8hb
OZjrdO7y9+Olk0QPYme8AW0sQTthM4+5mlQ3bHIX40QRoA6xm4+gPISqZQhdEmHR
9iialCsx4KV2qpBkeNsvnBuC54Ltwmr5/nNSpKkfPJ8t7wKe42DPhxvg1Tb+GV6Y
IhDYJaHzbT1OVLO9X9YsjKGxtF6kxo46+0rOx3FDfYfG77qKKc3XmDaJLUcwVHO+
PlBAWnfvMuWzSLWFduOHvm9gb49jsxw4rAB8iYLO8YHv4eqkhwARAQABiQI8BBgB
CgAmAhsgFiEEIpL7dwHsMfazpY3ODaCnpXCP4lcFAmY7fU0FCQk2fdMACgkQDaCn
pXCP4ldKOg/8CkqL/5C+hDeMVxPzFwlMuRoSuixS9odZjyQ/2R9Q63TVe570Ilv4
GNdkylzNy6qLGKw1U7Qr2pYgunB84Ii7VZ5zjh9SY/MB8/vS9AYseAEzl2QoU23k
00t6E+VQckbrz3BCKOv3vmLmb4L0PbADPYcYqj4hnXSwwXli5odn+f/AogEZmMrl
UAA9iM5cBrgqEzjHjpKUXbsY4ms+evO86Ei5h1soWKuRGcc5JIkH9mswA3UxRPFL
VVUgxfBUCjrAksKO5ke0lGVYRmOEzCrxWsQUH6fqE8rip6BdCCuB1CXHtyYu9q8a
d26da6N97V/cfGCz6e6Yl/JMIU1NXr3N6mUnXNO4hUZVXX6Knz+NTdnP/phAZPCc
AUUk0RrHvKIfYpEV66/MpTinc4rWHrS/5UJMwoFAafe8SemyHxqzK8b1Wz/IEN/E
RmddBqeNK4Uzs7guea8aL53IBL3Z8Ja4quhhulOaCd6rdARWSZSY9wwEXoZEDjac
IKKIHl5Po7shmWCnyIaTCqMPiJ9a1odvH89QEpJPgRsFxUw1TnQQHBm5extA8wcj
+mr/oi+rYGQlC1jZC9Up76ThE93BFcipFwanZx2lHC7j8gHj13aCYwqhiRO2nJsm
7vv1agpa5q0Zod4KgxR+oevGelHfzuY9BHnErkvwWhq9aq4hkXiqWqA=
=GNcK
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,118 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBFJK9qIBCACypED81H1N4YmhMJrb4uOtTDzo+lFZDVVOcK11+NhTFl+AZZFn
WH/7UPn+q5ZZBd/IhONfb5QGw5FzTyBWHsbAteXgCvHAIyumwhQzhZnow6myyC6/
MwDhomT5rb3MkCKCyQMNfj/yMgL6ZRsXVhlGOLMmOekRfKe2wiC5BhRaQQwPZPwg
FS5D0Tro8Xfxjk98u8rNpQXi9walRAffRY+byhkPiBj0sVA2RXK9Dx2DL3EY0xx0
7r6Qhs2XkbXNDDCHRuChhHSHwWC16VS9x7Nhfg2EwKqmMGRNREikjwzDl/aHKz+F
XTLONdmc83sRyklqgH90f3na6s/RT5XTb08xABEBAAG0IExlaWYgSm9oYW5zc29u
IDxsZWlmakBub3JkdS5uZXQ+iQFVBBMBCAA/AhsDBgsJCAcDAgYVCAIJCgsEFgID
AQIeAQIXgBYhBIqgkoGkEvxr5Qru5Nc61kMK1HjWBQJlHtIWBQkUtQ70AAoJENc6
1kMK1HjWc84IAKIABd093e4sBJA2d0JBSroHf22jM843DIrMpmfMVyEnInWZCZfA
oeRu7aVINlb8x81j+XZu/11NW3RTgy3A9M/OiwpaYubwrDnzWqV0W3iuicFZ0ywk
fMSeiDbjjbzRUEUmm7GoyRK99ZTKD/Q7WJvQw8inHupB8Do9Vf9dJLhfdiFZ+nL/
6Fl5mkbuKkQLgywGlC8jgXQ7ohUTHNtZcr1bNrLQ24BnqFim4MKYLZeWUJG1i33v
uoKujcB+jkaBRn/jaDo9Sd37jXonQnQBvWgqV66dGuagby4L0e4HRgQKgLeSrUfJ
SlUQhifkyiU1iI/HL3COVnfUZT1B+/9ww7eJAT0EEwECACgFAlJK90YCGwMFCQHh
M4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJENc61kMK1HjWzCkH9iPY5J5f
vEDt/mhHQCIzxBUaovooraxD7DuMMzYB1UrJPiw2SmdnCt6LaI23BQiia6ewB4AT
79HLLBqDqBNY3djZ8pp0v6rHsb9h3Dc8a+o/RdldmIKmq691V6nQlhO9rRbRmrpW
2ZkznRtDScja1C9gfVLjf+egiaVDhiqoTfAbz5JDUXubDARlR1pDuLFHjebjZT+k
ve0ZD5TWJhI1cSh1wEQGtSh13QFXO98legODlwYDk5FFuZp+NsPgCP7bBt1JBe9D
SrPq6OL2DZyuh+clt33UoqB2mr6DKLn2JJgqwUOo78j3HrLRkNtdPnqmU5YeSjwa
WwZrX9brJdb9QYkBPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AF
AlQuS2UFCQPEiDEACgkQ1zrWQwrUeNYt1Qf9F489tYvbRpKMdehExF7T2omKyrn4
RjRckKBxLpbQ/F+dqApO8kMyTCokYLEDonHLh2dUEsDyflJhq8RGhs6dnWpnFRLW
0A9sc8JqiJLJDBDSAHRudq/6Y9B4s5LYFs7bFgdSuh68W7nQjxD4lEmymYpgWLw1
9mJ1v99HaMx7mkJ5fEZN2krFb8bYvYj8LpA6nGvkxp1zoqLv2Pj5gTdiL2z4ns79
+ZFWFAgj34FJSPNf5WLhUmRerIOU5Dnuk/DMsLyHw/mjoidsGfcPK7imTB7ZiSKJ
apBSOWow1rX1iR9JB3yu/z4e2/FR1fCnmDX2tO44bIQihjQl7I/NGfp4l4kBPgQT
AQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAlYSe5UFCQlrH3AACgkQ
1zrWQwrUeNbEjAf/VQ8uRtxiyneECB14wjar6q3Gb4m8xblg5rKsrIDcWsV4Sz8r
d73Xfs9YpQ09402khDhuO/9go5znzNwyF114hm7WnqhvnbcpD84+boviQ5tncSa3
LYiyCqYLdkh3OZB6vFQDj2OmHEpiUcnqyxqDCYKAo4rSjNBHVAmz8xSv2OM9HHI2
B2qLVkYbpyp8iJM3/6Gn6G/49daYYt286UQnKSjf+BYv219xDRBGuKhXAwinueJf
6DEoDhtPRs6ZIesFCkMZRprrQWtoopyxikc7xC3PfLuPD5HcVFy7+N4mAwImv+lX
b5E7+te4MJGEPhIpqFh/RR9DCjf5hq1UnmUHULQdTGVpZiBKb2hhbnNzb24gPGxl
aWZqQG1udC5zZT6JAVUEEwEIAD8CGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheA
FiEEiqCSgaQS/GvlCu7k1zrWQwrUeNYFAmUe0hYFCRS1DvQACgkQ1zrWQwrUeNao
7wf/YS51lOO6jpxwuzTRillJLlJsDalsJCx5kPO1aEtzC/LJDpZGPawLwoM45Wd+
ue1LyzDgT6lLztEJlfLFFiFkU6A1aUn5Syqr9Uav4PHiDL4/L0vOHbBMtfcMp47g
sQHJv4Vtoq0DyctJefmq8GjfqzRO2PPE9wyN2Ux6JMY7zCZXqS2zzoowa2po1kUO
ynHdUQJQlLXl31job4YaJ94b+zgOmvbpQJHhx0lXvw16xYct8bohiOWUMOLOngzo
mBzNIlrjjzYURbyBLBj7RzFAIUioBTEwN0yx1akfIhrqbvF13vz68/OeEsVAjfUj
6Ips+6qail/2QKVdjru66Ju2e4kBPgQTAQIAKAUCUkr3UgIbAwUJAeEzgAYLCQgH
AwIGFQgCCQoLBBYCAwECHgECF4AACgkQ1zrWQwrUeNas4Qf9EInrx4bBN1PWVXWo
UL6b0OdnO8PQLigazE+PTn8+CCUq8snGEYJdJNGET9ltWGxQnryoS1IVBTy6WDkZ
rGsW+zzp2WNbCViAvXtWWYbFax041StZdGcOtw0EkIcxuzVUrel37xRdNhzYuP7T
Qj0d0GuqdTLBza/MCmp9AYh4oa4mJYZjGTAXGy4cPQR0D99BoCz8BBCj/LeprKaW
cNAaU9/PwhYP/AY8uoaizSzwDfNHTB+2a3pi85HUnpxYL/DIXx+gXIceVQiFGPkT
ZAlq+j25ZfYeU5m8Ee3YHoBbaYThKxoyHTZhN39liNMiOKI6kaPWLJmYnetmSBqk
wRX4fIkBPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAlQuS10F
CQPEiDEACgkQ1zrWQwrUeNbTQgf9Gv6mC00g4bLeVYrjfa/gYz+jfBNJVA7qnIDc
0B5CLZLd8AUEEJgvZN1yqaBun6j9d2YdCki0iAoJeLUCCptnrIkO2JC7394ibgLR
9HsyHVwVH+xj8EFLvS3W7F1Us+7HnctgetCcqafOTygSF4UxT8XGsBE9qTnJ2NXj
uPJeeZ5U8jaxhhShgjLsIFJoKAS0PG7Hw3QWXrXtaXUGuMEDT+EeSI+wUKaNtvRy
6pSCYF/ZSIx+gfQcqNWCK52jvvHvAPShvPVg2JA9vlnT/4tAomAK8gvkN0gXmn9/
ybTGBHa3UBBA55/b3W1YhHibB/GZtWxGxwuodEyzKPZ5dk2goYkBPgQTAQIAKAIb
AwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AFAlYSe5oFCQlrH3AACgkQ1zrWQwrU
eNa9tgf9EZ58HkUlQpx231qm21yS4TlW2E5pVjE4CBp8YZsn4d26YXOlgUDkTU2o
CKcQu33yyXASPclqmEnbzHnlUm7+gujs36TmBh59Ec3VDJCrCjZNpLFjccEtpb3A
LCLmsJf5o67JTl9pvle+evpKzXqYNzRdELxH01Gtfecsyeu1S/VAbSPMT79qqWA0
iyROzABX0Hr4mUQZjdW4/Yp0UrWR7yZT/2zcK/9nYw1Sa1q4rxRHW3bXKeAPjupv
xsHkLV+AyAcEqHy+IxKgXvJuteLqEblYi04tFMMem595EsJZvWOl5l5aH7vIzHKh
YfOPlRPzgzhIIa2vjrsQL9WPDWlT3rQfTGVpZiBKb2hhbnNzb24gPGxlaWZqQHN1
bmV0LnNlPokBVQQTAQgAPwIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSK
oJKBpBL8a+UK7uTXOtZDCtR41gUCZR7SFgUJFLUO9AAKCRDXOtZDCtR41sH0B/wL
bcm3d/bUYo6BYd78bidgdpelqWI2KWSXQyZ6k6e5yAQ9yJPaCbnl/nMmoXYl3Opd
Qs+DEtZ3D4vRP7iYSiBDPF0D6R0ppxo7xPL+dQN9GmDgLj2YeWDeH4Z8XQ4iEfXF
zrGm1cojfgcexf546gssCwAAF3WfY3gw0n/G1zkp5dZD9c7dypfwppAGdTwxJT/V
ayQ4oWnIqy9Y+GNDHTp/CJqaVmHLOfdIJsGfUnc+qDjfrp/Myov3VGrlZN+J+Z1U
/6zK8UAd2sh3qccJVxelKS0lk+ucY1y1kCFFof82IE+JBFAmqbe2PEPHsHE5Dgwo
gLxxQOVihE/c6QDv/DHSiQE+BBMBAgAoBQJSSvaiAhsDBQkB4TOABgsJCAcDAgYV
CAIJCgsEFgIDAQIeAQIXgAAKCRDXOtZDCtR41vdKB/9pw81jWa7jBZ0Ujn/eQ0wQ
YmfLvNp5fcHjEIUtmcUWtVeWwRnLb+lwZURRIY5rNmx3IwiQL0TgHw3y72ByDe3/
N6O4lytSKpWA1rtfS/8Evc6I5WEvod3w6cMO1vPxqNjUZCC2gLvbp69LULFrMN7T
UBTIx1qmTT4deZ7UXUg3/Rsep2lPQPL2Gz2JpwH/ckT0WPlPoTp1+fKloDIxsTcW
M3ow21fx2YQuXXTOt9ZjEDS+NiUeRrNEmKi9uHyMoOeIZfW7i3L0NknlXbAiycrY
VTm1r0/uSQSr46jBja7nqMVbr0OawAgzwzBmWDINGQyd0I9zFR/FWmJRxm7T3Gek
iQE+BBMBAgAoAhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCVC5LYQUJA8SI
MQAKCRDXOtZDCtR41vGaCACOrTXM3VKyMdMZTX6873zb030UezvbtkUyYC9jybb1
t+8OBiM2s5OFbE8AGwkEGdYI0behwNnPq0FzRarMGhIQDHTqjfg5qhEMnKUGuhG9
lzWLZsVEQwNqfAJU6eOsDXXMvt4foLvYjsMYPTDm6i90FqDSyslr5j2bqgzP21hn
xXiaCzCpplRfQo+AyVhlw5F25fmnESNsG+HCA7wsVdATg858SUFfgPe0N1fIP0MV
R1LTtDWdTLU2G6QFNkokkDAX69FT85/TqBXAJ/46/R8pLKld/GJM8BgNP9YUi2se
Ur1cf4OTqFUxWHs8JcAGYty9R2f3M0UrjpgW6RblRWs7iQE+BBMBAgAoAhsDBgsJ
CAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCVhJ7ngUJCWsfcAAKCRDXOtZDCtR41pr3
B/9o5p8/9bgvq6MoUCIw1nuvuRszXYTiAhurNewBTXikPFrIcoHAYEsue58E1TyT
z6p4KXzPJDJOHFVBx/tod9ddHeCwACRjnBg4fayGiRPpjzL/OJYq0ZMKYofjEDI1
u7YP88XYV64Cq+aYgYSsIraCOi3NKC6AVkD9xBI68zco/lkfTBVML1YqPS1hmppc
vl2aHH9ttVaPT2KdPL/qBdKNR1pw330GEjn1+EXb4B63L1P5PRM4GTa/8rPIoUgD
sVmQgvaLvZMJ1dOJnLxDkBUBhdlvs4j+lApLvEQl2mDT3qUall1ClgYOZq1tA64l
W4Ma5t6Kio7eJRPbSDFl45xruQENBFJK9qIBCAC+k1tFOeDS4gMxEgRkfiVLHFem
wJWQiGZHYhtDgjh6w6mB8G3WZ+/gD2CMp5DgHFRC1sW2iMj3gOzrfyxzd9AmWbhX
YceR6EFkTc6OVsaIb+eHH/Zo3DKyB1Dq9CA5fjjnEQzti+KKSZYWzB0Fkt7qrfOS
6YM1zMjEUxUUwsl1qirx5DuByWLDX7ULU7H/xmPVhHUVZO8XEaFV2m+ICx8Y6B98
KMeJ0Qz8b8wp2g7vWEkwS2R6IjF0kMrRxnxUvwA6EUiZuFphhuY/lWCJusLl1olg
OE+BKMEUStJWEi0s+pd8FL1vOLeNKbIUFro0+oZr9byABpkPNjMxKV36uj1dABEB
AAGJATwEGAEIACYCGwwWIQSKoJKBpBL8a+UK7uTXOtZDCtR41gUCZR7R3AUJFLUO
ugAKCRDXOtZDCtR41jMlB/9Wz2Qsl/xCbrc/nzEQ5mmiL8nGcLIb0unMdbrODAYS
Usfix1iheAQVTOBBR9Pf0PW1rM0YdTgpvvykWIpP7IDoXx6DlboFz8Oe0eXjWQFg
31A3zRIsUn9SOLrZw4vwSr8bbK6gwUmb+AxdFDZ736/A3cppkXqo+CO16n1q4yid
+/AN4Z3Dz7zSD/vOetUG2PH4kyehfTw97nWjrAjSjOu/xJgBZvUWUXu8SYcVshPW
tdWz/bS/Ea+FpdmBXUGtSzSogX71G4ZJrrc8i75axwPLgBcytDrWk0HbfgtSqrSY
xqf4oA4TjZCGjxl8LUkHVqmFzBkosLlIROUxZzzosimiuQENBFJK920BCADVvB4g
dJ6EWRmx8xUSxrhoUNnWxEf8ZwAqhzC1+7XBY/hSd/cbEotLB9gxgqt0CLW56VU4
FPLTw8snD8tgsyZN6KH1Da7UXno8oMk8tJdwLQM0Ggx3aWuztItkDfBc3Lfvq5T0
7YfphqJO7rcSGbS4QQdflXuOM9JLi6NStVao0ia4aE6Tj68pVVb3++XYvqvbU6Nt
EICvkTxEY93YpnRSfeAi64hsbaqSTN4kpeltzoSD1Rikz2aQFtFXE03ZC48HtGGh
dMFA/Ade6KWBDaXxHGARVQ9/UccfhaR2XSjVxSZ8FBNOzNsH4k9cQIb2ndkEOXZX
njF5ZjdI4ZU0F+t7ABEBAAGJATwEGAEIACYCGyAWIQSKoJKBpBL8a+UK7uTXOtZD
CtR41gUCZR7SCQUJFLUOHAAKCRDXOtZDCtR41sDrCACl7ezk7SWACOP30NVvoF1F
c6UyBKMrvBZihrvlsBB1ps8e5lQvYXfenw8Jbn8O8S7GmWyADSPRGSaSz9dmgYD9
r4YD6vO3eL9TG4wl3tQqZOBmMviPqp9DN+/pssIpJ0Mch+abSk6SocNfiHgqKh3e
VwKaivtiTh1xrI3nL1/FBOTJLFd+QRM+NDHYeVcosWoxAygUrJ0qSpHM7IJ60Byg
+sSn9A4zYkiS01j2oY2BEW491ngcBy31D90y4ncMtnjuZiTCGRPSqiee3dxHXW+w
HhxaW3XhdPaGVv/oBfoDLE3hFLwpnf/cdIABO6Cyf8DzuPjUWcAtDuIpPkLbku41
=Kc4m
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGbVuGcBEADW0CPivAYNuZPU7EJBv0L+dLdeLSd5yo4wsHg0wIKyY2gIHSGB
7Dtm5kiCTV/MR0R+VSZ5MZcCkmUNwUHO/3okn5hhBhL+27b+blxwmLtL5P9XXoTi
TZvrv7Vt//wrZGYml04Dz+o6khLqYvzr3GZk4erETqAPAxLmiFguo26LbiUnY0Qp
tpp+Skr6C7884mq9SglvDwOCboqpLyYMxi1SYldwHw9DmZ9Tt52D7MHtyqh8pUDA
1lQtJm7tMQCvCP4xKlDjkM4ijD2ou686HHLAWm5gH0kT6OJV7SnmvsJrcxhfy/VK
P+AZMjhi2dy1zyYv/8PXLrTkRvs4+WPrBBpJ33LegtpW/C+DPKT4TlZPpWtqM5e0
YN4fSsvxP+4jkol2Tiw5X+XtAn4lMgTejamqKj+wAye5kqA0uTYNdbTkBv5bqRHu
vGqRXb8zHcM10U4oo4d7DufxEPfjq2Nwfw1OGmAe03HMKGGD0l492J1TEcqXMg4U
sywyoohzBr3womw2Vm9O9zn44e8YiheP5TtidYVaZDpoUyaT28J3Vm3HdI4l0gFS
oW9nRQ6Sb4+ShtNKxDBgcbaysf+W/GlUHpEWXgMa/mAk4CxiaK6FNFSk1negLznD
6kLhxkK4XGh08fvMCdQDwgtDwCXc+4G4IquyHyt4Cg2/JdgP+G2Zv6qmZwARAQAB
tC1NYWdudXMgQW5kZXJzc29uIChVU0JBKSA8bWFuZGVyc3NvbkBzdW5ldC5zZT6J
AlcEEwEIAEEWIQQOflRXU56cXkw9l7IffIlrNLKBZAUCZtW4ZwIbAwUJA8JnAAUL
CQgHAgIiAgYVCgkICwIEFgIDAQIeBwIXgAAKCRAffIlrNLKBZCbGEAC2YtJ3vAny
0qD87NBPz3oIFHl8PLVrU0MnThu+qqraGxiYL1hk6jFCuwpcHKJQyeqMDQta1nmj
n4yWxJnd5RikFh60xeUkzTsWksSOHl3e4rUlaWOnrCjZ1CkbvnI7xt9fwICDC4Zg
+mjuYfyLxY/PH9X2PNWIRXzGnFAmftoTuqedbXTZWTJ5SkOsq7BtMDJDpbICmjwm
tLFtp7yWNuVcu5aVJ01Rho0Ki+J3hRuCtUJGusSJEAIBGvYNL6qpgkXYH/IdYJOu
eOMjNTUeE9RNY8uSNJZ8tXTfim87xCw67SOW46qmOvFYS+KGsnSCOFfScILxwnQ0
6bR/HPNiifH80edi8M4X+JITCCFGsF71OfXpM3JVuQyWlYRePGzqiskgsoC+W8ns
n2ukWdou0sJFBC7hUrazXn0jxGsp42/BBD3TohT2uZnpFdnfs5rDCMmdDnZtva47
DCaHo9TzWNsHBN8VnippGsOYRwAh9GEfkzP5xTYgpYCHtLwy7Xop6pwyyiOhXRz0
G3aks7ibJ2jG4m+FFynN6eEOihcrm0loBX0L1f/8+u3lgca4CVDjGqqSnMsSVPbn
GrIB44/CWLhL9A2iOoNmxTsP14MLXbMSNDX9LvlXKVWVKB/fIulhcAD1hxjjNeFM
VQyWSuGNIeO0fzq/LSGOjYKlrZSA5kRqsbkCDQRm1bhnARAAs2fVZXnkd+8U/61C
kJfCAxO+exVU/RJnWmMFGJE9NQwnKIgskllrjbzaus8Ohgga58Wm61HeCP68W+Pt
/THlylTDSGPlN0Cz4IeaaUEpynWRLlFMQVO+iaBHODyrVcs0wFoyZLfaOlD+zYLU
faFkZE6s7gqiZudlzIqXX2zNn58VZ3STxUEzP4MTaEEGOKGz6Ykr5bGIgN9DswpA
KFmPMQKIDcVupu0Nmj+NVb+fxYTTu8kY3NEK6ct+5eyUzRfqwdxeihKbH/aEknZk
TaYJjp+Exahm29+G4zEDJxLkJPOcfYeFpJ3tGmGyxfZI/xi1aoGSZn0T1wChyDcc
MDu/Cn9ZOKuxYUQyrhmFi05UqqOZsvRqZuyowpmNMP948GgtcU53k/UeSWeHupNO
4ssgKUs3TcBMrFIFx/brIR9uCXBkkcK7K5tJmZbCWfLBXKxxoEFVG18GJqKnb0bS
pfWcZSHNJQdxlKcILH56QduTD7xuORVHhkTurHn5oD/gEaK0wvCQ+MWnkBxZtaTt
6w/UG0Qgt7U06GxAjuyFRUBni0Ljd40JjXxpqsQbVEf+5Illk1k5jX9XtPaXLEPs
2zor2rUhKYOg7fMtVPHJ1P8KyDZbhLHTOcu1B/spTngBlxSXUU2GqQ69H/GFQpzT
NPkVfMNKpnpj5dapjJoyjMPs0KEAEQEAAYkCPAQYAQgAJhYhBA5+VFdTnpxeTD2X
sh98iWs0soFkBQJm1bhnAhsgBQkDwmcAAAoJEB98iWs0soFk4tMQAL5Fz4kzkjWw
eHZrwLsZLsenhf5tTyTCBeZDteLAGd8LP4Foz9OHC0NHOEHui8uP03s2WDOI+d/5
dM0yoLtKD1DqB6mRDqpxjem3vGV2Ypjstd+52QFsM0GXzsIa21k24SEdluwivg0N
hf2p8nyfa3IRKCQRBAJVV+1z4L4/tYRp8WBcWdxtp/gNVm4GmJUDNwEokdNN5O92
cA6gSdgAmIQb3hGIPjivOE7t0szVJ9+0kfXivuToVExprgb6Wtse2jN2DlM8ALvK
KWvlNBvrcev2hmZGy0IfPA5kQehZeFASXwa/yCDwzW9EzDJOcR8UU01/Kgsz2CNm
oel2pjvfaw7Fdb9vWmNwO2poC1JmZUEXIBPPApkXxaqsclFpTk8tGnEQlFauaIFm
HcwOBCe3gtogapz1MKg876oBqYxmUbxgoDAjquyyOllzhCvrJ2aDhSAHgoRDCbDv
8zmIGDtueojP34UfoD7x+oDIK+0eHlzseicQL/EHLH7jQpBFdGnwp65gbZzHiXfV
pvD/6dAe2Xa7ffPmoPU/aSxVGahY9NzFjUmlI8SoknIyQY4OjBzq/FOXKuHR2IWc
Ph6clodkz87xU1hJBN2PvICZBd5Wd0Yb7xKfQNuR05CMolTkthKZuyCCMKkWGTYc
LYRZ6TH7ZRyeyeKha/lF2F5FvAHe78Y4uQINBGbVuGcBEADJ0kbyEV26b8qeksMq
T7YDYVSClepOG566JM9SShkmllPb13TZLpNlx49VQljkZLji7ObkNLLtANcdw30b
/vwRX7pJ9AuXj/DyUAOW3lexCpHW8UHaYx7oY+c3d7PFu5wBzQ/NLvXu9E0NK1rg
MjTqRBiG8bL70HrGJGxcW9yC5+ZCM4A70NJjBGXA4rltWwUEpe7xryjj/pWldtME
wt7naMBDUGpFyImRVZ6zijOKsIdKRAp3P0ta0GvvqIgpaCqYs6rwsIFhHhIaMoEQ
Ex16V2anNCAwm0g57h12zBF7Og635KzbTP6rMFm9TLFPVTeE2Qk3noduN3Syr6eO
BKACqiNNMx20n5q1Kjy1lP2JPz/ayHVs9i3Yu1uCFZcSl38/CpXfH6B0QjTaN1Fg
fnT3nscQbjJDtFuxNvmUzFQUxjlESNWbafr94h6L5FLQ0Fe3aFFm8heVmsKb8pXz
77MdLIqVoP+jLQLmED7x1JFdF5GXo35CRDGCgP5Mplc3reC07FUYGUMkxdio+Md7
1Jgs5VlnHQlOYUYvhgB9DAO1g7f0XQJX2QcIvwpZzQ4Rm607xraWbzI9vSJaMNzH
w8UbXlqTAOmBBhcvn3Apb1vbKJjvgtsee+r4xrBZBZ8n9EiE2O1K9ESzttLkvKIo
2zXRrVTfvSp7bvgo48hkCxAAXwARAQABiQI7BBgBCAAmFiEEDn5UV1OenF5MPZey
H3yJazSygWQFAmbVuGcCGwwFCQPCZwAACgkQH3yJazSygWRejA/3Tw1YNHl43fzg
IEJ5JBfEqKyrl1nOCeqaD3uRzpM/27RokdgrxiLtZsKIZ1x6rmKHP4qr/g8S8Azi
ODLTFrA+iwL754QqTkocettdOdTRMXlAejpeq3RFVq7KmW6FGQ9Z5AQOhjop30uC
vSDc4vz0Dc6OtK2t7nFedx8xhubfWfYmO6ce7hB2uM1ujCfLVNFNIN7SoMkMtvdw
d7K1jYdQsG99OAP9EDtI+iOk0X4AzgWRK//1Gy9niaH1T/Sp8lEzF3r9DOv4VSpQ
W8WvSJic5AZvXdFUUzvWs/Ujd1FH6MXEfam2oOCnUdN8DfifPZn572j4iiW4Em05
8MegNZjLYaCntWeHfOobETEy+ZxgzGiNEmNwKoK4wbOPJJHYMxySIf3W38kcLLcA
QLfyhVG0PX/7XCLfu57ccY+IKvAQfxix++b/m7D19Kix836UtT+xPpgzzkT6v5SW
BSkPxodOot9Bnkj+XfUmop7VGrb6yUogWB6e+WY5g+Vny4jc8hgn5jcQj7ntM6qO
svCQd4vDUJ75T+X8LSbPP4gF+CUQEtEylg6BwsXFVbpBJax2lgbMBtHSuv3vnTho
uzlaUSqaW6D9mcTYq7RgA+FaTGJuOiQ0oLHyuirc1sHNZN/PwRgf7utKqAhVM0oM
lg9EJPYxbeglKcRcZ+wAuiDSL/mrsg==
=cGQE
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,105 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBFYFJjUBCADEsu09ayc4rKGFBrheedf7ycTg30Ra3xCPsAYRCZeBXGuGT3RD
WOjpd9vpxJULoGYKntpLuFRHjFmao4xi9DbYqZjf75NM1OaPoEeI2nEfKk8Nhyqf
xxcvMKqrBehEkpErv70IRaybfDc/q+/UkAdPVTiP8MPYaH4jUCVIOH0dGXuyCTvP
LwlcXnAdUkvG8WJKIBrLmsMaquFA6wZe6quAOyQA3xTi8p2LOamjg85Sn9i0wlcM
9U3h/r8vbsrK2h6CD9WcrKgVqowGZdqBlRu8QtAIPbeLHm8Rxt3JDkkAvsB+863I
ymeCX0eaWv4tGzz7i1GTTC2peCFTs2iSJJdhABEBAAG0H01hcmlhIEhhaWRlciA8
bWFyaWFoQG5vcmR1Lm5ldD6JAVQEEwEKAD4CGwMFCwkIBwMFFQoJCAsFFgIDAQAC
HgECF4AWIQTxz3YAZoN7gktdYAd0FKdgynR+VwUCZSO/YgUJEwpAOAAKCRB0FKdg
ynR+V33RB/4vAjVfOX3v23pAAuCqzJZLhYWIwL+jEcv8K1YX6SebJwF/XE/ZGZZU
l92zou33G2b1iU8PyNTTqBpxdKDWtISZckqWwg4ZMGs6EyoNztHB4Iuq1pkILA/1
OfB1SXRrYg2atdJw37qPPPdAY1GLnAXpulxHcoP7yDdoPVwjYGr7qUYv3n1PPfoS
zXO7oQIJYRlnMudDu1fUAG2fC4r4DRSKQi0uF5uTWFUEQuUn6zJI7Fa16dp1k3qD
jBGVPAdrt9Pje+7mPuWr5BmGvj5d/3o5HeW3xO384ovptTjV7vvvFEgq6pNwo7/2
kjrwL6zUp2glExHDvdf9kQXjcY2j+p65iQEcBBMBCgAGBQJXKIcBAAoJEPxjiDIB
AtoOI70H/3hPK+KyEF9MYDvMjgWQFjf+x471S0VIbkZ/4/y4AN8prYdxsdulitSK
bnED/JSMPrjVnMogO1CgYQb15OezW34wC9C1s6FRSWfdIrU7ukqSxEqHpPRsvRVD
BVHXvQeue3fOpY5IXPgbcJ+M3KD+36hMwqvML5/3JuVOwMjLWuVysEnqlGZ3UPqk
MKu8hM7hEsSvrmYwPjgn/imLI8T2eHYKe9nnc7t4Bwj4vlAKhrP20tkLl1tcraPU
kzm1E/qKS1sZ+tAgjDW/MSqi9Mjn3YPHY74aFt9yXbXARwpC3bZ0UPSX+Y3KvJ1g
v75royuplOf1FvzymmH4GnTsZedj7jeJAhwEEAEIAAYFAlayH1cACgkQgHpf1LMz
e3eUZBAAydPhI0aHoNKzBaOj/5RD88UCHuAyBNaMJEA9TgUaTpR5OLw+Xx0oR7Uc
HWtRh3ObQpLh/TXLNDoailMv96LXMAMy9f5nVHgAC1uIFaSC6XIowEWpHzXi8rMu
BXhZe6OuGPRoLQBEjAywVCoUGnfdGwJa++qgVt5OG1K980e1274ATWPG9ateli2k
frLAm+eLiRD+/ruD8UhqbG9cah7r+L0rAbnTVZoPOrA+NRKcB363xy7RZIN2QPo+
9ZhFIdsO3QjItDG4urVLeQUhr1tEoSYVL5m5VltxzA4l2Lnmtfd/0tvqk41aQrKh
8xLenJYUHmLzf4dam2Cu6UOrEh+evxQPWzNqlvj5Af1ajFBLsHRKCAkHypvRu2AF
nL4vlN9mlaS58hVQEo67xgVNjgWxGOkDf9FTqp/o2GXKHlvEgfA8P8rXg1MBVTKl
2aWHk/S7Islw1OHe29uumNLCzGw6eywWb8Oalay5HvKDq1c7ZcK+E0Q9kQe90rOr
ZGpphzQqKypIaF4xdsF3pAnjNBplAPJ9ofJN7TdF8s6oH24Aui+w5J9PFPvrVYRe
+keF3dLICR2XLT0Zp9KeKrY0CNUVEBStg1XhryppYJ6nIiRRhrtV3aRhl9fiOJZ4
xrLoi2xCNXDTDey4p0mG8XxefgNVXJibSSzBhFHZfdGUT82i2x6JAT0EEwEKACcC
GwMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AFAln4nrEFCQe133wACgkQdBSnYMp0
fletsggAnLQ5NmQ1Q0B0rvsmqIGSkQpwEMkZpAOQf5whSJGoLIxP/8VfNgq+7sLm
WFgAR0py0RqEnMty2xApaJBr25WyOihTg5FPD+R1YeJvvVfDP6z2fCSTv3DqVmuD
OiJaPHYZZXO1nzaqbLciDRIyP7VuLDu1+WIrLmbfNMvHG0zjXWjiYhD8w2OvTaju
5/4q8w90hUBTlzk3yE3MBMMRjdQZW3mv9o2gg9A4HcehKqswYsS5lNJXMg5cDIEX
sTz946+sJUmrakomdrcUEdRNbUHs7LxkMjHgxPUZgy3Fq+K4EZZ7TOikC7M7tz3n
HyDkS3gHBlW+7X7xy5LLN841P1RNFokBPQQTAQoAJwIbAwULCQgHAwUVCgkICwUW
AgMBAAIeAQIXgAUCV/t44wUJA9eGLgAKCRB0FKdgynR+V4AiB/9D4mYghJEDOT0y
lQh1eCdYSoihtvlNtXcP5L44PW0mSoEb7vyMUSbV1oahdzaKuvDBjDEs5rIwSBYg
rH3pZnornkehSZT+8MAgJO+Zh8VKt5UQM4OVzXDIKqT77H7X0RlcQLGhmSpVLclJ
CMRsDhn6qtKXTWqsnJnhTpw+3YSFewD5gd5ZFbQYtnguHH4tadmkFm9SZxNbm5P8
7KBycuyc8F9J7GF25a1ONed+ExABGuWgsgr58i7EOLw2zhmoXq0NRL9hn4qTQpOw
xCa6irQ4qxdtxEzj6vKKjS8YO2bZWxgThVBuowijwZNbliSJOjhSzn++LFCkTtKz
+UqnPpguiQE8BBMBCAAnBQJWBSY1AhsDBQkB4TOABQsJCAcDBRUKCQgLBRYCAwEA
Ah4BAheAAAoJEHQUp2DKdH5XhK4H8wdLkTtCuIwYdX/PPLutyBCNLeu5FQeSqkdm
zbqUtWjsMUekd7ONyMY2UApNsVXyC8nlAL9gVit7nhitoPY412fLmuQPVvt/lKb5
V9uD9p+Q/41o3KnkTSuIHf8qDOkgC5k4ysLWKGYgtokQfDoB7fVmM85JHLmuStpF
ROn8CUpT7HJ02IDUZFCcdmnstQ7kHuUgPYBiXDS8CBsfSOQCeyD9WIBV21UKyXIF
km5AAiQ44liSO8HFe+j+6o9iElvcTAZuZcBrZ3lEKriEieMVyrr+A4K76S/ckoXk
IvqffIjBs24KxkPuvlkNoNc/qPk+y99+yU22BdqIF/ga6eWsAIkBPQQTAQoAJwIb
AwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAUCV/39VgUJBbs+IQAKCRB0FKdgynR+
V2UsB/wLg1w9Q1SAadgSzH3VyOJtbs7Vl953Gt5ck+MZwIxbe0eWzLEBd4Cv7vZU
Xrg5I4hvfIceMCvpkQOkpsI6Uy44PN3olMpZq1QK5h88jBCcmMOwkKJzdoOPfCyL
lVxLtd1/EyXWQ6++HGAwj9oxnxYNxnZVC6fcIYuRy2Tlu7d3tiOU352Y7rz/1YAo
a4xbz11eEy9e/rgcJOqbH+v70o1Ys6QyIBHVyS29NqNRQj9l9dbm8yVf1gRBCOMW
JTqPlvteg1D4Q73BqMY4afjxvHPnlSKcpVxNvfwrUZI3fMr0x5CfJqW07YpfHRxg
cryFCCsnf85aUHmgwYxr5v+2F4JPiQIiBBIBCAAMBQJZByF4BYMHhh+AAAoJEI8p
stHVBanp2CEP/0BOFtGJBvU4LF+JbQp2zibQVI2vqg8dW3PCMmelRbaXJ7Kr69fl
S9sR1u/O4EN8j0DyRjyrDanefK6I5qUI27zu6kKKat+qqFhQL0W6kQHtOxw4XLYi
PUPujpn6JcGUpgx0j+VOH1OMKwYy1KNg2rPwhPRWkkdFqRVbf0tF1XJ2pqZcyVpm
cKRaFKOtMGlNDhceJVMd2DTgqabOVkqkwoVfVflkEEuJ8wkJ15vrvsyqIn8RQ/Do
0rXjwdLJWnU3M4b/1xQAR5f2FCg+9Kj9g0E6w1U0e8vTYBzRCuxm+DAUpTASk0AE
uQ3ZMgeK1SZno+vIK6TmdgVEjzrKQjlnajEHm2BHo9Uygap5EI2pyTBxJYlnl/D7
64TBkaqS3cTlr6tIdJPYdUm9di56DuuPwHcWqGHot4K6VCyEeEgWvOVRI6aOEbdq
VaudEn+V4ZKg4D5BOdMzl/hjiUZ8iYITvlBlVemASZUPQsUmWAHEYno4KxJA68Gx
wgIkY3XrFb5fSjz0vHzSEqg2M/VItwURiGJ+M/PSzN1pXA0JuB+iBaJlCatdQXdP
lCDbZODntl0dYc+JHHUTAK+9NwsHVEyaptxAgv+bkeV4RMZAZzu8MhBz7MhTiURW
/L8AKHcw43JSL0wfBjEXMNN+W7b7t1+tNijGB4Dr+6uSiAFK6MNwpq43iQFUBBMB
CgA+AhsDBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAFiEE8c92AGaDe4JLXWAHdBSn
YMp0flcFAl2bF8gFCQt6raMACgkQdBSnYMp0flfsjQgAiQBWZj5BV8SlTvg++E1n
ZbO/UKi9Q2WfSuLVRyKNsr2j5ZBKzUV1j75Bi2oHMRlt2caUQ/bdQaNywpuN5bai
Ibt8PgsF7bHyc3ARD/DDml4eA1KcfrZa7CSvQ5962e7GuMVMd39PxLFwn2LzkPlX
vII8IFNy1BTmpP0Xk+ekVuvomNfk9FCofoO6q8crnX2A2HNfpP8nRNnv6+MaNbsp
lVW+v9uqMQ0ENS2T3Kz9D+uKw6gUN+DMentfOYoJpqxSqHGhn17miNB0vQp20gUv
CYAnE7eHyXPrsvAWz+dnNKAcvFAZEo09FGF2PedwUFlF7zXxjrhqez8/lSYyv09t
zrkBDQRWBSY1AQgAoUzyHOY8kuC6apXSaMpBY8chHfJOqhb5dlS24me4XETz3pur
K70ysiRAwxtDjEryjtkDbvarZ9unxjthHQAuVGJBUHQZrrbAuMP+OaiwCqH8pC3c
5FScQFB5xVjFZ4f++SlrOMsgDcd/2O9JfQ4hT8prfZyD+zgHw3knNYO7aWMcOFow
RQlTNWENRXXLgc+azgUOpkZZK5OnnCWu+p6Jd8tLa7QpHmRDQI6ZgIZcI2AKzRR6
Dz8vnxm9TlYE+8lzu1SUrhojLpCQV3eAFVt8SqhAOR0/1Z0OLexsY77DCzEBhxdb
GQdVZu7u1BoTenlqvBZC3UWOmFa4SVggJitBiQARAQABiQE8BBgBCgAmAhsMFiEE
8c92AGaDe4JLXWAHdBSnYMp0flcFAmUkDh4FCRMKQJYACgkQdBSnYMp0fleEPwf7
BzghiNX9pCFE2PozCZsnIl8jBpMLmEh7QMaxjijAN0g8sSVaP3BQec6vIlK2vTy2
X6pTVpOJXvhpqUrAPpFlk4PPg29K5XWQ2uRqMwx6aYkJd1IOK4JCyeaRzORdd28I
aBjOz/0mE4Ooso6hYoRQ6C6dk/ytaQcXWfjR8WUJ4RBB4jYREN3eEC4muNhSGn32
kmSDfIaVLJ19wqsVfyEwfx+2z5TK8MC7USyhCsSFsHx6yZAdDSXTWwEGqa5Uwwo5
jHhYVP3gGamN1tiqImprDo+imQenbaN0jW619ishYiRsHF3MNuf1AAMdCdgKGN4p
ksHGAPOV6yWHc9QzLUzXE7kBDQRWBSaPAQgAy0Hy97mLKo85CjsZ/Z4Ywx3lnVkO
UkcHD86u4CDXMCe6JgOXO8CeESg6gT4oSfbadq4DOgyihsbUsozuixYZBjWvOrrm
9jy0tGq/mkaGz2erDngEqh2zt0tZpr2WVk0N9ABY7WPfI45b3yhCOV7tflCskCai
vv1AVSpT2luhjEmC6/WeVLuF0v7AT3S/2bUfecwzvTimPDrU7n0IvGct6ZeEN8XD
rEi2ayohCjGAwiz5ccrRw/BQw2C7IIrj5WxfP7lWGK7fyxxzJA+edsQvhIwO0HXt
Yw1Ooh06bbZ+4/9xSqMbzPAV1BvkKW6hlBEowJQM/zzDQlGdJFWIfXnl6wARAQAB
iQE8BBgBCgAmAhsgFiEE8c92AGaDe4JLXWAHdBSnYMp0flcFAmUkDjEFCRMKQEwA
CgkQdBSnYMp0fleGsQgAo04nPGyqw/tNn9VVMHvvkqKrqE+j8PLDSxLEYOJekEG2
Vrvlxu5jqa35cDtSMR7zp+6vXzPEwSpjTjXwuO/dAdfr0qi0kETOaWZfex+Mg4ZN
F5ce+JSMHxYDMvb7ggewrX9wK2Xw5jJaKi7JhxIDoQiPGMEn/oKtQNo4joUBt7+o
IPv8EbDMxcuhW/OYC6AI5tiVsSR0Kz+zqqi8OYLkPeVQJ50YiQihYT/g2OPjAm1M
1NUEE9pCPDffLXAfesvL7xRcHZ5v1FYokqdt4iOGIIcpVTz5OJD4ivhOnTgqumZq
nALjEnVcpjOslstA0HdQroDkMtZ9ZUcbsojDQHMYOg==
=wtzC
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,86 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGTJ/kIBEADY0OB4ldeoEn9EyvmvmRM1e9DGKkUlG9mXrjVwcHiVaCW+/kfw
8tG0LxWpfZcdxk9VzPW39oex02QjK1hrHHgfvaRDtUMCcjz28XG/B72+dEfBi/Oq
14X19PnvA3ZCtK3A7Pki4QWhbyN0+DGgHa+J2hiEPDpr/K/KotPK/1CVOLMUWpAd
0ME/zFdfxgOYSwVoEf6C5g3v6/J5YMi6Pb/RZ76J+r34kRHOrDg0CcPqoBZbkozX
taD84qftMwHI2EIrDY+8JWbDkcM9HpqVqZ7JDgjTGJCOsA3z12K7BOR9r8F6O5+I
tgrD6T/KfiP58SuP++RKZbLOEYJsIphCuqCtkM3s3NRr6tBoTXaOzO4SrCHWSlhp
rgTWVNfMs/kW4cFqw0uVOlrA0gq8GiwxyFr1u8QofzabvlHQIaBW8iNvRSrKhFUU
dmUWIsFt9BMOLKvfQnYwgvCPD9cdOJQwMnK4c8qnoVNBLJ9FbzjvEHRFhejXduR3
oTSjUztEIdQk+Kp/wRwUKq3fOaKul0EkNbTLfueo4ptExKNmlE5q4YWkAOTxKvYN
k52MIAHYDBk7t/IdRBdoPzE3AopSpcOKpElCZBjlLWL+mKZz+PG6FN7j3AqMMq3f
a1UAVGQzcVuHe5xMaqRbN2xLkTL3lWrZJ+KyePfJHEO4UhH8mOwFWWn4MQARAQAB
tB9NaWthZWwgRnJ5a2hvbG0gPG1pZnJAc3VuZXQuc2U+iQJUBBMBCgA+FiEEEux4
tSn8rRM1PX+oFGf51pE1wjYFAmTJ/kICGwMFCQHhM4AFCwkIBwMFFQoJCAsFFgID
AQACHgECF4AACgkQFGf51pE1wjapLRAAkRzLDQCqLt41pyrSHaf7lBduu6Euu7e6
fgMyKvHnmO8j9dufGJUQ4v/s4VfpsMk6x7maD+ncepiwCAS/RWZkvSdkvBCSc460
n+PqvnhPYS9mtczZiQ6sxbx3Z6Rr/EQKKXSCyRsEIjvyZkBEf91WACrMP5nzUjrc
wFwtSnLB6lpz4rJxJa9tV3UF1LzkpCA8qHlmz73yeQ0iJCJMGQ482zVg76P31mLZ
55HUE2yf7GhdggCPPs2WHCKb0gwNOtjOsH2o1o5b4Ch8z1ELTPQzPNGK08bi13/o
xThC05Tb75JidLiJVLoLfB1tlIyc1PSFvECaqxtZgXmSmVEgfK+qyq1+Taz2elGE
+HDbBtYehdMU8dVvrL5ntBMdMNPhDTiOHNGqNXrB5IsXZNvW5/eR5+WeBUMU0T4v
t6W71KdVn/O43C8RowGx88b34VQiFv+w79hY6xRGcPh72ZwbGmuziM1a5kiOS008
TIxXuMuT96OcRlt/XYBKyMh5M8ao8dgYPegCrsFF1hKB83DOAFGWUlx/zUgGd0Ty
v2gtCf/ZUs6OqR3I7ugPj5TuLiiCT2j92CLAWSCefgPIh1p6H8VM8jCqurkgixZt
30K8JI96b2i+A8NslmVEp1ys0k5DqZKltYZgobvREE747vjkeMoarfooAc2QM8uU
HBjLVcdWHdWJAjMEEAEIAB0WIQRKHVuI/uSCQtu8Uh6OLajrBfZG1wUCZMowWgAK
CRCOLajrBfZG1wZVD/9hqFAULEQox3vRzPP0wra5Uqd5TJHrp6BX5p97hRAwQpVw
iT69lVtUC2AFYMuXEOH/Kx27IO6ofV4i9Aftnx5tkn5qQZVD/rarALqt1OkiNIeM
WOHsTS/TO8NrRNktIV2NhqIL3x0adY7jqBqiyuAZFexgKDuGlRidxKg1WsrCXAkR
7gaOfhLGMxJ+TZfpw4tHj5kSQn0+qs4aNTtCvR4K9Fc5jws9TpL3xmmDkCptGMWm
n3GY+GRtrVgoZW4Ch+YQLUzXsz/ktdcL4CeHS6w6cGveu8jm9ZkBdyw6nCGcy8nV
MVutqdr+xgMBD2UQIHklCtwGLMU4759r8ckLgGwE8cpNtzgk3Lu3Dgq02wNOIY1k
zAXjc7kCb2IWYmce5l5HVQzx8i5ALwQb0wEHmfv/vvsenuwbTnq3t5qP/83ApzT0
oUJeZUH9a/8IqHpTuua+KxbFgIdlqUgnEWIP399ng00phfolMqzwBxZZMTIrlwFs
sTh2kzftZWpAusgN7bDUL9X0ek9eLEy99Lyz4GC0QrQaId7EI1tvurVX3/bFLbg7
kUvk+4dlgPj1H/wiwaE1HfTZhKRBgxaB/4yJW7tvbZc7gnBXSAp/pq4X/KsnrHJ3
bH5FsA5ix/iBBaVYlKZCh3XaP3vcolB6cZnpXe9O79EEHn0d7O9mLlCBsWB+b7kC
DQRkyf5CARAAyaSIzxOtRIFeIrrqpCZFSfZR+nASumD+FNMOBN4x48JoMZ0d/YBY
lqktXv0KZRHz6NbcQnh80Y+6ydbZMsJ8nydBrG74cFDw5MJ3GK8i7Z9+wTLe5k2U
gAQM47mFM5kxiOq72aFzjb8vzrqKos8i0Nxec9yODpGoMsGQqx7P82YhGZnv81xv
onySU7tJxDvJFsmYlQXFl2dH0zfcoQ83yP9BQVCx4vSZW1rTYHkuZPYzvcMCX1tw
eiJCBShHrqfJVTLwdmV9uNM9KRQUsbWU9wRfNHe1I+F4anzg/YjDkjLdM7nZM8+X
nLysQkvAJ4tz395K2MKCjHjLrD14kcC6LLMMgMdKui/QQT+wqDV9pjueY188ASzp
VkpWSSorj4U7SEs8vi0hNVvgg+a2CHL7OfOLE3TzEBTSgsEHUjAyeQGQjkPCNFmp
ssdX6tFEaSvkJcNX3iT+UZtz1vSLzgWaB4ZjAOkTfq5xVENa4HDTwjr6arUqO5XK
qAo7KqQ0M7XgNBGjxEQQH4ZqrKh1jx9tpc/d8Cl2ww6B6HUxDc6Wpslsj1vektQp
F7vC/W22X01KycnUYiMuIpyj1cdOrhikUAanJVycWh1brFGfJusSWjFx7U7h+mZw
bEA94gW2pa7iXrvFb3bG8QEgjP3slHf0r02HoSAOAjZm4ReH3iO4O2UAEQEAAYkC
PAQYAQoAJhYhBBLseLUp/K0TNT1/qBRn+daRNcI2BQJkyf5CAhsMBQkB4TOAAAoJ
EBRn+daRNcI2cQoQAM7mvuU/cnTuxzF+9sOvoXnh0IluZUGW/v7tzaIpmaAvuJpA
FXNHCB74O0y8HSfBAvffNJho3NBtu/iVyaCyAKlAY51C24A2qd56/h4YQOFCYie1
K8QODwtGdBspvkOGrZqUDH9KGsoUVWtEd8d3h1gE3wLnhU3m+lMrddGRXN20PDUb
rKR/fZBvQzGXTgGOv3jkIBRraeiTj5gO1rFuRyTFfwt7GgUhgdaW/tGnaL4QxwBV
mWWXxn1xEO1RkeiqpF6n8mcgipXd3/+7YB8zi2m4nvmUoTDzpIzXo6YwrqRjbF7O
50J+PR+Htj+6fJbDV6L4R0o2Xm8EgdNwuqqgOd1vKDgECvAm2bAjXx6KV0BHvBtK
rj2rmuh2lDK3EoN40bRJOrJAKG3SORYQT3hC6Q1O7Awg/rnEXYv8krTDy6t4HtLX
q5zh3gFGGW6GCNdk/nrKkjgNw6n6002FhpZ95tNRnCmC6pSLBF7WFWhBdz1L6+Ou
VsKeOnehDx6S5tJ230BZH2PTZdaMvlfqoWMA+oxzoq0cs91NuUCjGMOme3HyphwX
1uXhMPhwg1IplSWkU8eNxaYCHqNFUj3NI8qBCtB9bpkhdWvOENMOs8vS3EA5nuOI
vgCPBQzw3DhHuFVfETBv8YB89Moeh/0smRk6bGUryuuUk2T3BBUkR2X6zC1ZuQIN
BGTJ/rcBEACa/UyPWvjUCugWLJ+esCcZKKj60nvjKjrFZl7HnMZ0iQcn1nJ6gDJw
cJ3VP14oeBpNREP7F/UZJDswI7WdmguZzGly1mRvMIOalwMkF3OOjJPRd+qZteti
PzBRmjg6THTwmWEvstqpikv4HcJ0EIKnG/vVjJFizzM6B0kkGKSy9HdtQTGYvyxh
l5c6JngpCQfbF6EbiNbF5x502grr/9gFcNcscKc0mwTJnraIn9/tUcJqj08AHveb
7U+E69t6WY98NlKbQEF5zn4xhn+SKW2Kb43FwAT7b8cO6AlWEEYSMStGcqubgPeO
SZakwq6k+v6qmie8YvrK1JJLfPDpXQ5GgB+CjBLtCTwdJDkRMho7z8afhXNlFUaC
/QPuRQBmzK77t1bJJBRjmK6rbcP82k1dObEvqDj5Dau+4OHXhYnUVARG8lgi7fK6
0Su+SEpn2b5K1wMsou8E21CTtykevtGaYNsTYGkBxHNIUSNYwxtiv6zfKcKfpENU
f4PM07KaMJ0gEIbh3LzdDo229pJYRX6tGMLTCTxcYGIIv+Qm1OKm61r4SpExSgD3
k9BH5MUcbkSXpwjjnn0xlz22H0alu+DjYE4I6ax5gkN2c6zERY+KE1m9uzSb9xr+
cuJeEGRew8fIuI1ENbZV3ImEdwB6IeePnDll/C2Z8cTsPNmGSBHC8wARAQABiQI8
BBgBCgAmFiEEEux4tSn8rRM1PX+oFGf51pE1wjYFAmTJ/rcCGyAFCQHhM4AACgkQ
FGf51pE1wjaZVxAAy36VLFK8eYF3qC0TanYDmQGS61kC7eW06vJide7grUaUN6MP
/h0+bzHUN6uci+JMxRWKXuoUPDf96LhDVz6JUlNqzPTx261VPj/vBLoAeDu9wSRS
/w5F2ZvjeGUAwO7IXcgyEgHj6OlpMnDgavE0qEMEHhQzQGKpqZnANMxJ39Oma37x
1+t07BH0jHkNutkFeUf4BzW+wg2dL5e53isHjSV76Z1EImIt7zHAwQh+8OhcHWZo
nQ58kdOgvAJwH3P3mk585KIieo/OO8UYU3jtiq3VUqbFLxa7F3NzlIOC82BzlRp4
AddinxDAtDA+77fQ3n0j5MpyUZKIyiBZf4Q2oGXKZ3UUBTpnT4XPMX7rHKlr6EUM
1g4HXJ3KJB+piNsiZ3swplqS7FLQcN4dehIFYI411wtSZWTRLm9KUEYfvdJVjx4v
xKi/M5Z64HrieRfzpvlGqh8Auy3BBisMDbcjAK3buhQCqhIVvht6eQHTKeaRAFZL
IuBWJm+7SZoRwYWgo0973GJejfJ1h/XullYi9AmDVNDelbJBW33/am//GnjvHQS+
sX6+UlpJNmuFXmwdN+kCTTMLt1iWTV9A1g9KTr9Iw21BBPRGMcvMngicp1qXlRZs
0w7Ie2CMlGWhL+nn9yOYx76csFg5rfqVm01lYGVjP1s+BCHb5jPmZH0AEKs=
=qPHz
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGN3Tr8BEADCExODyx4/0i53qRaDrRSBiwOTV3cyCTWLBu6x9T0JZ3tBnYn2
4KVj26GCuxxrUDXNi2LRwDantg7DD4pTCYCgym1nYZ/SlmgzHSSaK33UvOu0ySVf
m1tlTcJugNiHq/plGPt35MZhbc6PVmug6ghFIcrqzzLy25Kpo6FwuaYDteBOPVwB
CeVgRjVJb/4gMoARVF6JQRvCa0Pl7q3hFk1pG8F+X9Vh/m5TjEpzoGDB9F696TJ3
84jAg5HiSvLlcarz4BCHJYM1PMstrdkS1M7PY/FeOzqBezhvYjZ74RLIl1mTomCd
hgy4JntEATsBpJgv3HTULQOAaeGlNr1RIW+MvAkfzyoVSJ+gFuCo4xX4pHpzMFRJ
VTuhw2NXSpIAhsOz0RycMLJ7lgYCakVh23tJlpEAvlXOAmWZu+/WZ/CzdvCFqMsh
Wb7g+jWJuf0A0UC1O1WVkTwcKz0CqqRVPBr1zz8Q30AyFC0DCj7eI7LHZyHExIev
dwRp2DzFvh8zNT3VdJB4cw2nz5L2ZjSBXyrE+f/qpFSMinD52B3psWiQNfhGzsr7
EHDu9oPq86uq7EtI4kTccbg4Yv5Ujc4KT0A5ZDCXxZ5GCkR0dAjNviX1X48QIwbp
sm/Q2b2a4c8p3Pqrh+5pp4lEmIEgZqmqtiR2B+Tu+kxwRAzjFXQ/lUuD9QARAQAB
tCFQYXRyaWsgSG9sbXF2aXN0IDxwYWhvbEBzdW5ldC5zZT6JAlQEEwEKAD4CGwMF
CwkIBwMFFQoJCAsFFgIDAQACHgUCF4AWIQS/IK1eNfZ1FPEyohhdWw1Ok/dycwUC
ZR/epQUJA8PTJAAKCRBdWw1Ok/dyc570D/4kvIPP+qgMftDdJ3ijlIlUCH9X+ogl
JASbV61RP9HqaMaejuv3GVa87+w0fVdyds/Ph9iQ4U3XcsEoLirL92FpO7G73PRL
qKI2Zvf3gNeiEStRWIRRQ1FxoHjxbhKm9hRAV9sm68U5fjSB1lFL/Z5yIJZzlaLK
MOY24A+Q37uiIEjy8JKFdrhR94a7/B24MiEG1pHfQeaeFmGHyceGsFipGpUSfPhF
KHjdXnouI6J/G2WNOMfLdWijtloPpoC44G3AvSBOyXTmSPn8qUMtJqaMepHnjL7p
KWhZk0sWx81GiNhGBn9SAP3abJzt2hphRYmvmTKEF7oNFTjmVrU/FNZPmeYxn0g2
HcVM89LRqfYGPq3LWeEpBlOpxGBjvDHLOQFguhDHvGJU+Rlga9MUqED+05xqn10C
yEQBqLdPTgYPYsyGkarEhyUCfayN+2pObsnAPqAL0aEUXId/SbGRpXKPRnudwfpL
HIpFmUvmv4q6SC6WFKyLX6jAMA5x81hYOg/Go8BM0k6aQ3jd9/seW3GFzDJW35r0
SunkAzRUtCTlcTL8NQTZ3pQSfzfpdZFGtO+CrZ4yk+koxIREwMyhqKQFzXZss/4C
jL/Wkpbxfv3OecBGdBYOS4HmISoagUiQpe5/AiKFg/fni0HQ8Hyhgv5PI2TqOk9F
OxQdJGg1ci1GVbkCDQRjd06/ARAAzdjVOlhdismRUWnWgnPs5migD43J21fdLB3k
TioR5FvZiYYljraFckqaMCSXJIvnLyMOEsi2dQjKctVljiMheZ0ppqGG/tHzOkKw
xExczsR5AdHEEVSNX993KYgrjQ6wo+kvp1S4q9tG+jJQtbRlxAEXskFzVSz8Ep6h
yLRTAtaQuNt85hHCB4Py94gLukYfn0u0EReiZSDtBlzmWhX6Vrjxbf+SG+FgkKym
XYM102Q1Upgfdh69UmMavZF8Y+09Z1D/cMng3BZDGNO64CB86CH2yPyXaxPIZArG
yhD6/Pb0aRYcKJbFWgNxmF8k+oEGISxoTtivsbppaZfWsAA3eD802TwVZawHSEVo
vucDZSYv2PaeE1ywpQJ+vdb3kjdLBfp15lbAe1kgUOVBCKrWYirYk+Khrd1o3X2y
YTU7tzwkG4OjCrWD2zlxf6ux497nHpUy8lB3Ol0CuYhuW0Ws2S7/bPcN/a9mQohY
Hgu6BZbtUf9DfXep3FM35Gru7MWr6Y18/pXNRkki84+ylEFxBY1INc6J07geq2nS
z3dEkbzy3X3JSCjOxxNOz8PRrBqRDnbnWR3Ir743Kug2+kcg6vLWch6f2/t4CIkf
TNToYqHEv2IsUHfJ0FzR52VZklToVCJCcAV2Nrlx6EnWAgDjfu7dLNkNRX3zRntl
JcOXYOEAEQEAAYkCPAQYAQoAJgIbDBYhBL8grV419nUU8TKiGF1bDU6T93JzBQJl
H97aBQkDw7iAAAoJEF1bDU6T93JzXhMP/0f6X4MTyX2qq5sSbo8OahMj7kQ6j6zf
PG+7+bMf0O5MKe56YwCTLUOAprrPbFOnMiC9AEQlorONGSy3nCZ370Z6RineBqsA
K/Ir99GI/l8KBP4W0y98SqlM3tX1EZ/IlwhypYuE5OnE/T5OsRyOn6no4T6FGYfb
7nxJFtQYtePIXGKdllftQY8eniyo3GOA9l9wT72ZMqJ1Gi0F9qhtNJ7fI1LQOLnp
IshNVDXCu689KYbWvFFGALYfSKSNrm+R3/jk3R1KuwjB+4zlmrqqSB/56f79bLzM
mr0NcJcT/QOWy81SQbXB+oGJXkYVPBiS+yJCjytwRq80QY7POAQ+dqHB2GOYk+Xr
PiQN2tyOc4GoV7VTwNZEYK0KNO2OLPgejpPoco7GpcCWK9jxt/LllBzkfxdbRFpP
JAAWYi2WE3sLEn6BGvHjLZWaxoljk2KUWsa5r0GgV5dcDZXjKvHv6lFlE95QiS3D
NrFsljL6L/+cnoCrwuPfEWNHkIOL/l0TcbuXUZ6TudiZ7lH1wCDurEDnXgTaq1aE
VjAaeTL+ZfEyuG5nvv7LwYp6bzcweY00ByIrkJey0u9nyWvrz5yBtAyc+vPn7Es6
gkke2a79vLGJQXmR0TWB8DlBRwcb84TszkMAywcKAB5htnINCI6Wac9tTGLBkRZF
CoLrPyhxVWBNuQINBGN3T00BEAC1jt8vKEVSX7VjUPRNNKBS9oTldnT3IaZ1Xh8m
oGKX2ezaoQnuLaaKOkKHMPVYpwJNJcLtOP3btSNYC0e61aHmEO2MimSRvskSsd3f
oP5wDg9f4Bel0XZla+SdNjHM/FK6MndM5GppYpgAMhVf+6xU9x9OVTcd4jCFKB7o
+3YlpRNSS3kUDJcMUPJ41qAg37CQyYCV02M581vYsCe/8qYEeihdLnEXBDiYqZ8C
U4BdML83/xv7pqsSs0ZUknul2IcIwpElKMpkb3dYJcKaAac2WyEDAwtFywEPJLGo
HuaEmhD421F9YhVZDmGB9r6yGdTlF5MYUFhpHyB9IdNm8Vv8tWjI8qAor19Y27H6
kNPmnUx5ZEPOHuBtlldpyXZhixssutPb9+0D0xKzzvR6aOgXsAtpUnqi9WxF3OFZ
mR6fIk4d0I4nezauX4fnlwrlNyQ7hWAvRDhYNei8ixpsMrp+0pcq987GpXe+KTGS
eS4Fd8aXBl/iu6ZGPBQ6zNNwEfuDhVOJGO3mf3MDlIlBPctRyZleYApaDqNXF+bm
4bkTSPmFxBSfuPQI34ZEMmXpXuFS0mL6gxwgIL/0VuU9Bqj+tqGmMSi6jtaQCrTD
c78+GMmupj3jl3s5FYRcTdASkrSf2RNLm9f96hcwLp8ail3UTmSvVZ4I5QXMbwe4
YWVcUwARAQABiQI8BBgBCgAmAhsgFiEEvyCtXjX2dRTxMqIYXVsNTpP3cnMFAmUf
3ukFCQPDuIAACgkQXVsNTpP3cnM9Mg//elO/jVsVX99iaVDbVqJaV0n3GzrTRseA
kdgJgoG+gUUWKbwrn1sBCQbUAeJ/cobL54kqAN6NCWY7R4FD8HsOBgXm7//vY2yI
fa0YXGD7FTt6Z5tCVO7fVPB9FDn1lXRD2vnqhnGhqnXQq5XXh3owkjRgBj07Q/FI
7BiPWCuJV/UydNah+u6CtT+abBsFjjki7Sk498BRzhiXrXoFeDxOU34ok7KAtUD4
lmrOfCHXJdywH/YEBPG0WJfRSqUKeTt9bUDw/W4tUxYQZqdRg24BbLSj/7+6e4qy
g2XoO2kUUvobEqenLy6Mo2kEu3DA3zmCQZetBPbjmVEqayK3ZZ1erTTdxWxVx/1W
hkndf/fy69Rkkejlug5IgXVvIBoIJiHmjjkRMhpWS53hNQxkuMLhMK0sptpwpVHk
mpELhOeQarsZQm/VwrvIiedhsdsd1H+rYxcIknxYTMyIKGXfP+TgR9mVh/8FQ6ho
2NDaM9OfcgEJqJKiNn2O+pr926CqecEwZCVI/Lhgm0dm8ZPdMqCercUP1KIiZe6x
gJWFm4smNjQdQafVFVs/l9HLJxK1xRiVHDCTxhVvJ4Lc2K88mV2CZj5bk2+XC5NC
q2dYmcuPpKHKnIVDfOaM+Hqmjq6GVqtQRrB+eA45yUcNIttijcdQeozoCEWGkj7A
t+RRGsorBDI=
=4GC/
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,75 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGMMxw8BEACwMQPTB1vmzBqiDEwO6gVywC/kymkx7EcXmlZxIBBz9OSDebVl
cOTag1ilRATmXg79/X+HGTtNhxAIExAFJF8/OiqYWd93kGACA7mUDnJ9HL0PtGL/
5QWj6DVqF3x+6kIg38Jd0e33q1xX5svs+ruFygreNwIOdbNJa2AsSYvHj7Y1IolP
qZVIt5RApdLH5Nbv6DFnt2AvHaus5qd7dD9mlmx5IKIxrkKaBBfEAMSAlJOZtV3n
ECvHfRkVyap+hGil2ksySfz+2FrUgvQXclfgJSgFVxIiuc7vn4pStlM6zeX6JN+j
6+sHnf+NYTZpSDiWFJZcFEF/dv6dSffkKSX/Qtt9P+UyZkXUJ9eLSa516Ujuayhz
XDYwy/mrz4CE+8jON2DXh4as0ojkyLSrva3DyKBSLC2c4JrOncRo/ulqwFvIDW6L
5LGa5XZkTLTTcfKPkaG7Pic1iFHt09SSxWWo8q+3QKWvQpNvYKy8orSUXGu3Stg2
aIOMt+76RePoQ8G5Be1fcB+M1wd2fSA1jyhc6Xa4da7o23dC4y7+gsAcfx4lFbfn
5JoeN3+arJHtFgFNGpVwVhhHKuubNwyNn9V8XoRiWGACFa3BHeCTnI+ZNhaN/CmI
CPwxnUTTh1Ow5yTs/hTq2oA1XOL+4LYuA+W1WkrXAtsb8nyBSGe77tBpfQARAQAB
tB5QYXRyaWsgTHVuZGluIDxwYXRsdUBzdW5ldC5zZT6JAlQEEwEKAD4CGwMFCwkI
BwMFFQoJCAsFFgIDAQACHgUCF4AWIQTPl4ECrQYx+wg6EzigqBK6IknylAUCZGIR
IwUJBRexFAAKCRCgqBK6IknylJAZD/917PTIdJCdy2CHovVUIWr5MI+YiCpd3ndO
cmlikMTW1N6sX/cw1Mmf77RcodfyAaf7DdcasNedEJ6AsYo5V3S2xx/PBueHY3Rb
7fpVraq0ksokaoAltUeWfD7yx0/HoxYjSG0EIx18J7tPSJklERp0wwu+1y5kXle3
KyN4R1Tg5miLAnQvQx316EOQ4prRnfOUS87KZXRa3stCLyxznbOolSLnV156gBEj
6hV0oo3rkLcFiPm6EdmcfV007EipF4/zPIz4AtJFEbaYuX6aNCS2L8X1shOB0V1R
5bdy68liDtCRCetp/jabOxVuIcXUHzBQUp+la99vJpvCLP9I+oYBcmpeHmno6UeR
t87nmtBJY+nM1oajZiOswZnscMV7/UfyzSFzG5brjeRc4WkbuC9L4UFYOi1EHQtj
o8TyHpFFWm0H/ZNcCbzxEW6H0TEi+iV39p1wTzoRWAqJZl1RdKvjuSyEyRdqrO+k
RaBD2HfTzdHDqV9ZLkOhHMA4XD+h0x8dPn8g/cx1zK+asCGSLUztepLIyq5MHX0q
m3bDiHNys2F2VkVh9rekWSWz3EAXOYz0sDNJqM+kn+9iyKJaPErtaYYehPucyCoj
d9w+c8sn69a0KDIvxEBBmxWVcGqWFtCPzMezuQy6Bq3b4sQQCedg9w3gj/eUMHwu
PP9jQ77UsrkCDQRjDMcPARAAyJ4o0694O1xM9HvkdBJZ8fsi1oiB/ciVv+TVHoTd
MAzKK9J7z52X69tdPPuHvGCel+2RdEPyyFC5+4+D9Y4nmaGf1SrUcd/75kbNPZS2
ZGr4OM4hJdU2mYUoBCOjunsWTTt6tOkGjZDmVYK9BgdUQ03S1y8JJuUzPzsbpahD
JH69Eohmy/1cQ/RTAVTKdyMGgC3O/QKtSNP8fwpHgwsdEEiJ0t9BGhec1ZQeNYyi
8dt+5+lVFufLz3AlKqc2pqVkGoL/xVh/D4tnKcLZtSNzTA8yPcy/KptMmkscO7Q+
Aa7vENtd483KOpELtWsK6Huh9Xal2Nvd1A9Tn/SgXtXyf8n4kUFMVIvvHnGNpWO9
TqQFg6zOTlMYuIJxHNzjdPnnkF1z0JS8ssF7wfcoD7LdkGmHThTM6xCsCGi+TPbV
nEEM4shWW1oPlT6LRiCYo4NFi2hIrd3T8JUDjxwkLwO4wLhLxAakb3s5i15u9CYs
xLgseZscNdzXm20BwAknp9sR+GyAiIjK4iz9kxJybWcdgQNBTce6ZLTQV8zFvTzL
kah1lq1pVm5WcgJYX7zBtoR3iBNJPPRyNjPfzhZcflKSVUummp6tfM0ZO0t6WiPR
8d9TBsK8nyvSGMDwl3eiC+Mwp2TI1MJSk+dvJvXysGl+vldbP06BEwqmLF2eBo+a
w+cAEQEAAYkCPAQYAQoAJgIbDBYhBM+XgQKtBjH7CDoTOKCoEroiSfKUBQJkYhFW
BQkFF7FHAAoJEKCoEroiSfKUrNUQAJta8xaUFvQEoez14xyRI6TDk+9hQz64GZND
QK/1VcLqx20KroWsi72EMZibBslWGn4O3GPmfxHJX6iMjQflIMqmddKtIatC0FGF
B+aq167ujZkj+4wIKXy+Z9l0zZS/gtEUc/q3NeLEswG9b4w0x6IoP4v5y/Eppezp
oE/kNrqen4fHCNWm5aUP2yWOwGDPUTfOThTRowuGqgW7lJ+XSLm97O7Hu7OOCBFD
oQvlVaGDqSra8wQHc9vnMSFJ4DJMACgw7iD+gucLHiiuSli482w0s8eZQH0ZSi61
zNsn7wvsia7+llc6UcKcWpQmWreaEsWn+KluVkDUNvaDoH5vbGYsPrjilI2Ip3+A
0kK8WuOV8klc5j+stz+NBgcOEtTDZFMSW/jRrR1JE5kqVeeQsYYG7QHIDo1ix5u/
EL6MGb6BEEUAv/IEcZlGKAe7Acuk+XdXcHM2bEJYQdbCyGjmvis+NDPJPRqtbftL
wekmFY/qPitdv5eF8Qq6GfvVdAfRQ9GjA/+yavZiMeDs9sC2wGV7yGSaDqgHZFAY
h4oIIJZ1J6GLkrjXj6hp3ThWPoHm+1pj42oHcPuQFGVdpDbmp8UrpjYlzSYoujvb
rsjkXP6I1qNAneTD5KUJgdPsCnVL4rJBIbXEmXXD2M09H97CIOaM95XKJFhexmHR
wZOdW1sAuQINBGMMx3IBEACquMY5L5QIVq2QjLpfitlS1dSitYThlYxCxyhUG7Hl
5IdM5w+PAm45hb/ensn8e/oWXk/W4NoYTlP22KzFwkEeUNlEq21AdYAcb+MwJdCq
F/iLP0qpKsznWio7OU3gBn1XqsdVrpewnXIEH9rkin1YIa+m263lrvLKWOhWiu9d
GyZYlbA3fIivBTad6gplWfMwjfbeS2uxPoLdN1lP7UYWefe9iVXvgVi19omA836f
LRZKi+znHVdvExXVGfSxhF0OOylbjT9gohiaqhCWaIoskRaVqHHTQlqOwcei7XCr
dz94Cmxq1XnkvKA9vNVWyv84i5DTpAcxIA/yEE5BXe3qLgek6H5POx6xjyp7EjOw
533Q01iYBDXTiCzoK8zanPNYqlcwb0tYXfxT8HTSgUeHKQL1990yRIuKiwkK2Yec
FfCvpfz257VAZkVjN8IEfw/WhFxSOwL00pUmTLA/DxVFyHuYvdvEs+FANgXX81v1
eniExslCcHp9HiOK3odVM1eE02V6O1Kwxyp7cooUEDZ610x0eePhvx20ssTm3qSX
dWS1rgZ+ZTzhkwxm8OpSFGDrCgxdUs4tmTtjwcUDeOfTu77ef5t3XTqP9QoCz9Cu
Si3ZfKM9G1FXTcgU9ApEgCqeUA/56RgUjFvwt9TTnC6I71/0E2olIrp3O5B8l1kL
XQARAQABiQI8BBgBCgAmAhsgFiEEz5eBAq0GMfsIOhM4oKgSuiJJ8pQFAmRiEVgF
CQUXsOQACgkQoKgSuiJJ8pQ5tg//XbScKgnNbgSR5+jiTIhRbKB9xX060GEhq7Q9
65zAVO5RthUxo8VN4qMEazntggt/HTDMfK54fuZfKT/aLotiuWBAOgVYM/31unGi
FeQir+/47E6SB0FoZEWfCO5jMPdkfvpDZ1rdLkJ2ow666ktR2fwkQsMZ1QPPu5xf
k1oPy1ouuGusx7mQga0jAqJ6uKy/s1Dw0ndYIFiHMYn7Z5C759O/folBeaXs+AlP
il54rSetCTwmQ959Ma0o+RpjLPaOMLOtI1jDB3n8BPRa5bcKaKFUtX9j7AJ3j8P9
UySaKoHZ79/aTnYudstM5r9EpYVuJpGO0/AjrIrQIxci8yl5BUljSVQC03L5KJZ3
frJNyoZne2odmowLe8O6536gqX+nfqdJUiyt3+Q0qheHZvwqq0Tgge8G75DBm6wD
QgYPRXcTDCerEco4M/MC/d8Df/8L/pUSz6GeIoUqWx6jkpiqWZq7OC/ULhi1Os+T
kIMxDwOyGAPhkZ+dH+tT1MMOs40o7fbiK//JAdMD/R0S5d4gSUhmFjS57Ar1OI4F
CnXiJ2mzl065r1doxAhb5vYnj7pOCn1GM0WknZgb519+cfc2nimFC/V3pAVcLf5N
MTbLyd/uVw9516jAymB6NZWucz1RNZFbGTGUZfCyeJQcOqoQvF9znby45HqctFNP
r5A3scw=
=k+1S
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -0,0 +1,130 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGH4/cgBEADIPhFLAnngQbOpTG6gbarEKQGJ078N/cUeISaQDX9PaMyZ9IzU
yvn0JdR17rTofN8XAAhfWcX5qL09XWE67Sa2M5XFbr/5SPaQpE7Altxjl5QVpk7B
umlwgU0T8Z5nX0B0vpPXzZ5fV7RhvPz0muKbFd2TcqAEasFb55VEDjz+ngfPIw8i
EKWhhu2Ll6OTL6bzGdZ5++Ip7wSkCRorRiifyHHM+P7EtVmBx+HyeqClDjAXzvSA
cAw/G1RdSj02qlavYx1mkCTshUWEft9UNeb6QE0bn4alX+uLYRkEZOkdSBd6Eqgt
4o0Y91vg12INY6WOtBkT/7UhPSZDy+mfa94lfchE035V/Fj/DMeoJekKO0DYIrzi
IRHxzEjgXRdjQ3HOnz/2qPFf1lPMoBZiZniWbvYbAoG9GiRrdLh/pgJpc5JDu/b0
FNYzur6fCWhPnXu2kbMltTMvhFyPv8eK+eTP3HlyNTmX89SnSYFXfDQxtIGO6AVt
ertXnOXBEC4FN0aoaAUrLE3KnU7k63O5Z22dyUuyQSbrKvONe0RgwLX2IaFcv/Dk
36OZgHf5xizY0xThV6geq8HpMECmailEcfYYDJAHI1H4fdWwMU9MjHtvJ4cu8mld
oKDIJdiozFG4NGx5t9Zurc5gImLZoPsWaNknSPn1fHCYAiSm2oIzMV6ibwARAQAB
iQKJBB8BCgB9BYJh+P3IAwsJBwkQnnxhalzbE89HFAAAAAAAHgAgc2FsdEBub3Rh
dGlvbnMuc2VxdW9pYS1wZ3Aub3JnmaI/ED4jxIu79r+Fu8PQ75LRTzHK5dmAViO0
uUe6boYDFQoIApsBAh4BFiEEbK1wndVaMkeGgIupnnxhalzbE88AAAKhEACYzH+U
eorxsx+aYR3qJvwZOY8uiPqUs4zPaufoPJAZFLztNAjLvYcPYj34+v+KkvqPvB2+
svsa9v3J4cwnfc2VnAYsZh14vZzAJXWfkXUHX0xJv60pYwy+QQDEJxyOcSYwpXSN
/Eeq+1qmsS0nb64zdkJCAG6z/MPjl7yOeUjCM3e+LctKxIkiXsSqfGWMFjb6MAKP
CGfpnCuu88GdSHVlNf5cVkTtJhh2NdcSlLN/hDQDn0vUo6Y8pshiB9OYuKEVE08U
fKxLiJz3F/YSRGBnO4FKFk+meVB0x55tK5hg6fQDGlrP0rPOOZ1lHVlfOgjWIiPS
/JGOnF6AGtz7yV1AxdmZsd7H6a+M1FX6gjcgehrndI34wmrrfa/4yMnKZr4udFv2
X//3ivSLF0/8alOh9F+CBkJzxUkvhlKssrcWn4paKYRFWcw9G1374QZ3cGyAonCO
Yrigcxrsi16b16czHBqswtDzK1bjbNwIVffanJBAg79CnNuxX3qJSGBCWl8bLZkK
/PFuRWDMu4VNJKRI3YEU4pWAwtDb+szxwJmSrh4pu0oz88iqpMXr9+RQD7i+Bagi
DNiOsVph6uDdh7QihZmAHF2encFfG/l6bv0mJdvP0Cyk7sBO9pZ19MiBRoSMe4HL
QFcl9h0kps63WLgcq/QpFEB1gYZ+ExFXyQuSxrQgRnJlZHJpayBQZXR0YWkgPHBl
dHRhaUBzdW5ldC5zZT6JAowEEwEKAIAFgmH4/cgDCwkHCRCefGFqXNsTz0cUAAAA
AAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmcixOeEe9R3WK/oCzqo
qmu/eK4cCeGDcqQ+CRyITkpHbwMVCggCmQECmwECHgEWIQRsrXCd1VoyR4aAi6me
fGFqXNsTzwAAEmUP/0wDLtk2BW7qevVTACaGwdl9rmMhLxzgCXzGbkuohdcFqDRm
dtPj+soYVoYOPUBQyLmyk+XliS32zkaaNdt4NgCk7YzK16Me2SNxS8KtoBotgO4A
0b65sCxHn3BfluCaWwY0Lkiu6+/vqjqI4FJyoKnM55C4c0pQr+vYtSaGDA15eD0A
RWiWxRa1CRLqpJcoSn0L8U3W9CC/YkvuFjCqqhZ0xDbf4rnpgTpZoZd3CHHdTiQu
pECue3NOrSsd5pLL03No7sdiZfi5aH4/VOI+0pWGMfL6XWKqln9637EfTwPyR8db
KDlabCNg7ikCJQ3z3cagZSVpijA8sT9xpZ76nsp31vrYAViKzH+t/i7UuIJbqYQp
4TqJ3eTAS9k1XgUp7i4j1TqJAAIZ6N5M4XKy5zblJmHNwWVrDEVbCS0HW59Cz4S6
t0+pT98Z+Onr+WrYQnNodt1SCNo4bMvvMrISlYf4Thx+69MXa88OajpsEYXHIlwc
RResVvBVaA9fwjC5lCYBWPrDlwwREx9amqzxLoVW48A5pZlfF/kmfb0UPoL5ItJ1
J6iDqSkc95c43++X1rmrOeFa2Kz/SMlxFUO+2r0xR6Ap1rCyT5rKRkDLl3xTK8zC
0KotXRgZI9GkQC5aH/OiNedAzqdJqKDOyLDvmvljgrLnrioDrSgX15V5tqG0uQIN
BGH4/cgBEACwdio4gvADawt8HLtVvAWLY5vb32yb7KeFT9B5lWOvLnkg2KQ0hU1a
EsHk1TIom+SA1nJ6cuLtixVfY/XF3iQH8vsxHf29nhBMRoC+PX85bNjinDv4XNPj
gY0DVTrsx9q9MKc2ohBRZE95xhCr97Uyyb/JFuo5GrcQVIJMi+aGU+5DNm3/VhhB
BWFdDEDW2OtvJsfySbmHm1JoPt50c5dBoq+O4R0nvAvapg6Ct98yFGJmIpFSedxH
XH0YzGSG8ru7nOifeX6knARxSRdC3XtciRQQDHHQA78oKSGCQkyBpRS2VIhF6Kuj
3wmN8nTvrvmQDyu3N8eK76psVUrIYdGD2syPV1JQ/6wKmQLbDKDibf2hJ5tCEIvM
P1ipxkKn0OFPKkgK+KmyO6K6r2bpSzpSIm7P0RBrbjWIEqxUmiLEEu4Lo0HwoNhT
EmWBxMtNWuXo44+buBpxwqCXY0l9LBWkr3S3S5ldc/buz7+BAQfZL6WjR21u7wPJ
pEcV1ze54xuixitSXCkxnyoUc+6GCcNBKnJPXd07q3EUPdKzPXBLxb5I15q6i13c
yMlgXqdjTxHXTW1ohyW2+rsUdwFfTcYlidHH0XPRvBym1YtPhV6iLSpE5Xhh8ilQ
mLMrWqx7LvKuwVX2chx2jKv2CwhBPkIlShJoxi8+g3P05gpfTM9rFwARAQABiQUC
BBgBCgL2BYJh+P3IBYkFo5qACRCefGFqXNsTz0cUAAAAAAAeACBzYWx0QG5vdGF0
aW9ucy5zZXF1b2lhLXBncC5vcmeNP0xFxrxhSaN/0u4mIUg30aag1kp/BKBtxhuT
3XlkugKbAsG8oAQZAQoAbwWCYfj9yAkQzfLDgemnUb1HFAAAAAAAHgAgc2FsdEBu
b3RhdGlvbnMuc2VxdW9pYS1wZ3Aub3JnJPr+uwSdB9jKBP14qnNQvxQpV7dypr/K
45bf6+Ia9fIWIQRjpa++/6ZXGLDFAb7N8sOB6adRvQAATc4P/1QINlCt3o0AwkKr
+JIUTaVxWTvqDIRfUTlQWz5JhvIOXM2FE2qUP8ArG7kw1/LYheMKM2ftqh6EHodG
5yT9N2rucbVfLAzLxMuXjJoCEBMzcOm8Uf8YK37OoaFhQ97r4eoe5TpSgJlKS1Lm
GRJVDC9L7nbUkCOfUZoksBPmVyL4mywELbTJi7nSe+iPm7yaO5AYUgXoxFZbnRxu
kaf7ngxRdFvnqydlf/idUBeqfAwlJwqu5ctjLlqgag+KVyCf9lt0yIFf2sn+Tz/7
6McYjXBk/7dmeEqXmxduetRmAAVMmsKBGqoeKwskZGHPB9psalDe2xLvnK4ABuw2
4aH7MmyE/OnxAQPBjzR0/8K0m5AAzm+C3tUzgJ0RPwEHree2h4WOcUP2yTCq7xwI
fxvmLBbhEjpBxb0mr3zTYsYtFW044YE/iE6kh9KX3jRzomvL4firkkNJffBhQJJJ
tAXb2nSmnunKH2R6C4V2XYUnBuuCyOYitqmoL0a8h2Qg3u1NCC5xtuhQaS/c2QkT
+icb7FK5XlWOSbf+WwbRTBwqrdR2dsuUkCAlyRxnYfsqPiWiTVn2l3biZOyN0sEB
3I70j8LxS+Lvqibbk/ecESc9vZUgWROHV3buRoW/4+238AF8MAFhsHwA2Dksc5X4
B7N6KPUp4PEGzu5D20JmvIiym02nFiEEbK1wndVaMkeGgIupnnxhalzbE88AAKSm
D/4/Ssb+JGa/H9UXUZpoxe5mmyfAmL6QtxtlysvusY+AUMPBxJTXonczm11J0xW8
I8iVXan7OVodMruuMFfrOWUbybk+uuHECfVQ9woKev2XL3AoxRtSwZfXuunaJ4f2
WtPnx9CIyu/OBjS8R+E+PFsK1u6txPmrh5FW4iieXBlpkexgPpChlg4HRiIitUpv
dT6ba2nyEd7q6jKnza8PCwkihyMjGiaGKRnmcd40SsyXfbg0hbdbHqjV/KhbdgJr
pXku4NUNHW/HTc8R/VVteC8NRGVQ5LOrKOtqg6FZ1vIQdQrjjPb/aiE+Lye7/SV5
RwhlXGIRMLvgM15ESPgpCP9PoT5Ga+G3/8uSzlHrKQNmFZ/Ni/QR0xzehENkoF/y
Jvy0kFfuvfXA68H0f/MJtVQ+CMWQf1dsDi7PAaDRxRk7gxsWKl7aYhuz/B9CNkoS
1O9QE8hUJHM3c+b5XhXCdw6G6QbhEsztE1A08XUN3Zk26TbD9gnZkwKaB+Gnk9IN
dSVzZDnFkUfWiLDi3URC5rn+Hrf2/mBBGtwOJJMi5+SwRJShJvR2XO3SephOxQLE
UNOMtJ7FBZk1jNyv5F1lbG4rAOGlv8wPHm8kFijSTEWvLY2EUgNzIkC8VryhV9Um
xDIWGRBE7m5J/pG8OIsq21xSDu2H+lvwiTt0GY62RnzPabkCDQRh+P3IARAAuUFF
AoWst9HmwefnNEIsi4Nk5pMygof0jMDZFqQ5mPOzi6krbwgTUZu4Il0w5pfKJt2K
88dQOC+kSEagdpOAEp3q1xVPcd1GduYqIlRghHzS1flfQBhC2PZOrByFn695zNZT
TPTxe38jsQBGHGJeC+Mg2thejZJo2XHaLYM5gF0CFXdUivCz1x9dkx+fcPHMmVIz
W+DS5+KJR/N8wh2Uw/VF6aWZikrbqZXrSx9aqdZpRPnyJ4OILhCXV2JCWUlS8l+k
5eEiQwi8zqtXlp0mOjJEV1HvWzAduvKqXa3ArUHR8WsKZpWkzLl3Zy0Pbxt+4u4D
uaWG9bFEpb78qSCMraKHs/ZQKuwdHQgeBy20x7GsN7grNKXT+xVIG/MQVlGGm1iM
O3MxSYs42mYidp3Dj8wamOAD69LWozPyLyCF+cpEH+S3E2C62480kiXEXEXhKTfj
WzbzggHy9CNa2TLFhcD+G7CxAYxwNyEuf7BPIr9hjNm3HBkczwYoKnhB2EJMarnW
1LmSClmnqPyJrfR9SDEdBw2vF67tc/wcDv3ctPnv0H9x6EqhxQx7IE1D96Bi4+Cm
s7JwyfqX4fQlBeBT2c1MiXfwyEugaxKVX/h9olfuBEi+ZoGryyTdM83Q1klMvI1l
e4ffykqzHA2H14CUDbnLCe3S5a01WSyCUa72bwUAEQEAAYkChAQYAQoAeAWCYfj9
yAWJBaOagAkQnnxhalzbE89HFAAAAAAAHgAgc2FsdEBub3RhdGlvbnMuc2VxdW9p
YS1wZ3Aub3JnhjfNA1exO+rxxBgIipULe5IoQ7FSuQWtKdXBteFLyUQCmwwWIQRs
rXCd1VoyR4aAi6mefGFqXNsTzwAA+4MP/j492+RXBje+9UU0Ez2yIll4j4JqL8nP
A8PibBSpzq4y/F3OPrzTCG1nmgvFGKNKcyGpKncG7bEt5Yh8II8UEwjuiwfMhanq
IBfnhxYtQWjKKySgDi7TssQUoQA0DLWLP7QPnAZ1hMWQpd2v8LVQBlwH6hIauspp
NEMiBrZw0BCitM46YhNfCkGfcD742j9P33ju03GQtEct7lxWUCQf09s9ZxUhc1u7
/izQgyMoS/lmokdupUjrCCV3Au/u8xgUXka9jX8tz8AWHPvx/IpVdd5LZm/THs+P
vfeStUk6zqA3viVmCfSD8nX51vrs64bNragVWcvi034shAobgysJeKNmVdN0zZyG
zO8G6B0aDv5/D/sO8vyBTAxxtoiSW+plvsQaOM4uhGBoqbqXff2GYDViy/RVsjqW
x574UJgJC5H7q+uUkHaN+buRxmpLgrRmaurYCSwnkturraNgmPjevosNh/kr8JmS
z5Q1NWaWSZvPPp2Jh7TeVxeCiCZZAcq/SgkMMO43raPRG8iCrJy1/u+BoI9sz+qM
LWa+8ACRDdXJ7fRnqkorlrtPUwRXruiS8subfhHaVYd2a3+r/7lw8TRfRuoKKDT2
4TBr499a2QBmqL9bzlkctB96Rgo+JG8SS8iEAqTk9x0SqgTXVgjtCBrXV7DwUTR/
DbdskhnT1dQeuQINBGH4/cgBEADnIQ2mZJT13YuBUOLM4Xlkp1165nlKvSC3oNE2
Z47sKmcgwgKwPJssd1WsmkKDOsoxsvS6FJiAbmCQe/EdwT4dolRpVjczpp9p+w6w
jtTXsWPsSUDbT0ZD8IOmOr24F8Z0WY/ho1Bmm3LwCMbW30KROpZn9VWyzGT6QTGw
iZF/lyItsdGcYC2qgaXJpI0sEc5W1WK4ozpTu7z3BtzpyjOvVAQirF7Dp2yU3dLB
93vj+/BYnB5F/1cmTWfu6lGRtO60E0j9DSH20AqTGfsJI4fPM7tbJnT2Fhj+MS8b
Hf6iEnh2QwlUSUdMlJAxXVu1XcLiSbbHXV4Mh7gCuGB0p0rMGiBg9W/t+D2dYsBQ
xuXq8fT4iqlaHaUwoVYtsDTMIg3c17mcYni5VRk2d49qpva6zR0zU3v0X2YtvHWl
CCYBmjWSS/8X8FUgHVOaCEAOjTU89TvG9uvxXoqO64Wznx7sjywkaWuwmNck2K3x
lhccw5iy+K1xxalKgcel6nMxdoBuW2RFRAYCCAT8IH+ONzLOcGj/+sRJx+bl18qY
WcZGcYA9IbfJCNXuQHX4uRLjtml+zNac3Kefmw1jyBRUUkWbdcAsW3kvf3+CcP62
URCk+eFMywnGk8N6UX9akSxgMKTR3IHuqZLHtzbgUxgeRHCLUid9GwsqDmu3fC8f
LRK7sQARAQABiQKEBBgBCgB4BYJh+P3IBYkFo5qACRCefGFqXNsTz0cUAAAAAAAe
ACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmct+zTDHUZ6Bm3R8Ppvd/bN
JZM1ZIMgVssw3CAr4P+FsgKbIBYhBGytcJ3VWjJHhoCLqZ58YWpc2xPPAAAk3w/+
MI9j1sls78BhTKrkUfI4S+DrQHQ5Fa5n7mBPaAVGUj/rT8UM1YZejWUvB7Eu6qlf
e1+Ukl0E7WEqyLr1P6WOh+kW+k199gWqM/e5NcZQdRD5G99eXtC1iVdnhmZyJP73
EWp5HSFdejNpnJ2S2LqPHtFSJt1hoQScdKYHOegIJwehfBOPD/3c4zQyXZ/8EZHV
IjdvmS4QVcymuEiu5W0gZq6Uq5YfPtf3uWEfwXtjanXJCK3jYWVUVu99M73p1Xj5
q9aRKdIWHu/2NMliZCtwYYbwVFLuDAN86+zXslP1As+AcxPbMvpjkt+AivhS9g6U
uvov6ou4tjUsfHZKiKU4K2u96QkaprM0oPD73GX+fb5VnnaMh7Jh9Tkptx4VgSuZ
WNHenrqhPG+GhaOdksSLFyrcn5jQcTx2BDaOFzeWtX5R4Iu1tVAqdEDJhcK+36Ee
VoM+E7eppzA6bWRx46gKK2RMkqLbeJJJSn9kR/cxDPlkBomRGHXEKLf4ljXWG0Us
G5iBKU9bU1oVH4eathSJ/8qURDiEaXmGa8Q/Ojo8mq/uwdDWzniCAr7TLHfSWQMb
1pmd3mk2SjsOIogMeHJxNK5g1QXHkFGbHcqGx2R3G2aHKcUgYfwRJc0m2gpp7Ywx
u6jTlXM0VVxcL3hfAAom3jb0NC/f0+wqcER+OZQ5UGw=
=jlA+
-----END PGP PUBLIC KEY BLOCK-----

View file

@ -5,9 +5,38 @@ ssh_authorized_keys:
    name: mandersson+3DB765E9ADBF28C9068A@sunet.se
    type: ssh-rsa
    user: root
  mandersson+0E7E545753:
    ensure: present
    key: AAAAB3NzaC1yc2EAAAADAQABAAACAQCzZ9VleeR37xT/rUKQl8IDE757FVT9EmdaYwUYkT01DCcoiCySWWuNvNq6zw6GCBrnxabrUd4I/rxb4+39MeXKVMNIY+U3QLPgh5ppQSnKdZEuUUxBU76JoEc4PKtVyzTAWjJkt9o6UP7NgtR9oWRkTqzuCqJm52XMipdfbM2fnxVndJPFQTM/gxNoQQY4obPpiSvlsYiA30OzCkAoWY8xAogNxW6m7Q2aP41Vv5/FhNO7yRjc0Qrpy37l7JTNF+rB3F6KEpsf9oSSdmRNpgmOn4TFqGbb34bjMQMnEuQk85x9h4Wkne0aYbLF9kj/GLVqgZJmfRPXAKHINxwwO78Kf1k4q7FhRDKuGYWLTlSqo5my9Gpm7KjCmY0w/3jwaC1xTneT9R5JZ4e6k07iyyApSzdNwEysUgXH9ushH24JcGSRwrsrm0mZlsJZ8sFcrHGgQVUbXwYmoqdvRtKl9ZxlIc0lB3GUpwgsfnpB25MPvG45FUeGRO6sefmgP+ARorTC8JD4xaeQHFm1pO3rD9QbRCC3tTTobECO7IVFQGeLQuN3jQmNfGmqxBtUR/7kiWWTWTmNf1e09pcsQ+zbOivatSEpg6Dt8y1U8cnU/wrINluEsdM5y7UH+ylOeAGXFJdRTYapDr0f8YVCnNM0+RV8w0qmemPl1qmMmjKMw+zQoQ==
    name: mandersson+0E7E545753@sunet.se
    type: ssh-rsa
    user: root
  mandersson+B4D855B5:
    ensure: present
    key: AAAAB3NzaC1yc2EAAAADAQABAAACAQDcxRqdDF5X9R8keP5EThzsMhKn4PLXEdPgbWDE9unrHCD+qVpSJpZhucVMieb/C/VSdyCn7Xx7cd5N6HO6ANFoMQ0w7c2HHyHTXnDiy6pbtRs18+UV/xDaokfUlrw8uRkgK7ZNyIz3GpBQGjjS+pWbpiLfbVybeQZb3zbV1l68aPP4a+51synM10M+whB64NH+iRuOjDLE50zVdNBIdXZ0wDCNljf2nDxwNLezVd6uOwDECF04IzEp8AmQavO9Y8548BOm3p2ndNCXdg6tK+KOmsMD8BWxNAMOPYEQniyFh4dDQe+kFQsM4X//diU+Y+D70r2VWmtWHaWmBYutJmgzplBGJ/Vwy+MFfCtqWx4Z/oqUJALZelUrUbmYXnnwnVVLw3WIV7I31zaX1seBkDkyo5h9Kf4PK5dleNUKk9G/YrjsuC/OrP2n7wh4O9vAfoLaOh4reH2VbH1UyKmNLC2K9hEHsTSvfyvVaGevz6e19wKGyipG+fPNIL6a1jZbIpGpZIg2LsnaiKxq1gmlOVrIz4Re8XVpsMEAC6TW2Mg/D+WJmf1N3b6h7Zli8XYZ4pHo42KUElqauwpaGvDtsW1Rt+mm6lvwcvZbp0hVmeKLeksoa8pgQHy/AASqAFk7guympoSeut/s2fqHAPDmz9s6aMM7q6x9VFeWsi66H9Lmzw==
    name: mandersson+B4D855B5@sunet.se
    type: ssh-rsa
    user: root
  pettai+07431497:
    ensure: present
    key: AAAAB3NzaC1yc2EAAAADAQABAAACAQDnIQ2mZJT13YuBUOLM4Xlkp1165nlKvSC3oNE2Z47sKmcgwgKwPJssd1WsmkKDOsoxsvS6FJiAbmCQe/EdwT4dolRpVjczpp9p+w6wjtTXsWPsSUDbT0ZD8IOmOr24F8Z0WY/ho1Bmm3LwCMbW30KROpZn9VWyzGT6QTGwiZF/lyItsdGcYC2qgaXJpI0sEc5W1WK4ozpTu7z3BtzpyjOvVAQirF7Dp2yU3dLB93vj+/BYnB5F/1cmTWfu6lGRtO60E0j9DSH20AqTGfsJI4fPM7tbJnT2Fhj+MS8bHf6iEnh2QwlUSUdMlJAxXVu1XcLiSbbHXV4Mh7gCuGB0p0rMGiBg9W/t+D2dYsBQxuXq8fT4iqlaHaUwoVYtsDTMIg3c17mcYni5VRk2d49qpva6zR0zU3v0X2YtvHWlCCYBmjWSS/8X8FUgHVOaCEAOjTU89TvG9uvxXoqO64Wznx7sjywkaWuwmNck2K3xlhccw5iy+K1xxalKgcel6nMxdoBuW2RFRAYCCAT8IH+ONzLOcGj/+sRJx+bl18qYWcZGcYA9IbfJCNXuQHX4uRLjtml+zNac3Kefmw1jyBRUUkWbdcAsW3kvf3+CcP62URCk+eFMywnGk8N6UX9akSxgMKTR3IHuqZLHtzbgUxgeRHCLUid9GwsqDmu3fC8fLRK7sQ==
    name: pettai+07431497@sunet.se
    type: ssh-rsa
    user: root
  kano+0DA0A7A5708FE257:
    ensure: present
    key: AAAAB3NzaC1yc2EAAAADAQABAAACAQC5Llc/yl585Uj1CcJPcImWKFRkLOL1OhHhIHcVgj90eqoYz0vtmaw+MzlAj7DgwdtXb1WRAjjoulLZhEkHQ6iL9VePMJFqxN+YKvl+YZnJuOIAoH0CvS8Ej0TzZV2wuhchrWo5YrhVqi9PfFEt5xSHq/B0EFl797R6bFF75g0OE0EdJxtd1UmKQLJxtn/6gZoa7Z4ZuZqm8lL8cpBdm4qWFUGaz8CpCVwuGK9mdoszU/74tWkEcKnYD2DEIC0B/lZ9BeluRgw3Qf1Grf8G9D44OjbB+QkuiO34ru2hVKjTrfCnDEq+pfPzoNXVVUIlAxvoOqjCAnKZv080cJq3fYwjMkMTfU4JaH9y+Byidft1wcgV0T2aayUBMEuF6FbblUhLfhi5C04IfnCWYarquNfLkGy1LnVcejDG17o77Vz8oLlJ8kThMPdOt8hbOZjrdO7y9+Olk0QPYme8AW0sQTthM4+5mlQ3bHIX40QRoA6xm4+gPISqZQhdEmHR9iialCsx4KV2qpBkeNsvnBuC54Ltwmr5/nNSpKkfPJ8t7wKe42DPhxvg1Tb+GV6YIhDYJaHzbT1OVLO9X9YsjKGxtF6kxo46+0rOx3FDfYfG77qKKc3XmDaJLUcwVHO+PlBAWnfvMuWzSLWFduOHvm9gb49jsxw4rAB8iYLO8YHv4eqkhw==
    name: kano@sunet.se
    type: ssh-rsa
    user: root
  mariah+CA747E57:
    ensure: present
    key: AAAAB3NzaC1yc2EAAAADAQABAAABAQDLQfL3uYsqjzkKOxn9nhjDHeWdWQ5SRwcPzq7gINcwJ7omA5c7wJ4RKDqBPihJ9tp2rgM6DKKGxtSyjO6LFhkGNa86uub2PLS0ar+aRobPZ6sOeASqHbO3S1mmvZZWTQ30AFjtY98jjlvfKEI5Xu1+UKyQJqK+/UBVKlPaW6GMSYLr9Z5Uu4XS/sBPdL/ZtR95zDO9OKY8OtTufQi8Zy3pl4Q3xcOsSLZrKiEKMYDCLPlxytHD8FDDYLsgiuPlbF8/uVYYrt/LHHMkD552xC+EjA7Qde1jDU6iHTpttn7j/3FKoxvM8BXUG+QpbqGUESjAlAz/PMNCUZ0kVYh9eeXr
    name: mariah+CA747E57@nordu.net
    type: ssh-rsa
    user: root
mgmt_addresses:
  - 130.242.125.68
  - 2001:6b0:8:4::68
  - 130.242.121.73
  - 2001:6b0:7:6::73
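The key: blobs above are the OpenSSH form of authentication-capable subkeys from the PGP keys earlier in this change (the kano entry, for instance, carries the same RSA material as the subkey beginning mQINBGDHZnkBEAC5Llc). A sketch of how to reproduce such a line, assuming GnuPG 2.1+ and the key present in the local keyring:

# print the authentication subkey in OpenSSH format (the key id is an example)
gpg --export-ssh-key kano@sunet.se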

View file

@ -1,3 +1,77 @@
# Note that the matching is done with re.match()
'^ns[0-9]?.mnt.se$':
  nameserver:
.+:
  sunet::server:
    ssh_allow_from_anywhere: false
'^k8sc[1-9].matrix.test.sunet.se$':
  sunet::microk8s::node:
    channel: 1.31/stable
    peers:
      - 89.47.191.38 k8sc1
      - 89.45.237.236 k8sc2
      - 89.46.20.227 k8sc3
      - 89.47.191.230 k8sw1
      - 89.45.236.152 k8sw2
      - 89.46.20.18 k8sw3
      - 89.47.190.78 k8sw4
      - 89.45.236.6 k8sw5
      - 89.46.21.195 k8sw6
    traefik: false
  sunet::frontend::register_sites:
    sites:
      'kube-matrixtest.matrix.test.sunet.se':
        frontends:
          - 'se-fre-lb-1.sunet.se'
          - 'se-tug-lb-1.sunet.se'
        port: '443'
'^k8sw[1-9].matrix.test.sunet.se$':
  sunet::microk8s::node:
    channel: 1.31/stable
    peers:
      - 89.47.191.38 k8sc1
      - 89.45.237.236 k8sc2
      - 89.46.20.227 k8sc3
      - 89.47.191.230 k8sw1
      - 89.45.236.152 k8sw2
      - 89.46.20.18 k8sw3
      - 89.47.190.78 k8sw4
      - 89.45.236.6 k8sw5
      - 89.46.21.195 k8sw6
    traefik: false
'^lb[1-9]\.matrix\.test\.sunet\.se$':
  matrix::lb:
'^mgmt[1-9]\.matrix\.test\.sunet\.se$':
  matrix::podmanhost:
    rootless: true
    rlusers:
      - matrixinstaller
'^k8sc[1-9].matrix.sunet.se$':
  sunet::microk8s::node:
    channel: 1.31/stable
    peers:
      - 89.47.190.119 k8sc1
      - 89.45.237.43 k8sc2
      - 89.46.21.148 k8sc3
      - 89.47.190.103 k8sw1
      - 89.45.237.161 k8sw2
      - 89.46.20.60 k8sw3
      - 89.47.190.237 k8sw4
      - 89.45.236.55 k8sw5
      - 89.46.20.191 k8sw6
    traefik: false
'^k8sw[1-9].matrix.sunet.se$':
  sunet::microk8s::node:
    channel: 1.31/stable
    peers:
      - 89.47.190.119 k8sc1
      - 89.45.237.43 k8sc2
      - 89.46.21.148 k8sc3
      - 89.47.190.103 k8sw1
      - 89.45.237.161 k8sw2
      - 89.46.20.60 k8sw3
      - 89.47.190.237 k8sw4
      - 89.45.236.55 k8sw5
      - 89.46.20.191 k8sw6
    traefik: false
'^lb[1-9]\.matrix\.sunet\.se$':
  matrix::lb:
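As the first comment says, these patterns are fed to Python's re.match(), so they are implicitly anchored at the start and an unescaped dot matches any character; note that only the lb and mgmt rules escape their dots. A quick sanity check, assuming python3 is available:

python3 -c 'import re; print(bool(re.match(r"^k8sw[1-9].matrix.sunet.se$", "k8sw1.matrix.sunet.se")))'
# prints: True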

View file

@ -21,6 +21,11 @@ node default {
  }

  $ssh_authorized_keys = hiera_hash('ssh_authorized_keys', undef)
  if is_hash($ssh_authorized_keys) {
    create_resources('ssh_authorized_key', $ssh_authorized_keys)
  }

  # edit and uncomment to manage ssh root keys in a simple way
  #class { 'cosmos::access':
@ -49,3 +54,11 @@ node default {
  #    proto => "tcp"
  #  }
  #}

  if $::facts['hostname'] =~ /^k8s[wc]/ {
    warning('Setting nftables to installed but disabled')
    ensure_resource('class', 'sunet::nftables::init', { enabled => false })
  } else {
    warning('Enabling nftables')
    ensure_resource('class', 'sunet::nftables::init', {})
  }
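A rough shell equivalent of the branch above, useful for previewing which behaviour a given host gets (this assumes the hostname fact equals the short hostname):

hostname -s | grep -qE '^k8s[wc]' \
  && echo 'k8s node: nftables installed but disabled' \
  || echo 'other host: nftables enabled'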

View file

@ -0,0 +1,154 @@
#!/usr/bin/env python3
""" Write out a puppet cosmos-modules.conf """
import hashlib
import os
import os.path
import socket
import sys

try:
    from configobj import ConfigObj
    os_info = ConfigObj("/etc/os-release")
except (IOError, ModuleNotFoundError):
    os_info = None

try:
    fqdn = socket.getfqdn()
    hostname = socket.gethostname()
except OSError:
    host_info = None
else:
    domainname = '.'.join([x for x in fqdn.split('.')[1:]])
    hostname = fqdn.split('.')[0]
    environment = 'test' if 'test' in fqdn.split('.') else 'prod'
    service = fqdn.split('.')[1]
    host_info = {
        "domainname": domainname,
        "environment": environment,
        "fqdn": fqdn,
        "hostname": hostname
    }


def get_file_hash(modulesfile):
    """
    Based on https://github.com/python/cpython/pull/31930: should use
    hashlib.file_digest() but it is only available in python 3.11
    """
    try:
        with open(modulesfile, "rb") as fileobj:
            digestobj = hashlib.sha256()
            _bufsize = 2**18
            buf = bytearray(_bufsize)  # Reusable buffer to reduce allocations.
            view = memoryview(buf)
            while True:
                size = fileobj.readinto(buf)
                if size == 0:
                    break  # EOF
                digestobj.update(view[:size])
    except FileNotFoundError:
        return ""

    return digestobj.hexdigest()


def get_list_hash(file_lines):
    """Get hash of list contents"""
    file_lines_hash = hashlib.sha256()
    for line in file_lines:
        file_lines_hash.update(line)

    return file_lines_hash.hexdigest()


def create_file_content(modules):
    """
    Write out the expected file contents to a list so we can check the
    expected checksum before writing anything
    """
    file_lines = []
    file_lines.append("# Generated by {}\n".format(  # pylint: disable=consider-using-f-string
        os.path.basename(sys.argv[0])).encode("utf-8"))
    for key in modules:
        file_lines.append("{0:11} {1} {2} {3}\n".format(  # pylint: disable=consider-using-f-string
            key,
            modules[key]["repo"],
            modules[key]["upgrade"],
            modules[key]["tag"],
        ).encode("utf-8"))

    return file_lines


def main():
    """Starting point of the program"""
    modulesfile: str = "/etc/puppet/cosmos-modules.conf"
    modulesfile_tmp: str = modulesfile + ".tmp"

    modules: dict = {
        "apparmor": {
            "repo": "https://github.com/SUNET/puppet-apparmor.git",
            "upgrade": "yes",
            "tag": "sunet-2*",
        },
        "bastion": {
            "repo": "https://github.com/SUNET/puppet-bastion.git",
            "upgrade": "yes",
            "tag": "sunet-2*",
        },
        "nagioscfg": {
            "repo": "https://github.com/SUNET/puppet-nagioscfg.git",
            "upgrade": "yes",
            "tag": "sunet-2*",
        },
        "sunet": {
            "repo": "https://github.com/SUNET/puppet-sunet.git",
            "upgrade": "yes",
            "tag": "stable-2*",
        },
        "ufw": {
            "repo": "https://github.com/SUNET/puppet-module-ufw.git",
            "upgrade": "yes",
            "tag": "sunet-2*",
        },
        "matrix": {
            "repo": "https://platform.sunet.se/matrix/matrix-puppet.git",
            "upgrade": "yes",
            "tag": "stable-2*",
        }
    }

    # When/if we want we can do stuff to modules here
    if host_info:
        if host_info["environment"] == "test":
            modules["sunet"]["tag"] = "testing-2*"
            modules["matrix"]["tag"] = "testing-2*"
        # if host_info["fqdn"] == "k8sw1.matrix.test..sunet.se":
        #     modules["sunet"]["tag"] = "mandersson-test*"

    # Build list of expected file content
    file_lines = create_file_content(modules)

    # Get hash of the list
    list_hash = get_list_hash(file_lines)

    # Get hash of the existing file on disk
    file_hash = get_file_hash(modulesfile)

    # Update the file if necessary
    if list_hash != file_hash:
        # Since we are reading the file with 'rb' when computing our hash use 'wb' when
        # writing so we dont end up creating a file that does not match the
        # expected hash
        with open(modulesfile_tmp, "wb") as fileobj:
            for line in file_lines:
                fileobj.write(line)

        # Rename it in place so the update is atomic for anything else trying to
        # read the file
        os.rename(modulesfile_tmp, modulesfile)


if __name__ == "__main__":
    main()
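Run by hand, the script regenerates the file only when the checksum differs; a usage sketch (the filename is hypothetical, adjust to wherever the script lives in the repo):

sudo ./cosmos-modules.py
cat /etc/puppet/cosmos-modules.conf
# expected layout from create_file_content(), one module per line, key padded to 11 columns:
#   apparmor    https://github.com/SUNET/puppet-apparmor.git yes sunet-2*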

View file

@ -20,8 +20,9 @@ if ! test -f "${stamp}" -a -f /usr/bin/puppet; then
        puppet-module-puppetlabs-apt \
        puppet-module-puppetlabs-concat \
        puppet-module-puppetlabs-cron-core \
        puppet-module-puppetlabs-sshkeys-core \
        puppet-module-puppetlabs-stdlib \
        puppet-module-puppetlabs-vcsrepo
fi

View file

@ -0,0 +1,3 @@
### Patch the default FelixConfiguration resource to enable WireGuard in the Calico CNI.
kubectl patch felixconfiguration default --type='merge' --patch-file calico-wireguard-patch.yaml

View file

@ -0,0 +1,2 @@
spec:
  wireguardEnabled: true
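After applying the patch, Felix should report WireGuard as enabled; this can be checked with a jsonpath query against the same resource:

kubectl get felixconfiguration default -o jsonpath='{.spec.wireguardEnabled}{"\n"}'
# expected output: true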

View file

@ -0,0 +1,6 @@
# install the cert-manager, ingress and dns addons
microk8s enable cert-manager
microk8s enable ingress dns
# create the ClusterIssuer and verify it
kubectl apply -f clusterissuer.yaml
kubectl get clusterissuer -o wide
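Before creating the issuer it is worth confirming that the addon pods came up; a minimal check, assuming the addon installs into the usual cert-manager namespace:

microk8s kubectl get pods -n cert-manager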

View file

@ -0,0 +1,16 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: someemailaddress+element@sunet.se
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: lets-encrypt-private-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: public
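An Ingress opts in to this issuer via the standard cert-manager annotation; a hedged example against the registry ingress defined later in this change (cert-manager then provisions the referenced TLS secret):

kubectl annotate ingress matrix-registry-ingress -n matrix-registry \
  cert-manager.io/cluster-issuer=letsencrypt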

View file

@ -0,0 +1,9 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole-read-namespaces
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
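The role only takes effect once bound to a subject; a sketch (the binding name and user are assumptions, not taken from this change):

kubectl create clusterrolebinding read-namespaces \
  --clusterrole=clusterrole-read-namespaces \
  --user=matrixinstaller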

View file

@ -0,0 +1,65 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: matrix-registry
  namespace: matrix-registry
  labels:
    k8s-app: matrix-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: matrix-registry
  template:
    metadata:
      labels:
        k8s-app: matrix-registry
        kubernetes.io/cluster-service: "true"
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              k8s-app: matrix-registry
      containers:
        - name: registry
          image: registry:2
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 200m
              memory: 300Mi
          env:
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              valueFrom:
                secretKeyRef:
                  name: matrix-registry-secret
                  key: http-secret
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: registry
          readinessProbe:
            httpGet:
              path: /
              port: registry
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: cephfs-pvc
            readOnly: false

View file

@ -0,0 +1,31 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  name: matrix-registry-ingress
  namespace: matrix-registry
spec:
  defaultBackend:
    service:
      name: matrix-registry-service
      port:
        number: 5000
  ingressClassName: nginx
  rules:
    - host: registry.matrix.test.sunet.se
      http:
        paths:
          - backend:
              service:
                name: matrix-registry-service
                port:
                  number: 5000
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - registry.matrix.test.sunet.se
      secretName: tls-secret

View file

@ -0,0 +1,7 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: matrix-registry
  labels:
    name: matrix-registry-namespace

View file

@ -0,0 +1,13 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: matrix-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs

View file

@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Service
metadata:
  name: matrix-registry-service
  namespace: matrix-registry
spec:
  selector:
    k8s-app: matrix-registry
  ports:
    - name: httpregistry
      protocol: TCP
      port: 5000
      targetPort: registry
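With the namespace, PVC, deployment, service and ingress in place, the registry can be smoke-tested from any docker client that trusts the certificate (the image name below is only an example):
docker pull registry:2
docker tag registry:2 registry.matrix.test.sunet.se/test/registry:2
docker push registry.matrix.test.sunet.se/test/registry:2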

View file

@ -0,0 +1,26 @@
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: health-node
  namespace: health
  creationTimestamp:
  labels:
    app: health-node
spec:
  replicas: 3
  selector:
    matchLabels:
      app: health-node
  template:
    metadata:
      creationTimestamp:
      labels:
        app: health-node
    spec:
      containers:
        - name: echoserver
          image: k8s.gcr.io/echoserver:1.10
          resources: {}
  strategy: {}
status: {}

View file

@ -0,0 +1,32 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-ingress
  namespace: health
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  defaultBackend:
    service:
      name: health-node
      port:
        number: 8443
  tls:
    - hosts:
        - kube-matrixtest.matrix.test.sunet.se
      secretName: tls-secret
  rules:
    - host: kube-matrixtest.matrix.test.sunet.se
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: health-node
                port:
                  number: 8080

View file

@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: health
spec:
  finalizers:
    - kubernetes

View file

@ -0,0 +1,25 @@
---
apiVersion: v1
items:
  - apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: health-node
      name: health-node
      namespace: health
    spec:
      ports:
        - port: 8080
          protocol: TCP
          targetPort: 8080
      selector:
        app: health-node
      sessionAffinity: None
      type: ClusterIP
    status:
      loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

View file

@ -0,0 +1,7 @@
resources:
  - health-deployment.yml
  - health-ingress.yml
  - health-namespace.yml
  - health-service.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
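The base is normally rendered or applied through one of the overlays below; with kubectl's built-in kustomize support that looks something like this (the overlay path is an example matching the ../../base references in the overlays):
kubectl kustomize overlays/test
kubectl apply -k overlays/test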

View file

@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-ingress
  namespace: health
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  defaultBackend:
    service:
      name: health-node
      port:
        number: 8443
  ingressClassName: nginx
  # tls:
  #   - hosts:
  #       - kube-matrix.matrix.sunet.se
  #     secretName: tls-secret
  rules:
    - host: kube-matrix.matrix.sunet.se
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: health-node
                port:
                  number: 8080

View file

@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: health-ingress.yml

View file

@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-ingress
  namespace: health
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  defaultBackend:
    service:
      name: health-node
      port:
        number: 8443
  ingressClassName: nginx
  # tls:
  #   - hosts:
  #       - kube-matrixtest.matrix.test.sunet.se
  #     secretName: tls-secret
  rules:
    - host: "kube.matrix.test.sunet.se"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: health-node
                port:
                  number: 8080

View file

@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: health-ingress.yml

View file

@ -0,0 +1,43 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: btsystem-registry
  name: btsystemregistry-reader
rules:
  # The resources below live in the core API group ("") and in a number of named
  # groups, so every group that contributes at least one of them is listed here.
  - apiGroups:
      - ""
      - apps
      - authorization.k8s.io
      - autoscaling
      - batch
      - coordination.k8s.io
      - crd.projectcalico.org
      - discovery.k8s.io
      - events.k8s.io
      - networking.k8s.io
      - objectbucket.io
      - policy
      - rbac.authorization.k8s.io
      - storage.k8s.io
    resources:
      - pods
      - configmaps
      - events
      - limitranges
      - persistentvolumeclaims
      - podtemplates
      - replicationcontrollers
      - resourcequotas
      - secrets
      - services
      - controllerrevisions
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
      - localsubjectaccessreviews
      - horizontalpodautoscalers
      - cronjobs
      - jobs
      - leases
      - networkpolicies
      - networksets
      - endpointslices
      - ingresses
      - objectbucketclaims
      - poddisruptionbudgets
      - rolebindings
      - roles
      - csistoragecapacities
    verbs:
      - get
      - watch
      - list
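As with the cluster role earlier, this Role only takes effect once it is bound; an imperative binding could look like this (the subject name is only an example):
kubectl create rolebinding btsystemregistry-reader-binding --role=btsystemregistry-reader --user=some-k8s-user -n btsystem-registry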

View file

@ -0,0 +1,7 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: matrix
  labels:
    name: matrix

7
k8s/postgres/README.md Normal file
View file

@ -0,0 +1,7 @@
### Postgres password
To create the postgres password secret, and the deployment that uses it, run the following commands. The secret has to exist before the deployment is applied, since the pod reads POSTGRES_PASSWORD from it via a secretKeyRef.
kubectl apply -f postgres-namespace.yaml
kubectl apply -f postgres-pvc.yaml
kubectl create secret generic postgres-secret --from-literal=postgres-password=xxXxXxX -n postgres
kubectl apply -f postgres-deployment.yaml
kubectl apply -f postgres-service.yaml
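Once everything is applied, it is worth verifying that postgres actually accepts connections (plain kubectl, shown only as a sanity check):
kubectl get pods -n postgres
kubectl exec -n postgres deploy/postgres -- pg_isready -U postgres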

View file

@ -0,0 +1,75 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: postgres
  labels:
    k8s-app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: postgres
  template:
    metadata:
      labels:
        k8s-app: postgres
    spec:
      containers:
        - name: postgresql
          image: postgres:17.0-bookworm
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
            - name: sharemem
              mountPath: /dev/shm
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "2000m"
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - exec pg_isready -U postgres -h 127.0.0.1 -p 5432
            failureThreshold: 6
            initialDelaySeconds: 20
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - exec pg_isready -U postgres -h 127.0.0.1 -p 5432
            failureThreshold: 6
            initialDelaySeconds: 20
            periodSeconds: 10
            successThreshold: 2
            timeoutSeconds: 5
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc
            readOnly: false
        - emptyDir:
            medium: Memory
            sizeLimit: 1Gi
          name: sharemem

View file

@ -0,0 +1,7 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: postgres
  labels:
    usage: postgres-btsystem

View file

@ -0,0 +1,13 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: postgres
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: rook-cephfs

View file

@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
spec:
  selector:
    k8s-app: postgres
  ports:
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: 5432

74
k8s/rook/README.md Normal file
View file

@ -0,0 +1,74 @@
### Rook deployment
In operator.yaml, change ROOK_CSI_KUBELET_DIR_PATH to "/var/snap/microk8s/common/var/lib/kubelet" (microk8s keeps the kubelet directory under its snap path instead of /var/lib/kubelet).
# initialize the rook operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-6668b75686-l4zlh 1/1 Running 0 60s
# initialize the rook cluster
kubectl create -f cluster-multizone.yaml
It takes quite a while before the multizone cluster is fully initialized
(there should be around 47 pods in the end):
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-6xhjh 2/2 Running 1 (3m42s ago) 4m16s
csi-cephfsplugin-cgmqs 2/2 Running 0 4m16s
csi-cephfsplugin-hs2rx 2/2 Running 1 (3m43s ago) 4m16s
csi-cephfsplugin-km7k6 2/2 Running 0 4m16s
csi-cephfsplugin-ms8c2 2/2 Running 1 (3m42s ago) 4m16s
csi-cephfsplugin-provisioner-dc97f9d65-6tvkn 5/5 Running 2 (3m35s ago) 4m15s
csi-cephfsplugin-provisioner-dc97f9d65-bwdkn 5/5 Running 0 4m15s
csi-cephfsplugin-wlks6 2/2 Running 0 4m16s
csi-rbdplugin-ckgnc 2/2 Running 0 4m18s
csi-rbdplugin-hmfhc 2/2 Running 1 (3m42s ago) 4m18s
csi-rbdplugin-mclsz 2/2 Running 0 4m18s
csi-rbdplugin-nt7rk 2/2 Running 1 (3m42s ago) 4m18s
csi-rbdplugin-provisioner-7f5767b9d5-gvbkr 5/5 Running 0 4m17s
csi-rbdplugin-provisioner-7f5767b9d5-n5mwc 5/5 Running 0 4m17s
csi-rbdplugin-rzk9v 2/2 Running 1 (3m44s ago) 4m18s
csi-rbdplugin-z9dmh 2/2 Running 0 4m18s
rook-ceph-crashcollector-k8sw1-5fd979dcf9-w9g2x 1/1 Running 0 119s
rook-ceph-crashcollector-k8sw2-68f48b45b-dwld5 1/1 Running 0 109s
rook-ceph-crashcollector-k8sw3-7f5d749cbf-kxswk 1/1 Running 0 96s
rook-ceph-crashcollector-k8sw4-84fd486bb6-pfkgm 1/1 Running 0 2m3s
rook-ceph-crashcollector-k8sw5-58c7b74b4c-pdf2j 1/1 Running 0 110s
rook-ceph-crashcollector-k8sw6-578ffc7cfb-bpzgl 1/1 Running 0 2m27s
rook-ceph-exporter-k8sw1-66746d6cf-pljkx 1/1 Running 0 119s
rook-ceph-exporter-k8sw2-6cc5d955d4-k7xx5 1/1 Running 0 104s
rook-ceph-exporter-k8sw3-5d6f7d49b9-rvvbd 1/1 Running 0 96s
rook-ceph-exporter-k8sw4-5bf54d5b86-cn6v7 1/1 Running 0 118s
rook-ceph-exporter-k8sw5-547898b8d7-l7cmc 1/1 Running 0 110s
rook-ceph-exporter-k8sw6-596f7d956d-n426q 1/1 Running 0 2m27s
rook-ceph-mgr-a-6cfc895565-h9qfg 2/2 Running 0 2m37s
rook-ceph-mgr-b-85fc4df4b5-fv6z9 2/2 Running 0 2m37s
rook-ceph-mon-a-868c8f5cff-2tk7l 1/1 Running 0 4m10s
rook-ceph-mon-b-6f9776cf9b-w4dtq 1/1 Running 0 3m12s
rook-ceph-mon-c-8457f5cc77-8mbpj 1/1 Running 0 2m57s
rook-ceph-operator-6668b75686-l4zlh 1/1 Running 0 7m36s
rook-ceph-osd-0-79d7b6c764-shwtd 1/1 Running 0 2m4s
rook-ceph-osd-1-65d99447b5-bnhln 1/1 Running 0 119s
rook-ceph-osd-2-69dbd98748-5vrwn 1/1 Running 0 114s
rook-ceph-osd-3-596b58cf7d-j2qgj 1/1 Running 0 115s
rook-ceph-osd-4-858bc8df6d-wrlsx 1/1 Running 0 2m
rook-ceph-osd-5-7f6fbfd96-65gpl 1/1 Running 0 96s
rook-ceph-osd-prepare-k8sw1-5pgh9 0/1 Completed 0 2m14s
rook-ceph-osd-prepare-k8sw2-6sdrc 0/1 Completed 0 2m14s
rook-ceph-osd-prepare-k8sw3-mfzsh 0/1 Completed 0 2m13s
rook-ceph-osd-prepare-k8sw4-dn8gn 0/1 Completed 0 2m13s
rook-ceph-osd-prepare-k8sw5-lj5tj 0/1 Completed 0 2m13s
rook-ceph-osd-prepare-k8sw6-8hw4k 0/1 Completed 0 2m12s
# initialize the rook toolbox
kubectl create -f toolbox.yaml
# jump into the toolbox (the pod name suffix will differ per deployment)
kubectl -n rook-ceph exec -it rook-ceph-tools-5f4464f87-zbd5p -- /bin/bash
# initialize the rook filesystem & storageclass
kubectl create -f filesystem.yaml
kubectl create -f storageclass.yaml
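From inside the toolbox the cluster can be sanity-checked with standard ceph commands (only a hint, not part of the deployment steps):
# overall health; expect HEALTH_OK once all OSDs have joined
ceph status
# the OSD tree should show the OSDs spread over the three zones
ceph osd tree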

View file

@ -0,0 +1,130 @@
#################################################################################################################
# Define the settings for the rook-ceph cluster with common settings for a production cluster.
# Selected nodes with selected raw devices will be used for the Ceph cluster. At least three nodes are required
# in this example. See the documentation for more details on storage settings available.
# For example, to create the cluster:
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl create -f cluster-multizone.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
    failureDomainLabel: topology.kubernetes.io/zone
    zones:
      - name: dco
      - name: sto3
      - name: sto4
  mgr:
    count: 2
    allowMultiplePerNode: false
    modules:
      - name: rook
        enabled: true
      - name: pg_autoscaler
        enabled: true
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.4
    allowUnsupported: false
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  waitTimeoutForHealthyOSDInMinutes: 10
  dashboard:
    enabled: true
    ssl: true
  storage:
    useAllNodes: false
    nodes:
      - name: k8sw1
      - name: k8sw2
      - name: k8sw3
      - name: k8sw4
      - name: k8sw5
      - name: k8sw6
    useAllDevices: false
    devices:
      - name: "/dev/rookvg/rookvol1"
      - name: "/dev/rookvg/rookvol2"
      - name: "/dev/rookvg/rookvol3"
    deviceFilter: ""
  placement:
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values:
                    - dco
                    - sto3
                    - sto4
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values:
                    - dco
                    - sto3
                    - sto4
  priorityClassNames:
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
  disruptionManagement:
    managePodBudgets: true
  csi:
    readAffinity:
      # Enable read affinity to enable clients to optimize reads from an OSD in the same topology.
      # Enabling the read affinity may cause the OSDs to consume some extra memory.
      # For more details see this doc:
      # https://rook.io/docs/rook/latest/Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-read-affinity-for-rbd-volumes
      enabled: false
    # cephfs driver specific settings.
    cephfs:
      # Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options.
      # kernelMountOptions: ""
      # Set CephFS Fuse mount options to use https://docs.ceph.com/en/quincy/man/8/ceph-fuse/#options.
      # fuseMountOptions: ""
  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe timing or threshold values. Works for all mon,mgr,osd daemons.
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
    # Change pod startup probe timing or threshold values. Works for all mon,mgr,osd daemons.
    startupProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false

1284
k8s/rook/common.yaml Normal file

File diff suppressed because it is too large

14044
k8s/rook/crds.yaml Normal file

File diff suppressed because it is too large

20
k8s/rook/filesystem.yaml Normal file
View file

@ -0,0 +1,20 @@
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: rookfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: zone
    replicated:
      size: 3
  dataPools:
    - name: replicated
      failureDomain: zone
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
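After the filesystem is created it can be checked from the toolbox; with activeStandby set there should be one active and one standby-replay mds:
ceph fs status rookfs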

View file

@ -0,0 +1,3 @@
nbd
rbd
ceph

691
k8s/rook/operator.yaml Normal file
View file

@ -0,0 +1,691 @@
#################################################################################################################
# The deployment for the rook operator
# Contains the common settings for most Kubernetes deployments.
# For example, to create the rook-ceph cluster:
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl create -f cluster.yaml
#
# Also see other operator sample files for variations of operator.yaml:
# - operator-openshift.yaml: Common settings for running in OpenShift
###############################################################################################################
# Rook Ceph Operator Config ConfigMap
# Use this ConfigMap to override Rook-Ceph Operator configurations.
# NOTE! Precedence will be given to this config if the same Env Var config also exists in the
# Operator Deployment.
# To move a configuration(s) from the Operator Deployment to this ConfigMap, add the config
# here. It is recommended to then remove it from the Deployment to eliminate any future confusion.
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  # should be in the namespace of the operator
  namespace: rook-ceph # namespace:operator
data:
  # The logging level for the operator: ERROR | WARNING | INFO | DEBUG
  ROOK_LOG_LEVEL: "INFO"
  # The address for the operator's controller-runtime metrics. 0 is disabled. :8080 serves metrics on port 8080.
  ROOK_OPERATOR_METRICS_BIND_ADDRESS: "0"
  # Allow using loop devices for osds in test clusters.
  ROOK_CEPH_ALLOW_LOOP_DEVICES: "false"
  # Enable CSI Operator
  ROOK_USE_CSI_OPERATOR: "false"
  # Enable the CSI driver.
  # To run the non-default version of the CSI driver, see the override-able image properties in operator.yaml
  ROOK_CSI_ENABLE_CEPHFS: "true"
  # Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
  ROOK_CSI_ENABLE_RBD: "true"
  # Enable the CSI NFS driver. To start another version of the CSI driver, see image properties below.
  ROOK_CSI_ENABLE_NFS: "false"
  # Disable the CSI driver.
  ROOK_CSI_DISABLE_DRIVER: "false"
  # Set to true to enable Ceph CSI pvc encryption support.
  CSI_ENABLE_ENCRYPTION: "false"
  # Set to true to enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
  # in some network configurations where the SDN does not provide access to an external cluster or
  # there is significant drop in read/write performance.
  # CSI_ENABLE_HOST_NETWORK: "true"
  # Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network
  # without needing hosts to the network. Holder pods are being removed. See issue for details:
  # https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true".
  CSI_DISABLE_HOLDER_PODS: "true"
  # Set to true to enable adding volume metadata on the CephFS subvolume and RBD images.
  # Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images.
  # Hence enable metadata is false by default.
  # CSI_ENABLE_METADATA: "true"
  # cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases
  # like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster.
  # CSI_CLUSTER_NAME: "my-prod-cluster"
  # Set logging level for cephCSI containers maintained by the cephCSI.
  # Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
  # CSI_LOG_LEVEL: "0"
  # Set logging level for Kubernetes-csi sidecar containers.
  # Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity.
  # CSI_SIDECAR_LOG_LEVEL: "0"
  # csi driver name prefix for cephfs, rbd and nfs. if not specified, default
  # will be the namespace name where rook-ceph operator is deployed.
  # search for `# csi-provisioner-name` in the storageclass and
  # volumesnashotclass and update the name accordingly.
  # CSI_DRIVER_NAME_PREFIX: "rook-ceph"
  # Set replicas for csi provisioner deployment.
  CSI_PROVISIONER_REPLICAS: "2"
  # OMAP generator will generate the omap mapping between the PV name and the RBD image.
  # CSI_ENABLE_OMAP_GENERATOR need to be enabled when we are using rbd mirroring feature.
  # By default OMAP generator sidecar is deployed with CSI provisioner pod, to disable
  # it set it to false.
  # CSI_ENABLE_OMAP_GENERATOR: "false"
  # set to false to disable deployment of snapshotter container in CephFS provisioner pod.
  CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
  # set to false to disable deployment of snapshotter container in NFS provisioner pod.
  CSI_ENABLE_NFS_SNAPSHOTTER: "true"
  # set to false to disable deployment of snapshotter container in RBD provisioner pod.
  CSI_ENABLE_RBD_SNAPSHOTTER: "true"
  # set to false to disable volume group snapshot feature. This feature is
  # enabled by default as long as the necessary CRDs are available in the cluster.
  CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: "true"
  # Enable cephfs kernel driver instead of ceph-fuse.
  # If you disable the kernel client, your application may be disrupted during upgrade.
  # See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
  # NOTE! cephfs quota is not supported in kernel version < 4.17
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  # (Optional) policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
  # supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
  CSI_RBD_FSGROUPPOLICY: "File"
  # (Optional) policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
  # supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
  CSI_CEPHFS_FSGROUPPOLICY: "File"
  # (Optional) policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted.
  # supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
  CSI_NFS_FSGROUPPOLICY: "File"
  # (Optional) Allow starting unsupported ceph-csi image
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  # (Optional) control the host mount of /etc/selinux for csi plugin pods.
  CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: "false"
  # The default version of CSI supported by Rook will be started. To change the version
  # of the CSI driver to something other than what is officially supported, change
  # these images to the desired release of the CSI driver.
  # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:3.12.2"
  # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1"
  # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1"
  # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1"
  # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1"
  # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1"
  # To indicate the image pull policy to be applied to all the containers in the csi driver pods.
  # ROOK_CSI_IMAGE_PULL_POLICY: "IfNotPresent"
  # (Optional) set user created priorityclassName for csi plugin pods.
  CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"
  # (Optional) set user created priorityclassName for csi provisioner pods.
  CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"
  # CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
  # Default value is RollingUpdate.
  # CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
  # A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy.
  # Default value is 1.
  # CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE: "1"
  # CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
  # Default value is RollingUpdate.
  # CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"
  # A maxUnavailable parameter of CSI RBD plugin daemonset update strategy.
  # Default value is 1.
  # CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE: "1"
  # CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
  # Default value is RollingUpdate.
  # CSI_NFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
  # kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
  ROOK_CSI_KUBELET_DIR_PATH: "/var/snap/microk8s/common/var/lib/kubelet"
  # Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
  # ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"
  # Labels to add to the CSI RBD Deployments and DaemonSets Pods.
  # ROOK_CSI_RBD_POD_LABELS: "key1=value1,key2=value2"
  # Labels to add to the CSI NFS Deployments and DaemonSets Pods.
  # ROOK_CSI_NFS_POD_LABELS: "key1=value1,key2=value2"
  # (Optional) CephCSI CephFS plugin Volumes
  # CSI_CEPHFS_PLUGIN_VOLUME: |
  #  - name: lib-modules
  #    hostPath:
  #      path: /run/current-system/kernel-modules/lib/modules/
  #  - name: host-nix
  #    hostPath:
  #      path: /nix
  # (Optional) CephCSI CephFS plugin Volume mounts
  # CSI_CEPHFS_PLUGIN_VOLUME_MOUNT: |
  #  - name: host-nix
  #    mountPath: /nix
  #    readOnly: true
  # (Optional) CephCSI RBD plugin Volumes
  # CSI_RBD_PLUGIN_VOLUME: |
  #  - name: lib-modules
  #    hostPath:
  #      path: /run/current-system/kernel-modules/lib/modules/
  #  - name: host-nix
  #    hostPath:
  #      path: /nix
  # (Optional) CephCSI RBD plugin Volume mounts
  # CSI_RBD_PLUGIN_VOLUME_MOUNT: |
  #  - name: host-nix
  #    mountPath: /nix
  #    readOnly: true
  # (Optional) CephCSI provisioner NodeAffinity (applied to both CephFS and RBD provisioner).
  # CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
  # (Optional) CephCSI provisioner tolerations list(applied to both CephFS and RBD provisioner).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_PROVISIONER_TOLERATIONS: |
  #   - effect: NoSchedule
  #     key: node-role.kubernetes.io/control-plane
  #     operator: Exists
  #   - effect: NoExecute
  #     key: node-role.kubernetes.io/etcd
  #     operator: Exists
  # (Optional) CephCSI plugin NodeAffinity (applied to both CephFS and RBD plugin).
  # CSI_PLUGIN_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
  # (Optional) CephCSI plugin tolerations list(applied to both CephFS and RBD plugin).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_PLUGIN_TOLERATIONS: |
  #   - effect: NoSchedule
  #     key: node-role.kubernetes.io/control-plane
  #     operator: Exists
  #   - effect: NoExecute
  #     key: node-role.kubernetes.io/etcd
  #     operator: Exists
  # (Optional) CephCSI RBD provisioner NodeAffinity (if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
  # CSI_RBD_PROVISIONER_NODE_AFFINITY: "role=rbd-node"
  # (Optional) CephCSI RBD provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_RBD_PROVISIONER_TOLERATIONS: |
  #   - key: node.rook.io/rbd
  #     operator: Exists
  # (Optional) CephCSI RBD plugin NodeAffinity (if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
  # CSI_RBD_PLUGIN_NODE_AFFINITY: "role=rbd-node"
  # (Optional) CephCSI RBD plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_RBD_PLUGIN_TOLERATIONS: |
  #   - key: node.rook.io/rbd
  #     operator: Exists
  # (Optional) CephCSI CephFS provisioner NodeAffinity (if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
  # CSI_CEPHFS_PROVISIONER_NODE_AFFINITY: "role=cephfs-node"
  # (Optional) CephCSI CephFS provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_CEPHFS_PROVISIONER_TOLERATIONS: |
  #   - key: node.rook.io/cephfs
  #     operator: Exists
  # (Optional) CephCSI CephFS plugin NodeAffinity (if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
  # CSI_CEPHFS_PLUGIN_NODE_AFFINITY: "role=cephfs-node"
  # NOTE: Support for defining NodeAffinity for operators other than "In" and "Exists" requires the user to input a
  # valid v1.NodeAffinity JSON or YAML string. For example, the following is valid YAML v1.NodeAffinity:
  # CSI_CEPHFS_PLUGIN_NODE_AFFINITY: |
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #       - matchExpressions:
  #           - key: myKey
  #             operator: DoesNotExist
  # (Optional) CephCSI CephFS plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_CEPHFS_PLUGIN_TOLERATIONS: |
  #   - key: node.rook.io/cephfs
  #     operator: Exists
  # (Optional) CephCSI NFS provisioner NodeAffinity (overrides CSI_PROVISIONER_NODE_AFFINITY).
  # CSI_NFS_PROVISIONER_NODE_AFFINITY: "role=nfs-node"
  # (Optional) CephCSI NFS provisioner tolerations list (overrides CSI_PROVISIONER_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI provisioner would be best to start on the same nodes as other ceph daemons.
  # CSI_NFS_PROVISIONER_TOLERATIONS: |
  #   - key: node.rook.io/nfs
  #     operator: Exists
  # (Optional) CephCSI NFS plugin NodeAffinity (overrides CSI_PLUGIN_NODE_AFFINITY).
  # CSI_NFS_PLUGIN_NODE_AFFINITY: "role=nfs-node"
  # (Optional) CephCSI NFS plugin tolerations list (overrides CSI_PLUGIN_TOLERATIONS).
  # Put here list of taints you want to tolerate in YAML format.
  # CSI plugins need to be started on all the nodes where the clients need to mount the storage.
  # CSI_NFS_PLUGIN_TOLERATIONS: |
  #   - key: node.rook.io/nfs
  #     operator: Exists
  # (Optional) CEPH CSI RBD provisioner resource requirement list, Put here list of resource
  # requests and limits you want to apply for provisioner pod
  #CSI_RBD_PROVISIONER_RESOURCE: |
  #  - name : csi-provisioner
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-resizer
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-attacher
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-snapshotter
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-rbdplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #  - name : csi-omap-generator
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  # (Optional) CEPH CSI RBD plugin resource requirement list, Put here list of resource
  # requests and limits you want to apply for plugin pod
  #CSI_RBD_PLUGIN_RESOURCE: |
  #  - name : driver-registrar
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-rbdplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  # (Optional) CEPH CSI CephFS provisioner resource requirement list, Put here list of resource
  # requests and limits you want to apply for provisioner pod
  #CSI_CEPHFS_PROVISIONER_RESOURCE: |
  #  - name : csi-provisioner
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-resizer
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-attacher
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-snapshotter
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 100m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-cephfsplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  # (Optional) CEPH CSI CephFS plugin resource requirement list, Put here list of resource
  # requests and limits you want to apply for plugin pod
  #CSI_CEPHFS_PLUGIN_RESOURCE: |
  #  - name : driver-registrar
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  #  - name : csi-cephfsplugin
  #    resource:
  #      requests:
  #        memory: 512Mi
  #        cpu: 250m
  #      limits:
  #        memory: 1Gi
  #  - name : liveness-prometheus
  #    resource:
  #      requests:
  #        memory: 128Mi
  #        cpu: 50m
  #      limits:
  #        memory: 256Mi
  # (Optional) CEPH CSI NFS provisioner resource requirement list, Put here list of resource
  # requests and limits you want to apply for provisioner pod
  # CSI_NFS_PROVISIONER_RESOURCE: |
  #   - name : csi-provisioner
  #     resource:
  #       requests:
  #         memory: 128Mi
  #         cpu: 100m
  #       limits:
  #         memory: 256Mi
  #   - name : csi-nfsplugin
  #     resource:
  #       requests:
  #         memory: 512Mi
  #         cpu: 250m
  #       limits:
  #         memory: 1Gi
  #   - name : csi-attacher
  #     resource:
  #       requests:
  #         memory: 128Mi
  #         cpu: 100m
  #       limits:
  #         memory: 256Mi
  # (Optional) CEPH CSI NFS plugin resource requirement list, Put here list of resource
  # requests and limits you want to apply for plugin pod
  # CSI_NFS_PLUGIN_RESOURCE: |
  #   - name : driver-registrar
  #     resource:
  #       requests:
  #         memory: 128Mi
  #         cpu: 50m
  #       limits:
  #         memory: 256Mi
  #   - name : csi-nfsplugin
  #     resource:
  #       requests:
  #         memory: 512Mi
  #         cpu: 250m
  #       limits:
  #         memory: 1Gi
  # Configure CSI CephFS liveness metrics port
  # Set to true to enable Ceph CSI liveness container.
  CSI_ENABLE_LIVENESS: "false"
  # CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
  # Configure CSI RBD liveness metrics port
  # CSI_RBD_LIVENESS_METRICS_PORT: "9080"
  # CSIADDONS_PORT: "9070"
  # Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options
  # Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR
  # CSI_CEPHFS_KERNEL_MOUNT_OPTIONS: "ms_mode=secure"
  # (Optional) Duration in seconds that non-leader candidates will wait to force acquire leadership. Default to 137 seconds.
  # CSI_LEADER_ELECTION_LEASE_DURATION: "137s"
  # (Optional) Deadline in seconds that the acting leader will retry refreshing leadership before giving up. Defaults to 107 seconds.
  # CSI_LEADER_ELECTION_RENEW_DEADLINE: "107s"
  # (Optional) Retry Period in seconds the LeaderElector clients should wait between tries of actions. Defaults to 26 seconds.
  # CSI_LEADER_ELECTION_RETRY_PERIOD: "26s"
  # Whether the OBC provisioner should watch on the ceph cluster namespace or not, if not default provisioner value is set
  ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
  # Custom prefix value for the OBC provisioner instead of ceph cluster namespace, do not set on existing cluster
  # ROOK_OBC_PROVISIONER_NAME_PREFIX: "custom-prefix"
  # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
  # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
  ROOK_ENABLE_DISCOVERY_DAEMON: "false"
  # The timeout value (in seconds) of Ceph commands. It should be >= 1. If this variable is not set or is an invalid value, it's default to 15.
  ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: "15"
  # Enable the csi addons sidecar.
  CSI_ENABLE_CSIADDONS: "false"
  # Enable watch for faster recovery from rbd rwo node loss
  ROOK_WATCH_FOR_NODE_FAILURE: "true"
  # ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.9.1"
  # The CSI GRPC timeout value (in seconds). It should be >= 120. If this variable is not set or is an invalid value, it's default to 150.
  CSI_GRPC_TIMEOUT_SECONDS: "150"
  # Enable topology based provisioning.
  CSI_ENABLE_TOPOLOGY: "false"
  # Domain labels define which node labels to use as domains
  # for CSI nodeplugins to advertise their domains
  # NOTE: the value here serves as an example and needs to be
  # updated with node labels that define domains of interest
  # CSI_TOPOLOGY_DOMAIN_LABELS: "kubernetes.io/hostname,topology.kubernetes.io/zone,topology.rook.io/rack"
  # Whether to skip any attach operation altogether for CephCSI PVCs.
  # See more details [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
  # If set to false it skips the volume attachments and makes the creation of pods using the CephCSI PVC fast.
  # **WARNING** It's highly discouraged to use this for RWO volumes. for RBD PVC it can cause data corruption,
  # csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false
  # since we'll have no VolumeAttachments to determine which node the PVC is mounted on.
  # Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
  CSI_CEPHFS_ATTACH_REQUIRED: "true"
  CSI_RBD_ATTACH_REQUIRED: "true"
  CSI_NFS_ATTACH_REQUIRED: "true"
  # Rook Discover toleration. Will tolerate all taints with all keys.
  # (Optional) Rook Discover tolerations list. Put here list of taints you want to tolerate in YAML format.
  # DISCOVER_TOLERATIONS: |
  #   - effect: NoSchedule
  #     key: node-role.kubernetes.io/control-plane
  #     operator: Exists
  #   - effect: NoExecute
  #     key: node-role.kubernetes.io/etcd
  #     operator: Exists
  # (Optional) Rook Discover priority class name to set on the pod(s)
  # DISCOVER_PRIORITY_CLASS_NAME: "<PriorityClassName>"
  # (Optional) Discover Agent NodeAffinity.
  # DISCOVER_AGENT_NODE_AFFINITY: |
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #       - matchExpressions:
  #           - key: myKey
  #             operator: DoesNotExist
  # (Optional) Discover Agent Pod Labels.
  # DISCOVER_AGENT_POD_LABELS: "key1=value1,key2=value2"
  # Disable automatic orchestration when new devices are discovered
  ROOK_DISABLE_DEVICE_HOTPLUG: "false"
  # The duration between discovering devices in the rook-discover daemonset.
  ROOK_DISCOVER_DEVICES_INTERVAL: "60m"
  # DISCOVER_DAEMON_RESOURCES: |
  #   - name: DISCOVER_DAEMON_RESOURCES
  #     resources:
  #       limits:
  #         memory: 512Mi
  #       requests:
  #         cpu: 100m
  #         memory: 128Mi
  # (Optional) Burst to use while communicating with the kubernetes apiserver.
  # CSI_KUBE_API_BURST: "10"
  # (Optional) QPS to use while communicating with the kubernetes apiserver.
  # CSI_KUBE_API_QPS: "5.0"
  # Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled
  ROOK_ENFORCE_HOST_NETWORK: "false"
  # RevisionHistoryLimit value for all deployments created by rook.
  # ROOK_REVISION_HISTORY_LIMIT: "3"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph # namespace:operator
  labels:
    operator: rook
    storage-backend: ceph
    app.kubernetes.io/name: rook-ceph
    app.kubernetes.io/instance: rook-ceph
    app.kubernetes.io/component: rook-ceph-operator
    app.kubernetes.io/part-of: rook-ceph-operator
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 5
      serviceAccountName: rook-ceph-system
      containers:
        - name: rook-ceph-operator
          image: docker.io/rook/ceph:v1.15.3
          args: ["ceph", "operator"]
          securityContext:
            runAsNonRoot: true
            runAsUser: 2016
            runAsGroup: 2016
            capabilities:
              drop: ["ALL"]
          volumeMounts:
            - mountPath: /var/lib/rook
              name: rook-config
            - mountPath: /etc/ceph
              name: default-config-dir
          env:
            # If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
            # If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
            - name: ROOK_CURRENT_NAMESPACE_ONLY
              value: "false"
            # Whether to start pods as privileged that mount a host path, which includes the Ceph mon, osd pods and csi provisioners(if logrotation is on).
            # Set this to true if SELinux is enabled (e.g. OpenShift) to workaround the anyuid issues.
            # For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
            - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
              value: "false"
            # Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+".
            # In case of more than one regex, use comma to separate between them.
            # Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
            # Add regex expression after putting a comma to blacklist a disk
            # If value is empty, the default regex will be used.
            - name: DISCOVER_DAEMON_UDEV_BLACKLIST
              value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
            # Time to wait until the node controller will move Rook pods to other
            # nodes after detecting an unreachable node.
            # Pods affected by this setting are:
            # mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
            # The value used in this variable replaces the default value of 300 secs
            # added automatically by k8s as Toleration for
            # <node.kubernetes.io/unreachable>
            # The total amount of time to reschedule Rook pods in healthy nodes
            # before detecting a <not ready node> condition will be the sum of:
            #  --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
            #  --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
            - name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
              value: "5"
            # The name of the node to pass with the downward API
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # The pod name to pass with the downward API
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # The pod namespace to pass with the downward API
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # Recommended resource requests and limits, if desired
          #resources:
          #  limits:
          #    memory: 512Mi
          #  requests:
          #    cpu: 200m
          #    memory: 128Mi
          # Uncomment it to run lib bucket provisioner in multithreaded mode
          #- name: LIB_BUCKET_PROVISIONER_THREADS
          #  value: "5"
      # Uncomment it to run rook operator on the host network
      #hostNetwork: true
      volumes:
        - name: rook-config
          emptyDir: {}
        - name: default-config-dir
          emptyDir: {}
# OLM: END OPERATOR DEPLOYMENT

View file

@ -0,0 +1,28 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph
  # CephFS filesystem name into which the volume shall be created
  fsName: rookfs
  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: rookfs-replicated
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
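The class can be confirmed with plain kubectl once the operator has created the provisioner secrets; the cephfs-pvc and postgres-pvc manifests earlier in this change are the real consumers:
kubectl get storageclass rook-cephfs
kubectl get pvc -A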

128
k8s/rook/toolbox.yaml Normal file
View file

@ -0,0 +1,128 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: rook-ceph-tools
          image: quay.io/ceph/ceph:v17.2.6
          command:
            - /bin/bash
            - -c
            - |
              # Replicate the script from toolbox.sh inline so the ceph image
              # can be run directly, instead of requiring the rook toolbox
              CEPH_CONFIG="/etc/ceph/ceph.conf"
              MON_CONFIG="/etc/rook/mon-endpoints"
              KEYRING_FILE="/etc/ceph/keyring"
              # create a ceph config file in its default location so ceph/rados tools can be used
              # without specifying any arguments
              write_endpoints() {
                endpoints=$(cat ${MON_CONFIG})
                # filter out the mon names
                # external cluster can have numbers or hyphens in mon names, handling them in regex
                # shellcheck disable=SC2001
                mon_endpoints=$(echo "${endpoints}"| sed 's/[a-z0-9_-]\+=//g')
                DATE=$(date)
                echo "$DATE writing mon endpoints to ${CEPH_CONFIG}: ${endpoints}"
                cat <<EOF > ${CEPH_CONFIG}
              [global]
              mon_host = ${mon_endpoints}
              [client.admin]
              keyring = ${KEYRING_FILE}
              EOF
              }
              # watch the endpoints config file and update if the mon endpoints ever change
              watch_endpoints() {
                # get the timestamp for the target of the soft link
                real_path=$(realpath ${MON_CONFIG})
                initial_time=$(stat -c %Z "${real_path}")
                while true; do
                  real_path=$(realpath ${MON_CONFIG})
                  latest_time=$(stat -c %Z "${real_path}")
                  if [[ "${latest_time}" != "${initial_time}" ]]; then
                    write_endpoints
                    initial_time=${latest_time}
                  fi
                  sleep 10
                done
              }
              # read the secret from an env var (for backward compatibility), or from the secret file
              ceph_secret=${ROOK_CEPH_SECRET}
              if [[ "$ceph_secret" == "" ]]; then
                ceph_secret=$(cat /var/lib/rook-ceph-mon/secret.keyring)
              fi
              # create the keyring file
              cat <<EOF > ${KEYRING_FILE}
              [${ROOK_CEPH_USERNAME}]
              key = ${ceph_secret}
              EOF
              # write the initial config file
              write_endpoints
              # continuously update the mon endpoints if they fail over
              watch_endpoints
          imagePullPolicy: IfNotPresent
          tty: true
          securityContext:
            runAsNonRoot: true
            runAsUser: 2016
            runAsGroup: 2016
          env:
            - name: ROOK_CEPH_USERNAME
              valueFrom:
                secretKeyRef:
                  name: rook-ceph-mon
                  key: ceph-username
          volumeMounts:
            - mountPath: /etc/ceph
              name: ceph-config
            - name: mon-endpoint-volume
              mountPath: /etc/rook
            - name: ceph-admin-secret
              mountPath: /var/lib/rook-ceph-mon
              readOnly: true
      volumes:
        - name: ceph-admin-secret
          secret:
            secretName: rook-ceph-mon
            optional: false
            items:
              - key: ceph-secret
                path: secret.keyring
        - name: mon-endpoint-volume
          configMap:
            name: rook-ceph-mon-endpoints
            items:
              - key: data
                path: mon-endpoints
        - name: ceph-config
          emptyDir: {}
      tolerations:
        - key: "node.kubernetes.io/unreachable"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 5

View file

@ -0,0 +1 @@
../README

Some files were not shown because too many files have changed in this diff