Compare commits

...

89 commits

Author SHA1 Message Date
Micke Nordin 874f216848 Add cert manager (Signed-off-by: Micke Nordin <kano@sunet.se>) 2024-11-06 15:56:57 +01:00
Micke Nordin a6c7d4e2f0 Add argo for vr-test 2024-11-06 15:46:49 +01:00
Micke Nordin a9cbc314bb Pull new image 2024-11-05 08:07:50 +01:00
Micke Nordin 0bb0636d61 Try new version 2024-11-04 11:50:46 +01:00
Micke Nordin 325b05d352 Bump version to 29.0.8.2-4 2024-10-25 11:40:49 +02:00
Magnus Andersson 43c368211e Update nextcloud version to 29.0.8.2-1 for nordunet and vinnova. 2024-10-15 10:51:16 +02:00
Magnus Andersson 6f477d9c1b Dummy commit 2024-10-07 08:23:39 +02:00
Micke Nordin fca1f166e0 Signed dummy commit 2024-10-04 17:16:59 +02:00
Micke Nordin 427e71606a Signed dummy commit (Signed-off-by: Micke Nordin <kano@sunet.se>) 2024-10-04 17:15:53 +02:00
Micke Nordin cd9fbdf1d5 Set read only again 2024-10-01 11:19:50 +02:00
Micke Nordin 73db86588c Deploy 29.0.7.2-1 and set config to not read only 2024-10-01 11:17:41 +02:00
Micke Nordin aaae51d17b Bump version 2024-08-20 17:12:53 +02:00
Micke Nordin ff93521ac3 switch port 2024-08-19 10:34:05 +02:00
Micke Nordin af6fdfd154 correct ns 2024-08-19 10:32:31 +02:00
Micke Nordin f0bb76f6aa Fix syntax 2024-08-19 10:32:02 +02:00
Micke Nordin 343a09c93e Fix syntax 2024-08-19 10:30:22 +02:00
Micke Nordin ed27635841 Fix syntax 2024-08-19 10:24:41 +02:00
Micke Nordin 6919487a06 Switch port 2024-08-19 10:21:25 +02:00
Micke Nordin cb22a0d919 Switch ingress class 2024-08-19 10:20:26 +02:00
Micke Nordin f6f2047535 Switch ingress class 2024-08-19 10:18:16 +02:00
Micke Nordin a589c1ed76 Correct ingress type 2024-08-19 10:07:40 +02:00
Micke Nordin dfc0ed9d89 Upgrade nc version 2024-08-05 12:00:02 +02:00
Micke Nordin a9e3c68d28 Add basic auth 2024-06-28 15:13:02 +02:00
Micke Nordin 544b99a843 Rename files 2024-06-28 14:29:48 +02:00
Micke Nordin 0e7417ddee Rename files 2024-06-28 14:28:48 +02:00
Micke Nordin 57d27855db File name 2024-06-28 14:26:16 +02:00
Micke Nordin 39b411e3aa Try another image of the proxy 2024-06-28 14:24:18 +02:00
Micke Nordin 0609270a81 Add more cluster components 2024-06-28 12:57:08 +02:00
Micke Nordin 16da1df16b Add spark service 2024-06-28 12:46:02 +02:00
Micke Nordin 036ef590d2 Maybe it's the name that should be like this 2024-06-28 12:35:53 +02:00
Micke Nordin e925058169 Chart is named spark-operator 2024-06-28 12:33:20 +02:00
Micke Nordin b09b0e3237 Correct version 2024-06-28 12:32:14 +02:00
Micke Nordin 9f6220e1a7 Also patch 2024-06-28 12:29:40 +02:00
Micke Nordin 79949532c7 Add ingress 2024-06-28 12:27:58 +02:00
Micke Nordin da9a983596 No resources for now 2024-06-28 12:07:38 +02:00
Micke Nordin e5c66613b3 First stab at deploying spark 2024-06-28 12:03:43 +02:00
Micke Nordin 923f67db76 Correct name 2024-06-24 14:04:32 +02:00
Micke Nordin 85a55ec7a1 Deploy portal 2024-06-24 14:01:35 +02:00
Micke Nordin 2bd544568e Move to nginx 2024-06-24 11:05:04 +02:00
Micke Nordin bfdc489cea Move rds to nginx 2024-06-24 11:00:44 +02:00
Micke Nordin 8927c853ba Move health to nginx 2024-06-24 10:57:47 +02:00
Micke Nordin 4cbeee1a32 Enable nginx for drive-test 2024-06-24 10:46:25 +02:00
Micke Nordin ab5b6a3e11 Correct hostnames 2024-06-19 16:35:44 +02:00
Micke Nordin 7eb3369a04 Add jupyterhub 2024-06-19 16:34:18 +02:00
Micke Nordin d9cd80d8b0 Add overlay for sunet-prod 2024-06-19 15:53:19 +02:00
Micke Nordin 33c6a68cd6 Add overlay for sunet-prod 2024-06-19 15:51:06 +02:00
Micke Nordin 3faace2b35 Bump nextcloud version to 28.0.6.2-1 2024-06-05 11:44:29 +02:00
Richard Freitag 175741f8ef Remove doris connector from values.yaml 2024-05-31 14:12:28 +01:00
Richard Freitag a19e4c71af Change display name for Doris connector 2024-05-31 13:57:39 +01:00
Micke Nordin 944c114c40 Add env variable 2024-05-29 12:36:01 +02:00
Micke Nordin 9376178752 Add gu service 2024-05-29 11:56:15 +02:00
Micke Nordin 01a2c8d258 Add gu service 2024-05-29 11:55:05 +02:00
Micke Nordin 4d7c577c79 New port for doris 2024-05-27 15:54:09 +02:00
Micke Nordin 8f8c6704ea Add debug logging 2024-05-27 15:31:51 +02:00
Micke Nordin d7afb8190a Change to str 2024-05-27 14:48:49 +02:00
Micke Nordin f7085e1a72 Use layer1-port-doris 2024-05-27 14:35:53 +02:00
Micke Nordin 58d973c5f2 Use layer1-port-doris 2024-05-27 14:33:43 +02:00
Micke Nordin 5aebe67702 Merge branch 'mandersson-dorisgu' 2024-05-27 14:25:05 +02:00
Micke Nordin dda10fe921 Fix env variables 2024-05-27 14:18:49 +02:00
Magnus Andersson d45f1bb2d9 Add gu doris config 2024-05-22 16:34:55 +02:00
Micke Nordin d250b8a654 Use 28.0.5.2-6 in test 2024-05-21 16:32:07 +02:00
Micke Nordin 7cf1e79698 Upgrade to new image 28.0.5.2-5 2024-05-21 08:56:20 +02:00
Micke Nordin a499ab770d Deploy new version of nextcloud 2024-05-08 09:50:13 +02:00
Micke Nordin 48de3a326f Switch to nginx 2024-05-03 15:11:07 +02:00
Micke Nordin 83808375fe remove upstream secret that we don't use 2024-05-03 13:09:08 +02:00
Micke Nordin 4d715bbaa0 update cinder to 1.28 2024-05-03 13:08:14 +02:00
Micke Nordin e6618fe932 Add argocd and health for sunet-test cluster 2024-05-03 11:39:53 +02:00
Micke Nordin d600b65aad Bump nextcloud version to 28.0.5.2-1 2024-04-29 16:59:17 +02:00
Micke Nordin f2b60798c0 Upgrade test to: 28.0.4.2-1 2024-04-15 14:15:41 +02:00
Micke Nordin b753096198 Bump version of the notebook image to fix conflicts of access-token file 2024-04-12 11:46:12 +02:00
Micke Nordin 8d5ea9d850 Bump argocd version 2024-03-20 09:30:04 +01:00
Micke Nordin 866da71435 Remove vr from kube deployment 2024-03-19 21:20:07 +01:00
Micke Nordin 5d2bc4e5d1 Remove namespace again 2024-03-19 16:08:09 +01:00
Micke Nordin a13762fe16 Change name of secret 2024-03-19 16:03:17 +01:00
Micke Nordin b7923658ed Use this for vr-prod 2024-03-19 15:35:08 +01:00
Micke Nordin b22515a6d3 Upgrade to 28.0.3.3-1 2024-02-29 18:56:57 +01:00
Richard Freitag 24d594f81b Remove postgres helm charts to fix kustomize error 2024-02-23 15:39:35 +00:00
Micke Nordin 2c0208b801 Try remote chart 2024-02-23 16:14:45 +01:00
Micke Nordin 55bae9844d Fix typo 2024-02-23 16:10:44 +01:00
Micke Nordin 717711ae5d Try to package traefik 2024-02-23 16:05:07 +01:00
Richard Freitag 1ea9007626 Clean build of helm charts after RDS change for external RDS url 2024-02-23 14:53:29 +00:00
Micke Nordin c5664d6078 Remove faulty variable 2024-02-23 15:28:49 +01:00
Richard Freitag 50d3fe9874 Add EFSS configuration for RDS to allow for external rds domains 2024-02-23 14:04:51 +00:00
Micke Nordin 988261e761 Run 28.0.2 2024-02-23 12:48:28 +01:00
Micke Nordin 1cbf39495b Now we upgrade 2024-02-23 12:05:39 +01:00
Micke Nordin 58030e3d88 Use 27.1.6.3-7 2024-02-22 17:17:36 +01:00
Micke Nordin 6954383b41 Go back to: 27.1.6.3-5 2024-02-22 15:54:49 +01:00
Micke Nordin e2b9a6634b Upgrade to: 28.0.2.6-2 2024-02-22 15:21:41 +01:00
Micke Nordin 78054e5ecc Deploy jupyter for vr in test 2024-02-20 16:24:31 +01:00
110 changed files with 12367 additions and 103 deletions


@@ -11,7 +11,7 @@ spec:
service:
name: argocd-server
port:
number: 8443
number: 80
ingressClassName: traefik
tls:
- hosts:


@@ -0,0 +1,15 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: drive@sunet.se
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
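
Once this Issuer is applied, cert-manager should register the ACME account and mark it Ready. A hedged check, assuming the Issuer lands in the argocd namespace set by the base kustomization (the manifest itself sets no namespace):

```shell
# Namespaced Issuer: READY should become True once the ACME account is registered.
kubectl -n argocd get issuer letsencrypt
kubectl -n argocd describe issuer letsencrypt | grep -A2 Conditions
```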


@@ -0,0 +1,22 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
spec:
ingressClassName: nginx
tls:
- hosts:
- argocd.drive.test.sunet.se
secretName: tls-secret
rules:
- host: argocd.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
name: https


@@ -0,0 +1,3 @@
resources:
- argocd-ingress.yaml
- argocd-cert-issuer.yaml


@@ -0,0 +1,28 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 80
ingressClassName: nginx
tls:
- hosts:
- argocd.drive.test.sunet.dev
secretName: tls-secret
rules:
- host: argocd.drive.test.sunet.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80


@@ -1,7 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../base
- ../../base
patches:
- path: nextcloud-deployment.yml
- path: nextcloud-ingress.yml
- path: argocd-ingress.yaml


@@ -1,26 +1,31 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: customer-ingress
name: argocd-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 8443
tls:
- hosts:
- vr.drive.test.sunet.se
- argocd.drive.sunet.se
secretName: tls-secret
rules:
- host: vr.drive.test.sunet.se
- host: argocd.drive.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: customer-node
name: argocd-server
port:
number: 80


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: argocd-ingress.yaml


@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 80
ingressClassName: nginx
tls:
- hosts:
- sunet-argocd.drive.sunet.se
secretName: tls-secret
rules:
- host: sunet-argocd.drive.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80


@@ -0,0 +1,6 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: [../../../base]
patches:
- path: argocd-ingress.yaml


@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 80
ingressClassName: nginx
tls:
- hosts:
- argocd.drive.test.sunet.se
secretName: tls-secret
rules:
- host: argocd.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80


@@ -0,0 +1,6 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: [../../../base]
patches:
- path: argocd-ingress.yaml


@@ -0,0 +1,27 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 80
ingressClassName: nginx
tls:
- hosts:
- sunet-argocd.drive.test.sunet.se
secretName: tls-secret
rules:
- host: sunet-argocd.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80


@@ -0,0 +1,6 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: [../../../base]
patches:
- path: argocd-ingress.yaml


@@ -0,0 +1,30 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-ingress
namespace: argocd
annotations:
cert-manager.io/issuer: "letsencrypt"
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 80
ingressClassName: nginx
tls:
- hosts:
- vr-argocd.drive.test.sunet.se
secretName: tls-secret
rules:
- host: vr-argocd.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 80
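
With the `cert-manager.io/issuer` annotation above, cert-manager's ingress-shim creates a Certificate named after the `secretName` and solves the HTTP-01 challenge in-place on this Ingress. A hedged verification sketch (resource names taken from the manifest):

```shell
# The Certificate object inherits the name of spec.tls[].secretName.
kubectl -n argocd get certificate tls-secret
# The resulting secret should be of type kubernetes.io/tls.
kubectl -n argocd get secret tls-secret -o jsonpath='{.type}'
```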


@@ -0,0 +1,6 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: [../../../base]
patches:
- path: argocd-ingress.yaml


@@ -3,4 +3,4 @@ kind: Kustomization
namespace: argocd
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.10.0/manifests/ha/install.yaml
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.10.4/manifests/ha/install.yaml


@@ -69,7 +69,7 @@ spec:
- mountPath: /var/lib/csi/sockets/pluginproxy/
name: socket-dir
- name: csi-resizer
image: registry.k8s.io/sig-storage/csi-resizer:v1.7.0
image: registry.k8s.io/sig-storage/csi-resizer:v1.8.0
args:
- "--csi-address=$(ADDRESS)"
- "--timeout=3m"
@@ -93,7 +93,7 @@ spec:
- mountPath: /var/lib/csi/sockets/pluginproxy/
name: socket-dir
- name: cinder-csi-plugin
image: registry.k8s.io/provider-os/cinder-csi-plugin:v1.27.1
image: registry.k8s.io/provider-os/cinder-csi-plugin:v1.28.2
args:
- /bin/cinder-csi-plugin
- "--endpoint=$(CSI_ENDPOINT)"


@@ -30,7 +30,7 @@ spec:
restartPolicy: Always
containers:
- name: customer
image: docker.sunet.se/drive/nextcloud-custom:27.1.6.3-5
image: docker.sunet.se/drive/nextcloud-custom:29.0.8.2-4
volumeMounts:
- name: nextcloud-data
mountPath: /var/www/html/config/
@@ -127,7 +127,7 @@ spec:
- name: NEXTCLOUD_ADMIN_USER
value: admin
- name: NEXTCLOUD_VERSION_STRING
value: "26.0.1.2"
value: "28.0.3.3"
- name: NEXTCLOUD_ADMIN_PASSWORD
valueFrom:
secretKeyRef:


@@ -4,15 +4,13 @@ kind: Ingress
metadata:
name: customer-ingress
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- nordunet.drive.test.sunet.se
secretName: tls-secret
ingressClassName: nginx
rules:
- host: nordunet.drive.test.sunet.se
http:


@@ -4,15 +4,13 @@ kind: Ingress
metadata:
name: customer-ingress
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- vinnova.drive.test.sunet.se
secretName: tls-secret
ingressClassName: nginx
rules:
- host: vinnova.drive.test.sunet.se
http:


@@ -1,35 +0,0 @@
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: customer-node
labels:
app: customer-node
spec:
replicas: 1
template:
metadata:
labels:
app: customer-node
spec:
initContainers:
- image: docker.sunet.se/sunet/docker-jinja:latest
name: init-config
env:
- name: MYSQL_DATABASE
value: "nextcloud_vr"
- name: MYSQL_USER
value: "nextcloud_vr"
- name: GSS_MASTER_URL
value: "https://drive.test.sunet.se"
- name: LOOKUP_SERVER
value: "https://lookup.drive.test.sunet.se"
- name: MAIL_DOMAIN
value: "drive.test.sunet.se"
- name: MAIL_SMTPNAME
value: "noreply@drive.test.sunet.se"
- name: NEXTCLOUD_TRUSTED_DOMAINS
value: "vr.drive.test.sunet.se"
- name: OBJECTSTORE_S3_BUCKET
value: "primary-vr-drive-test.sunet.se"
- name: SITE_NAME
value: "vr.drive.test.sunet.se"


@@ -5,20 +5,18 @@ metadata:
name: health-ingress
namespace: health
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: health-node
port:
number: 8443
ingressClassName: traefik
ingressClassName: nginx
tls:
- hosts:
- kube.drive.test.sunet.se
secretName: tls-secret
rules:
- host: kube.drive.test.sunet.se
http:


@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: health-ingress
namespace: health
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: health-node
port:
number: 8443
ingressClassName: nginx
tls:
- hosts:
- sunet-kube.drive.sunet.se
secretName: tls-secret
rules:
- host: sunet-kube.drive.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: health-node
port:
number: 8080


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: health-ingress.yml


@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: health-ingress
namespace: health
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: health-node
port:
number: 8443
ingressClassName: nginx
tls:
- hosts:
- sunet-kube.drive.test.sunet.se
secretName: tls-secret
rules:
- host: sunet-kube.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: health-node
port:
number: 8080


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: health-ingress.yml


@@ -0,0 +1,28 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jupyterhub-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: proxy-public
port:
number: 80
ingressClassName: nginx
tls:
- hosts:
- vr-jupyter.drive.sunet.se
secretName: prod-tls-secret
rules:
- host: vr-jupyter.drive.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: proxy-public
port:
number: 80


@@ -0,0 +1,24 @@
---
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
labels:
app: jupyterhub-node
name: jupyterhub-node
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: jupyterhub-node
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
kind: List
metadata:
resourceVersion: ""
selfLink: ""


@@ -0,0 +1,16 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: [../../../base/]
helmCharts:
- includeCRDs: true
name: jupyterhub
releaseName: vr-jupyterhub
valuesFile: ./values/values.yaml
version: 3.2.1
namespace: vr-jupyterhub
helmGlobals:
chartHome: ../../../base/charts/
patches:
- path: jupyterhub-ingress.yml
- path: jupyterhub-service.yml
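
Note that kustomize only renders the `helmCharts` field when its Helm integration is switched on; a hedged sketch of building this overlay locally (the overlay path is an assumption):

```shell
# --enable-helm lets kustomize inflate the chart from chartHome before applying patches.
kustomize build --enable-helm jupyterhub/overlays/vr-prod
```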


@@ -0,0 +1,335 @@
debug:
enabled: true
hub:
config:
Authenticator:
auto_login: true
enable_auth_state: true
JupyterHub:
tornado_settings:
headers: { 'Content-Security-Policy': "frame-ancestors *;" }
db:
pvc:
storageClassName: csi-sc-cinderplugin
extraConfig:
oauthCode: |
import time
import requests
from datetime import datetime
from oauthenticator.generic import GenericOAuthenticator
token_url = 'https://' + os.environ['NEXTCLOUD_HOST'] + '/index.php/apps/oauth2/api/v1/token'
debug = os.environ.get('NEXTCLOUD_DEBUG_OAUTH', 'false').lower() in ['true', '1', 'yes']
def get_nextcloud_access_token(refresh_token):
client_id = os.environ['NEXTCLOUD_CLIENT_ID']
client_secret = os.environ['NEXTCLOUD_CLIENT_SECRET']
code = refresh_token
data = {
'grant_type': 'refresh_token',
'code': code,
'refresh_token': refresh_token,
'client_id': client_id,
'client_secret': client_secret
}
response = requests.post(token_url, data=data)
if debug:
print(response.text)
return response.json()
def post_auth_hook(authenticator, handler, authentication):
user = authentication['auth_state']['oauth_user']['ocs']['data']['id']
auth_state = authentication['auth_state']
auth_state['token_expires'] = time.time() + auth_state['token_response']['expires_in']
authentication['auth_state'] = auth_state
return authentication
class NextcloudOAuthenticator(GenericOAuthenticator):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.user_dict = {}
async def pre_spawn_start(self, user, spawner):
super().pre_spawn_start(user, spawner)
auth_state = await user.get_auth_state()
if not auth_state:
return
access_token = auth_state['access_token']
spawner.environment['NEXTCLOUD_ACCESS_TOKEN'] = access_token
async def refresh_user(self, user, handler=None):
auth_state = await user.get_auth_state()
if not auth_state:
if debug:
print(f'auth_state missing for {user}')
return False
access_token = auth_state['access_token']
refresh_token = auth_state['refresh_token']
token_response = auth_state['token_response']
now = time.time()
now_hr = datetime.fromtimestamp(now)
expires = auth_state['token_expires']
expires_hr = datetime.fromtimestamp(expires)
expires = 0
if debug:
print(f'auth_state for {user}: {auth_state}')
if now >= expires:
if debug:
print(f'Time is: {now_hr}, token expired: {expires_hr}')
print(f'Refreshing token for {user}')
try:
token_response = get_nextcloud_access_token(refresh_token)
auth_state['access_token'] = token_response['access_token']
auth_state['refresh_token'] = token_response['refresh_token']
auth_state['token_expires'] = now + token_response['expires_in']
auth_state['token_response'] = token_response
if debug:
print(f'Successfully refreshed token for {user.name}')
print(f'auth_state for {user.name}: {auth_state}')
return {'name': user.name, 'auth_state': auth_state}
except Exception as e:
if debug:
print(f'Failed to refresh token for {user}')
return False
return False
if debug:
print(f'Time is: {now_hr}, token expires: {expires_hr}')
return True
c.JupyterHub.authenticator_class = NextcloudOAuthenticator
c.NextcloudOAuthenticator.client_id = os.environ['NEXTCLOUD_CLIENT_ID']
c.NextcloudOAuthenticator.client_secret = os.environ['NEXTCLOUD_CLIENT_SECRET']
c.NextcloudOAuthenticator.login_service = 'Sunet Drive'
c.NextcloudOAuthenticator.username_claim = lambda r: r.get('ocs', {}).get('data', {}).get('id')
c.NextcloudOAuthenticator.userdata_url = 'https://' + os.environ['NEXTCLOUD_HOST'] + '/ocs/v2.php/cloud/user?format=json'
c.NextcloudOAuthenticator.authorize_url = 'https://' + os.environ['NEXTCLOUD_HOST'] + '/index.php/apps/oauth2/authorize'
c.NextcloudOAuthenticator.token_url = token_url
c.NextcloudOAuthenticator.oauth_callback_url = 'https://' + os.environ['JUPYTER_HOST'] + '/hub/oauth_callback'
c.NextcloudOAuthenticator.allow_all = True
c.NextcloudOAuthenticator.refresh_pre_spawn = True
c.NextcloudOAuthenticator.enable_auth_state = True
c.NextcloudOAuthenticator.auth_refresh_age = 3600
c.NextcloudOAuthenticator.post_auth_hook = post_auth_hook
serviceCode: |
import sys
c.JupyterHub.load_roles = [
{
"name": "refresh-token",
"services": [
"refresh-token"
],
"scopes": [
"read:users",
"admin:auth_state"
]
},
{
"name": "user",
"scopes": [
"access:services!service=refresh-token",
"read:services!service=refresh-token",
"self",
],
},
{
"name": "server",
"scopes": [
"access:services!service=refresh-token",
"read:services!service=refresh-token",
"inherit",
],
}
]
c.JupyterHub.services = [
{
'name': 'refresh-token',
'url': 'http://' + os.environ.get('HUB_SERVICE_HOST', 'hub') + ':' + os.environ.get('HUB_SERVICE_PORT_REFRESH_TOKEN', '8082'),
'display': False,
'oauth_no_confirm': True,
'api_token': os.environ['JUPYTERHUB_API_KEY'],
'command': [sys.executable, '/usr/local/etc/jupyterhub/refresh-token.py']
}
]
c.JupyterHub.admin_users = {"refresh-token"}
c.JupyterHub.api_tokens = {
os.environ['JUPYTERHUB_API_KEY']: "refresh-token",
}
extraFiles:
refresh-token.py:
mountPath: /usr/local/etc/jupyterhub/refresh-token.py
stringData: |
"""A token refresh service authenticating with the Hub.
This service serves `/services/refresh-token/`,
authenticated with the Hub,
showing the user their own info.
"""
import json
import os
import requests
import socket
from jupyterhub.services.auth import HubAuthenticated
from jupyterhub.utils import url_path_join
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application, HTTPError, RequestHandler, authenticated
from urllib.parse import urlparse
debug = os.environ.get('NEXTCLOUD_DEBUG_OAUTH', 'false').lower() in ['true', '1', 'yes']
def my_debug(s):
if debug:
with open("/proc/1/fd/1", "a") as stdout:
print(s, file=stdout)
class RefreshHandler(HubAuthenticated, RequestHandler):
def api_request(self, method, url, **kwargs):
my_debug(f'{self.hub_auth}')
url = url_path_join(self.hub_auth.api_url, url)
allow_404 = kwargs.pop('allow_404', False)
headers = kwargs.setdefault('headers', {})
headers.setdefault('Authorization', f'token {self.hub_auth.api_token}')
try:
r = requests.request(method, url, **kwargs)
except requests.ConnectionError as e:
my_debug(f'Error connecting to {url}: {e}')
msg = f'Failed to connect to Hub API at {url}.'
msg += f' Is the Hub accessible at this URL (from host: {socket.gethostname()})?'
if '127.0.0.1' in url:
msg += ' Make sure to set c.JupyterHub.hub_ip to an IP accessible to' + \
' single-user servers if the servers are not on the same host as the Hub.'
raise HTTPError(500, msg)
data = None
if r.status_code == 404 and allow_404:
pass
elif r.status_code == 403:
my_debug(
'Lacking permission to check authorization with JupyterHub,' +
f' my auth token may have expired: [{r.status_code}] {r.reason}'
)
my_debug(r.text)
raise HTTPError(
500,
'Permission failure checking authorization, I may need a new token'
)
elif r.status_code >= 500:
my_debug(f'Upstream failure verifying auth token: [{r.status_code}] {r.reason}')
my_debug(r.text)
raise HTTPError(
502, 'Failed to check authorization (upstream problem)')
elif r.status_code >= 400:
my_debug(f'Failed to check authorization: [{r.status_code}] {r.reason}')
my_debug(r.text)
raise HTTPError(500, 'Failed to check authorization')
else:
data = r.json()
return data
@authenticated
def get(self):
user_model = self.get_current_user()
# Fetch current auth state
user_data = self.api_request('GET', url_path_join('users', user_model['name']))
auth_state = user_data['auth_state']
access_token = auth_state['access_token']
token_expires = auth_state['token_expires']
self.set_header('content-type', 'application/json')
self.write(json.dumps({'access_token': access_token, 'token_expires': token_expires}, indent=1, sort_keys=True))
class PingHandler(RequestHandler):
def get(self):
my_debug(f"DEBUG: In ping get")
self.set_header('content-type', 'application/json')
self.write(json.dumps({'ping': 1}))
def main():
app = Application([
(os.environ['JUPYTERHUB_SERVICE_PREFIX'] + 'tokens', RefreshHandler),
(os.environ['JUPYTERHUB_SERVICE_PREFIX'] + '/?', PingHandler),
])
http_server = HTTPServer(app)
url = urlparse(os.environ['JUPYTERHUB_SERVICE_URL'])
http_server.listen(url.port)
IOLoop.current().start()
if __name__ == '__main__':
main()
networkPolicy:
ingress:
- ports:
- port: 8082
from:
- podSelector:
matchLabels:
hub.jupyter.org/network-access-hub: "true"
service:
extraPorts:
- port: 8082
targetPort: 8082
name: refresh-token
extraEnv:
NEXTCLOUD_DEBUG_OAUTH: "no"
NEXTCLOUD_HOST: vr.drive.sunet.se
JUPYTER_HOST: vr-jupyter.drive.sunet.se
JUPYTERHUB_API_KEY:
valueFrom:
secretKeyRef:
name: jupyterhub-secrets
key: api-key
JUPYTERHUB_CRYPT_KEY:
valueFrom:
secretKeyRef:
name: jupyterhub-secrets
key: crypt-key
NEXTCLOUD_CLIENT_ID:
valueFrom:
secretKeyRef:
name: nextcloud-oauth-secrets
key: client-id
NEXTCLOUD_CLIENT_SECRET:
valueFrom:
secretKeyRef:
name: nextcloud-oauth-secrets
key: client-secret
proxy:
chp:
networkPolicy:
egress:
- to:
- podSelector:
matchLabels:
app: jupyterhub
component: hub
ports:
- port: 8082
singleuser:
image:
name: docker.sunet.se/drive/jupyter-custom
tag: lab-4.0.10-sunet4
storage:
dynamic:
storageClass: csi-sc-cinderplugin
extraEnv:
JUPYTER_ENABLE_LAB: "yes"
JUPYTER_HOST: vr-jupyter.drive.sunet.se
NEXTCLOUD_HOST: vr.drive.sunet.se
extraFiles:
jupyter_notebook_config:
mountPath: /home/jovyan/.jupyter/jupyter_server_config.py
stringData: |
import os
c = get_config()
c.NotebookApp.allow_origin = '*'
c.NotebookApp.tornado_settings = {
'headers': { 'Content-Security-Policy': "frame-ancestors *;" }
}
os.system('/usr/local/bin/nc-sync')
mode: 0644


@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jupyterhub-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
ingressClassName: nginx
defaultBackend:
service:
name: proxy-public
port:
number: 80
tls:
- hosts:
- sunet-jupyter.drive.sunet.se
secretName: tls-secret
rules:
- host: sunet-jupyter.drive.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: proxy-public
port:
number: 80


@@ -0,0 +1,24 @@
---
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
labels:
app: jupyterhub-node
name: jupyterhub-node
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: jupyterhub-node
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
kind: List
metadata:
resourceVersion: ""
selfLink: ""


@@ -0,0 +1,16 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: [../../../base/]
helmCharts:
- includeCRDs: true
name: jupyterhub
releaseName: sunet-jupyterhub
valuesFile: ./values/values.yaml
version: 3.2.1
namespace: sunet-jupyterhub
helmGlobals:
chartHome: ../../../base/charts/
patches:
- path: jupyterhub-ingress.yml
- path: jupyterhub-service.yml


@@ -0,0 +1,337 @@
debug:
enabled: true
hub:
config:
Authenticator:
auto_login: true
enable_auth_state: true
JupyterHub:
tornado_settings:
headers: { 'Content-Security-Policy': "frame-ancestors *;" }
db:
pvc:
storageClassName: csi-sc-cinderplugin
extraConfig:
oauthCode: |
import time
import requests
from datetime import datetime
from oauthenticator.generic import GenericOAuthenticator
token_url = 'https://' + os.environ['NEXTCLOUD_HOST'] + '/index.php/apps/oauth2/api/v1/token'
debug = os.environ.get('NEXTCLOUD_DEBUG_OAUTH', 'false').lower() in ['true', '1', 'yes']
def get_nextcloud_access_token(refresh_token):
client_id = os.environ['NEXTCLOUD_CLIENT_ID']
client_secret = os.environ['NEXTCLOUD_CLIENT_SECRET']
code = refresh_token
data = {
'grant_type': 'refresh_token',
'code': code,
'refresh_token': refresh_token,
'client_id': client_id,
'client_secret': client_secret
}
response = requests.post(token_url, data=data)
if debug:
print(response.text)
return response.json()
def post_auth_hook(authenticator, handler, authentication):
user = authentication['auth_state']['oauth_user']['ocs']['data']['id']
auth_state = authentication['auth_state']
auth_state['token_expires'] = time.time() + auth_state['token_response']['expires_in']
authentication['auth_state'] = auth_state
return authentication
class NextcloudOAuthenticator(GenericOAuthenticator):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.user_dict = {}
async def pre_spawn_start(self, user, spawner):
super().pre_spawn_start(user, spawner)
auth_state = await user.get_auth_state()
if not auth_state:
return
access_token = auth_state['access_token']
spawner.environment['NEXTCLOUD_ACCESS_TOKEN'] = access_token
async def refresh_user(self, user, handler=None):
auth_state = await user.get_auth_state()
if not auth_state:
if debug:
print(f'auth_state missing for {user}')
return False
access_token = auth_state['access_token']
refresh_token = auth_state['refresh_token']
token_response = auth_state['token_response']
now = time.time()
now_hr = datetime.fromtimestamp(now)
expires = auth_state['token_expires']
expires_hr = datetime.fromtimestamp(expires)
expires = 0
if debug:
print(f'auth_state for {user}: {auth_state}')
if now >= expires:
if debug:
print(f'Time is: {now_hr}, token expired: {expires_hr}')
print(f'Refreshing token for {user}')
try:
token_response = get_nextcloud_access_token(refresh_token)
auth_state['access_token'] = token_response['access_token']
auth_state['refresh_token'] = token_response['refresh_token']
auth_state['token_expires'] = now + token_response['expires_in']
auth_state['token_response'] = token_response
if debug:
print(f'Successfully refreshed token for {user.name}')
print(f'auth_state for {user.name}: {auth_state}')
return {'name': user.name, 'auth_state': auth_state}
except Exception as e:
if debug:
print(f'Failed to refresh token for {user}')
return False
return False
if debug:
print(f'Time is: {now_hr}, token expires: {expires_hr}')
return True
c.JupyterHub.authenticator_class = NextcloudOAuthenticator
c.NextcloudOAuthenticator.client_id = os.environ['NEXTCLOUD_CLIENT_ID']
c.NextcloudOAuthenticator.client_secret = os.environ['NEXTCLOUD_CLIENT_SECRET']
c.NextcloudOAuthenticator.login_service = 'Sunet Drive'
c.NextcloudOAuthenticator.username_claim = lambda r: r.get('ocs', {}).get('data', {}).get('id')
c.NextcloudOAuthenticator.userdata_url = 'https://' + os.environ['NEXTCLOUD_HOST'] + '/ocs/v2.php/cloud/user?format=json'
c.NextcloudOAuthenticator.authorize_url = 'https://' + os.environ['NEXTCLOUD_HOST'] + '/index.php/apps/oauth2/authorize'
c.NextcloudOAuthenticator.token_url = token_url
c.NextcloudOAuthenticator.oauth_callback_url = 'https://' + os.environ['JUPYTER_HOST'] + '/hub/oauth_callback'
c.NextcloudOAuthenticator.allow_all = True
c.NextcloudOAuthenticator.refresh_pre_spawn = True
c.NextcloudOAuthenticator.enable_auth_state = True
c.NextcloudOAuthenticator.auth_refresh_age = 3600
c.NextcloudOAuthenticator.post_auth_hook = post_auth_hook
serviceCode: |
import sys
c.JupyterHub.load_roles = [
{
"name": "refresh-token",
"services": [
"refresh-token"
],
"scopes": [
"read:users",
"admin:auth_state"
]
},
{
"name": "user",
"scopes": [
"access:services!service=refresh-token",
"read:services!service=refresh-token",
"self",
],
},
{
"name": "server",
"scopes": [
"access:services!service=refresh-token",
"read:services!service=refresh-token",
"inherit",
],
}
]
c.JupyterHub.services = [
{
'name': 'refresh-token',
'url': 'http://' + os.environ.get('HUB_SERVICE_HOST', 'hub') + ':' + os.environ.get('HUB_SERVICE_PORT_REFRESH_TOKEN', '8082'),
'display': False,
'oauth_no_confirm': True,
'api_token': os.environ['JUPYTERHUB_API_KEY'],
'command': [sys.executable, '/usr/local/etc/jupyterhub/refresh-token.py']
}
]
c.JupyterHub.admin_users = {"refresh-token"}
c.JupyterHub.api_tokens = {
os.environ['JUPYTERHUB_API_KEY']: "refresh-token",
}
extraFiles:
refresh-token.py:
mountPath: /usr/local/etc/jupyterhub/refresh-token.py
stringData: |
"""A token refresh service authenticating with the Hub.
This service serves `/services/refresh-token/`,
authenticated with the Hub,
showing the user their own info.
"""
import json
import os
import requests
import socket
from jupyterhub.services.auth import HubAuthenticated
from jupyterhub.utils import url_path_join
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application, HTTPError, RequestHandler, authenticated
from urllib.parse import urlparse
debug = os.environ.get('NEXTCLOUD_DEBUG_OAUTH', 'false').lower() in ['true', '1', 'yes']
def my_debug(s):
if debug:
with open("/proc/1/fd/1", "a") as stdout:
print(s, file=stdout)
class RefreshHandler(HubAuthenticated, RequestHandler):
def api_request(self, method, url, **kwargs):
my_debug(f'{self.hub_auth}')
url = url_path_join(self.hub_auth.api_url, url)
allow_404 = kwargs.pop('allow_404', False)
headers = kwargs.setdefault('headers', {})
headers.setdefault('Authorization', f'token {self.hub_auth.api_token}')
try:
r = requests.request(method, url, **kwargs)
except requests.ConnectionError as e:
my_debug(f'Error connecting to {url}: {e}')
msg = f'Failed to connect to Hub API at {url}.'
msg += f' Is the Hub accessible at this URL (from host: {socket.gethostname()})?'
if '127.0.0.1' in url:
msg += ' Make sure to set c.JupyterHub.hub_ip to an IP accessible to' + \
' single-user servers if the servers are not on the same host as the Hub.'
raise HTTPError(500, msg)
data = None
if r.status_code == 404 and allow_404:
pass
elif r.status_code == 403:
my_debug(
'Lacking permission to check authorization with JupyterHub,' +
f' my auth token may have expired: [{r.status_code}] {r.reason}'
)
my_debug(r.text)
raise HTTPError(
500,
'Permission failure checking authorization, I may need a new token'
)
elif r.status_code >= 500:
my_debug(f'Upstream failure verifying auth token: [{r.status_code}] {r.reason}')
my_debug(r.text)
raise HTTPError(
502, 'Failed to check authorization (upstream problem)')
elif r.status_code >= 400:
my_debug(f'Failed to check authorization: [{r.status_code}] {r.reason}')
my_debug(r.text)
raise HTTPError(500, 'Failed to check authorization')
else:
data = r.json()
return data
@authenticated
def get(self):
user_model = self.get_current_user()
# Fetch current auth state
user_data = self.api_request('GET', url_path_join('users', user_model['name']))
auth_state = user_data['auth_state']
access_token = auth_state['access_token']
token_expires = auth_state['token_expires']
self.set_header('content-type', 'application/json')
self.write(json.dumps({'access_token': access_token, 'token_expires': token_expires}, indent=1, sort_keys=True))
class PingHandler(RequestHandler):
def get(self):
my_debug(f"DEBUG: In ping get")
self.set_header('content-type', 'application/json')
self.write(json.dumps({'ping': 1}))
def main():
app = Application([
(os.environ['JUPYTERHUB_SERVICE_PREFIX'] + 'tokens', RefreshHandler),
(os.environ['JUPYTERHUB_SERVICE_PREFIX'] + '/?', PingHandler),
])
http_server = HTTPServer(app)
url = urlparse(os.environ['JUPYTERHUB_SERVICE_URL'])
http_server.listen(url.port)
IOLoop.current().start()
if __name__ == '__main__':
main()
networkPolicy:
ingress:
- ports:
- port: 8082
from:
- podSelector:
matchLabels:
hub.jupyter.org/network-access-hub: "true"
service:
extraPorts:
- port: 8082
targetPort: 8082
name: refresh-token
extraEnv:
NEXTCLOUD_DEBUG_OAUTH: "no"
NEXTCLOUD_HOST: sunet.drive.sunet.se
JUPYTER_HOST: sunet-jupyter.drive.sunet.se
JUPYTERHUB_API_KEY:
valueFrom:
secretKeyRef:
name: jupyterhub-secrets
key: api-key
JUPYTERHUB_CRYPT_KEY:
valueFrom:
secretKeyRef:
name: jupyterhub-secrets
key: crypt-key
NEXTCLOUD_CLIENT_ID:
valueFrom:
secretKeyRef:
name: nextcloud-oauth-secrets
key: client-id
NEXTCLOUD_CLIENT_SECRET:
valueFrom:
secretKeyRef:
name: nextcloud-oauth-secrets
key: client-secret
networkPolicy:
enabled: false
proxy:
chp:
networkPolicy:
egress:
- to:
- podSelector:
matchLabels:
app: jupyterhub
component: hub
ports:
- port: 8082
singleuser:
image:
name: docker.sunet.se/drive/jupyter-custom
tag: lab-4.0.10-sunet5
storage:
dynamic:
storageClass: csi-sc-cinderplugin
extraEnv:
JUPYTER_ENABLE_LAB: "yes"
JUPYTER_HOST: sunet-jupyter.drive.sunet.se
NEXTCLOUD_HOST: sunet.drive.sunet.se
extraFiles:
jupyter_notebook_config:
mountPath: /home/jovyan/.jupyter/jupyter_server_config.py
stringData: |
import os
c = get_config()
c.NotebookApp.allow_origin = '*'
c.NotebookApp.tornado_settings = {
'headers': { 'Content-Security-Policy': "frame-ancestors *;" }
}
os.system('/usr/local/bin/nc-sync')
mode: 0644
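
A single-user server can then trade its JupyterHub token for the Nextcloud access token via the service's `tokens` route defined above; a hedged sketch from inside a user pod (`JUPYTERHUB_API_TOKEN` is injected by JupyterHub):

```shell
# RefreshHandler is HubAuthenticated, so the user's own API token authorizes the call.
curl -s -H "Authorization: token $JUPYTERHUB_API_TOKEN" \
  "https://sunet-jupyter.drive.sunet.se/services/refresh-token/tokens"
```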


@@ -4,15 +4,14 @@ kind: Ingress
metadata:
name: jupyterhub-ingress
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
kubernetes.io/ingress.class: nginx
spec:
ingressClassName: nginx
defaultBackend:
service:
name: proxy-public
port:
number: 8443
number: 80
tls:
- hosts:
- sunet-jupyter.drive.test.sunet.se


@@ -315,7 +315,7 @@ proxy:
singleuser:
image:
name: docker.sunet.se/drive/jupyter-custom
tag: lab-4.0.10-sunet4
tag: lab-4.0.10-sunet5
storage:
dynamic:
storageClass: csi-sc-cinderplugin


@@ -0,0 +1,7 @@
resources:
- portal-deployment.yml
- portal-ingress.yml
- portal-namespace.yml
- portal-service.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization


@@ -0,0 +1,27 @@
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: portal-node
namespace: portal
creationTimestamp:
labels:
app: portal-node
spec:
replicas: 3
selector:
matchLabels:
app: portal-node
template:
metadata:
creationTimestamp:
labels:
app: portal-node
spec:
containers:
- name: portal
image: docker.sunet.se/drive/portal:0.1.0-2
imagePullPolicy: Always
resources: {}
strategy: {}
status: {}
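
Once the overlay is applied, a hedged way to confirm the three portal replicas come up (names from the manifest above):

```shell
kubectl -n portal rollout status deployment/portal-node
kubectl -n portal get pods -l app=portal-node
```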


@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portal-ingress
namespace: portal
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: portal-node
port:
number: 8080
tls:
- hosts:
- portal.drive.test.sunet.se
secretName: tls-secret
ingressClassName: nginx
rules:
- host: portal.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portal-node
port:
number: 8080


@@ -0,0 +1,8 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: portal
spec:
finalizers:
- kubernetes


@@ -0,0 +1,25 @@
---
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
labels:
app: portal-node
name: portal-node
namespace: portal
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: portal-node
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
kind: List
metadata:
resourceVersion: ""
selfLink: ""


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: portal-ingress.yml


@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portal-ingress
namespace: portal
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: portal-node
port:
number: 8080
ingressClassName: nginx
tls:
- hosts:
- portal.drive.sunet.se
secretName: tls-secret
rules:
- host: portal.drive.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portal-node
port:
number: 8080


@@ -0,0 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: portal-ingress.yml


@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: portal-ingress
namespace: portal
annotations:
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
name: portal-node
port:
number: 8080
ingressClassName: nginx
tls:
- hosts:
- portal.drive.test.sunet.se
secretName: tls-secret
rules:
- host: portal.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: portal-node
port:
number: 8080


@@ -19,35 +19,34 @@ spec:
spec:
containers:
- name: doris
image: docker.sunet.se/rds/doris-rds:ci-RDS-Connectors-13
image: docker.sunet.se/rds/doris-rds:git-15de3c5b9
env:
- name: ASPNETCORE_ENVIRONMENT
value: Development
- name: ASPNETCORE_URLS
value: http://+:80
- name: Domain
value: sunet.se
- name: Logging__LogLevel__Default
value: Debug
- name: ScieboRds__ConnectorServiceName
value: layer1-port-doris
- name: ScieboRds__TokenStorageUrl
value: http://layer3-token-storage
- name: S3__Url
value: dummy
- name: ManifestIndex__Url
value: https://snd-storage-metadata-index-test-snd-dev.apps.k8s.gu.se
- name: ManifestIndex__ApiKey
- name: Doris__ApiKey
valueFrom:
secretKeyRef:
name: doris-api-key
name: doris-gu-secrets
key: "api-key"
- name: S3__AccessKey
- name: Doris__DorisApiEnabled
value: 'true'
- name: Doris__PrincipalDomain
value: gu.se
- name: Doris__ApiUrl
value: https://dev.snd.se/doris/api/rocrate
- name: NextCloud__BaseUrl
value: https://gu.drive.test.sunet.se
- name: NextCloud__User
value: _doris_datasets
- name: NextCloud__Password
valueFrom:
secretKeyRef:
name: doris-s3-key
key: "s3-key"
- name: S3__SecretKey
valueFrom:
secretKeyRef:
name: doris-s3-secret
key: "s3-secret"
name: doris-gu-secret
key: "nextcloudpw"
resources: {}
strategy: {}
status: {}


@@ -12,7 +12,7 @@ items:
ports:
- port: 80
protocol: TCP
targetPort: 80
targetPort: 8080
selector:
app: layer1-port-doris
sessionAffinity: None

rds/base/gu-service.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
---
apiVersion: v1
kind: Service
metadata:
name: gu-drive
namespace: helmrds
spec:
type: ExternalName
externalName: gu.drive.test.sunet.se
ports:
- port: 443
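
An ExternalName Service like this publishes a DNS CNAME rather than a cluster IP, so the RDS connectors can reach the external Nextcloud host through an in-cluster name. A hedged in-cluster check (the busybox image choice is an assumption):

```shell
# The service name should resolve to a CNAME for gu.drive.test.sunet.se.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup gu-drive.helmrds.svc.cluster.local
```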


@@ -3,6 +3,7 @@ resources:
- doris-deployment.yml
- rds-ingress.yml
- sunet-service.yml
- gu-service.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization


@@ -16,18 +16,22 @@ global:
ADDRESS: https://gu.drive.test.sunet.se
SUPPORT_EMAIL: drive@sunet.se
MANUAL_URL: www.sunet.se/services/molnbaserade-tjanster/sunet-drive
EFSS: nextcloud
- name: sunet.drive.test.sunet.se
ADDRESS: https://sunet.drive.test.sunet.se
SUPPORT_EMAIL: drive@sunet.se
MANUAL_URL: www.sunet.se/services/molnbaserade-tjanster/sunet-drive
EFSS: nextcloud
- name: su.drive.test.sunet.se
ADDRESS: https://su.drive.test.sunet.se
SUPPORT_EMAIL: drive@sunet.se
MANUAL_URL: www.sunet.se/services/molnbaserade-tjanster/sunet-drive
EFSS: nextcloud
- name: kau.drive.test.sunet.se
ADDRESS: https://kau.drive.test.sunet.se
SUPPORT_EMAIL: drive@sunet.se
MANUAL_URL: www.sunet.se/services/molnbaserade-tjanster/sunet-drive
EFSS: nextcloud
storageClass: csi-sc-cinderplugin
layer0-describo:
postgresql:
@@ -60,11 +64,11 @@ layer0-web:
environment:
SECRET_KEY: IAMLAYER0WEBSECRET
EMBED_MODE: true
layer1-port-doris:
enabled: false
environment:
DISPLAYNAME: Doris
METADATA_PROFILE: './metadata_profile.json'
# layer1-port-doris:
# enabled: false
# environment:
# DISPLAYNAME: Doris Connector
# METADATA_PROFILE: './metadata_profile.json'
layer1-port-owncloud:
enabled: true
layer1-port-zenodo:


@@ -5,9 +5,7 @@ metadata:
name: describo-ingress
namespace: helmrds
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
@@ -18,7 +16,7 @@ spec:
- hosts:
- describo.drive.test.sunet.se
secretName: tls-secret
ingressClassName: nginx
rules:
- host: describo.drive.test.sunet.se
http:


@@ -0,0 +1,11 @@
---
apiVersion: v1
kind: Service
metadata:
name: gu-drive
namespace: helmrds
spec:
type: ExternalName
externalName: gu.drive.test.sunet.se
ports:
- port: 443


@@ -6,3 +6,4 @@ patches:
- path: describo-ingress.yml
- path: rds-ingress.yml
- path: sunet-service.yml
- path: gu-service.yml


@@ -5,9 +5,7 @@ metadata:
name: rds-ingress
namespace: helmrds
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
kubernetes.io/ingress.class: nginx
spec:
defaultBackend:
service:
@@ -18,7 +16,7 @@ spec:
- hosts:
- rds.drive.test.sunet.se
secretName: tls-secret
ingressClassName: nginx
rules:
- host: rds.drive.test.sunet.se
http:


@@ -0,0 +1,38 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
ci/
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# MacOS
.DS_Store
# helm-unittest
./tests
.debug
__snapshot__
# helm-docs
README.md.gotmpl


@@ -0,0 +1,11 @@
apiVersion: v2
appVersion: v1beta2-1.6.1-3.5.0
description: A Helm chart for Spark on Kubernetes operator
home: https://github.com/kubeflow/spark-operator
keywords:
- spark
maintainers:
- email: yuchaoran2011@gmail.com
name: yuchaoran2011
name: spark-operator
version: 1.4.2


@@ -0,0 +1,146 @@
# spark-operator
![Version: 1.4.2](https://img.shields.io/badge/Version-1.4.2-informational?style=flat-square) ![AppVersion: v1beta2-1.6.1-3.5.0](https://img.shields.io/badge/AppVersion-v1beta2--1.6.1--3.5.0-informational?style=flat-square)
A Helm chart for Spark on Kubernetes operator
**Homepage:** <https://github.com/kubeflow/spark-operator>
## Introduction
This chart bootstraps a [Kubernetes Operator for Apache Spark](https://github.com/kubeflow/spark-operator) deployment using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Helm >= 3
- Kubernetes >= 1.16
## Previous Helm Chart
The previous `spark-operator` Helm chart hosted at [helm/charts](https://github.com/helm/charts) has been moved to this repository in accordance with the [Deprecation timeline](https://github.com/helm/charts#deprecation-timeline). Note that a few things have changed between this version and the old version:
- This repository **only** supports Helm chart installations using Helm 3+ since the `apiVersion` on the chart has been marked as `v2`.
- Previous versions of the Helm chart have not been migrated, and the version has been set to `1.0.0` at the onset. If you are looking for old versions of the chart, it's best to run `helm pull incubator/sparkoperator --version <your-version>` until you are ready to move to this repository's version.
- Several configuration properties have been changed; carefully review the [values](#values) section below to make sure you're aligned with the new values.
## Usage
### Add Helm Repo
```shell
helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update
```
See [helm repo](https://helm.sh/docs/helm/helm_repo) for command documentation.
### Install the chart
```shell
helm install [RELEASE_NAME] spark-operator/spark-operator
```
For example, if you want to create a release with name `spark-operator` in the `default` namespace:
```shell
helm install spark-operator spark-operator/spark-operator
```
Note that `helm` will fail to install if the namespace doesn't exist. Either create the namespace beforehand or pass the `--create-namespace` flag to the `helm install` command.
```shell
helm install spark-operator spark-operator/spark-operator \
--namespace spark-operator \
--create-namespace
```
See [helm install](https://helm.sh/docs/helm/helm_install) for command documentation.
### Upgrade the chart
```shell
helm upgrade [RELEASE_NAME] spark-operator/spark-operator [flags]
```
See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade) for command documentation.
### Uninstall the chart
```shell
helm uninstall [RELEASE_NAME]
```
This removes all the Kubernetes resources associated with the chart and deletes the release, except for the CRDs, which have to be removed manually.
See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall) for command documentation.
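
For example, the CRDs left behind can be deleted explicitly; a hedged sketch using the CRD names the operator installs upstream:

```shell
kubectl delete crd sparkapplications.sparkoperator.k8s.io
kubectl delete crd scheduledsparkapplications.sparkoperator.k8s.io
```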
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Affinity for pod assignment |
| batchScheduler.enable | bool | `false` | Enable batch scheduler for spark jobs scheduling. If enabled, users can specify batch scheduler name in spark application |
| commonLabels | object | `{}` | Common labels to add to the resources |
| controllerThreads | int | `10` | Operator concurrency, higher values might increase memory usage |
| envFrom | list | `[]` | Pod environment variable sources |
| fullnameOverride | string | `""` | String to override release name |
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
| image.repository | string | `"docker.io/kubeflow/spark-operator"` | Image repository |
| image.tag | string | `""` | if set, override the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | Image pull secrets |
| ingressUrlFormat | string | `""` | Ingress URL format. Requires the UI service to be enabled by setting `uiService.enable` to true. |
| istio.enabled | bool | `false` | When using `istio`, spark jobs need to run without a sidecar to properly terminate |
| labelSelectorFilter | string | `""` | A comma-separated list of key=value, or key labels to filter resources during watch and list based on the specified labels. |
| leaderElection.lockName | string | `"spark-operator-lock"` | Leader election lock name. Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability. |
| leaderElection.lockNamespace | string | `""` | Optionally store the lock in another namespace. Defaults to operator's namespace |
| logLevel | int | `2` | Set higher levels for more verbose logging |
| metrics.enable | bool | `true` | Enable prometheus metric scraping |
| metrics.endpoint | string | `"/metrics"` | Metrics serving endpoint |
| metrics.port | int | `10254` | Metrics port |
| metrics.portName | string | `"metrics"` | Metrics port name |
| metrics.prefix | string | `""` | Metric prefix, will be added to all exported metrics |
| nameOverride | string | `""` | String to partially override `spark-operator.fullname` template (will maintain the release name) |
| nodeSelector | object | `{}` | Node labels for pod assignment |
| podAnnotations | object | `{}` | Additional annotations to add to the pod |
| podLabels | object | `{}` | Additional labels to add to the pod |
| podMonitor | object | `{"enable":false,"jobLabel":"spark-operator-podmonitor","labels":{},"podMetricsEndpoint":{"interval":"5s","scheme":"http"}}` | Prometheus pod monitor for operator's pod. |
| podMonitor.enable | bool | `false` | If enabled, a pod monitor for operator's pod will be submitted. Note that prometheus metrics should be enabled as well. |
| podMonitor.jobLabel | string | `"spark-operator-podmonitor"` | The label to use to retrieve the job name from |
| podMonitor.labels | object | `{}` | Pod monitor labels |
| podMonitor.podMetricsEndpoint | object | `{"interval":"5s","scheme":"http"}` | Prometheus metrics endpoint properties. `metrics.portName` will be used as a port |
| podSecurityContext | object | `{}` | Pod security context |
| priorityClassName | string | `""` | A priority class to be used for running spark-operator pod. |
| rbac.annotations | object | `{}` | Optional annotations for rbac |
| rbac.create | bool | `false` | **DEPRECATED** use `createRole` and `createClusterRole` |
| rbac.createClusterRole | bool | `true` | Create and use RBAC `ClusterRole` resources |
| rbac.createRole | bool | `true` | Create and use RBAC `Role` resources |
| replicaCount | int | `1` | Desired number of pods, leaderElection will be enabled if this is greater than 1 |
| resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
| resources | object | `{}` | Pod resource requests and limits Note, that each job submission will spawn a JVM within the Spark Operator Pod using "/usr/local/openjdk-11/bin/java -Xmx128m". Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the following error: 'failed to run spark-submit for SparkApplication [...]: signal: killed' - when this happens, you may want to increase memory limits. |
| resyncInterval | int | `30` | Operator resync interval. Note that the operator will respond to events (e.g. create, update) unrelated to this setting |
| securityContext | object | `{}` | Operator container security context |
| serviceAccounts.spark.annotations | object | `{}` | Optional annotations for the spark service account |
| serviceAccounts.spark.create | bool | `true` | Create a service account for spark apps |
| serviceAccounts.spark.name | string | `""` | Optional name for the spark service account |
| serviceAccounts.sparkoperator.annotations | object | `{}` | Optional annotations for the operator service account |
| serviceAccounts.sparkoperator.create | bool | `true` | Create a service account for the operator |
| serviceAccounts.sparkoperator.name | string | `""` | Optional name for the operator service account |
| sidecars | list | `[]` | Sidecar containers |
| sparkJobNamespaces | list | `[""]` | List of namespaces where to run spark jobs |
| tolerations | list | `[]` | List of node taints to tolerate |
| uiService.enable | bool | `true` | Enable UI service creation for Spark application |
| volumeMounts | list | `[]` | |
| volumes | list | `[]` | |
| webhook.enable | bool | `false` | Enable webhook server |
| webhook.namespaceSelector | string | `""` | The webhook server will only operate on namespaces with this label, specified in the form key1=value1,key2=value2. Empty string (default) will operate on all namespaces |
| webhook.objectSelector | string | `""` | The webhook will only operate on resources with this label/s, specified in the form key1=value1,key2=value2, OR key in (value1,value2). Empty string (default) will operate on all objects |
| webhook.port | int | `8080` | Webhook service port |
| webhook.portName | string | `"webhook"` | Webhook container port name and service target port name |
| webhook.timeout | int | `30` | The annotations applied to init job, required to restore certs deleted by the cleanup job during upgrade |
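
As a usage example, a couple of the documented values overridden at install time (release and namespace names are placeholders):

```shell
helm install spark-operator spark-operator/spark-operator \
  --namespace spark-operator --create-namespace \
  --set webhook.enable=true \
  --set 'sparkJobNamespaces={spark-jobs}'
```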
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| yuchaoran2011 | <yuchaoran2011@gmail.com> | |

File diff suppressed because it is too large


@@ -0,0 +1,79 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "spark-operator.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "spark-operator.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "spark-operator.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "spark-operator.labels" -}}
helm.sh/chart: {{ include "spark-operator.chart" . }}
{{ include "spark-operator.selectorLabels" . }}
{{- if .Values.commonLabels }}
{{ toYaml .Values.commonLabels }}
{{- end }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "spark-operator.selectorLabels" -}}
app.kubernetes.io/name: {{ include "spark-operator.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to be used by the operator
*/}}
{{- define "spark-operator.serviceAccountName" -}}
{{- if .Values.serviceAccounts.sparkoperator.create -}}
{{ default (include "spark-operator.fullname" .) .Values.serviceAccounts.sparkoperator.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.sparkoperator.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to be used by spark apps
*/}}
{{- define "spark.serviceAccountName" -}}
{{- if .Values.serviceAccounts.spark.create -}}
{{- $sparkServiceaccount := printf "%s-%s" .Release.Name "spark" -}}
{{ default $sparkServiceaccount .Values.serviceAccounts.spark.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.spark.name }}
{{- end -}}
{{- end -}}
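Assuming a release named `spark-operator` (as the chart's unit tests below do), these helpers should resolve roughly as sketched here; this is an inferred mapping, not rendered output:

```yaml
# spark-operator.name               -> spark-operator
# spark-operator.fullname           -> spark-operator        (release name already contains the chart name)
# spark-operator.serviceAccountName -> spark-operator        (serviceAccounts.sparkoperator.name is "")
# spark.serviceAccountName          -> spark-operator-spark  (serviceAccounts.spark.name is "")
```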

@@ -0,0 +1,140 @@
# If the admission webhook is enabled, then a post-install step is required
# to generate and install the secret in the operator namespace.
# In the post-install hook, the token corresponding to the operator service account
# is used to authenticate with the Kubernetes API server to install the secret bundle.
{{- $jobNamespaces := .Values.sparkJobNamespaces | default list }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "spark-operator.fullname" . }}
labels:
{{- include "spark-operator.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "spark-operator.selectorLabels" . | nindent 6 }}
strategy:
type: Recreate
template:
metadata:
{{- if or .Values.podAnnotations .Values.metrics.enable }}
annotations:
{{- if .Values.metrics.enable }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.port }}"
prometheus.io/path: {{ .Values.metrics.endpoint }}
{{- end }}
{{- if .Values.podAnnotations }}
{{- toYaml .Values.podAnnotations | trim | nindent 8 }}
{{- end }}
{{- end }}
labels:
{{- include "spark-operator.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "spark-operator.serviceAccountName" . }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
image: {{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if gt (int .Values.replicaCount) 1 }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
{{- end }}
envFrom:
{{- toYaml .Values.envFrom | nindent 10 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 10 }}
{{- if or .Values.metrics.enable .Values.webhook.enable }}
ports:
{{ if .Values.metrics.enable -}}
- name: {{ .Values.metrics.portName | quote }}
containerPort: {{ .Values.metrics.port }}
{{- end }}
{{ if .Values.webhook.enable -}}
- name: {{ .Values.webhook.portName | quote }}
containerPort: {{ .Values.webhook.port }}
{{- end }}
{{ end -}}
args:
- -v={{ .Values.logLevel }}
- -logtostderr
{{- if eq (len $jobNamespaces) 1 }}
- -namespace={{ index $jobNamespaces 0 }}
{{- end }}
- -enable-ui-service={{ .Values.uiService.enable }}
- -ingress-url-format={{ .Values.ingressUrlFormat }}
- -controller-threads={{ .Values.controllerThreads }}
- -resync-interval={{ .Values.resyncInterval }}
- -enable-batch-scheduler={{ .Values.batchScheduler.enable }}
- -label-selector-filter={{ .Values.labelSelectorFilter }}
{{- if .Values.metrics.enable }}
- -enable-metrics=true
- -metrics-labels=app_type
- -metrics-port={{ .Values.metrics.port }}
- -metrics-endpoint={{ .Values.metrics.endpoint }}
- -metrics-prefix={{ .Values.metrics.prefix }}
{{- end }}
{{- if .Values.webhook.enable }}
- -enable-webhook=true
- -webhook-secret-name={{ include "spark-operator.webhookSecretName" . }}
- -webhook-secret-namespace={{ .Release.Namespace }}
- -webhook-svc-name={{ include "spark-operator.webhookServiceName" . }}
- -webhook-svc-namespace={{ .Release.Namespace }}
- -webhook-config-name={{ include "spark-operator.fullname" . }}-webhook-config
- -webhook-port={{ .Values.webhook.port }}
- -webhook-timeout={{ .Values.webhook.timeout }}
- -webhook-namespace-selector={{ .Values.webhook.namespaceSelector }}
- -webhook-object-selector={{ .Values.webhook.objectSelector }}
{{- end }}
- -enable-resource-quota-enforcement={{ .Values.resourceQuotaEnforcement.enable }}
{{- if gt (int .Values.replicaCount) 1 }}
- -leader-election=true
- -leader-election-lock-namespace={{ default .Release.Namespace .Values.leaderElection.lockNamespace }}
- -leader-election-lock-name={{ .Values.leaderElection.lockName }}
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.volumeMounts }}
volumeMounts:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- with .Values.sidecars }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
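The leader-election flags above are gated purely on `replicaCount`, so running the operator highly available is a values-only change; a minimal sketch:

```yaml
# Two replicas make the template emit -leader-election=true plus the lock flags.
replicaCount: 2
leaderElection:
  lockName: spark-operator-lock
  lockNamespace: ""  # empty string falls back to the release namespace
```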

@@ -0,0 +1,19 @@
{{ if and .Values.metrics.enable .Values.podMonitor.enable }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ include "spark-operator.name" . -}}-podmonitor
labels: {{ toYaml .Values.podMonitor.labels | nindent 4 }}
spec:
podMetricsEndpoints:
- interval: {{ .Values.podMonitor.podMetricsEndpoint.interval }}
port: {{ .Values.metrics.portName | quote }}
scheme: {{ .Values.podMonitor.podMetricsEndpoint.scheme }}
jobLabel: {{ .Values.podMonitor.jobLabel }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
{{- include "spark-operator.selectorLabels" . | nindent 6 }}
{{ end }}
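Because of the `and` guard on the first line, this PodMonitor only renders when metrics scraping is also enabled; for instance:

```yaml
metrics:
  enable: true  # required; the PodMonitor scrapes the metrics port by name
podMonitor:
  enable: true
  labels:
    release: prometheus  # illustrative: match your Prometheus Operator's podMonitorSelector
```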

@@ -0,0 +1,148 @@
{{- if or .Values.rbac.create .Values.rbac.createClusterRole -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "spark-operator.fullname" . }}
labels:
{{- include "spark-operator.labels" . | nindent 4 }}
{{- with .Values.rbac.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
- apiGroups:
- ""
resources:
- pods
- persistentvolumeclaims
verbs:
- "*"
- apiGroups:
- ""
resources:
- services
- configmaps
- secrets
verbs:
- create
- get
- delete
- update
- patch
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- create
- get
- delete
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- events
verbs:
- create
- update
- patch
- apiGroups:
- ""
resources:
- resourcequotas
verbs:
- get
- list
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- get
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- create
- get
- update
- delete
- apiGroups:
- sparkoperator.k8s.io
resources:
- sparkapplications
- sparkapplications/status
- sparkapplications/finalizers
- scheduledsparkapplications
- scheduledsparkapplications/status
- scheduledsparkapplications/finalizers
verbs:
- "*"
{{- if .Values.batchScheduler.enable }}
# required for the `volcano` batch scheduler
- apiGroups:
- scheduling.incubator.k8s.io
- scheduling.sigs.dev
- scheduling.volcano.sh
resources:
- podgroups
verbs:
- "*"
{{- end }}
{{ if .Values.webhook.enable }}
- apiGroups:
- batch
resources:
- jobs
verbs:
- delete
{{- end }}
{{- if gt (int .Values.replicaCount) 1 }}
- apiGroups:
- coordination.k8s.io
resources:
- leases
resourceNames:
- {{ .Values.leaderElection.lockName }}
verbs:
- get
- update
- patch
- delete
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "spark-operator.fullname" . }}
labels:
{{- include "spark-operator.labels" . | nindent 4 }}
{{- with .Values.rbac.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ include "spark-operator.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "spark-operator.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}
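The volcano `podgroups` rules above are only emitted when the batch scheduler is switched on:

```yaml
batchScheduler:
  enable: true  # adds the scheduling.volcano.sh (and related) ClusterRole rules
```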

@@ -0,0 +1,12 @@
{{- if .Values.serviceAccounts.sparkoperator.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "spark-operator.serviceAccountName" . }}
labels:
{{- include "spark-operator.labels" . | nindent 4 }}
{{- with .Values.serviceAccounts.sparkoperator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

@@ -0,0 +1,39 @@
{{- if or .Values.rbac.create .Values.rbac.createRole }}
{{- $jobNamespaces := .Values.sparkJobNamespaces | default list }}
{{- range $jobNamespace := $jobNamespaces }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: spark-role
namespace: {{ $jobNamespace }}
labels:
{{- include "spark-operator.labels" $ | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- pods
- services
- configmaps
- persistentvolumeclaims
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: spark
namespace: {{ $jobNamespace }}
labels:
{{- include "spark-operator.labels" $ | nindent 4 }}
subjects:
- kind: ServiceAccount
name: {{ include "spark.serviceAccountName" $ }}
namespace: {{ $jobNamespace }}
roleRef:
kind: Role
name: spark-role
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
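Since the template ranges over `sparkJobNamespaces`, listing several namespaces yields one `spark-role`/`spark` pair per namespace, which the unit tests further down exercise:

```yaml
sparkJobNamespaces:
  - ns1
  - ns2  # renders two Roles and two RoleBindings, one pair per namespace
```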

@@ -0,0 +1,14 @@
{{- if .Values.serviceAccounts.spark.create }}
{{- range $sparkJobNamespace := .Values.sparkJobNamespaces | default (list .Release.Namespace) }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "spark.serviceAccountName" $ }}
namespace: {{ $sparkJobNamespace }}
{{- with $.Values.serviceAccounts.spark.annotations }}
annotations: {{ toYaml . | nindent 4 }}
{{- end }}
labels: {{ include "spark-operator.labels" $ | nindent 4 }}
{{- end }}
{{- end }}

@@ -0,0 +1,14 @@
{{/*
Create the name of the secret to be used by webhook
*/}}
{{- define "spark-operator.webhookSecretName" -}}
{{ include "spark-operator.fullname" . }}-webhook-certs
{{- end -}}
{{/*
Create the name of the service to be used by webhook
*/}}
{{- define "spark-operator.webhookServiceName" -}}
{{ include "spark-operator.fullname" . }}-webhook-svc
{{- end -}}

@@ -0,0 +1,13 @@
{{- if .Values.webhook.enable -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "spark-operator.webhookSecretName" . }}
labels:
{{- include "spark-operator.labels" . | nindent 4 }}
data:
ca-key.pem: ""
ca-cert.pem: ""
server-key.pem: ""
server-cert.pem: ""
{{- end }}

@@ -0,0 +1,15 @@
{{- if .Values.webhook.enable -}}
apiVersion: v1
kind: Service
metadata:
name: {{ include "spark-operator.webhookServiceName" . }}
labels:
{{- include "spark-operator.labels" . | nindent 4 }}
spec:
selector:
{{- include "spark-operator.selectorLabels" . | nindent 4 }}
ports:
- port: 443
targetPort: {{ .Values.webhook.portName | quote }}
name: {{ .Values.webhook.portName }}
{{- end }}

@@ -0,0 +1,301 @@
suite: Test spark operator deployment
templates:
- deployment.yaml
release:
name: spark-operator
tests:
- it: Should contain namespace arg when sparkJobNamespaces is equal to 1
set:
sparkJobNamespaces:
- ns1
asserts:
- contains:
path: spec.template.spec.containers[0].args
content: -namespace=ns1
- it: Should add pod annotations if podAnnotations is set
set:
podAnnotations:
key1: value1
key2: value2
asserts:
- equal:
path: spec.template.metadata.annotations.key1
value: value1
- equal:
path: spec.template.metadata.annotations.key2
value: value2
- it: Should add prometheus annotations if metrics.enable is true
set:
metrics:
enable: true
port: 10254
endpoint: /metrics
asserts:
- equal:
path: spec.template.metadata.annotations["prometheus.io/scrape"]
value: "true"
- equal:
path: spec.template.metadata.annotations["prometheus.io/port"]
value: "10254"
- equal:
path: spec.template.metadata.annotations["prometheus.io/path"]
value: /metrics
- it: Should add secrets if imagePullSecrets is set
set:
imagePullSecrets:
- name: test-secret1
- name: test-secret2
asserts:
- equal:
path: spec.template.spec.imagePullSecrets[0].name
value: test-secret1
- equal:
path: spec.template.spec.imagePullSecrets[1].name
value: test-secret2
- it: Should add pod securityContext if podSecurityContext is set
set:
podSecurityContext:
runAsUser: 1000
runAsGroup: 2000
fsGroup: 3000
asserts:
- equal:
path: spec.template.spec.securityContext.runAsUser
value: 1000
- equal:
path: spec.template.spec.securityContext.runAsGroup
value: 2000
- equal:
path: spec.template.spec.securityContext.fsGroup
value: 3000
- it: Should use the specified image repository if image.repository and image.tag are set
set:
image:
repository: test-repository
tag: test-tag
asserts:
- equal:
path: spec.template.spec.containers[0].image
value: test-repository:test-tag
- it: Should use the specified image pull policy if image.pullPolicy is set
set:
image:
pullPolicy: Always
asserts:
- equal:
path: spec.template.spec.containers[0].imagePullPolicy
value: Always
- it: Should add container securityContext if securityContext is set
set:
securityContext:
runAsUser: 1000
runAsGroup: 2000
fsGroup: 3000
asserts:
- equal:
path: spec.template.spec.containers[0].securityContext.runAsUser
value: 1000
- equal:
path: spec.template.spec.containers[0].securityContext.runAsGroup
value: 2000
- equal:
path: spec.template.spec.containers[0].securityContext.fsGroup
value: 3000
- it: Should add metric ports if metrics.enable is true
set:
metrics:
enable: true
port: 10254
portName: metrics
asserts:
- contains:
path: spec.template.spec.containers[0].ports
content:
name: metrics
containerPort: 10254
count: 1
- it: Should add webhook ports if webhook.enable is true
set:
webhook:
enable: true
port: 8080
portName: webhook
asserts:
- contains:
path: spec.template.spec.containers[0].ports
content:
name: webhook
containerPort: 8080
count: 1
- it: Should add resources if resources is set
set:
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
asserts:
- equal:
path: spec.template.spec.containers[0].resources
value:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
- it: Should add sidecars if sidecars is set
set:
sidecars:
- name: sidecar1
image: sidecar-image1
- name: sidecar2
image: sidecar-image2
asserts:
- contains:
path: spec.template.spec.containers
content:
name: sidecar1
image: sidecar-image1
count: 1
- contains:
path: spec.template.spec.containers
content:
name: sidecar2
image: sidecar-image2
count: 1
- it: Should add volumes if volumes is set
set:
volumes:
- name: volume1
emptyDir: {}
- name: volume2
emptyDir: {}
asserts:
- contains:
path: spec.template.spec.volumes
content:
name: volume1
emptyDir: {}
count: 1
- contains:
path: spec.template.spec.volumes
content:
name: volume2
emptyDir: {}
count: 1
- it: Should add volume mounts if volumeMounts is set
set:
volumeMounts:
- name: volume1
mountPath: /volume1
- name: volume2
mountPath: /volume2
asserts:
- contains:
path: spec.template.spec.containers[0].volumeMounts
content:
name: volume1
mountPath: /volume1
count: 1
- contains:
path: spec.template.spec.containers[0].volumeMounts
content:
name: volume2
mountPath: /volume2
count: 1
- it: Should add nodeSelector if nodeSelector is set
set:
nodeSelector:
key1: value1
key2: value2
asserts:
- equal:
path: spec.template.spec.nodeSelector.key1
value: value1
- equal:
path: spec.template.spec.nodeSelector.key2
value: value2
- it: Should add affinity if affinity is set
set:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- antarctica-east1
- antarctica-west1
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
asserts:
- equal:
path: spec.template.spec.affinity
value:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- antarctica-east1
- antarctica-west1
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
- it: Should add tolerations if tolerations is set
set:
tolerations:
- key: key1
operator: Equal
value: value1
effect: NoSchedule
- key: key2
operator: Exists
effect: NoSchedule
asserts:
- equal:
path: spec.template.spec.tolerations
value:
- key: key1
operator: Equal
value: value1
effect: NoSchedule
- key: key2
operator: Exists
effect: NoSchedule
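These suites follow the helm-unittest plugin format (`suite`/`templates`/`tests`/`asserts`); assuming that plugin is installed, they would typically be run from the chart root:

```yaml
# helm unittest .   (helm-unittest plugin assumed to be installed)
# The suite-level `release.name: spark-operator` pins the release name the
# templates render with, which the asserted resource names depend on.
```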

@@ -0,0 +1,90 @@
suite: Test spark operator rbac
templates:
- rbac.yaml
release:
name: spark-operator
tests:
- it: Should not render spark operator rbac resources if rbac.create is false and rbac.createClusterRole is false
set:
rbac:
create: false
createClusterRole: false
asserts:
- hasDocuments:
count: 0
- it: Should render spark operator cluster role if rbac.create is true
set:
rbac:
create: true
documentIndex: 0
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
name: spark-operator
- it: Should render spark operator cluster role if rbac.createClusterRole is true
set:
rbac:
createClusterRole: true
documentIndex: 0
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
name: spark-operator
- it: Should render spark operator cluster role binding if rbac.create is true
set:
rbac:
create: true
documentIndex: 1
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
name: spark-operator
- it: Should render spark operator cluster role binding correctly if rbac.createClusterRole is true
set:
rbac:
createClusterRole: true
documentIndex: 1
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
name: spark-operator
- contains:
path: subjects
content:
kind: ServiceAccount
name: spark-operator
namespace: NAMESPACE
count: 1
- equal:
path: roleRef
value:
kind: ClusterRole
name: spark-operator
apiGroup: rbac.authorization.k8s.io
- it: Should add extra annotations to spark operator cluster role if rbac.annotations is set
set:
rbac:
annotations:
key1: value1
key2: value2
documentIndex: 0
asserts:
- equal:
path: metadata.annotations.key1
value: value1
- equal:
path: metadata.annotations.key2
value: value2

@@ -0,0 +1,54 @@
suite: Test spark operator service account
templates:
- serviceaccount.yaml
release:
name: spark-operator
tests:
- it: Should not render service account if serviceAccounts.sparkoperator.create is false
set:
serviceAccounts:
sparkoperator:
create: false
asserts:
- hasDocuments:
count: 0
- it: Should render service account if serviceAccounts.sparkoperator.create is true
set:
serviceAccounts:
sparkoperator:
create: true
asserts:
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: spark-operator
- it: Should use the specified service account name if serviceAccounts.sparkoperator.name is set
set:
serviceAccounts:
sparkoperator:
name: custom-service-account
asserts:
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: custom-service-account
- it: Should add extra annotations if serviceAccounts.sparkoperator.annotations is set
set:
serviceAccounts:
sparkoperator:
annotations:
key1: value1
key2: value2
asserts:
- equal:
path: metadata.annotations.key1
value: value1
- equal:
path: metadata.annotations.key2
value: value2

@@ -0,0 +1,133 @@
suite: Test spark rbac
templates:
- spark-rbac.yaml
release:
name: spark-operator
tests:
- it: Should not render spark rbac resources if rbac.create is false and rbac.createRole is false
set:
rbac:
create: false
createRole: false
asserts:
- hasDocuments:
count: 0
- it: Should render spark role if rbac.create is true
set:
rbac:
create: true
documentIndex: 0
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
name: spark-role
- it: Should render spark role if rbac.createRole is true
set:
rbac:
createRole: true
documentIndex: 0
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
name: spark-role
- it: Should render spark role binding if rbac.create is true
set:
rbac:
create: true
documentIndex: 1
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
name: spark
- it: Should render spark role binding if rbac.createRole is true
set:
rbac:
createRole: true
documentIndex: 1
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
name: spark
- it: Should create a single spark role with namespace "" by default
documentIndex: 0
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
name: spark-role
namespace: ""
- it: Should create a single spark role binding with namespace "" by default
values:
- ../values.yaml
documentIndex: 1
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
name: spark
namespace: ""
- it: Should render multiple spark roles if sparkJobNamespaces is set with multiple values
set:
sparkJobNamespaces:
- ns1
- ns2
documentIndex: 0
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
name: spark-role
namespace: ns1
- it: Should render multiple spark role bindings if sparkJobNamespaces is set with multiple values
set:
sparkJobNamespaces:
- ns1
- ns2
documentIndex: 1
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
name: spark
namespace: ns1
- it: Should render multiple spark roles if sparkJobNamespaces is set with multiple values
set:
sparkJobNamespaces:
- ns1
- ns2
documentIndex: 2
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
name: spark-role
namespace: ns2
- it: Should render multiple spark role bindings if sparkJobNamespaces is set with multiple values
set:
sparkJobNamespaces:
- ns1
- ns2
documentIndex: 3
asserts:
- containsDocument:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
name: spark
namespace: ns2

@@ -0,0 +1,112 @@
suite: Test spark service account
templates:
- spark-serviceaccount.yaml
release:
name: spark-operator
tests:
- it: Should not render service account if serviceAccounts.spark.create is false
set:
serviceAccounts:
spark:
create: false
asserts:
- hasDocuments:
count: 0
- it: Should render service account if serviceAccounts.spark.create is true
set:
serviceAccounts:
spark:
create: true
asserts:
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: spark-operator-spark
- it: Should use the specified service account name if serviceAccounts.spark.name is set
set:
serviceAccounts:
spark:
name: spark
asserts:
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: spark
- it: Should add extra annotations if serviceAccounts.spark.annotations is set
set:
serviceAccounts:
spark:
annotations:
key1: value1
key2: value2
asserts:
- equal:
path: metadata.annotations.key1
value: value1
- equal:
path: metadata.annotations.key2
value: value2
- it: Should create multiple service accounts if sparkJobNamespaces is set
set:
serviceAccounts:
spark:
name: spark
sparkJobNamespaces:
- ns1
- ns2
- ns3
documentIndex: 0
asserts:
- hasDocuments:
count: 3
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: spark
namespace: ns1
- it: Should create multiple service accounts if sparkJobNamespaces is set
set:
serviceAccounts:
spark:
name: spark
sparkJobNamespaces:
- ns1
- ns2
- ns3
documentIndex: 1
asserts:
- hasDocuments:
count: 3
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: spark
namespace: ns2
- it: Should create multiple service accounts if sparkJobNamespaces is set
set:
serviceAccounts:
spark:
name: spark
sparkJobNamespaces:
- ns1
- ns2
- ns3
documentIndex: 2
asserts:
- hasDocuments:
count: 3
- containsDocument:
apiVersion: v1
kind: ServiceAccount
name: spark
namespace: ns3

@@ -0,0 +1,31 @@
suite: Test spark operator webhook secret
templates:
- webhook/secret.yaml
release:
name: spark-operator
namespace: spark-operator
tests:
- it: Should not render the webhook secret if webhook.enable is false
asserts:
- hasDocuments:
count: 0
- it: Should render the webhook secret with empty data fields
set:
webhook:
enable: true
asserts:
- containsDocument:
apiVersion: v1
kind: Secret
name: spark-operator-webhook-certs
- equal:
path: data
value:
ca-key.pem: ""
ca-cert.pem: ""
server-key.pem: ""
server-cert.pem: ""

@@ -0,0 +1,33 @@
suite: Test spark operator webhook service
templates:
- webhook/service.yaml
release:
name: spark-operator
tests:
- it: Should not render the webhook service if webhook.enable is false
set:
webhook:
enable: false
asserts:
- hasDocuments:
count: 0
- it: Should render the webhook service correctly if webhook.enable is true
set:
webhook:
enable: true
portName: webhook
asserts:
- containsDocument:
apiVersion: v1
kind: Service
name: spark-operator-webhook-svc
- equal:
path: spec.ports[0]
value:
port: 443
targetPort: webhook
name: webhook

@@ -0,0 +1,189 @@
# Default values for spark-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- Common labels to add to the resources
commonLabels: {}
# replicaCount -- Desired number of pods; leaderElection will be enabled
# if this is greater than 1
replicaCount: 1
image:
# -- Image repository
repository: docker.io/kubeflow/spark-operator
# -- Image pull policy
pullPolicy: IfNotPresent
# -- if set, override the image tag whose default is the chart appVersion.
tag: ""
# -- Image pull secrets
imagePullSecrets: []
# -- String to partially override `spark-operator.fullname` template (will maintain the release name)
nameOverride: ""
# -- String to override release name
fullnameOverride: ""
rbac:
# -- **DEPRECATED** use `createRole` and `createClusterRole`
create: false
# -- Create and use RBAC `Role` resources
createRole: true
# -- Create and use RBAC `ClusterRole` resources
createClusterRole: true
# -- Optional annotations for rbac
annotations: {}
serviceAccounts:
spark:
# -- Create a service account for spark apps
create: true
# -- Optional name for the spark service account
name: ""
# -- Optional annotations for the spark service account
annotations: {}
sparkoperator:
# -- Create a service account for the operator
create: true
# -- Optional name for the operator service account
name: ""
# -- Optional annotations for the operator service account
annotations: {}
# -- List of namespaces in which to run spark jobs
sparkJobNamespaces:
- ""
# - ns1
# -- Operator concurrency, higher values might increase memory usage
controllerThreads: 10
# -- Operator resync interval. Note that the operator will respond to events (e.g. create, update)
# unrelated to this setting
resyncInterval: 30
uiService:
# -- Enable UI service creation for Spark applications
enable: true
# -- Ingress URL format.
# Requires the UI service to be enabled by setting `uiService.enable` to true.
ingressUrlFormat: ""
# -- Set higher levels for more verbose logging
logLevel: 2
# -- Pod environment variable sources
envFrom: []
# podSecurityContext -- Pod security context
podSecurityContext: {}
# securityContext -- Operator container security context
securityContext: {}
# sidecars -- Sidecar containers
sidecars: []
# volumes -- Operator volumes
volumes: []
# volumeMounts -- Operator volume mounts
volumeMounts: []
webhook:
# -- Enable webhook server
enable: false
# -- Webhook service port
port: 8080
# -- Webhook container port name and service target port name
portName: webhook
# -- The webhook server will only operate on namespaces with this label, specified in the form key1=value1,key2=value2.
# Empty string (default) will operate on all namespaces
namespaceSelector: ""
# -- The webhook will only operate on resources with this label/s, specified in the form key1=value1,key2=value2, OR key in (value1,value2).
# Empty string (default) will operate on all objects
objectSelector: ""
# -- Webhook timeout in seconds
timeout: 30
metrics:
# -- Enable prometheus metric scraping
enable: true
# -- Metrics port
port: 10254
# -- Metrics port name
portName: metrics
# -- Metrics serving endpoint
endpoint: /metrics
# -- Metrics prefix; it will be added to all exported metrics
prefix: ""
# -- Prometheus pod monitor for operator's pod.
podMonitor:
# -- If enabled, a pod monitor for operator's pod will be submitted. Note that prometheus metrics should be enabled as well.
enable: false
# -- Pod monitor labels
labels: {}
# -- The label to use to retrieve the job name from
jobLabel: spark-operator-podmonitor
# -- Prometheus metrics endpoint properties. `metrics.portName` will be used as a port
podMetricsEndpoint:
scheme: http
interval: 5s
# nodeSelector -- Node labels for pod assignment
nodeSelector: {}
# tolerations -- List of node taints to tolerate
tolerations: []
# affinity -- Affinity for pod assignment
affinity: {}
# podAnnotations -- Additional annotations to add to the pod
podAnnotations: {}
# podLabels -- Additional labels to add to the pod
podLabels: {}
# resources -- Pod resource requests and limits
# Note, that each job submission will spawn a JVM within the Spark Operator Pod using "/usr/local/openjdk-11/bin/java -Xmx128m".
# Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the following error:
# 'failed to run spark-submit for SparkApplication [...]: signal: killed' - when this happens, you may want to increase memory limits.
resources: {}
# limits:
# cpu: 100m
# memory: 300Mi
# requests:
# cpu: 100m
# memory: 300Mi
batchScheduler:
# -- Enable batch scheduler for spark jobs scheduling. If enabled, users can specify batch scheduler name in spark application
enable: false
resourceQuotaEnforcement:
# -- Whether to enable the ResourceQuota enforcement for SparkApplication resources.
# Requires the webhook to be enabled by setting `webhook.enable` to true.
# Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement.
enable: false
leaderElection:
# -- Leader election lock name.
# Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability.
lockName: "spark-operator-lock"
# -- Optionally store the lock in another namespace. Defaults to operator's namespace
lockNamespace: ""
istio:
# -- When using `istio`, spark jobs need to run without a sidecar to properly terminate
enabled: false
# labelSelectorFilter -- A comma-separated list of key=value or key labels used to filter resources during watch and list operations.
labelSelectorFilter: ""
# priorityClassName -- A priority class to be used for the spark-operator pod.
priorityClassName: ""
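As a worked example, exposing application UIs via ingress combines `uiService.enable` with a URL format; `{{$appName}}`/`{{$appNamespace}}` are variables the operator substitutes per SparkApplication, and the domain below is an assumption:

```yaml
uiService:
  enable: true  # ingressUrlFormat requires the UI service
ingressUrlFormat: "{{$appName}}.{{$appNamespace}}.spark.example.com"  # hostname pattern is illustrative
```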

@@ -0,0 +1,11 @@
---
resources:
- spark-master-controller.yml
- spark-master-service.yml
- spark-ui-proxy-controller.yml
- spark-ui-proxy-ingress.yml
- spark-ui-proxy-service.yml
- spark-worker-controller.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
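With all six manifests sitting next to this kustomization, the overlay would typically be applied with the kustomize support built into kubectl:

```yaml
# kubectl apply -k .
# or, standalone: kustomize build . | kubectl apply -f -
```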

@@ -0,0 +1,23 @@
kind: ReplicationController
apiVersion: v1
metadata:
name: spark-master-controller
spec:
replicas: 1
selector:
component: spark-master
template:
metadata:
labels:
component: spark-master
spec:
containers:
- name: spark-master
image: registry.k8s.io/spark:1.5.2_v1
command: ["/start-master"]
ports:
- containerPort: 7077
- containerPort: 8080
resources:
requests:
cpu: 100m

@@ -0,0 +1,14 @@
kind: Service
apiVersion: v1
metadata:
name: spark-master
spec:
ports:
- port: 7077
targetPort: 7077
name: spark
- port: 8080
targetPort: 8080
name: http
selector:
component: spark-master
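The matching `spark-worker-controller.yml` is not shown in this diff; below is a hedged sketch based on the classic upstream Spark example, with workers reaching the master through the Service above at spark://spark-master:7077:

```yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: spark-worker-controller
spec:
  replicas: 2  # replica count is an assumption
  selector:
    component: spark-worker
  template:
    metadata:
      labels:
        component: spark-worker
    spec:
      containers:
        - name: spark-worker
          image: registry.k8s.io/spark:1.5.2_v1  # same image as the master above
          command: ["/start-worker"]             # connects to spark://spark-master:7077
          ports:
            - containerPort: 8081                # worker web UI
          resources:
            requests:
              cpu: 100m
```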

@@ -0,0 +1,29 @@
kind: ReplicationController
apiVersion: v1
metadata:
name: spark-ui-proxy-controller
spec:
replicas: 1
selector:
component: spark-ui-proxy
template:
metadata:
labels:
component: spark-ui-proxy
spec:
containers:
- name: spark-ui-proxy
image: iguaziodocker/spark-ui-proxy:0.1.0
ports:
- containerPort: 80
resources:
requests:
cpu: 100m
args:
- spark-master:8080
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 120
timeoutSeconds: 5

@@ -0,0 +1,30 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: spark-ui-proxy-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
ingressClassName: nginx
defaultBackend:
service:
name: spark-ui-proxy
port:
number: 80
tls:
- hosts:
- spark.drive.test.sunet.se
secretName: tls-secret
rules:
- host: spark.drive.test.sunet.se
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: spark-ui-proxy
port:
number: 80

Some files were not shown because too many files have changed in this diff.