v3.3.0

This commit is contained in: parent 94e0e53fc8, commit 6d507a15d2

README.md (91 lines changed)
@@ -66,11 +66,10 @@ docker exec <COMPOSE_PROJECT_NAME>-php-fpm-1 curl -u elastic:<ELASTIC_PASSWORD>

|**folders**|**description**|
|:---|:---|
|deploy.sh|files to deploy on k8s or k3s (see ./deploy.sh -h)|
|helm|Chart root folder|
|Chart.yaml|Chart file|
|values-configs.yml|configs file used for the kubernetes manifests|
|values-secrets.yaml|secrets file for the kubernetes manifests (must be encrypted with sops in a production environment)|
|values-xxxx-configs.yml|configs file used for the kubernetes manifests|
|values-xxxx-secrets.yaml|secrets file for the kubernetes manifests (must be encrypted with sops in a production environment)|
|templates/elasticsearch|manifests for elasticsearch|
|templates/kibana|manifests for kibana|
|templates/mariadb|manifests for mariadb|
@@ -93,29 +92,21 @@ kubectl create secret docker-registry secret-regcred --dry-run=client \
```bash
cat certs/tls.key | base64 -w0
```
Copy the base64 result into the `values-secrets.yaml` file, under the `ssl_key` key.
Copy the base64 result into the `values-xxxx-secrets.yaml` file, under the `ssl_key` key.
```bash
cat certs/tls.crt | base64 -w0
```
Copy the base64 result into the `values-configs.yaml` file, under the `ssl_crt` key.
Copy the base64 result into the `values-xxxx-configs.yaml` file, under the `ssl_crt` key.
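
For reference, both values end up under the `site` key of the corresponding values files. A minimal sketch for the `kind` variant (the base64 strings below are truncated placeholders, not real output):

```yaml
# helm/values-kind-secrets.yaml (placeholder value)
site:
  ssl_key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...   # output of: cat certs/tls.key | base64 -w0
---
# helm/values-kind-configs.yaml (placeholder value)
site:
  ssl_crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...   # output of: cat certs/tls.crt | base64 -w0
```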

### Docker image version

In the `helm/Chart.yaml` file, the `appVersion` value must match the version of the docker image (see DOCKER_IMAGE_VERSION in the `.env` file and SITE_VERSION in the `docker/php-fpm/.env` file).
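
A minimal sketch consistent with this release; the `.env` entries referenced in the comment are assumptions about files that are not part of this diff:

```yaml
# helm/Chart.yaml
appVersion: "3.3.0"   # keep in sync with DOCKER_IMAGE_VERSION in .env and SITE_VERSION in docker/php-fpm/.env
```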

## Deployment by script

This is the recommended way.

> This script builds the docker image based on the Kubernetes VM architecture (AMD64 or ARM64). At each deployment the minor version of the image is incremented by 1.
```bash
./deploy.sh -n wwwgmo -k k3s
```
## Manual deployment
# Deployment
### Set kube system config
```bash
rm -f $HOME/.kube/config
```
for **kind**
```bash
sudo cp /root/.kube/config $HOME/.kube/config
```
for **k3s**
```bash
ln -s $HOME/.kube/k3s $HOME/.kube/config
```
@@ -127,15 +118,15 @@ ln -s $HOST/.kube/k8s $HOST/.kube/config
### Set namespace and kube system
```bash
export NS=wwwgmo
export KUBE_SYS=k3s|k8s
export KUBE_SYS=kind|k3s|k8s
```
### Test template
```bash
helm template $NS --set kube=$KUBE_SYS ./helm --values=./helm/values-configs.yaml --values=./helm/values-secrets.yaml --namespace $NS
helm template $NS ./helm --values=./helm/values-$KUBE_SYS-configs.yaml --values=./helm/values-$KUBE_SYS-secrets.yaml --namespace $NS
```
### Chart deployment
```bash
helm upgrade $NS --set kube=$KUBE_SYS ./helm --install --atomic --cleanup-on-fail --values=./helm/values-configs.yaml --values=./helm/values-secrets.yaml --namespace $NS --create-namespace
helm upgrade $NS ./helm --install --atomic --cleanup-on-fail --values=./helm/values-$KUBE_SYS-configs.yaml --values=./helm/values-$KUBE_SYS-secrets.yaml --namespace $NS --create-namespace
```

## Remove
@@ -145,10 +136,12 @@ kubectl delete namespaces $NS
```
## NOTES
### Cronjob
**No longer needed**. A job (`job-mariadb.yaml`), launched during deployment, has been created. We leave the procedure below for information.
When deploying manually (I do not know why), you must trigger the cronjob manually to make a DB backup so that the helm command terminates correctly.
```bash
kubectl create job -n $NS --from=cronjob/cronjob-mariadb-backupdb dbbackup-$(date +%Y-%m-%d-%H-%M-%S)
```

## Database
Not necessary, because the database is created during deployment. We leave the procedure below for information.
@@ -198,34 +191,50 @@ done
[MariaDB Statefulset](https://mariadb.org/create-statefulset-mariadb-application-in-k8s/)
[PHP-FPM, nginx, kubernetes and docker](https://matthewpalmer.net/kubernetes-app-developer/articles/php-fpm-nginx-kubernetes.html)

https://www.elastic.co/guide/en/elasticsearch/reference/8.18/docker.html

## Changelog
### 3.25 (2024-04-14)
**New features:**
* added elasticsearch and kibana
* added the `install` option to the `docker.sh` script to install the php elasticsearch module
* created the `deploy.sh` script

**Fixed bugs:**
* problem with displaying environment variables in the php site
### 3.3.0 (2025-06-29)
**New Features:**
- Displayed the Elasticsearch PHP client version on the `esinfo.php` page.
- Removed the condition checking for **k8s** or **k3s** in Kubernetes deployment.
- Added configuration and secret values for the `kind` system.
- Added a multi-platform docker image build.

**Bug Fixes:**
- Fixed some bugs.

**Updated:**
* added new features in README.md
* added a Changelog section to README.md
---

### 2.5 (2024-03-29)
**Fixed bugs:**
* fixed some bugs
### 3.2.5 (2024-04-14)

**New features:**
* possibility to deploy on k3s or k8s
**New Features:**
- Added **Elasticsearch** and **Kibana**.
- Introduced the `install` option in the `docker.sh` script to install the PHP Elasticsearch module.
- Created the `deploy.sh` script for deployment.

**Bug Fixes:**
- Fixed an issue with displaying environment variables in the PHP site.

**Updates:**
- Enhanced `README.md` with new feature documentation.
- Added a **Changelog** section to the `README.md`.

**Updated:**
* updated README.md
---

### 1.0 (2024-03-01)
* Created from scratch
### 2.5.0 (2024-03-29)

**New Features:**
- Added support for deploying on **k3s** or **k8s**.

**Bug Fixes:**
- Various bug fixes.

**Updates:**
- Updated `README.md`.

---

### 1.0.0 (2024-03-01)
- Initial project creation.

@@ -1,138 +0,0 @@
# GMo Lab
#version: '2.3'
services:

## Linux nginx mysql php
#  wwwgmo-nginx:
#    container_name: wwwgmo-nginx
#    hostname: wwwgmo-nginx
#    image: nginxinc/nginx-unprivileged:1.23-alpine
#    volumes:
#      - './docker/nginx/default.conf:/etc/nginx/conf.d/default.conf'
#      - './certs:/etc/nginx/certs/'
#      - './src:/var/www/html:rw,cached'
#    ports:
#      - '${NGINX_PORT}:8080' #local:docker
#    depends_on:
#      - wwwgmo-php-fpm
##
#  wwwgmo-php-fpm:
#    container_name: wwwgmo-php-fpm
#    hostname: wwwgmo-php-fpm
#    #image: wwwgmo
#    image: ${DOCKER_IMAGE}:${DOCKER_IMAGE_VERSION}
#    env_file:
#      - ./docker/php-fpm/.env
#    volumes:
#      - './src/:/var/www/html:rw,cached'
#    build:
#      context: .
#      dockerfile: ./docker/php-fpm/Dockerfile
#    ports:
#      - '9000:9000' #local:docker
#    depends_on:
#      - wwwgmo-mariadb
#
#  wwwgmo-mariadb:
#    container_name: wwwgmo-mariadb
#    hostname: wwwgmo-mariadb
#    #image: mysql:8.0-debian
#    #image: mysql/mysql-server:8.0.27-aarch64
#    image: mariadb:10.11.7
#    volumes:
#      - 'wwwgmo-mariadb:/var/lib/mysql:z'
#    env_file:
#      - ./docker/mariadb/.env # ports:
#
#  wwwgmo-phpmyadmin:
#    container_name: wwwgmo-pma
#    image: phpmyadmin
#    links:
#      - wwwgmo-mariadb
#    env_file:
#      - ./docker/mariadb/.env
#    restart: always
#    ports:
#      - ${PMA_PORT_WEB}:80
#
## EK

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.18.2
    container_name: elasticsearch
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=changeme
      - xpack.security.http.ssl.enabled=false
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      # - esdata:/usr/share/elasticsearch/data
      - wwwgmo-elasticsearch:/usr/share/elasticsearch/data

  kibana:
    image: docker.elastic.co/kibana/kibana:8.18.2
    container_name: kibana
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=changeme
      #- xpack.security.enabled=true

    ports:
      - "5601:5601"


#  wwwgmo-elasticsearch:
#    container_name: wwwgmo-elasticsearch
#    hostname: wwwgmo-elasticsearch
#    image: 'docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}'
#    #image: 'docker.elastic.co/elasticsearch/elasticsearch:8.8.1'
#    volumes:
#      - 'wwwgmo-elasticsearch:/usr/share/elasticsearch/data'
#    restart: unless-stopped
#    env_file:
#      - ./docker/elasticsearch/.env
#    ulimits:
#      memlock:
#        soft: -1
#        hard: -1
#      nofile:
#        soft: 65536
#        hard: 65536
#    cap_add:
#      - IPC_LOCK
#    ports:
#      - '9200:9200'
#      - '9300:9300'
#
## kibana
#  wwwgmo_kibana:
#    container_name: wwwgmo-kibana
#    hostname: wwwgmo-kibana
#    image: docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}
#    #image: docker.elastic.co/kibana/kibana:8.8.1
#    restart: unless-stopped
#    env_file:
#      - ./docker/kibana/.env
#    ports:
#      - 5601:5601
#    #depends_on:
#    #  - wwwgmo-elasticsearch
#
volumes:
  # wwwgmo-mariadb:
  wwwgmo-elasticsearch:
  # esdata:

@@ -26,4 +26,4 @@ version: 1.0.0
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
# Also set the docker image tag
appVersion: "3.25-amd64"
appVersion: "3.3.0"

@@ -14,3 +14,4 @@ data:
  ELASTIC_HOST: "service-elasticsearch"
  xpack.security.enabled: "true"
  xpack.security.transport.ssl.enabled: "false"
  xpack.security.http.ssl.enabled: "false"

@@ -11,5 +11,6 @@ type: Opaque
stringData:
  ELASTIC_USERNAME: elastic
  ELASTIC_PASSWORD: "{{ required ".Values.elastic.password entry is required!" .Values.elastic.password }}"
  # user for kibana to connect to elasticsearch
  KIBANA_PASSWORD: "{{ required ".Values.kibana.password entry is required!" .Values.kibana.password }}"
  KIBANA_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"

@@ -118,18 +118,8 @@ spec:
        app: elastic
        tier: elastic
    spec:
      {{- if eq "k3s" $.Values.kube }}
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: {{ required ".Values.elastic.persistentVolumeClaim.k3sStorageClassName entry is required!" .Values.elastic.persistentVolumeClaim.k3sStorageClassName }}
      {{- end }}
      {{- if eq "kind" $.Values.kube }}
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: {{ required ".Values.elastic.persistentVolumeClaim.kindStorageClassName entry is required!" .Values.elastic.persistentVolumeClaim.kindStorageClassName }}
      {{- end }}
      {{- if eq "k8s" $.Values.kube }}
      accessModes: [ "ReadWriteMany" ]
      storageClassName: {{ required ".Values.elastic.persistentVolumeClaim.k8sStorageClassName entry is required!" .Values.elastic.persistentVolumeClaim.k8sStorageClassName }}
      {{- end }}
      accessModes: {{ required ".Values.common.pvc.accessModes entry is required!" .Values.common.pvc.accessModes }}
      storageClassName: {{ required ".Values.common.pvc.storageClassName entry is required!" .Values.common.pvc.storageClassName }}
      resources:
        requests:
          storage: {{ required ".Values.elastic.persistentVolumeClaim.storageRequest entry is required!" .Values.elastic.persistentVolumeClaim.storageRequest }}

helm/templates/kibana/configmap-kibana.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-kibana
  namespace: {{ .Release.Namespace }}
  labels:
    app: site
    tier: kibana
    {{- include "site.labels" . | nindent 4 }}
# envFrom:
#   - secretRef:
#       name: secret-elasticsearch
data:
  #ELASTICSEARCH_HOSTS: "{{ required ".Values.elastic.host entry is required!" .Values.elastic.host }}"
  # ELASTICSEARCH_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"
  # ELASTICSEARCH_PASSWORD: "{{ required ".Values.kibana.password entry is required!" .Values.kibana.password }}"
  #KIBANA_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"
  #KIBANA_PASSWORD: "{{ required ".Values.kibana.password entry is required!" .Values.kibana.password }}"
  #KIBANA_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"
  #server.ssl.enabled: false
  ELASTICSEARCH_HOSTS: "{{ required ".Values.elastic.host entry is required!" .Values.elastic.host }}"
  ELASTICSEARCH_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"

@@ -4,20 +4,13 @@ metadata:
  namespace: {{ .Release.Namespace }}
  name: ingress-kibana
spec:
  {{- if eq "k3s" $.Values.kube }}
  ingressClassName: traefik
  {{- end }}
  {{- if eq "kind" $.Values.kube }}
  ingressClassName: nginx
  {{- end }}
  {{- if eq "k8s" $.Values.kube }}
  ingressClassName: nginx
  {{- end }}
  ingressClassName: {{ required ".Values.common.ingress.ingressClassName entry is required!" .Values.common.ingress.ingressClassName }}
  {{- if .Values.site.tls.enabled }}
  tls:
    - hosts:
        {{ required ".Values.site.ingress.kibana.hosts entry is required!" .Values.site.ingress.kibana.hosts }}
        {{ required ".Values.site.ingres.kibana.hosts entry is required!" .Values.site.ingress.kibana.hosts }}
      secretName: secret-ingress-tls

  {{- end }}
  rules:
    - host: {{ required ".Values.site.urlKibana entry is required!" .Values.site.urlKibana }}
      http:

@@ -10,5 +10,9 @@ stringData:
  #ELASTICSEARCH_PASSWORD: "{{ required ".Values.elastic.password entry is required!" .Values.elastic.password }}"
  #ELASTIC_USERNAME: elastic
  #ELASTIC_PASSWORD: "{{ required ".Values.elastic.password entry is required!" .Values.elastic.password }}"
  KIBANA_PASSWORD: kibanaPass55w0rd
  KIBANA_USERNAME: kibana_system_user
  #KIBANA_PASSWORD: kibanaPass55w0rd
  #KIBANA_USERNAME: kibana_system_user
  #ELASTICSEARCH_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"
  ELASTICSEARCH_PASSWORD: "{{ required ".Values.kibana.password entry is required!" .Values.kibana.password }}"
  #ELASTICSEARCH_HOSTS: "{{ required ".Values.elastic.host entry is required!" .Values.elastic.host }}"

@@ -8,7 +8,6 @@ metadata:
data:
  MYSQL_DATABASE: "{{ required ".Values.mariadb.databaseName entry is required!" .Values.mariadb.databaseName }}"
  MYSQL_USER: "{{ required ".Values.site.phpfpmSite.db.user entry is required!" .Values.site.phpfpmSite.db.user }}"
  MYSQL_PASSWORD: "{{ required ".Values.mariadb.dbPass entry is required!" .Values.mariadb.dbPass }}"
---
apiVersion: v1
kind: ConfigMap

helm/templates/mariadb/job-mariadb.yaml (new file, 56 lines)
@@ -0,0 +1,56 @@
# templates/mariadb/job-mariadb-backup-initial.yaml
# For the pod to start properly, the backup job must be run only once during deployment.
# This initial run is to create the backup Persistent Volume Claim (PVC).
# The pod depends on the presence of this PVC, identified as 'mariadb-datadir-bck',
# and will not be able to launch without its creation.
# ----
# For the pod to start properly, it is necessary to run the backup job
# once at deployment time. This initial run is intended to create the backup
# Persistent Volume Claim (PVC). The pod depends on the presence of this PVC,
# identified as 'mariadb-datadir-bck', and cannot start without it being created first.
# ----
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "site.fullname" . }}-mariadb-backup-initial
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "site.labels" . | nindent 4 }}
spec:
  template:
    metadata:
      labels:
        app: mariadb
        tier: cronjob-initial
        {{- include "site.labels" . | nindent 8 }}
    spec:
      automountServiceAccountToken: false
      containers:
        - image: {{ required ".Values.mariadb.repository entry is required!" .Values.mariadb.repository }}:{{ required ".Values.mariadb.tag entry is required!" .Values.mariadb.tag }}
          imagePullPolicy: IfNotPresent
          name: cronjob-mariadb-backupdb-initial
          envFrom:
            - configMapRef:
                name: configmap-mariadb-envvars
            - secretRef:
                name: secret-mariadb
          command: ["/bin/sh", "-c"]
          args:
            - set -x;
              ls -l /var/backups;
              /usr/bin/mysqldump --verbose --hex-blob --complete-insert --single-transaction --skip-lock-tables --skip-add-locks --routines -h service-mariadb -uroot -p$MYSQL_ROOT_PASSWORD $MYSQL_DATABASE | gzip - > /var/backups/$MYSQL_DATABASE-$(date +%Y-%m-%d_%H%M%S).sql.gz;
              # Note: you might want to adjust the find command if this is the very first backup
              # find /var/backups/ -mindepth 1 -type f -mtime +14 -exec rm {} \;
          volumeMounts:
            - name: mariadb-datadir-bck
              mountPath: /var/backups

      restartPolicy: OnFailure # Important for Jobs

      terminationGracePeriodSeconds: {{ required ".Values.mariadb.terminationGracePeriodSeconds entry is required!" .Values.mariadb.terminationGracePeriodSeconds }}

      volumes:
        - name: mariadb-datadir-bck
          persistentVolumeClaim:
            claimName: pvc-mariadb-datadir-bck
helm/templates/mariadb/pvc-mariadb.yaml (new file, 31 lines)
@@ -0,0 +1,31 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: {{ .Release.Namespace }}
  name: pvc-mariadb-datadir
  labels:
    {{- include "site.labels" . | nindent 4 }}
  annotations:
    helm.sh/resource-policy: keep
spec:
  accessModes: {{ required ".Values.common.pvc.accessModes entry is required!" .Values.common.pvc.accessModes }}
  storageClassName: {{ required ".Values.common.pvc.storageClassName entry is required!" .Values.common.pvc.storageClassName }}
  resources:
    requests:
      storage: {{ required ".Values.mariadb.persistentVolumeClaim.storageRequest entry is required!" .Values.mariadb.persistentVolumeClaim.storageRequest }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: {{ .Release.Namespace }}
  name: pvc-mariadb-datadir-bck
  labels:
    {{- include "site.labels" . | nindent 4 }}
  annotations:
    helm.sh/resource-policy: keep
spec:
  accessModes: {{ required ".Values.common.pvc.accessModes entry is required!" .Values.common.pvc.accessModes }}
  storageClassName: {{ required ".Values.common.pvc.storageClassName entry is required!" .Values.common.pvc.storageClassName }}
  resources:
    requests:
      storage: {{ required ".Values.mariadb.persistentVolumeClaim.backupDbStorageRequest entry is required!" .Values.mariadb.persistentVolumeClaim.backupDbStorageRequest }}

@@ -8,3 +8,5 @@ metadata:
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: "{{ required ".Values.mariadb.rootPass entry is required!" .Values.mariadb.rootPass }}"
  MYSQL_PASSWORD: "{{ required ".Values.mariadb.dbPass entry is required!" .Values.mariadb.dbPass }}"

@@ -38,13 +38,19 @@ spec:
        - containerPort: 3306
          name: mariadb

        # The startupProbe is used to determine whether the application has started successfully
        # before checking the other probes (livenessProbe and readinessProbe).
        # This is useful for databases like MariaDB, as they can take a while to start up.
        startupProbe:
          tcpSocket:
            port: 3306
          failureThreshold: 12
          periodSeconds: 10
          timeoutSeconds: 5
          initialDelaySeconds: 10 # waiting time before the first test
          periodSeconds: 10 # test frequency (every 10 seconds)
          failureThreshold: 30 # number of failures before considering the container not starting

        # The livenessProbe is used to determine whether the MariaDB container is still alive,
        # i.e., whether it is functioning properly.
        # If the probe fails, Kubernetes will restart the container.
        livenessProbe:
          tcpSocket:
            port: 3306
@@ -53,13 +59,29 @@ spec:
          initialDelaySeconds: 0
          timeoutSeconds: 5

        # The readinessProbe is used to determine whether the container is ready to receive traffic.
        # This is important because MariaDB might be up and running but not yet ready to serve
        # queries (for example, if restore or prepare queries are in progress).
        readinessProbe:
          initialDelaySeconds: 15
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          tcpSocket:
            port: 3306
        readinessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 3
          initialDelaySeconds: 0
          timeoutSeconds: 5
        #readinessProbe:
        #  tcpSocket:
        #    port: 3306
        #  periodSeconds: 10
        #  failureThreshold: 3
        #  initialDelaySeconds: 0
        #  timeoutSeconds: 5

        resources:
          requests:

@@ -13,8 +13,10 @@ data:
  DB_NAME: "{{ required ".Values.site.phpfpmSite.db.name entry is required!" .Values.site.phpfpmSite.db.name }}"
  DB_PORT: "{{ required ".Values.site.phpfpmSite.db.port entry is required!" .Values.site.phpfpmSite.db.port }}"
  DB_TABLE: "{{ required ".Values.site.phpfpmSite.db.tabl entry is required!" .Values.site.phpfpmSite.db.tabl }}"
  PMA_URL: "https://{{ required ".Values.site.urlPma entry is required!" .Values.site.urlPma }}"
  PMA_URL: "http://{{ required ".Values.site.urlPma entry is required!" .Values.site.urlPma }}"
  #PMA_URL: "{{ required ".Values.site.urlPma entry is required!" .Values.site.urlPma }}"
  ES_HOST: "{{ required ".Values.site.phpfpmSite.es.host entry is required!" .Values.site.phpfpmSite.es.host }}"
  ES_USER: "{{ required ".Values.site.phpfpmSite.es.user entry is required!" .Values.site.phpfpmSite.es.user }}"
  ES_INDEX: "{{ required ".Values.site.phpfpmSite.es.index entry is required!" .Values.site.phpfpmSite.es.index }}"
  KIBANA_URL: "https://{{ required ".Values.site.urlKibana entry is required!" .Values.site.urlKibana }}"
  KIBANA_URL: "http://{{ required ".Values.site.urlKibana entry is required!" .Values.site.urlKibana }}"
  #KIBANA_URL: "{{ required ".Values.site.urlKibana entry is required!" .Values.site.urlKibana }}"

@@ -99,7 +99,7 @@ spec:
        # - ALL

        ### Container 2 : PHP-FPM
        - image: {{ required ".Values.site.phpfpmSite.repository entry is required!" .Values.site.phpfpmSite.repository }}:{{ required ".Chart.AppVersion entry is required!" .Chart.AppVersion }}
        - image: {{ required ".Values.site.phpfpmSite.repository entry is required!" .Values.site.phpfpmSite.repository }}:{{ required ".Values.site.phpfpmSite.imageTag entry is required!" .Values.site.phpfpmSite.imageTag }}
          imagePullPolicy: {{ required ".Values.site.phpfpmSite.pullPolicy entry is required!" .Values.site.phpfpmSite.pullPolicy }}
          name: phpfpm

@@ -22,16 +22,13 @@ metadata:
  {{- end }}

spec:
  {{- if eq "k3s" $.Values.kube }}
  ingressClassName: traefik
  {{- end }}
  {{- if eq "k8s" $.Values.kube }}
  ingressClassName: nginx
  {{- end }}
  ingressClassName: {{ required ".Values.common.ingress.ingressClassName entry is required!" .Values.common.ingress.ingressClassName }}
  {{- if .Values.site.tls.enabled }}
  tls:
    - hosts:
        {{ required ".Values.site.ingress.site.hosts entry is required!" .Values.site.ingress.site.hosts }}
      secretName: secret-ingress-tls
  {{- end }}
  rules:
    - host: {{ required ".Values.site.host entry is required!" .Values.site.host }}
      http:

@@ -10,21 +10,8 @@ metadata:
  annotations:
    helm.sh/resource-policy: keep
spec:
  {{- if eq "k3s" $.Values.kube }}
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ required ".Values.site.persistentVolumeClaim.k3sStorageClassName entry is required!" .Values.site.persistentVolumeClaim.k3sStorageClassName }}
  {{- end }}
  {{- if eq "kind" $.Values.kube }}
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ required ".Values.site.persistentVolumeClaim.kindStorageClassName entry is required!" .Values.site.persistentVolumeClaim.kindStorageClassName }}
  {{- end }}
  {{- if eq "k8s" $.Values.kube }}
  accessModes:
    - ReadWriteMany
  storageClassName: {{ required ".Values.site.persistentVolumeClaim.k8sStorageClassName entry is required!" .Values.site.persistentVolumeClaim.k8sStorageClassName }}
  {{- end }}
  accessModes: {{ required ".Values.common.pvc.accessModes entry is required!" .Values.common.pvc.accessModes }}
  storageClassName: {{ required ".Values.common.pvc.storageClassName entry is required!" .Values.common.pvc.storageClassName }}
  resources:
    requests:
      storage: {{ required ".Values.site.persistentVolumeClaim.storageRequest entry is required!" .Values.site.persistentVolumeClaim.storageRequest }}

@@ -10,6 +10,13 @@ metadata:
    #traefik.ingress.kubernetes.io/rewrite-target: /
    #traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
    {{- end }}
    {{- if eq "kind" $.Values.kube }}
    #kubernetes.io/ingress.allow-http: "false"
    #nginx.ingress.kubernetes.io/affinity: "cookie"
    ##nginx.ingress.kubernetes.io/session-cookie-name: "SAUTHSESSION*"
    #nginx.ingress.kubernetes.io/proxy-body-size: "32m"
    #nginx.org/client-max-body-size: "32m"
    {{- end }}
    {{- if eq "k8s" $.Values.kube }}
    #kubernetes.io/ingress.allow-http: "false"
    #nginx.ingress.kubernetes.io/affinity: "cookie"
@@ -18,16 +25,13 @@ metadata:
    #nginx.org/client-max-body-size: "32m"
    {{- end }}
spec:
  {{- if eq "k3s" $.Values.kube }}
  ingressClassName: traefik
  {{- end }}
  {{- if eq "k8s" $.Values.kube }}
  ingressClassName: nginx
  {{- end }}
  ingressClassName: {{ required ".Values.common.ingress.ingressClassName entry is required!" .Values.common.ingress.ingressClassName }}
  {{- if .Values.site.tls.enabled }}
  tls:
    - hosts:
        {{ required ".Values.site.ingress.pma.hosts entry is required!" .Values.site.ingress.pma.hosts }}
      secretName: secret-ingress-tls
  {{- end }}
  rules:
    - host: {{ required ".Values.site.urlPma entry is required!" .Values.site.urlPma }}
      http:

@@ -12,17 +12,19 @@ elastic:
    k3sStorageClassName: local-path
    kindStorageClassName: standard
    k8sStorageClassName: longhorn
  host: http://service-elasticsearch:9200

kibana:
  imageTag: 9.0.2
  username: kibana_system_user
  priorityClassName: system-node-critical
  host: http://service-elasticsearch:9200
  # host: http://service-elasticsearch:9200

mariadb:
  repository: mariadb
  pullPolicy: Always
  tag: "10.11.7"
  #tag: "11.2"
  databaseName: gmo_db
  innoDbBufferPoolSize: 256M
  queryCacheSize: 256M
@@ -44,6 +46,8 @@ mariadb:
    k8sStorageClassName: longhorn

site:
  tls:
    enabled: false
  host: wwwgmo.gmolab.net
  urlPma: wwwgmo-pma.gmolab.net
  urlKibana: wwwgmo-kibana.gmolab.net

helm/values-kind-configs.yaml (new file, 124 lines)
@@ -0,0 +1,124 @@
common:
  pvc:
    # k3s
    #accessModes: [ "ReadWriteOnce" ]
    #storageClassName: local-path
    # kind
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard
    # k8s
    #accessModes: [ "ReadWriteMany" ]
    #storageClassName: longhorn
  ingress:
    #k3s
    #ingressClassName: traefik
    # kind
    ingressClassName: nginx
    # k8s
    #ingressClassName: nginx

# elasticsearch
elastic:
  priorityClassName: system-cluster-critical
  imageTag: 9.0.2
  persistentVolumeClaim:
    storageRequest: 500M
  host: http://service-elasticsearch:9200

# kibana
kibana:
  imageTag: 9.0.2
  username: kibana_system_user
  priorityClassName: system-node-critical
  # host: http://service-elasticsearch:9200

# mariadb
mariadb:
  repository: mariadb
  pullPolicy: Always
  tag: "10.11.7"
  #tag: "11.2"
  databaseName: gmo_db
  innoDbBufferPoolSize: 256M
  queryCacheSize: 256M
  queryCacheLimit: 4M
  ressourceRequest:
    memory: 300Mi
    cpu: 100m
    ephemeralStorage: 128M
  ressourceLimit:
    memory: 1250Mi # 1250 MB RAM; beyond this, eviction
    cpu: 200m # 0.2 CPU core; beyond this, CPU throttling
    ephemeralStorage: 512M # 512 MB of non-persistent storage (in addition to what is in the image); beyond this, eviction
  terminationGracePeriodSeconds: 60
  persistentVolumeClaim:
    storageRequest: 500M
    backupDbStorageRequest: 500M

# site
site:
  tls:
    enabled: false
  host: wwwgmokind.gmolab.net
  urlPma: wwwgmokind-pma.gmolab.net
  urlKibana: wwwgmokind-kibana.gmolab.net
  replicas: 1
  priorityClassName: business-app-critical
  terminationGracePeriodSeconds: 60
  filesMountPath: /var/www/html/web
  sourcesMountPath: /var/www/html
ssl_crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZmekNDQTJlZ0F3SUJBZ0lKQU5YN2sxZ3dnOVNkTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUcrTVFzd0NRWUQKVlFRR0V3SkRTREVOTUFzR0ExVUVDQk1FVm1GMVpERU5NQXNHQTFVRUJ4TUVUbmx2YmpFaE1COEdBMVVFQ2hNWQpSMDFQSUV4aFltOXlZWFJ2YVhKbElDaG5iVzlzWVdJcE1UQXdMZ1lEVlFRTEV5ZFRTVXhCUWlBb1UzbHpkR1Z0ClpTQnBibVp2Y20xaGRHbHZiaUJzWVdKdmNtRjBiMmx5WlNreEVqQVFCZ05WQkFNVENVZE5UMHhoWWlCRFFURW8KTUNZR0NTcUdTSWIzRFFFSkFSWVpaWGh3Ykc5cGRDNW5iVzkwWldOb1FHZHRZV2xzTG1OdmJUQWVGdzB5TlRBMApNalF3T0RBeU1qVmFGdzB5TmpBME1qUXdPREF5TWpWYU1JSEJNUXN3Q1FZRFZRUUdFd0pEU0RFTk1Bc0dBMVVFCkNCTUVWbUYxWkRFTk1Bc0dBMVVFQnhNRVRubHZiakVoTUI4R0ExVUVDaE1ZUjAxUElFeGhZbTl5WVhSdmFYSmwKSUNobmJXOXNZV0lwTVRBd0xnWURWUVFMRXlkVFNVeEJRaUFvVTNsemRHVnRaU0JwYm1admNtMWhkR2x2YmlCcwpZV0p2Y21GMGIybHlaU2t4RlRBVEJnTlZCQU1VRENvdVoyMXZiR0ZpTG01bGRERW9NQ1lHQ1NxR1NJYjNEUUVKCkFSWVpaWGh3Ykc5cGRDNW5iVzkwWldOb1FHZHRZV2xzTG1OdmJUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQUQKZ2dFUEFEQ0NBUW9DZ2dFQkFKVXY0S1NWcndpWGFGUWZvMTBvVTVzYkIzRFM4K0djWTB1NW51VjFSN2F5SFhoaAp0ejhxYUJGaTJ3NllOVmRNNGJtSmRsNDluMk5aMmEwQlp4MmM1SFF5blc2Nk90WFZIWUVVL3RlYVlHZmJlYnpsCjhFMFVvMmFqMVQvNGNFQ0dLakxyam1DS04xQWpyVFpLVThVWWh2STJUa3NWN1EveEd3WUZ4Z3I4SXI2NlQ5WGsKa3V6bTIyUXVSb2Z5bXFLbnh4NXg0TStXQWNvZG0zRHQrTE5XNWVpYUZOMmR4L0I1S2hPMndTSWxEdHhxK3ZKNwpwR0Jxb0xQWGcwQkltRm0vUEtlQlI3KzRMR01JbG5uZ05lTzI3WmZRa2VMQmgwTDhkTklKdzFId29JdXdNdGRYCnhWM040TDByZmdQcmdKditIeHJQdXlDRGhlZEU5UWQwZEU1bWk1OENBd0VBQWFON01Ia3dDUVlEVlIwVEJBSXcKQURBWEJnTlZIUkVFRURBT2dnd3FMbWR0YjJ4aFlpNXVaWFF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdFdwpIUVlEVlIwT0JCWUVGQ1AzTWRaYVBHKzdlZG5EUHA4eHhpdndtNEpKTUI4R0ExVWRJd1FZTUJhQUZPVVNUYWp0CnRranZWQUtqc3pQemJ3RExtL2x1TUEwR0NTcUdTSWIzRFFFQkN3VUFBNElDQVFDT0pCb2dEMFlPMHk0SGdscEkKaFAwVTFQbXdUMGFObFpGNW9teXFOeGZGVHFWN2FXRnhTM2I5VWVUL09GUUlkYThzdlhsYXFTN0xLZk5zamVYagpCeC9YOG1NWUJ1Zm50bkppUGZFVTZJdGRUVmhaY1RIYlJRTFkyVjVuZ21WV3ErQ2xVMklsWmtSeWtELzcvc0NaCnV6ODhKUEJXZ1R4MnV2QXh4NGI0bFAzUzF6L1VSVmczVzR5ei9pTUVNQ1M2Ymgra3FuUXdFM05HeFZmc2swOU0KdFhnWW1jQXZrZFpNcVNJOFVYRC9pTXNLV1NYd0JFeVFVQVJ4Wi9jMVJSSzhHVGpRMUNySHE5TFM0T1R3Qnc5aAorUWxoZU0yQUhKRzdES2tLbHJENkVzN0l5RkUrOG0zVzRtbkk2S2M4anRhYVZTeGdYeTBDZmtDQ2xMaTQxV3Y5CjJ5ZGY0N3ArOTUvN1ZvUUFKQXg5MWdxL3AxcWc1V0JCeVBMNWFMS2FzTnNkUjhrOUdLYk5BVVBZZGZvUzdiTGEKcjBzTGJlYitGUmgvVEt3dFh0RWJNVGRZN1hNTWE3ZG5CeEViRWs0ZHRjZTZvZkZJRnJVUzJ0eDU0cUdzaEd2QwpiNjV0N0U3MHFvWTRTck1KWVB5amlLVjNMYnoyMWgwRVh2dXBlekpsWHQ5SHZmME41OWdKSGZONmFIOGV0TU9xCmoxalptTHpCSDRGZ3cyQlZxSGR0b1JtYjB2UHVpcGtobjVHMzYzWXQxQU1pQzdtdVcvSVYreUNiRkYxN3p0c0wKUWxiQmxoSkZDQU5KQjd5WFo4cGFtMGVuVVZVUi81TUF2U0hIV1ZwcDI2RDZmUFJWbldmQTNhc09LdDNza0hOVApBb0Fpam5uZER6Y1diWmN1cWlJRHZLK3lNQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  nginxSite:
    repository: nginxinc/nginx-unprivileged
    pullPolicy: Always
    tag: "1.26"
    nginxConfigPath: /etc/nginx/conf.d/default.conf
    ressourceRequest:
      memory: 16M # 16 MB RAM
      cpu: 50m # 0.05 CPU core
      ephemeralStorage: 512M # 512 MB of non-persistent storage (in addition to what is in the image)
    ressourceLimit:
      memory: 128M # 128 MB RAM; beyond this, eviction
      cpu: 500m # 0.5 CPU core; beyond this, CPU throttling
      ephemeralStorage: 512M # 512 MB of non-persistent storage (in addition to what is in the image); beyond this, eviction
  persistentVolumeClaim:
    storageRequest: 500M
  ingress:
    site:
      hosts:
        - wwwgmokind.gmolab.net
    pma:
      hosts:
        - wwwgmokind-pma.gmolab.net
    kibana:
      hosts:
        - wwwgmokind-kibana.gmolab.net
  # php-fpm
  phpfpmSite:
    repository: gmouchet/wwwgmo-php-fpm
    imageTag: 3.3.0
    pullPolicy: Always
    ressourceRequest:
      memory: 16M # 16 MB RAM
      cpu: 50m # 0.05 CPU core
      ephemeralStorage: 512M # 512 MB of non-persistent storage (in addition to what is in the image)
    ressourceLimit:
      memory: 128M # 128 MB RAM; beyond this, eviction
      cpu: 500m # 0.5 CPU core; beyond this, CPU throttling
      ephemeralStorage: 512M # 512 MB of non-persistent storage (in addition to what is in the image); beyond this, eviction
    site:
      title: "Stack GMo - PHP-FPM - MariaDB - Elasticsearch"
    db:
      name: gmo_db
      user: gmo_db
      host: service-mariadb
      port: 3306
      tabl: tbl_email
    es:
      host: service-elasticsearch:9200
      user: elastic
      index: wwwgmo_index

helm/values-kind-secrets.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
site:
ssl_key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ1ZMK0NrbGE4SWwyaFUKSDZOZEtGT2JHd2R3MHZQaG5HTkx1WjdsZFVlMnNoMTRZYmMvS21nUll0c09tRFZYVE9HNWlYWmVQWjlqV2RtdApBV2Nkbk9SME1wMXV1anJWMVIyQkZQN1htbUJuMjNtODVmQk5GS05tbzlVLytIQkFoaW95NjQ1Z2lqZFFJNjAyClNsUEZHSWJ5Tms1TEZlMFA4UnNHQmNZSy9DSyt1ay9WNUpMczV0dGtMa2FIOHBxaXA4Y2VjZURQbGdIS0hadHcKN2ZpelZ1WG9taFRkbmNmd2VTb1R0c0VpSlE3Y2F2cnllNlJnYXFDejE0TkFTSmhadnp5bmdVZS91Q3hqQ0paNQo0RFhqdHUyWDBKSGl3WWRDL0hUU0NjTlI4S0NMc0RMWFY4VmR6ZUM5SzM0RDY0Q2IvaDhhejdzZ2c0WG5SUFVICmRIUk9ab3VmQWdNQkFBRUNnZ0VBRGpNRXRrV0tQUGx1M3ZXS21VeEVTNGY5WTZTckVtSE4wRXIyTDdBWThrVHUKVkhpTElNSksyTDl5OVdiNzlMZUc2SFRkUlBKU2l2bmwrUGNnTnd1cGNYeDd5dFV1SjdvUUE5V0VFcHRKUDdsVApxSmRZdkNLOFpxd3Q3aTBabEsvd1c1dXdsOG12K0JCZE9tOUZSOXQzYjBoSFFyMloxdmJ1SG9XZ0tETFZUOEoxCis1MXlISC9VbGlIb0E4bzkxYUd3K3F5KzllWlJrdGpxSmlOWXpGMUMzVDB5Sk1yQXdxZXU4Z09ic2k3c2kwMWkKZ254d1VISDVVckNjaXZRTzQxWFBjUW5QYzQ5dTE4bkFRSGRxTTZNL05yTzZwWENUTnY0OHY2OEZsaDlmanBqawpvUDVJN0IwRThOTE5ZSDkyMEJUbE1hRFUvYVAwdE9HRXJHblgyWElBY1FLQmdRQzZRbFpJMjAyTmZMWnVSVmhoCjBMN3A4VklXWnE3UEl2US9tRHF4Rlc4WllkQk14RXY2UFZVTTl6VHVsVVh2TlM2SE9ncHAxUGxGVVNHM2dQRzUKZU1nR1piTTZjczJwM1dmOVVjZ1ZQYXM3Q2lGYnRZZXdJS0dYMWY5OHFMTU4ybkFTd2dVU0N5eUQrZER0UVpDWQp1VlRybnhZbUkrOHlBUFpOR3pSVUVNUnh1UUtCZ1FETkRBalpWTmNOeGJaVW1YWkZrMEp1YnNLUGZVMStYZVdrCkxCYkVCaDZwVG5hOWNVTE5Cc0F1ZjI4cEpFS1FQeXc4SVFEUDIwaDJXWUdIQ1FtNFZyMEM1TVN3TFVxNGdsalgKNHJ3bmxSZE44MDNqbjJTVUJjV1lGSGdNc1EyN2ZRT2lvNnVnUnUrZ0U2ZDMwa0lYTW1waUp4eHNuN3V5L0JBOAp1Z0pBc1R6MEZ3S0JnRE56ejlJZ2dyUHJGNW91bmRPbmpwV2hqRU9UNmdaZWFZcUh5dTdRTlBpV0JLeXdMU3piCmRIczRidTdaWFpCTzZLT0NiUTMvUHp6ZXhLbWtmU0gzTTRwUTNjbnZuTkNuME9veGhVd1kxUXhpS0FUbGlLNG0KMVh6VUtOZU80cWVaQ0F5bWZEQVgxaHcvRG0vOEJLMnJ4TUd5R0xSQWlQc1BPUHJqNFBpNENReEJBb0dBTmw0bgpob0N4V095QWtPUHo4VFMvbTRwd3VoMHVUQUJYb0hVMFFCdWpTNThMYXVXNklhVFZsajZoMmRYTWRIVGJwTUhYCmRrV2RiQXdGaFNtSFUwSmtjWHo1RGdHa1cxSHNmcW1XM0NQeS91OHhTdFo3azZnSUlXL2orUEdGUTU0OU5ZV1MKUHpndjExRCt5WTJOaXBzS2pDWDBxblNjRHpRNGxmRjRJWEVkU1ZjQ2dZRUF0VVQ2Y1BYdnYxamM4WVUvN3BjTApXR21WYldjcDNXbW5VK2FoaXBVM2l6YmxEUGkvWUR0aVhKSFVoYUl1TnZySVBKL2VIT1psejRPcVViS015YzFyCjV2blUrajg3MnZOdGViVkphemd1SHBHTzY3SVFrNHdJamp0TlQrQUYybUhjdnJUVTYyay9hWHJFSGN4ekZMMTMKcW1TcHNUVWtJYXFaZ2VMUERWMWNOZWc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K

mariadb:
  rootPass: pa55w0rd
  dbPass: passw0rd

elastic:
  password: pa55w0rd

kibana:
  password: kibanaPass55w0rd

@@ -1,16 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-kibana
  namespace: {{ .Release.Namespace }}
  labels:
    app: site
    tier: kibana
    {{- include "site.labels" . | nindent 4 }}
# envFrom:
#   - secretRef:
#       name: secret-elasticsearch
data:
  ELASTICSEARCH_HOSTS: "{{ required ".Values.kibana.host entry is required!" .Values.kibana.host }}"
  #ELASTICSEARCH_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"
  #KIBANA_USERNAME: "{{ required ".Values.kibana.username entry is required!" .Values.kibana.username }}"
@@ -1,57 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: {{ .Release.Namespace }}
  name: pvc-mariadb-datadir
  labels:
    {{- include "site.labels" . | nindent 4 }}
  annotations:
    helm.sh/resource-policy: keep
spec:
  {{- if eq "k3s" $.Values.kube }}
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ required ".Values.mariadb.persistentVolumeClaim.k3sStorageClassName entry is required!" .Values.mariadb.persistentVolumeClaim.k3sStorageClassName }}
  {{- end }}
  {{- if eq "kind" $.Values.kube }}
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ required ".Values.mariadb.persistentVolumeClaim.kindStorageClassName entry is required!" .Values.mariadb.persistentVolumeClaim.kindStorageClassName }}
  {{- end }}
  {{- if eq "k8s" $.Values.kube }}
  accessModes:
    - ReadWriteMany
  storageClassName: {{ required ".Values.mariadb.persistentVolumeClaim.k8sStorageClassName entry is required!" .Values.mariadb.persistentVolumeClaim.k8sStorageClassName }}
  {{- end }}
  resources:
    requests:
      storage: {{ required ".Values.site.marioadb.persistentVolumeClaim.storageRequest entry is required!" .Values.mariadb.persistentVolumeClaim.storageRequest }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: {{ .Release.Namespace }}
  name: pvc-mariadb-datadir-bck
  labels:
    {{- include "site.labels" . | nindent 4 }}
  annotations:
    helm.sh/resource-policy: keep
spec:
  {{- if eq "k3s" $.Values.kube }}
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ required ".Values.mariadb.persistentVolumeClaim.k3sStorageClassName entry is required!" .Values.mariadb.persistentVolumeClaim.k3sStorageClassName }}
  {{- end }}
  {{- if eq "kind" $.Values.kube }}
  accessModes:
    - ReadWriteOnce
  storageClassName: {{ required ".Values.mariadb.persistentVolumeClaim.kindStorageClassName entry is required!" .Values.mariadb.persistentVolumeClaim.kindStorageClassName }}
  {{- end }}
  {{- if eq "k8s" $.Values.kube }}
  accessModes:
    - ReadWriteMany
  storageClassName: {{ required ".Values.mariadb.persistentVolumeClaim.k8sStorageClassName entry is required!" .Values.mariadb.persistentVolumeClaim.k8sStorageClassName }}
  {{- end }}
  resources:
    requests:
      storage: {{ required ".Values.mariadb.persistentVolumeClaim.backupdDbStorageRequest entry is required!" .Values.mariadb.persistentVolumeClaim.backupdDbStorageRequest }}