feat: add Descheduler Helm chart with templates and tests for deployment and cronjob
Signed-off-by: zhenyus <zhenyus@mathmast.com>
commit 68a818de80 (parent 9b86aae0f6)
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
@@ -0,0 +1,19 @@
apiVersion: v1
appVersion: 0.32.2
description: Descheduler for Kubernetes is used to rebalance clusters by evicting
  pods that can potentially be scheduled on better nodes. In the current implementation,
  descheduler does not schedule replacement of evicted pods but relies on the default
  scheduler for that.
home: https://github.com/kubernetes-sigs/descheduler
icon: https://kubernetes.io/images/favicon.png
keywords:
- kubernetes
- descheduler
- kube-scheduler
maintainers:
- email: kubernetes-sig-scheduling@googlegroups.com
  name: Kubernetes SIG Scheduling
name: descheduler
sources:
- https://github.com/kubernetes-sigs/descheduler
version: 0.32.2
@@ -0,0 +1,91 @@
# Descheduler for Kubernetes

[Descheduler](https://github.com/kubernetes-sigs/descheduler/) for Kubernetes is used to rebalance clusters by evicting pods that can potentially be scheduled on better nodes. In the current implementation, the descheduler does not schedule replacements for evicted pods but relies on the default scheduler for that.

## TL;DR:

```shell
helm repo add descheduler https://kubernetes-sigs.github.io/descheduler/
helm install my-release --namespace kube-system descheduler/descheduler
```

## Introduction

This chart bootstraps a [descheduler](https://github.com/kubernetes-sigs/descheduler/) cron job on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Kubernetes 1.14+

## Installing the Chart

To install the chart with the release name `my-release`:

```shell
helm install --namespace kube-system my-release descheduler/descheduler
```

The command deploys _descheduler_ on the Kubernetes cluster with the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```shell
helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the _descheduler_ chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `kind` | Run the descheduler as a `CronJob` or a `Deployment` | `CronJob` |
| `image.repository` | Docker repository to use | `registry.k8s.io/descheduler/descheduler` |
| `image.tag` | Docker tag to use | `v[chart appVersion]` |
| `image.pullPolicy` | Docker image pull policy | `IfNotPresent` |
| `imagePullSecrets` | Docker repository secrets | `[]` |
| `nameOverride` | String to partially override the `descheduler.fullname` template (the release name is prepended) | `""` |
| `fullnameOverride` | String to fully override the `descheduler.fullname` template | `""` |
| `namespaceOverride` | Override the deployment namespace; defaults to `.Release.Namespace` | `""` |
| `cronJobApiVersion` | CronJob API group/version | `"batch/v1"` |
| `schedule` | The cron schedule to run the _descheduler_ job on | `"*/2 * * * *"` |
| `startingDeadlineSeconds` | If set, configure `startingDeadlineSeconds` for the _descheduler_ job | `nil` |
| `timeZone` | Configure `timeZone` for the CronJob | `nil` |
| `successfulJobsHistoryLimit` | If set, configure `successfulJobsHistoryLimit` for the _descheduler_ job | `3` |
| `failedJobsHistoryLimit` | If set, configure `failedJobsHistoryLimit` for the _descheduler_ job | `1` |
| `ttlSecondsAfterFinished` | If set, configure `ttlSecondsAfterFinished` for the _descheduler_ job | `nil` |
| `deschedulingInterval` | If using `kind: Deployment`, sets the time between consecutive descheduler executions | `5m` |
| `replicas` | The replica count for the Deployment | `1` |
| `leaderElection` | The options for high availability when running replicated components | _see values.yaml_ |
| `cmdOptions` | The options to pass to the _descheduler_ command | _see values.yaml_ |
| `priorityClassName` | The name of the priority class to add to pods | `system-cluster-critical` |
| `rbac.create` | If `true`, create & use RBAC resources | `true` |
| `resources` | Descheduler container CPU and memory requests/limits | _see values.yaml_ |
| `serviceAccount.create` | If `true`, create a service account for the cron job | `true` |
| `serviceAccount.name` | The name of the service account to use; if not set and `create` is `true`, a name is generated using the fullname template | `nil` |
| `serviceAccount.annotations` | Custom annotations for the service account | `{}` |
| `podAnnotations` | Annotations to add to the descheduler Pods | `{}` |
| `podLabels` | Labels to add to the descheduler Pods | `{}` |
| `nodeSelector` | Node selectors to run the descheduler cronjob/deployment on specific nodes | `nil` |
| `service.enabled` | If `true`, create a service for the deployment | `false` |
| `serviceMonitor.enabled` | If `true`, create a ServiceMonitor for the deployment | `false` |
| `serviceMonitor.namespace` | The namespace where Prometheus expects to find service monitors | `nil` |
| `serviceMonitor.additionalLabels` | Custom labels to add to the ServiceMonitor resource | `{}` |
| `serviceMonitor.interval` | The scrape interval; if not set, the Prometheus default scrape interval is used | `nil` |
| `serviceMonitor.honorLabels` | Keep the scraped data's labels when they collide with the target's labels | `true` |
| `serviceMonitor.insecureSkipVerify` | Skip TLS certificate validation when scraping | `true` |
| `serviceMonitor.serverName` | Name of the server to use when validating the TLS certificate | `nil` |
| `serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples after scraping, but before ingestion | `[]` |
| `serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping | `[]` |
| `affinity` | Node affinity to run the descheduler cronjob/deployment on specific nodes | `nil` |
| `topologySpreadConstraints` | Topology spread constraints to spread the descheduler cronjob/deployment across the cluster | `[]` |
| `tolerations` | Tolerations to run the descheduler cronjob/deployment on specific nodes | `nil` |
| `suspend` | Set `spec.suspend` in the descheduler cronjob | `false` |
| `commonLabels` | Labels to apply to all resources | `{}` |
| `livenessProbe` | Liveness probe configuration for the descheduler container | _see values.yaml_ |
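For example, several of the parameters above can be combined in an override file and passed at install time. This is a hypothetical example: the `/tmp` path is illustrative, while the keys match the chart's `values.yaml`.

```shell
# Write an override file using parameters from the configuration table
# (the /tmp path is just an example location).
cat > /tmp/descheduler-values.yaml <<'EOF'
kind: Deployment
replicas: 2
leaderElection:
  enabled: true
cmdOptions:
  v: 3
EOF

# Then install with the overrides (requires a cluster and the repo added):
# helm install my-release --namespace kube-system \
#   -f /tmp/descheduler-values.yaml descheduler/descheduler
```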
@@ -0,0 +1,12 @@
Descheduler installed as a {{ .Values.kind }}.

{{- if eq .Values.kind "Deployment" }}
{{- if eq (.Values.replicas | int) 1 }}
WARNING: You set the replica count to 1 and the workload kind to Deployment, but leader election is not enabled. Consider enabling leader election for HA mode.
{{- end}}
{{- if .Values.leaderElection }}
{{- if and (hasKey .Values.cmdOptions "dry-run") (eq (get .Values.cmdOptions "dry-run") true) }}
WARNING: You enabled DryRun mode; leader election cannot be used with it.
{{- end}}
{{- end}}
{{- end}}
@@ -0,0 +1,104 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "descheduler.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the release name contains the chart name, it will be used as the full name.
*/}}
{{- define "descheduler.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Expand the namespace of the release.
Allows overriding it for multi-namespace deployments in combined charts.
*/}}
{{- define "descheduler.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "descheduler.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Common labels
*/}}
{{- define "descheduler.labels" -}}
app.kubernetes.io/name: {{ include "descheduler.name" . }}
helm.sh/chart: {{ include "descheduler.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.commonLabels}}
{{ toYaml .Values.commonLabels }}
{{- end }}
{{- end -}}

{{/*
Selector labels
*/}}
{{- define "descheduler.selectorLabels" -}}
app.kubernetes.io/name: {{ include "descheduler.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

{{/*
Create the name of the service account to use
*/}}
{{- define "descheduler.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "descheduler.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}

{{/*
Leader Election
*/}}
{{- define "descheduler.leaderElection"}}
{{- if .Values.leaderElection -}}
- --leader-elect={{ .Values.leaderElection.enabled }}
{{- if .Values.leaderElection.leaseDuration }}
- --leader-elect-lease-duration={{ .Values.leaderElection.leaseDuration }}
{{- end }}
{{- if .Values.leaderElection.renewDeadline }}
- --leader-elect-renew-deadline={{ .Values.leaderElection.renewDeadline }}
{{- end }}
{{- if .Values.leaderElection.retryPeriod }}
- --leader-elect-retry-period={{ .Values.leaderElection.retryPeriod }}
{{- end }}
{{- if .Values.leaderElection.resourceLock }}
- --leader-elect-resource-lock={{ .Values.leaderElection.resourceLock }}
{{- end }}
{{- if .Values.leaderElection.resourceName }}
- --leader-elect-resource-name={{ .Values.leaderElection.resourceName }}
{{- end }}
{{/* The resource namespace key originally shipped with a typo, so resourceNamescape is kept for backwards compatibility */}}
{{- $resourceNamespace := default .Values.leaderElection.resourceNamespace .Values.leaderElection.resourceNamescape -}}
{{- if $resourceNamespace -}}
- --leader-elect-resource-namespace={{ $resourceNamespace }}
{{- end -}}
{{- end }}
{{- end }}
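The `descheduler.fullname` helper above applies three rules: an explicit override wins, a release name that already contains the chart name is reused as-is, and otherwise the two are joined, with the result truncated to 63 characters and a single trailing `-` trimmed. A minimal Python sketch of that logic (illustrative only, not part of the chart):

```python
def fullname(release_name: str, chart_name: str,
             name_override: str = "", fullname_override: str = "") -> str:
    """Mirror the descheduler.fullname template: override, reuse, or join."""
    if fullname_override:
        # fullnameOverride wins outright.
        return fullname_override[:63].removesuffix("-")
    name = name_override or chart_name
    if name in release_name:
        # Release name already contains the chart name: use it as-is.
        return release_name[:63].removesuffix("-")
    # Otherwise join release name and chart name.
    return f"{release_name}-{name}"[:63].removesuffix("-")

print(fullname("my-release", "descheduler"))   # my-release-descheduler
print(fullname("descheduler", "descheduler"))  # descheduler
```

Note that Helm's `trimSuffix "-"` removes a single trailing dash, which `str.removesuffix` matches here.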
@@ -0,0 +1,44 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ template "descheduler.fullname" . }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
rules:
  - apiGroups: ["events.k8s.io"]
    resources: ["events"]
    verbs: ["create", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list", "delete"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: ["scheduling.k8s.io"]
    resources: ["priorityclasses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["get", "watch", "list"]
  {{- if .Values.leaderElection.enabled }}
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create", "update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    resourceNames: ["{{ .Values.leaderElection.resourceName | default "descheduler" }}"]
    verbs: ["get", "patch", "delete"]
  {{- end }}
  {{- if and .Values.deschedulerPolicy .Values.deschedulerPolicy.metricsCollector .Values.deschedulerPolicy.metricsCollector.enabled }}
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list"]
  {{- end }}
{{- end -}}
@@ -0,0 +1,16 @@
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ template "descheduler.fullname" . }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ template "descheduler.fullname" . }}
subjects:
  - kind: ServiceAccount
    name: {{ template "descheduler.serviceAccountName" . }}
    namespace: {{ include "descheduler.namespace" . }}
{{- end -}}
@@ -0,0 +1,14 @@
{{- if .Values.deschedulerPolicy }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "descheduler.fullname" . }}
  namespace: {{ include "descheduler.namespace" . }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
data:
  policy.yaml: |
    apiVersion: "{{ .Values.deschedulerPolicyAPIVersion }}"
    kind: "DeschedulerPolicy"
{{ toYaml .Values.deschedulerPolicy | trim | indent 4 }}
{{- end }}
@@ -0,0 +1,111 @@
{{- if eq .Values.kind "CronJob" }}
apiVersion: {{ .Values.cronJobApiVersion | default "batch/v1" }}
kind: CronJob
metadata:
  name: {{ template "descheduler.fullname" . }}
  namespace: {{ include "descheduler.namespace" . }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
spec:
  schedule: {{ .Values.schedule | quote }}
  {{- if .Values.suspend }}
  suspend: {{ .Values.suspend }}
  {{- end }}
  concurrencyPolicy: "Forbid"
  {{- if .Values.startingDeadlineSeconds }}
  startingDeadlineSeconds: {{ .Values.startingDeadlineSeconds }}
  {{- end }}
  {{- if ne .Values.successfulJobsHistoryLimit nil }}
  successfulJobsHistoryLimit: {{ .Values.successfulJobsHistoryLimit }}
  {{- end }}
  {{- if ne .Values.failedJobsHistoryLimit nil }}
  failedJobsHistoryLimit: {{ .Values.failedJobsHistoryLimit }}
  {{- end }}
  {{- if .Values.timeZone }}
  timeZone: {{ .Values.timeZone }}
  {{- end }}
  jobTemplate:
    spec:
      {{- if .Values.ttlSecondsAfterFinished }}
      ttlSecondsAfterFinished: {{ .Values.ttlSecondsAfterFinished }}
      {{- end }}
      template:
        metadata:
          name: {{ template "descheduler.fullname" . }}
          annotations:
            checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
            {{- if .Values.podAnnotations }}
            {{- .Values.podAnnotations | toYaml | nindent 12 }}
            {{- end }}
          labels:
            {{- include "descheduler.selectorLabels" . | nindent 12 }}
            {{- if .Values.podLabels }}
            {{- .Values.podLabels | toYaml | nindent 12 }}
            {{- end }}
        spec:
          {{- with .Values.nodeSelector }}
          nodeSelector:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.affinity }}
          affinity:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.topologySpreadConstraints }}
          topologySpreadConstraints:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- if .Values.dnsConfig }}
          dnsConfig:
            {{- .Values.dnsConfig | toYaml | nindent 12 }}
          {{- end }}
          {{- with .Values.tolerations }}
          tolerations:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- if .Values.priorityClassName }}
          priorityClassName: {{ .Values.priorityClassName }}
          {{- end }}
          serviceAccountName: {{ template "descheduler.serviceAccountName" . }}
          restartPolicy: "Never"
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 10 }}
          {{- end }}
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "v%s" .Chart.AppVersion) }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              command:
                {{- toYaml .Values.command | nindent 16 }}
              args:
                - --policy-config-file=/policy-dir/policy.yaml
                {{- range $key, $value := .Values.cmdOptions }}
                {{- if ne $value nil }}
                - {{ printf "--%s=%s" $key (toString $value) }}
                {{- else }}
                - {{ printf "--%s" $key }}
                {{- end }}
                {{- end }}
              livenessProbe:
                {{- toYaml .Values.livenessProbe | nindent 16 }}
              ports:
                {{- toYaml .Values.ports | nindent 16 }}
              resources:
                {{- toYaml .Values.resources | nindent 16 }}
              {{- if .Values.securityContext }}
              securityContext:
                {{- toYaml .Values.securityContext | nindent 16 }}
              {{- end }}
              volumeMounts:
                - mountPath: /policy-dir
                  name: policy-volume
          {{- if .Values.podSecurityContext }}
          securityContext:
            {{- toYaml .Values.podSecurityContext | nindent 12 }}
          {{- end }}
          volumes:
            - name: policy-volume
              configMap:
                name: {{ template "descheduler.fullname" . }}
{{- end }}
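The `cmdOptions` loop in the cronjob (and deployment) template turns each map entry into one CLI argument: a non-nil value renders as `--key=value`, a nil value as a bare `--key`, and Helm iterates map keys in sorted order. A small Python sketch of that rendering rule (illustrative only):

```python
def render_cmd_options(cmd_options: dict) -> list[str]:
    """Mimic the template's cmdOptions loop: one flag per map entry,
    with keys visited in sorted order (like Helm's range over a map)."""
    args = []
    for key in sorted(cmd_options):
        value = cmd_options[key]
        if value is not None:
            args.append(f"--{key}={value}")  # non-nil: --key=value
        else:
            args.append(f"--{key}")          # nil: bare flag
    return args

print(render_cmd_options({"v": 3, "dry-run": None}))
# ['--dry-run', '--v=3']
```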
@@ -0,0 +1,100 @@
{{- if eq .Values.kind "Deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "descheduler.fullname" . }}
  namespace: {{ include "descheduler.namespace" . }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
spec:
  {{- if gt (.Values.replicas | int) 1 }}
  {{- if not .Values.leaderElection.enabled }}
  {{- fail "You must set leaderElection to use more than 1 replica"}}
  {{- end}}
  replicas: {{ required "leaderElection required for running more than one replica" .Values.replicas }}
  {{- else }}
  replicas: 1
  {{- end }}
  selector:
    matchLabels:
      {{- include "descheduler.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "descheduler.selectorLabels" . | nindent 8 }}
        {{- if .Values.podLabels }}
        {{- .Values.podLabels | toYaml | nindent 8 }}
        {{- end }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        {{- if .Values.podAnnotations }}
        {{- .Values.podAnnotations | toYaml | nindent 8 }}
        {{- end }}
    spec:
      {{- if .Values.dnsConfig }}
      dnsConfig:
        {{- .Values.dnsConfig | toYaml | nindent 8 }}
      {{- end }}
      {{- if .Values.priorityClassName }}
      priorityClassName: {{ .Values.priorityClassName }}
      {{- end }}
      serviceAccountName: {{ template "descheduler.serviceAccountName" . }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 6 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (printf "v%s" .Chart.AppVersion) }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            {{- toYaml .Values.command | nindent 12 }}
          args:
            - --policy-config-file=/policy-dir/policy.yaml
            - --descheduling-interval={{ required "deschedulingInterval required for running as Deployment" .Values.deschedulingInterval }}
            {{- range $key, $value := .Values.cmdOptions }}
            {{- if ne $value nil }}
            - {{ printf "--%s=%s" $key (toString $value) }}
            {{- else }}
            - {{ printf "--%s" $key }}
            {{- end }}
            {{- end }}
            {{- include "descheduler.leaderElection" . | nindent 12 }}
          ports:
            {{- toYaml .Values.ports | nindent 12 }}
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- if .Values.securityContext }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          {{- end }}
          volumeMounts:
            - mountPath: /policy-dir
              name: policy-volume
      {{- if .Values.podSecurityContext }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- end }}
      volumes:
        - name: policy-volume
          configMap:
            name: {{ template "descheduler.fullname" . }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
@@ -0,0 +1,27 @@
{{- if eq .Values.kind "Deployment" }}
{{- if eq .Values.service.enabled true }}
apiVersion: v1
kind: Service
metadata:
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
  name: {{ template "descheduler.fullname" . }}
  namespace: {{ include "descheduler.namespace" . }}
spec:
  clusterIP: None
  {{- if .Values.service.ipFamilyPolicy }}
  ipFamilyPolicy: {{ .Values.service.ipFamilyPolicy }}
  {{- end }}
  {{- if .Values.service.ipFamilies }}
  ipFamilies: {{ toYaml .Values.service.ipFamilies | nindent 4 }}
  {{- end }}
  ports:
    - name: http-metrics
      port: 10258
      protocol: TCP
      targetPort: 10258
  selector:
    {{- include "descheduler.selectorLabels" . | nindent 4 }}
  type: ClusterIP
{{- end }}
{{- end }}
@@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "descheduler.serviceAccountName" . }}
  namespace: {{ include "descheduler.namespace" . }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
  {{- if .Values.serviceAccount.annotations }}
  annotations: {{ toYaml .Values.serviceAccount.annotations | nindent 4 }}
  {{- end }}
{{- end -}}
@@ -0,0 +1,44 @@
{{- if eq .Values.kind "Deployment" }}
{{- if eq .Values.serviceMonitor.enabled true }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ template "descheduler.fullname" . }}-servicemonitor
  namespace: {{ .Values.serviceMonitor.namespace | default .Release.Namespace }}
  labels:
    {{- include "descheduler.labels" . | nindent 4 }}
    {{- if .Values.serviceMonitor.additionalLabels }}
    {{- toYaml .Values.serviceMonitor.additionalLabels | nindent 4 }}
    {{- end }}
spec:
  jobLabel: jobLabel
  namespaceSelector:
    matchNames:
      - {{ include "descheduler.namespace" . }}
  selector:
    matchLabels:
      {{- include "descheduler.selectorLabels" . | nindent 6 }}
  endpoints:
    - honorLabels: {{ .Values.serviceMonitor.honorLabels | default true }}
      port: http-metrics
      {{- if .Values.serviceMonitor.interval }}
      interval: {{ .Values.serviceMonitor.interval }}
      {{- end }}
      scheme: https
      tlsConfig:
        {{- if eq .Values.serviceMonitor.insecureSkipVerify true }}
        insecureSkipVerify: true
        {{- end }}
        {{- if .Values.serviceMonitor.serverName }}
        serverName: {{ .Values.serviceMonitor.serverName }}
        {{- end}}
      {{- if .Values.serviceMonitor.metricRelabelings }}
      metricRelabelings:
      {{ tpl (toYaml .Values.serviceMonitor.metricRelabelings | indent 4) . }}
      {{- end }}
      {{- if .Values.serviceMonitor.relabelings }}
      relabelings:
      {{ tpl (toYaml .Values.serviceMonitor.relabelings | indent 4) . }}
      {{- end }}
{{- end }}
{{- end }}
@@ -0,0 +1,17 @@
suite: Test Descheduler CronJob

templates:
  - "*.yaml"

release:
  name: descheduler

set:
  kind: CronJob

tests:
  - it: creates CronJob when kind is set
    template: templates/cronjob.yaml
    asserts:
      - isKind:
          of: CronJob
@@ -0,0 +1,49 @@
suite: Test Descheduler Deployment

templates:
  - "*.yaml"

release:
  name: descheduler

set:
  kind: Deployment

tests:
  - it: creates Deployment when kind is set
    template: templates/deployment.yaml
    asserts:
      - isKind:
          of: Deployment

  - it: enables leader-election
    set:
      leaderElection:
        enabled: true
    template: templates/deployment.yaml
    asserts:
      - contains:
          path: spec.template.spec.containers[0].args
          content: --leader-elect=true

  - it: supports leader-election resourceNamespace
    set:
      leaderElection:
        enabled: true
        resourceNamespace: test
    template: templates/deployment.yaml
    asserts:
      - contains:
          path: spec.template.spec.containers[0].args
          content: --leader-elect-resource-namespace=test

  - it: supports the legacy leader-election resourceNamescape
    set:
      leaderElection:
        enabled: true
        resourceNamescape: typo
    template: templates/deployment.yaml
    asserts:
      - contains:
          path: spec.template.spec.containers[0].args
          content: --leader-elect-resource-namespace=typo
cluster/manifests/freeleaps-infra-system/descheduler/values.yaml (new file, 252 lines)
@ -0,0 +1,252 @@
# Default values for descheduler.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# CronJob or Deployment
kind: CronJob

image:
  repository: registry.k8s.io/descheduler/descheduler
  # Overrides the image tag whose default is the chart version
  tag: ""
  pullPolicy: IfNotPresent

imagePullSecrets:
#   - name: container-registry-secret

resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi

ports:
  - containerPort: 10258
    protocol: TCP

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  privileged: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

# podSecurityContext -- [Security context for pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
podSecurityContext: {}
#   fsGroup: 1000

nameOverride: ""
fullnameOverride: ""

# -- Override the deployment namespace; defaults to .Release.Namespace
namespaceOverride: ""

# labels that'll be applied to all resources
commonLabels: {}

cronJobApiVersion: "batch/v1"
schedule: "*/2 * * * *"
suspend: false
# startingDeadlineSeconds: 200
# successfulJobsHistoryLimit: 3
# failedJobsHistoryLimit: 1
# ttlSecondsAfterFinished: 600
# timeZone: Etc/UTC

# Required when running as a Deployment
deschedulingInterval: 5m

# Specifies the replica count for Deployment
# Set leaderElection if you want to use more than 1 replica
# Set affinity.podAntiAffinity rule if you want to schedule onto a node
# only if that node is in the same zone as at least one already-running descheduler
replicas: 1

# Specifies whether Leader Election resources should be created
# Required when running as a Deployment
# NOTE: Leader election can't be activated if DryRun enabled
leaderElection: {}
#   enabled: true
#   leaseDuration: 15s
#   renewDeadline: 10s
#   retryPeriod: 2s
#   resourceLock: "leases"
#   resourceName: "descheduler"
#   resourceNamespace: "kube-system"

command:
  - "/bin/descheduler"

cmdOptions:
  v: 3

# Recommended to use the latest Policy API version supported by the Descheduler app version
deschedulerPolicyAPIVersion: "descheduler/v1alpha2"

# deschedulerPolicy contains the policies the descheduler will execute.
# To use policies stored in an existing configMap use:
# NOTE: The name of the cm should comply to {{ template "descheduler.fullname" . }}
# deschedulerPolicy: {}
deschedulerPolicy:
  # nodeSelector: "key1=value1,key2=value2"
  # maxNoOfPodsToEvictPerNode: 10
  # maxNoOfPodsToEvictPerNamespace: 10
  # metricsCollector:
  #   enabled: true
  # ignorePvcPods: true
  # evictLocalStoragePods: true
  # evictDaemonSetPods: true
  # tracing:
  #   collectorEndpoint: otel-collector.observability.svc.cluster.local:4317
  #   transportCert: ""
  #   serviceName: ""
  #   serviceNamespace: ""
  #   sampleRate: 1.0
  #   fallbackToNoOpProviderOnError: true
  profiles:
    - name: default
      pluginConfig:
        - name: DefaultEvictor
          args:
            ignorePvcPods: true
            evictLocalStoragePods: true
        - name: RemoveDuplicates
        - name: RemovePodsHavingTooManyRestarts
          args:
            podRestartThreshold: 100
            includingInitContainers: true
        - name: RemovePodsViolatingNodeAffinity
          args:
            nodeAffinityType:
              - requiredDuringSchedulingIgnoredDuringExecution
        - name: RemovePodsViolatingNodeTaints
        - name: RemovePodsViolatingInterPodAntiAffinity
        - name: RemovePodsViolatingTopologySpreadConstraint
        - name: LowNodeUtilization
          args:
            thresholds:
              cpu: 20
              memory: 20
              pods: 20
            targetThresholds:
              cpu: 65
              memory: 65
              pods: 50
      plugins:
        balance:
          enabled:
            - RemoveDuplicates
            - RemovePodsViolatingTopologySpreadConstraint
            - LowNodeUtilization
        deschedule:
          enabled:
            - RemovePodsHavingTooManyRestarts
            - RemovePodsViolatingNodeTaints
            - RemovePodsViolatingNodeAffinity
            - RemovePodsViolatingInterPodAntiAffinity

priorityClassName: system-cluster-critical

nodeSelector: {}
#   foo: bar

affinity: {}
# nodeAffinity:
#   requiredDuringSchedulingIgnoredDuringExecution:
#     nodeSelectorTerms:
#       - matchExpressions:
#           - key: kubernetes.io/e2e-az-name
#             operator: In
#             values:
#               - e2e-az1
#               - e2e-az2
# podAntiAffinity:
#   requiredDuringSchedulingIgnoredDuringExecution:
#     - labelSelector:
#         matchExpressions:
#           - key: app.kubernetes.io/name
#             operator: In
#             values:
#               - descheduler
#       topologyKey: "kubernetes.io/hostname"

topologySpreadConstraints: []
# - maxSkew: 1
#   topologyKey: kubernetes.io/hostname
#   whenUnsatisfiable: DoNotSchedule
#   labelSelector:
#     matchLabels:
#       app.kubernetes.io/name: descheduler

tolerations: []
# - key: 'management'
#   operator: 'Equal'
#   value: 'tool'
#   effect: 'NoSchedule'

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
  # Specifies custom annotations for the serviceAccount
  annotations: {}

podAnnotations: {}

podLabels: {}

dnsConfig: {}

livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: 10258
    scheme: HTTPS
  initialDelaySeconds: 3
  periodSeconds: 10

service:
  enabled: false
  # @param service.ipFamilyPolicy [string], support SingleStack, PreferDualStack and RequireDualStack
  ipFamilyPolicy: ""
  # @param service.ipFamilies [array] List of IP families (e.g. IPv4, IPv6) assigned to the service.
  # Ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/
  # E.g.
  # ipFamilies:
  #   - IPv6
  #   - IPv4
  ipFamilies: []

serviceMonitor:
  enabled: false
  # The namespace where Prometheus expects to find service monitors.
  # namespace: ""
  # Add custom labels to the ServiceMonitor resource
  additionalLabels: {}
  #   prometheus: kube-prometheus-stack
  interval: ""
  # honorLabels: true
  insecureSkipVerify: true
  serverName: null
  metricRelabelings: []
  # - action: keep
  #   regex: 'descheduler_(build_info|pods_evicted)'
  #   sourceLabels: [__name__]
  relabelings: []
  # - sourceLabels: [__meta_kubernetes_pod_node_name]
  #   separator: ;
  #   regex: ^(.*)$
  #   targetLabel: nodename
  #   replacement: $1
  #   action: replace
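The `LowNodeUtilization` numbers in the values above can be read as a classification rule: a node is a candidate to receive pods when all tracked resources sit below `thresholds`, and a source of evictions when any resource exceeds `targetThresholds`. A minimal sketch (illustrative only, not descheduler's implementation), assuming usage is expressed as percentages:

```python
# Illustrative classification using the chart's LowNodeUtilization values.
# Underutilized: ALL resources below `thresholds` (node can receive pods).
# Overutilized: ANY resource above `targetThresholds` (pods may be evicted).
THRESHOLDS = {"cpu": 20, "memory": 20, "pods": 20}
TARGET_THRESHOLDS = {"cpu": 65, "memory": 65, "pods": 50}

def classify(usage):
    """usage: percent utilization per resource, e.g. {"cpu": 10, ...}."""
    if all(usage[r] < THRESHOLDS[r] for r in THRESHOLDS):
        return "underutilized"
    if any(usage[r] > TARGET_THRESHOLDS[r] for r in TARGET_THRESHOLDS):
        return "overutilized"
    return "appropriately utilized"

print(classify({"cpu": 10, "memory": 15, "pods": 5}))   # underutilized
print(classify({"cpu": 80, "memory": 40, "pods": 30}))  # overutilized
```

With this configuration the plugin evicts from nodes above 65% CPU/memory (or 50% pods) only when there are nodes below 20% on every dimension to absorb the load.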
File diff suppressed because it is too large
@ -8,3 +8,4 @@ openebs,https://openebs.github.io/openebs,force-update
azure-blob-csi-driver,https://raw.githubusercontent.com/kubernetes-sigs/blob-csi-driver/master/charts,force-update
godaddy-webhook,https://snowdrop.github.io/godaddy-webhook,force-update
azure-disk-csi-driver,https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts,force-update
descheduler,https://kubernetes-sigs.github.io/descheduler/,force-update