
Cloud / HELM Interview Questions

1. What is Helm and why was it created for Kubernetes?
2. Explain the core components of Helm architecture: Tiller (v2) vs Helm v3 controller pattern.
3. What is a Helm Chart? Explain its standard directory structure.
4. What is a Helm Release and how does Helm manage release state?
5. How do you install, upgrade, and rollback a Helm chart with real examples?
6. Explain Helm template syntax: Go templates, values injection, and pipeline functions with examples.
7. What are built-in Helm objects and their typical use cases?
8. How do you manage Helm chart dependencies and subcharts? Explain the library chart pattern.
9. What is the difference between 'helm upgrade --install' and separate install/upgrade commands?
10. How do you create conditionals and loops in Helm templates? Provide practical examples.
11. What are Helm hooks and how do you use them for database migrations and pre-install jobs?
12. How do you write Helm tests and integrate them into CI/CD pipelines?
13. How do you debug Helm charts and troubleshoot rendering issues?
14. What is the three-way strategic merge patch and why is it important for Helm upgrades?
15. How do you manage multiple environments (dev, staging, prod) with Helm?
16. What are CRDs in Helm and best practices for managing them?
17. How do you use the 'lookup' function in Helm templates for advanced conditional logic?
18. How do you validate Helm values with JSON Schema?
19. What is Helm OCI Registry support and how do you use it?
20. Explain Helm security best practices: RBAC, pod security, and secrets management.
21. What is Helmfile and how does it extend Helm for managing multiple releases?
22. How does ArgoCD integrate with Helm for GitOps deployment patterns?
23. How do you create custom Helm plugins and when should you use them?
24. What are the best practices for structuring large Helm charts for microservices?
25. How do you implement zero-downtime deployments with Helm?
26. How do you migrate from Helm v2 to Helm v3?
27. What are Helm release lifecycle policies and how do you manage release history?
28. How do you use Helm with service meshes (Istio, Linkerd) for canary deployments?
29. How do you implement Helm chart testing with Terratest and other tools?
30. What are the common Helm anti-patterns and how to avoid them?
31. How do you optimize Helm chart performance for large-scale deployments?
32. How do you manage Helm RBAC permissions for different team roles?
33. How do you use Helm with Terraform for infrastructure as code integration?
34. What are Helm provenance files and how do you sign charts?
35. How do you implement custom validation admission webhooks with Helm?
36. What are the upcoming features in Helm and the roadmap?
37. How do you implement Blue-Green and Canary deployments with Helm?
38. How do you manage Helm charts for stateful applications (databases, Kafka)?
39. How do you implement resource quotas and limit ranges with Helm?

1. What is Helm and why was it created for Kubernetes?

Helm is the package manager for Kubernetes, often called "the apt-get/yum of Kubernetes." It was created to solve the fundamental challenge of managing complex Kubernetes applications that consist of multiple interconnected resources (Deployments, Services, ConfigMaps, Secrets, Ingress rules, etc.). Without Helm, deploying a typical microservices application requires manually creating and maintaining dozens of separate YAML files, each with environment-specific values hardcoded inside them.

Helm introduces the concept of "charts" - packaged collections of pre-configured Kubernetes resources that can be easily installed, upgraded, rolled back, and shared. A single Helm chart might contain templates for a web frontend, backend API, database, cache layer, and all the supporting services like load balancers and persistent volumes. When you install a chart, Helm renders these templates with your specific configuration values (like database passwords, domain names, replica counts) and applies them to your cluster.

The three core problems Helm solves are: 1) Complexity management - bundling dozens of YAML files into a single deployable unit; 2) Reusability - sharing application configurations across teams and environments via public/private repositories; 3) Release management - tracking what was deployed, enabling atomic upgrades and reliable rollbacks. Since its creation in 2015, Helm has become the CNCF standard for Kubernetes package management, with over 70% of Kubernetes users adopting it according to CNCF surveys.
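As a quick illustration of that workflow (the chart and the values shown are just examples), installing a packaged application takes a single command:

  # Add a chart repository, then install a release with custom values
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install my-blog bitnami/wordpress \
    --set wordpressUsername=admin \
    --set replicaCount=2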

What primary problem does Helm solve in Kubernetes?
2. Explain the core components of Helm architecture: Tiller (v2) vs Helm v3 controller pattern.

The most significant architectural difference between Helm v2 and v3 is the removal of Tiller, the server-side component. Helm v2 architecture consisted of two parts: the Helm client (CLI) and Tiller (server-side component running inside the Kubernetes cluster). Tiller managed releases, tracked deployment history, and executed operations within the cluster. While functional, Tiller had major security drawbacks - it required cluster-admin privileges to function, creating a privileged service account that could modify any resource, which many organizations considered unacceptable for production environments.

Helm v3 completely eliminates Tiller, moving to a client-only architecture with direct Kubernetes API communication through kubeconfig credentials. Each Helm operation (install, upgrade, rollback) now uses the same RBAC permissions as the user executing the command - following the principle of least privilege. Release information that Tiller stored in ConfigMaps/secrets within the cluster is now stored exclusively in Secrets (improved over v2's mixed approach) within the namespace where the release is installed.

Helm v3 also introduced several improvements: 1) Three-way strategic merge patch for upgrades (compares current cluster state, previous release state, and user-specified changes); 2) Improved upgrade logic that prevents unnecessary pod restarts; 3) Chart dependencies declared directly in Chart.yaml rather than in a separate requirements.yaml (downloaded charts still land in the charts/ directory); 4) OCI registry support for storing charts in container registries. This client-only architecture makes Helm more secure, simpler to debug, and compatible with standard Kubernetes RBAC workflows.
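Because v3 operations run with the caller's credentials, you can check ahead of time whether a deployment will be permitted (namespace and resources here are illustrative):

  # Helm v3 can only do what your kubeconfig user can do
  kubectl auth can-i create deployments --namespace prod
  kubectl auth can-i create secrets --namespace prod   # needed for release state storage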

What was the primary reason Helm v3 removed Tiller?
3. What is a Helm Chart? Explain its standard directory structure.

A Helm Chart is the packaging format for Kubernetes applications - essentially a collection of templates, default configuration values, metadata, and dependencies that together describe a deployable application. Think of a chart as a blueprint that Helm uses to generate and manage Kubernetes manifests. Charts are versioned, can be shared via repositories, and support environment-specific customizations through values files.

The standard Helm chart directory structure follows a convention that Helm expects:

  • Chart.yaml - Metadata about the chart: name, version, description, maintainers, type (application/library), and keywords. This file is required.
  • values.yaml - Default configuration values that can be overridden during installation. Contains all configurable parameters with sensible defaults.
  • templates/ - Directory containing Kubernetes YAML templates with Go template directives. When Helm renders the chart, it combines templates with values to produce manifests.
  • templates/NOTES.txt - Optional post-install notes displayed to users after installation.
  • templates/_helpers.tpl - Reusable template partials (named with underscores) for DRY chart definitions.
  • charts/ - Directory for dependency charts (subcharts). Can contain .tgz files or unpacked chart directories.
  • .helmignore - File patterns to exclude when packaging the chart (similar to .gitignore).
  • crds/ - Custom Resource Definition YAML files that install before the chart renders.
  • README.md - Documentation explaining chart usage, configuration options, and examples.

A minimal chart requires only Chart.yaml, values.yaml, and a templates/ directory with at least one template. Tools like helm create generate a starter chart with examples of each component.
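For reference, helm create mychart scaffolds a starter layout along these lines (the exact file list varies slightly between Helm versions):

  mychart/
    Chart.yaml
    values.yaml
    charts/
    templates/
      deployment.yaml
      service.yaml
      serviceaccount.yaml
      hpa.yaml
      ingress.yaml
      NOTES.txt
      _helpers.tpl
      tests/
        test-connection.yaml
    .helmignore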

Which file in a Helm chart contains default configuration values?
4. What is a Helm Release and how does Helm manage release state?

A Helm Release is a specific instance of a chart running in a Kubernetes cluster. When you install a chart with a unique release name (e.g., helm install my-nginx bitnami/nginx), Helm creates a release named "my-nginx" that contains all the resources generated from that chart plus metadata about the deployment. This release concept is what enables Helm's powerful lifecycle management features.

Helm v3 manages release state using Kubernetes Secrets stored in the same namespace as the release. Each release creates a secret named sh.helm.release.v1.<release-name>.v<revision-number>. These secrets contain the complete state of the release including all rendered manifests, chart metadata, values used, and status information. The secrets are versioned - each install, upgrade, or rollback creates a new secret revision, allowing Helm to maintain a complete history of changes.

The release management workflow works as follows:

  • Install - Creates revision 1 of the release secret with status=deployed
  • Upgrade - Creates revision 2 (or higher) with status=deployed
  • Rollback - Creates a new revision that reuses manifests from a previous revision
  • Uninstall - Marks release as uninstalled (can keep history with --keep-history)
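You can observe this bookkeeping directly. Assuming a release named my-nginx in namespace web-apps, the revision secrets are ordinary Kubernetes objects:

  # One secret per revision, labeled by Helm
  kubectl get secrets -n web-apps -l owner=helm

  # Decode a revision's payload (double base64-encoded, gzip-compressed)
  kubectl get secret sh.helm.release.v1.my-nginx.v1 -n web-apps \
    -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d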
How does Helm v3 store release revision history?
5. How do you install, upgrade, and rollback a Helm chart with real examples?

Helm provides intuitive commands for the complete application lifecycle. Here are concrete examples using the popular Bitnami Nginx chart:

Installation: helm install my-web bitnami/nginx --namespace web-apps --create-namespace --set service.type=LoadBalancer,replicaCount=3. This installs a release named "my-web" using the bitnami/nginx chart. Helm creates revision 1.

Upgrade with values file: Create custom-values.yaml then run helm upgrade my-web bitnami/nginx -f custom-values.yaml --namespace web-apps. This creates revision 2, applies changes, and only updates modified resources using three-way strategic merge.

Rollback: helm history my-web -n web-apps shows revisions. helm rollback my-web 1 -n web-apps reverts to revision 1, creating a new revision (3) that reproduces revision 1's manifests.

Uninstall: helm uninstall my-web -n web-apps removes all resources. Add --keep-history to retain records.

Upgrade with --install: helm upgrade --install my-web bitnami/nginx -n web-apps performs install if release doesn't exist, upgrade if it does - ideal for CI/CD.

What command rolls back a Helm release to revision 3?
6. Explain Helm template syntax: Go templates, values injection, and pipeline functions with examples.

Helm uses Go templates enhanced with the Sprig function library (plus a handful of Helm-specific functions such as include, required, toYaml, and lookup) to generate Kubernetes manifests. Templates live in the templates/ directory.

Basic Values Injection: In values.yaml: replicaCount: 3. In deployment.yaml: spec: replicas: {{ .Values.replicaCount }}. The dot (.) represents the root context.

Control Structures: {{- if .Values.persistence.enabled }}...{{- end }} with hyphens (-) trimming whitespace.

Pipelines and Functions: image: "{{ .Values.image.repository | default "nginx" }}:{{ .Values.image.tag }}". Note that quote should only be applied to unquoted scalars, e.g. tag: {{ .Values.image.tag | quote }} - piping through quote inside an already-quoted string produces invalid YAML. Common functions: quote, default, required, toYaml, nindent.

nindent N is critical for composing multi-line output - it inserts a leading newline, then indents every line of its input by N spaces. Example: {{ include "myapp.labels" . | nindent 2 }}.

Variables: {{- $replicas := .Values.replicaCount | int }} capture values. Sprig provides eq/ne/lt/le/gt/ge comparisons.

Lookup Function: queries Kubernetes API during rendering for conditional logic based on cluster state.
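Putting these pieces together, a minimal deployment template might look like the sketch below (it assumes a myapp.labels helper is defined in _helpers.tpl):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: {{ .Release.Name }}-web
    labels:
      {{- include "myapp.labels" . | nindent 4 }}
  spec:
    replicas: {{ .Values.replicaCount | int }}
    selector:
      matchLabels:
        app: {{ .Release.Name }}
    template:
      metadata:
        labels:
          app: {{ .Release.Name }}
      spec:
        containers:
          - name: web
            image: "{{ .Values.image.repository | default "nginx" }}:{{ .Values.image.tag }}"
            {{- if .Values.resources }}
            resources:
              {{- toYaml .Values.resources | nindent 12 }}
            {{- end }}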

How do you access 'replicaCount' from values.yaml in a template?
7. What are built-in Helm objects and their typical use cases?

Helm provides several built-in objects available in all templates:

.Values - Most frequently used. Contains configuration values from values.yaml, --set flags, and --values files with specific precedence.

.Chart - Metadata from Chart.yaml: .Chart.Name, .Chart.Version, .Chart.AppVersion, etc. Use for labeling: app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}.

.Release - Release information: .Release.Name, .Release.Namespace, .Release.IsUpgrade, .Release.Revision. Critical for naming: name: {{ .Release.Name }}-configmap.

.Files - Access non-template files: .Files.Get, .Files.Glob, .Files.AsSecrets. Example: {{ (.Files.Glob "config/*.yaml").AsConfig | indent 2 }}.

.Capabilities - Cluster capabilities: .Capabilities.KubeVersion, .Capabilities.APIVersions.Has for conditional API versioning.

.Template - Current template info: .Template.Name, .Template.BasePath for debugging.
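A common pattern that exercises several of these objects at once is a labels helper in _helpers.tpl, along the lines of the myapp.labels helper assumed earlier:

  {{- define "myapp.labels" -}}
  helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
  app.kubernetes.io/name: {{ .Chart.Name }}
  app.kubernetes.io/instance: {{ .Release.Name }}
  app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
  app.kubernetes.io/managed-by: {{ .Release.Service }}
  {{- end }}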

Which built-in object provides Kubernetes cluster version and API capabilities?

8. How do you manage Helm chart dependencies and subcharts? Explain the library chart pattern.

Helm chart dependencies allow composing complex applications from smaller, reusable components. Since Helm v3, dependencies are managed in Chart.yaml under dependencies.

Defining Dependencies (in Chart.yaml):

  dependencies:
    - name: postgresql
      version: "10.x.x"
      repository: "https://charts.bitnami.com/bitnami"
      condition: postgresql.enabled

Key fields: repository (HTTPS, OCI, or local path), condition (conditional inclusion), tags (batch enabling), alias (multiple instances).

Managing Dependencies: Run helm dependency update to download .tgz files to charts/ and generate Chart.lock.

Library Charts: Special chart type (type: library) containing only templates and helpers, no resources. Define helpers in templates/_macros.tpl with {{- define "mylib.deployment" -}}. Usage: {{ include "mylib.deployment" . }}. Bitnami's Common Library Chart is a popular example.

Global Values: Pass configuration to all subcharts: global: imageRegistry: myregistry.com. Subcharts access via .Values.global.imageRegistry.
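A minimal sketch of the library chart pattern, with illustrative chart and helper names:

  # mylib/Chart.yaml
  apiVersion: v2
  name: mylib
  version: 0.1.0
  type: library

  # mylib/templates/_macros.tpl
  {{- define "mylib.labels" -}}
  app.kubernetes.io/name: {{ .Chart.Name }}
  app.kubernetes.io/instance: {{ .Release.Name }}
  {{- end }}

  # In a consuming chart (after declaring mylib under dependencies):
  metadata:
    labels:
      {{- include "mylib.labels" . | nindent 4 }}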

Where are Helm chart dependencies declared in Helm v3?
9. What is the difference between 'helm upgrade --install' and separate install/upgrade commands?

helm upgrade --install (or helm upgrade -i) is an idempotent Helm operation that installs if the release doesn't exist, or upgrades if it does. Essential for CI/CD pipelines where jobs run repeatedly.

Behavior comparison:

  • Separate helm install: Fails with "already exists" if release exists
  • Separate helm upgrade: Fails with "release: not found" if release doesn't exist
  • helm upgrade --install: Checks existence - installs (revision 1) if not found, upgrades if found

Critical difference in values handling: by default, helm upgrade (with or without --install) resolves values from the chart's defaults plus whatever you pass via -f/--set - it does not carry over values from the previous release. Pass --reuse-values to merge the previous release's values with your new overrides; only then do omitted --set flags preserve old values rather than reverting to defaults. Be deliberate about which behavior your pipeline relies on.
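A quick illustration of the two behaviors (chart path and keys are illustrative):

  # Default: values reset to chart defaults + these overrides
  helm upgrade --install my-app ./mychart --set image.tag=1.2.4

  # Merge with the previous release's values instead
  helm upgrade my-app ./mychart --reuse-values --set image.tag=1.2.5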

Best practice for CI/CD: helm upgrade --install my-app ./mychart --namespace prod --create-namespace --wait --atomic --history-max 10. The --atomic flag ensures automatic rollback on failure.

Reset values when needed: helm upgrade --install --reset-values discards previous values - useful for major version changes.

What makes 'helm upgrade --install' useful for CI/CD pipelines?
10. How do you create conditionals and loops in Helm templates? Provide practical examples.

Helm templates support powerful control structures for dynamic manifest generation.

If/Else Conditionals: {{- if .Values.ingress.enabled }}...{{- else }}...{{- end }} Conditional operators: eq, ne, lt, gt, and, or, not.

Range Loops (Iteration): Loop over lists: {{- range .Values.ingress.hosts }} - {{ . }} {{- end }}. With index: {{- range $index, $service := .Values.services }}. Loop over maps: {{- range $key, $value := .Values.annotations }} {{ $key }}: {{ $value | quote }} {{- end }}.

Practical patterns:

  • Conditional resource creation: Entire file omitted if condition false using {{- if .Values.serviceAccount.create }}
  • Loop with conditional filter: {{- range .Values.containers }}{{- if not .disabled }}- name: {{ .name }}{{- end }}{{- end }}
  • Nested loops for multi-dimensional data

Performance note: Complex loops with hundreds of iterations may slow helm template rendering. For large datasets, consider pre-processing.
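The sketch below combines a conditional wrapper with both loop forms; note the $ prefix (root context) needed to reach .Release inside a range block (the value keys are illustrative):

  {{- if .Values.ingress.enabled }}
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: {{ .Release.Name }}-ingress
    annotations:
      {{- range $key, $value := .Values.ingress.annotations }}
      {{ $key }}: {{ $value | quote }}
      {{- end }}
  spec:
    rules:
      {{- range .Values.ingress.hosts }}
      - host: {{ . | quote }}
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: {{ $.Release.Name }}
                  port:
                    number: 80
      {{- end }}
  {{- end }}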

What syntax iterates over a list in Helm templates?
11. What are Helm hooks and how do you use them for database migrations and pre-install jobs?

Helm hooks let you run resources at specific points in a release's lifecycle. A hook is an ordinary Kubernetes resource - most commonly a Job - carrying special annotations that Helm recognizes.

Hook types available: pre-install, post-install, pre-upgrade, post-upgrade, pre-rollback, post-rollback, pre-delete, post-delete, test (for custom testing).

Database migration example:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: {{ .Release.Name }}-db-migrate
    annotations:
      "helm.sh/hook": pre-upgrade
      "helm.sh/hook-weight": "5"
      "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  spec:
    template:
      spec:
        containers:
          - name: migrate
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            command: ["python", "manage.py", "migrate"]
        restartPolicy: OnFailure

Hook weights determine execution order (lower numbers run first). Delete policies control cleanup: before-hook-creation (delete previous), hook-succeeded (clean after success), hook-failed (keep for debugging).

Hook resources are not managed as part of the release - they persist unless removed by a delete policy or deleted manually. For critical hooks like database migrations, test thoroughly in staging first.

Which Helm hook runs before an upgrade for database migrations?
12. How do you write Helm tests and integrate them into CI/CD pipelines?

Helm tests are pod definitions that run custom validation after a release is installed. By convention they live in the templates/tests/ directory; what actually marks them as tests is the "helm.sh/hook": test annotation, not the file name.

Example test definition (templates/tests/test-connection.yaml):

  apiVersion: v1
  kind: Pod
  metadata:
    name: "{{ .Release.Name }}-test-connection"
    annotations:
      "helm.sh/hook": test
  spec:
    containers:
      - name: test
        image: curlimages/curl
        command: ["sh", "-c"]
        args:
          - "curl -f http://{{ .Release.Name }}/health && echo 'Test passed' && exit 0"
    restartPolicy: Never

Running tests: helm test RELEASE_NAME executes all test pods and collects results. Test passes if pod exits with code 0, fails on any other exit code.

CI/CD integration patterns:

  • ArgoCD: helm test my-release --logs in post-deployment hooks
  • GitLab CI: helm upgrade --install ... && helm test my-release
  • Jenkins: Parallel test execution across multiple releases
  • GitHub Actions: timeout 5m helm test my-release || exit 1

Test templates access the same .Values and .Release objects as regular templates. Common tests include connectivity checks, data validation, schema verification, and smoke tests.

Best practice: Keep tests idempotent and fast (<30 seconds). Use --timeout flag to prevent hanging tests.

What command runs Helm tests after installation?
13. How do you debug Helm charts and troubleshoot rendering issues?

Helm provides multiple debugging tools to identify issues before and after deployment.

Template rendering debugging:

  • helm template RELEASE_NAME CHART_PATH - Renders templates without installing, shows exact Kubernetes YAML that would be applied
  • helm template --debug - Shows template execution details and any Go template errors
  • helm get manifest RELEASE_NAME - Shows what was actually deployed for an existing release
  • helm get values RELEASE_NAME - Shows values used for a release (including defaults and overrides)
  • helm get notes RELEASE_NAME - Shows NOTES.txt output (helpful for connection info)
  • helm get all RELEASE_NAME - Combined output of everything

Syntax validation: helm lint CHART_PATH validates Chart.yaml, values.yaml, and template syntax. Returns warnings and errors with line numbers.

Dry run with diff: helm upgrade --install RELEASE CHART --dry-run --debug shows what would change without applying. Add --dry-run=server (Helm v3.13+) for server-side validation.

Common debugging patterns:

  • Add {{ fail (printf "Debug: somevalue is %v" .Values.somevalue) }} to stop rendering and print a value (fail takes a single string argument)
  • Use {{ .Values | toYaml | nindent 2 }} to dump all values during debugging
  • Set HELM_DEBUG=1 environment variable for verbose client logs
  • Check release status: helm status RELEASE_NAME --show-resources
  • View failed resource creation: kubectl get events --all-namespaces | grep RELEASE_NAME

Remote debugging: For CI failures, use helm history RELEASE_NAME to find problematic revision, then helm get values --revision N to see what changed.

Which command shows rendered templates without installing to Kubernetes?
14. What is the three-way strategic merge patch and why is it important for Helm upgrades?

The three-way strategic merge patch is Helm v3's intelligent algorithm for determining exactly what changed during an upgrade, minimizing unnecessary pod restarts and resource updates.

How it works: Helm compares three versions of each resource:

  • Current state - What's actually running in the cluster (live manifests)
  • Previous release state - What was last deployed (saved in release secret)
  • New state - What the current chart + values renders to

Why three-way matters: Two-way merge (Helm v2) only compared previous state vs new state, missing manual changes made to live resources. With three-way merge, Helm can detect:

  • Changes made manually in the cluster (external modifications)
  • Values that were removed from values.yaml (should be reverted)
  • Fields that should not be touched (preserve cluster-specific settings)

Strategic merge patch fields: $patch: delete removes fields that would otherwise be retained. $retainKeys: [...] specifies which fields should be kept when merging.

Use with annotations: helm.sh/resource-policy: keep prevents Helm from deleting a resource during upgrade/uninstall (useful for PVCs, namespaces, CRDs).

Performance impact: Three-way merge reduces unnecessary churn - only fields that truly differ trigger updates. For deployments, this prevents rolling restarts when only labels or annotations change on non-pod template fields.
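For example, a PVC annotated so Helm leaves it in place across upgrade and uninstall (a minimal sketch; the claim name is illustrative):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: {{ .Release.Name }}-data
    annotations:
      "helm.sh/resource-policy": keep
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi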

What three states does Helm v3 compare during the three-way strategic merge patch?
15. How do you manage multiple environments (dev, staging, prod) with Helm?

Managing multiple environments with Helm requires a combination of strategies for values separation, release organization, and environment-specific configurations.

1. Values file organization:

  values/
    common.yaml    # Shared across all environments
    dev.yaml       # Dev-specific overrides
    staging.yaml   # Staging-specific overrides
    prod.yaml      # Prod-specific overrides

Deploy with: helm upgrade --install myapp ./chart -f values/common.yaml -f values/dev.yaml (later files override earlier ones).

2. Folder-based environment separation:

  environments/
    dev/
      Chart.yaml     # Can override dependencies
      values.yaml    # Dev values (extends base)
    staging/
      values.yaml
    prod/
      values.yaml

Use the environment directory as the chart path: helm upgrade --install myapp ./environments/dev -f ./environments/dev/values.yaml

3. Release naming convention:

  • Dev: myapp-dev (namespace: dev)
  • Staging: myapp-staging (namespace: staging)
  • Prod: myapp-prod (namespace: prod)

4. Template conditionals by environment:

  # values.yaml
  environment: dev

  # template
  {{- if eq .Values.environment "prod" }}
  replicas: 5
  {{- else }}
  replicas: 1
  {{- end }}

5. CI/CD multi-env pipeline (GitLab example):

  deploy-dev:
    script: helm upgrade --install myapp ./chart -f values/dev.yaml --namespace dev
    only:
      - dev
  deploy-prod:
    script: helm upgrade --install myapp ./chart -f values/prod.yaml --namespace prod
    only:
      - main

6. Helmfile for advanced env management:

  environments:
    dev:
      values:
        - values/dev.yaml
    prod:
      values:
        - values/prod.yaml

Best practices: Keep environment values files in Git (but never secrets), inject secrets via CI/CD variables or a secrets manager, validate values with JSON Schema per environment, and consider tools like Helmfile or Terragrunt for complex multi-env setups.

How do you apply multiple values files in Helm for environment layering?
16. What are CRDs in Helm and best practices for managing them?

Custom Resource Definitions (CRDs) extend Kubernetes API with custom resources. Helm has special handling for CRDs because they must exist before custom resource instances are created.

CRD directory structure: Place CRD YAML files in crds/ directory at chart root (not in templates/). Helm installs all *.yaml files in crds/ BEFORE rendering any templates - this ensures CRDs exist for any Custom Resource instances defined in templates.

Example CRD (crds/crontab-crd.yaml; apiextensions.k8s.io/v1 requires a schema, so a permissive one is included here):

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: crontabs.stable.example.com
  spec:
    group: stable.example.com
    names:
      kind: CronTab
      plural: crontabs
    scope: Namespaced
    versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true

CRD limitations in Helm:

  • CRDs are not upgraded or deleted by Helm (by design - prevents data loss)
  • CRDs cannot be templated (no Go template support in crds/)
  • CRDs are installed only on install, NOT on upgrade
  • If CRD changes, you must manually apply or use hooks

Best practices:

  • Use a separate CRD-only chart that application charts depend on, so the CRD lifecycle is managed independently
  • Version CRDs separately from application charts
  • For upgrades requiring CRD changes: kubectl apply -f crds/ manually, then helm upgrade
  • Consider using Helm hooks for complex CRD upgrade workflows: annotations: "helm.sh/hook": pre-install,pre-upgrade
  • Use --skip-crds flag if CRDs are managed externally
  • Test CRD upgrades in staging first - CRD changes can be irreversible
Where should CRD files be placed in a Helm chart?
17. How do you use the 'lookup' function in Helm templates for advanced conditional logic?

The lookup function queries the Kubernetes API server during template rendering, enabling charts to adapt based on actual cluster state rather than just values.

Syntax: {{ lookup "apiVersion" "resource" "namespace" "name" }} Returns resource object or nil if not found.

Common use cases with examples:

1. Conditional namespace creation:

  {{- if not (lookup "v1" "Namespace" "" "my-namespace") }}
  apiVersion: v1
  kind: Namespace
  metadata:
    name: my-namespace
  {{- end }}

2. Check if a storage class exists before using it:

  {{- if lookup "storage.k8s.io/v1" "StorageClass" "" "fast-storage" }}
  storageClassName: fast-storage
  {{- else }}
  storageClassName: standard
  {{- end }}

3. Retrieve an existing ConfigMap for data merging:

  {{- $existing := lookup "v1" "ConfigMap" .Release.Namespace "app-config" }}
  {{- if $existing }}
  {{- $existingData := $existing.data }}
  # Merge with existing data, preserving user modifications
  {{- end }}

4. Certificate checking before creating secrets:

  {{- if not (lookup "cert-manager.io/v1" "Certificate" .Release.Namespace "tls-cert") }}
  # Create the certificate only if missing
  {{- end }}

Limitations and considerations:

  • lookup only queries the cluster during a real install or upgrade - with helm template, helm lint, or a client-side dry run it returns an empty result
  • Requires RBAC permissions to read the resources being queried
  • Can slow down rendering for many lookups (cache is not cluster-wide)
  • Results may change between dry-run and actual install (race conditions)
  • Cannot mutate state - read-only operation

Debugging lookup: inspect what lookup returns by dumping it:

  {{- $result := lookup "v1" "Pod" .Release.Namespace "my-pod" }}
  {{- $result | toYaml | nindent 0 }}

What does the lookup function return if a resource is not found?
18. How do you validate Helm values with JSON Schema?

Helm supports JSON Schema validation for values.yaml, helping catch configuration errors early before deployment. Create values.schema.json in chart root.

Basic schema example (values.schema.json):

  {
    "$schema": "https://json-schema.org/draft-07/schema",
    "properties": {
      "replicaCount": {
        "type": "integer",
        "minimum": 1,
        "maximum": 100,
        "default": 1
      },
      "image": {
        "type": "object",
        "properties": {
          "repository": {"type": "string", "pattern": "^[a-z0-9-/]+$"},
          "tag": {"type": "string", "minLength": 1},
          "pullPolicy": {
            "type": "string",
            "enum": ["Always", "Never", "IfNotPresent"],
            "default": "IfNotPresent"
          }
        },
        "required": ["repository", "tag"]
      },
      "resources": {
        "type": "object",
        "properties": {
          "limits": {
            "type": "object",
            "patternProperties": {
              "^(cpu|memory)$": {"type": "string", "pattern": "^[0-9]+(Mi|Gi|m|)$"}
            }
          }
        }
      }
    },
    "required": ["image"],
    "additionalProperties": false
  }

Conditional validation with if/then:

  {
    "if": {
      "properties": {"environment": {"const": "prod"}}
    },
    "then": {
      "properties": {
        "replicaCount": {"minimum": 3},
        "resources": {"required": ["limits"]}
      }
    }
  }

Custom error messages: Use errorMessage keyword: "errorMessage": "replicaCount must be between 1 and 100" (requires additional library).

Validation workflow: helm lint automatically validates against schema. helm template --validate (v3.3+) performs server-side validation. Schema validation occurs BEFORE template rendering - invalid values fail fast.

Best practices:

  • Keep schema synchronized with values.yaml defaults
  • Use pattern properties for regex validation
  • Define required fields clearly
  • Test schema with helm lint and invalid test values
  • Document schema in README for chart users
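As a quick check (chart path illustrative), an out-of-range value should now fail before any template is rendered:

  # replicaCount below the schema minimum fails fast
  helm lint ./mychart --set replicaCount=0
  # => values don't meet the specifications of the schema(s)  (message abridged)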
Where should values.schema.json be placed in a Helm chart?
19. What is Helm OCI Registry support and how do you use it?

Helm v3 added support for storing charts in OCI (Open Container Initiative) registries, treating Helm charts as container artifacts alongside container images.

Enabling OCI support: OCI was experimental in early v3 and became stable in v3.8. Configure registry authentication:

  export HELM_EXPERIMENTAL_OCI=1   # v3.7 and earlier only
  helm registry login myregistry.azurecr.io --username $REGISTRY_USERNAME --password $REGISTRY_PASSWORD

Saving a chart to an OCI registry:

  helm package ./mychart   # creates mychart-0.1.0.tgz
  helm push mychart-0.1.0.tgz oci://myregistry.azurecr.io/helm

Installing a chart from an OCI registry:

  helm install myrelease oci://myregistry.azurecr.io/helm/mychart --version 0.1.0

Or as a dependency in Chart.yaml:

  dependencies:
    - name: mychart
      version: 0.1.0
      repository: "oci://myregistry.azurecr.io/helm"

OCI vs HTTP repository comparison:

  • Authentication: OCI uses standard container registry auth (docker login)
  • Storage: OCI charts are stored alongside images in the same registry
  • Versioning: OCI uses digest-based verification (sha256)
  • Layer caching: OCI supports layer caching for chart dependencies
  • Registry support: all major registries (ACR, ECR, GCR, Harbor, Docker Hub) support OCI artifacts

Listing and pulling charts:

  helm pull oci://myregistry.azurecr.io/helm/mychart --version 0.1.0
  helm show chart oci://myregistry.azurecr.io/helm/mychart
  helm show values oci://myregistry.azurecr.io/helm/mychart

Best practices: Use OCI for organizations already using container registries. Keep chart versions unique (semver). Use registry lifecycle policies to clean old chart versions. For public charts, HTTP repositories (ArtifactHub) remain popular.

What command pushes a Helm chart to an OCI registry?
20. Explain Helm security best practices: RBAC, pod security, and secrets management.

Helm security requires attention at multiple levels: chart content, deployment permissions, and runtime security.

RBAC for Helm v3 (no Tiller): Each Helm operation uses the client's credentials. Create service accounts with minimal permissions:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: helm-deployer
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: helm-deployer
  rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["get", "list", "create", "update", "patch", "delete"]
    - apiGroups: [""]
      resources: ["services", "configmaps", "secrets"]
      verbs: ["get", "list", "create", "update", "delete"]

Pod Security Standards in charts:

  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    capabilities:
      drop: ["ALL"]
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false

Secrets management patterns:

  • NEVER store secrets in values.yaml. Use external secrets managers: helm secrets plugin (sops), SealedSecrets, External Secrets Operator, or HashiCorp Vault via vault-helm
  • Encrypted secrets with helm-secrets + sops: helm secrets upgrade myapp ./chart -f secrets.yaml
  • Use Kubernetes native Secrets with RBAC restrictions

Chart security scanning:

  helm lint                                   # Basic validation
  helm template . | kubesec scan /dev/stdin   # Kubernetes security checks
  checkov -d ./mychart                        # Infrastructure-as-code scanning
  trivy image --severity HIGH,CRITICAL myapp:latest

Additional best practices:

  • Use --dry-run and --dry-run=server before actual deployment
  • Implement admission control (OPA/Gatekeeper) to enforce helm policies
  • Sign charts with provenance files: helm package --sign --key mykey
  • Scan base images in CI/CD
  • Regularly update Helm and Kubernetes versions
  • Use network policies to limit pod communication
Why are secrets safer in external secret management rather than values.yaml?
21. What is Helmfile and how does it extend Helm for managing multiple releases?

Helmfile is a declarative spec for deploying multiple Helm charts together, improving Helm for complex microservices environments. It acts as a Helm orchestration layer.

Helmfile.yaml example (note that Helmfile hooks are declared as a list of events):

  repositories:
    - name: bitnami
      url: https://charts.bitnami.com/bitnami
    - name: stable
      url: https://kubernetes-charts.storage.googleapis.com

  environments:
    dev:
      values:
        - values/dev.yaml
    prod:
      values:
        - values/prod.yaml

  releases:
    - name: postgresql
      namespace: database
      chart: bitnami/postgresql
      version: 12.1.0
      values:
        - postgresql-values.yaml
        - postgresql-{{ .Environment.Name }}.yaml
      secrets:
        - secrets/postgresql-{{ .Environment.Name }}.yaml
    - name: redis
      namespace: cache
      chart: bitnami/redis
      version: 17.0.0
      needs:
        - database/postgresql   # wait for dependency
    - name: myapp
      namespace: default
      chart: ./myapp-chart
      values:
        - myapp-values.yaml
    - name: ingress-nginx
      namespace: ingress
      chart: ingress-nginx/ingress-nginx
      version: 4.4.2
      installed: {{ .Environment.Name | eq "prod" }}
      hooks:
        - events: ["presync"]
          command: "sh"
          args: ["-c", "kubectl create namespace ingress --dry-run=client -o yaml | kubectl apply -f -"]

Key Helmfile commands:

  helmfile diff      # Show changes before applying
  helmfile apply     # Apply changes (helm upgrade --install)
  helmfile sync      # Sync releases to desired state
  helmfile status    # Show release statuses
  helmfile destroy   # Delete all releases
  helmfile template  # Render templates without applying
  helmfile list      # List all managed releases

Advanced features:

  • Templating: Helmfile supports Go templates in the spec itself
  • Hooks for pre/post operations in any shell
  • Needs for release ordering and dependencies
  • Secrets support via helm-secrets plugin integration
  • Layered values with environment-specific overrides
  • Selectors to filter releases: helmfile apply --selector name=postgresql

Use cases: Platform teams managing shared infrastructure, CI/CD pipelines deploying microservices, environment promotion workflows, disaster recovery (state as code).

What does the 'helmfile sync' command do?
22. How does ArgoCD integrate with Helm for GitOps deployment patterns?

ArgoCD supports Helm natively as a configuration management tool, enabling GitOps workflows where cluster state is declared in Git and automatically synchronized.

ArgoCD Helm configuration in an Application spec (Application manifests are not templated, so per-environment value file names must be written out literally):

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: myapp
    namespace: argocd
  spec:
    source:
      repoURL: https://github.com/myorg/myrepo
      targetRevision: main
      path: helm/myapp
      helm:
        valueFiles:
          - values.yaml
          - values-production.yaml
        parameters:
          - name: image.tag
            value: v1.2.3
          - name: replicaCount
            value: "3"
        releaseName: myapp-helm-release
        values: |
          ingress:
            enabled: true
            hosts:
              - myapp.example.com
    destination:
      server: https://kubernetes.default.svc
      namespace: production
    syncPolicy:
      automated:
        prune: true
        selfHeal: true
        allowEmpty: false
      syncOptions:
        - Validate=true
        - CreateNamespace=true
        - PrunePropagationPolicy=foreground

Helm value handling in ArgoCD:

  • valueFiles - List of value files to merge (relative to chart path)
  • parameters - Individual parameter overrides (equivalent to --set)
  • values - Inline YAML values (highest precedence)
  • fileParameters - Read parameters from file

Multiple sources configuration (chart in one repo, environment values in another):

  spec:
    sources:
      - repoURL: https://github.com/myorg/myrepo
        path: base/helm
        helm:
          valueFiles:
            - $values/environments/prod/values.yaml
      - repoURL: https://github.com/myorg/env-configs
        targetRevision: main
        ref: values

ArgoCD Helm specific features:

  • Auto-helm version detection (v2 or v3)
  • Helm hooks support with sync-wave annotations
  • Values from ConfigMap plugins for external sources
  • Parameter overrides without modifying Git

Best practices: Store environment-specific values in separate directories, use ApplicationSets for multi-environment deployment, enable selfHeal for configuration drift correction, and use syncPolicy automated with prune to remove orphaned resources.

What feature of ArgoCD automatically corrects configuration drift from Git?
23. How do you create custom Helm plugins and when should you use them?

Helm plugins extend the Helm CLI with custom commands. They are written as scripts or binaries (bash, Python, Go) and live in the plugins directory ($HELM_PLUGINS, e.g. ~/.local/share/helm/plugins on Linux; the helm home command was removed in v3).

Basic plugin structure:

  ~/.local/share/helm/plugins/myplugin/
    plugin.yaml    # Plugin metadata
    myplugin.sh    # Executable script
    README.md      # Documentation
    LICENSE

plugin.yaml example:

  name: "myplugin"
  version: "0.1.0"
  usage: "Run custom pre-deployment validation"
  description: |-
    This plugin validates Helm charts against custom rules before deployment.
  command: "$HELM_PLUGIN_DIR/validate.sh"
  ignoreFlags: false
  useTunnel: false
  hooks:
    install: "echo Installing myplugin"
    update: "echo Updating myplugin"

Plugin script example (validate.sh):

  #!/bin/bash
  set -e

  CHART_PATH=$1
  NAMESPACE=$2

  echo "Running custom validations..."

  # Check for disallowed image registries
  if grep -r "image:.*docker.io" "$CHART_PATH/templates/"; then
    echo "ERROR: Docker Hub images not allowed in production"
    exit 1
  fi

  # Validate all resources have resource limits
  if ! grep -r "resources:" "$CHART_PATH/templates/"; then
    echo "ERROR: Missing resource limits"
    exit 1
  fi

  echo "All validations passed"
  exit 0

Installing and using plugins:

  helm plugin install https://github.com/myorg/helm-myplugin
  helm myplugin validate ./mychart production
  helm plugin list
  helm plugin update myplugin
  helm plugin uninstall myplugin

Popular community plugins:

  • helm-diff - Show diff between releases
  • helm-secrets - Manage encrypted secrets
  • helm-unittest - Unit testing for charts
  • helm-github - Deploy from GitHub releases
  • helm-schema-gen - Generate JSON Schema from values.yaml

When to create plugins: Custom validation rules, integration with internal tooling, complex multi-step workflows, custom templating engines, generating documentation, or auditing deployments.

Which file defines a Helm plugin's metadata and entry point?
24. What are the best practices for structuring large Helm charts for microservices?

Large microservices deployments require careful chart organization to maintain sanity. Here are proven patterns:

1. Umbrella chart pattern (parent with subcharts):

  myapp/
    Chart.yaml               # Dependency declarations
    values.yaml              # Global values
    charts/
      service-a/
        Chart.yaml
        values.yaml
        templates/
      service-b/
        Chart.yaml
        values.yaml
        templates/
      common/                # Library chart
        Chart.yaml
        templates/_helpers.tpl

2. Shared values structure:

  # Global values propagate to all subcharts
  global:
    imageRegistry: myregistry.com
    imagePullSecrets: [regcred]
    monitoring:
      enabled: true
    tracing:
      enabled: true

  service-a:
    replicaCount: 2
    resources: {...}
  service-b:
    replicaCount: 3
    resources: {...}

3. Template organization patterns:

  templates/
    _helpers.tpl               # Global helpers
    _deployment.tpl            # Reusable deployment template
    _service.tpl               # Reusable service template
    _configmap.tpl             # Reusable configmap template
    service-a-deployment.yaml  # Service-specific (uses the shared templates)
    service-a-service.yaml
    service-b-deployment.yaml
    service-b-service.yaml

4. Values separation by environment:

  values/
    common.yaml    # Shared across all
    dev.yaml
    staging.yaml
    prod.yaml
    region/
      us-east.yaml
      eu-west.yaml

5. Naming conventions for consistency:

  # _helpers.tpl
  {{- define "myapp.name" -}}
  {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
  {{- end }}

  {{- define "myapp.labels" -}}
  helm.sh/chart: {{ include "myapp.chart" . }}
  {{ include "myapp.selectorLabels" . }}
  app.kubernetes.io/managed-by: {{ .Release.Service }}
  {{- end }}

6. Configuration patterns:

  • ConfigMaps for non-sensitive config
  • Secrets for sensitive data (never in values.yaml)
  • External configuration via Helm hooks or init containers

7. Testing strategy:

  templates/tests/
    test-connection.yaml
    test-database.yaml
    test-service-mesh.yaml

8. Documentation requirements:

  • README.md with a values table
  • values.schema.json for validation
  • An examples directory with sample configurations

What pattern should be used for reusable template blocks across microservices?
25. How do you implement zero-downtime deployments with Helm?

Zero-downtime deployments with Helm require combining Kubernetes features with Helm-specific strategies.

1. RollingUpdate strategy in deployment:

  spec:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 0   # Critical for zero-downtime
    minReadySeconds: 10
    revisionHistoryLimit: 10

maxUnavailable: 0 ensures old pods keep running until new pods are ready.

2. Readiness and liveness probes:

  readinessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  livenessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10

3. PodDisruptionBudget for voluntary disruptions:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: {{ include "myapp.fullname" . }}
  spec:
    minAvailable: 2
    selector:
      matchLabels:
        {{- include "myapp.selectorLabels" . | nindent 6 }}

4. Helm upgrade flags for safety:

  helm upgrade --install myapp ./chart \
    --wait \
    --timeout 5m \
    --atomic \
    --cleanup-on-fail

  • --wait: wait for all resources to be ready
  • --atomic: roll back on failure (v3 feature)
  • --cleanup-on-fail: remove resources created during a failed upgrade

5. Pre-stop hooks for graceful shutdown:

  spec:
    containers:
      - name: app
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 15 && nginx -s quit"]

6. Database migration strategies:

  • Pre-upgrade hooks for migrations before the new version starts
  • Backward-compatible schema changes only (add columns, don't drop)
  • Blue-green deployment pattern for major schema changes

7. Progressive delivery with Flagger/Argo Rollouts:

  apiVersion: argoproj.io/v1alpha1
  kind: Rollout
  metadata:
    name: myapp
  spec:
    strategy:
      canary:
        steps:
          - setWeight: 25
          - pause: {duration: 1m}
          - setWeight: 50
          - pause: {duration: 1m}
          - setWeight: 100

8. Monitoring during deployment:

  kubectl rollout status deployment/myapp --watch       # Monitor during rollout
  helm ls --all-namespaces | grep -E "failed|pending"   # Check for errors

Which Helm upgrade flag ensures rollback on deployment failure?
26. How do you migrate from Helm v2 to Helm v3?

Migrating from Helm v2 to v3 requires careful planning due to architectural changes (removal of Tiller).

Prerequisites: Helm v3 client installed, kubectl access, backup important releases.

Step 1: Install Helm v3 alongside v2

  # Download the Helm v3 binary
  wget https://get.helm.sh/helm-v3.12.0-linux-amd64.tar.gz
  tar -zxvf helm-v3.12.0-linux-amd64.tar.gz
  sudo mv linux-amd64/helm /usr/local/bin/helm3
  helm3 version

Step 2: Install helm-2to3 plugin helm3 plugin install https://github.com/helm/helm-2to3

Step 3: Migrate configuration

  helm3 2to3 move config   # migrates Helm v2 config (repositories, plugins) to v3

Step 4: Migrate releases (dry-run first)

  # List v2 releases with the v2 client
  helm list
  # Dry-run the migration for one release
  helm3 2to3 convert my-release --dry-run
  # Convert the release (creates the v3 release data; v2 data is kept
  # unless --delete-v2-releases is passed)
  helm3 2to3 convert my-release

Step 5: Clean up Tiller (after all releases are migrated)

  # Remove the Tiller deployment and its RBAC objects
  kubectl delete deployment tiller-deploy -n kube-system
  kubectl delete service tiller-deploy -n kube-system
  kubectl delete clusterrolebinding tiller
  kubectl delete clusterrole tiller

Step 6: Chart compatibility updates

  # Chart.yaml
  apiVersion: v2          # changed from v1
  dependencies:           # moved here from requirements.yaml
    - name: redis
      version: 16.x.x
      repository: https://charts.bitnami.com/bitnami

Then remove requirements.yaml; most chart templates need no structural changes.

Common migration issues:

  • CRDs: v3 installs crds/ before templates; ensure CRDs not in templates/
  • Hooks: Job restartPolicy changed (OnFailure recommended)
  • Values precedence: v3 merges differently; test with --dry-run
  • Release storage: v3 uses Secrets; gets converted automatically

Validation after migration:

  helm3 list --all-namespaces
  helm3 history my-release
  helm3 get values my-release --all
  helm3 test my-release

Rollback plan: Keep the Helm v2 client and Tiller until all releases are verified. By default, helm3 2to3 convert leaves the v2 release data in place (unless you pass --delete-v2-releases), so a problematic release can still be operated with Helm v2 until the migration is confirmed.

Which plugin enables migration from Helm v2 to v3?
27. What are Helm release lifecycle policies and how do you manage release history?

Helm v3 stores release history as Secrets, each revision containing complete state. Managing this history is important for etcd performance and compliance.

Viewing release history:

  helm history my-release
  helm history my-release --max 20
  helm list --all-namespaces --date   # all releases, sorted by date
  helm list --deployed                # currently deployed releases only
  helm list --failed                  # failed releases
  helm list --pending                 # pending releases

Limiting history with --history-max:

  # During upgrade (the flag defaults to 10)
  helm upgrade my-release ./chart --history-max 10
  # Or set a default for subsequent commands via environment variable
  export HELM_MAX_HISTORY=10
  helm upgrade my-release ./chart

Cleaning up old revisions manually:

  # Find release secrets (one per revision)
  kubectl get secrets -n my-namespace | grep "sh.helm.release.v1.my-release"
  # Delete the oldest secrets, keeping the last 10
  # (note: name-based sorting is lexical, so verify revision order before deleting)
  kubectl get secrets -n my-namespace -o name | grep "sh.helm.release.v1.my-release" | \
    head -n -10 | xargs kubectl delete -n my-namespace

Setting a global history limit: export HELM_MAX_HISTORY=10 in the CI/CD environment applies the limit to every upgrade; per-command, use --history-max. Note that release history is a CLI concern - it cannot be controlled from chart values.

Release cleanup automation with a CronJob (a sketch - the image must ship both helm and kubectl, e.g. alpine/k8s, and the pod needs a ServiceAccount allowed to list releases and delete secrets):

  apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: helm-cleanup
  spec:
    schedule: "0 0 * * 0"   # weekly
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
              - name: cleanup
                image: alpine/k8s:1.27.5
                command:
                  - sh
                  - -c
                  - |
                    # Prune old release secrets, keeping the last 10 revisions per release
                    for ns in $(kubectl get ns -o name | cut -d/ -f2); do
                      for release in $(helm list -n "$ns" -q); do
                        kubectl get secrets -n "$ns" -o name \
                          | grep "sh\.helm\.release\.v1\.${release}\.v" \
                          | head -n -10 \
                          | xargs -r kubectl delete -n "$ns"
                      done
                    done

Compliance and auditing:

  • Keep a minimum of 5 revisions for rollback capability
  • Retain failed release history for debugging
  • Implement retention policies per environment: dev 10 revisions, staging 20, production 30
  • Export release state to an external audit system periodically

A note on --history-max 0: this does not disable history - it removes the limit entirely (the default is 10). Unlimited history is not recommended, since release Secrets accumulate in etcd; conversely, keep at least a few revisions or rollback becomes impossible.

What flag limits Helm release history to prevent unlimited secret accumulation?
28. How do you use Helm with service meshes (Istio, Linkerd) for canary deployments?

Helm integrates with service meshes to enable sophisticated traffic management patterns beyond basic Kubernetes rollout strategies.

Helm chart with Istio VirtualService:

  # templates/virtualservice.yaml
  {{- if .Values.istio.enabled }}
  apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    name: {{ include "myapp.fullname" . }}
  spec:
    hosts:
      - {{ .Values.istio.host }}
    gateways:
      - {{ .Values.istio.gateway }}
    http:
      - match:
          - headers:
              version:
                exact: v2
        route:
          - destination:
              host: {{ include "myapp.fullname" . }}
              subset: v2
            weight: {{ .Values.canary.weight }}
      - route:
          - destination:
              host: {{ include "myapp.fullname" . }}
              subset: v1
            weight: 100
  ---
  apiVersion: networking.istio.io/v1beta1
  kind: DestinationRule
  metadata:
    name: {{ include "myapp.fullname" . }}
  spec:
    host: {{ include "myapp.fullname" . }}
    subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2
  {{- end }}

Canary deployment with weights:

  # values.yaml
  canary:
    enabled: true
    weight: 10         # 10% of traffic to v2
  istio:
    enabled: true
    host: myapp.example.com

  # CI/CD progressive weight increase
  helm upgrade myapp ./chart --set canary.weight=25
  helm upgrade myapp ./chart --set canary.weight=50
  helm upgrade myapp ./chart --set canary.weight=100

Mesh injection (Istio and Linkerd):

  # Enable mesh injection in a namespace
  kubectl label namespace myapp istio-injection=enabled        # Istio
  kubectl annotate namespace myapp linkerd.io/inject=enabled   # Linkerd

  # In Helm chart templates, add pod annotations
  annotations:
    {{- if .Values.linkerd.enabled }}
    linkerd.io/inject: "enabled"
    {{- end }}

Traffic splitting with Flagger + Helm:

  apiVersion: flagger.app/v1beta1
  kind: Canary
  metadata:
    name: myapp
  spec:
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: myapp
    progressDeadlineSeconds: 60
    canaryAnalysis:
      interval: 30s
      threshold: 10
      stepWeight: 10
      metrics:
        - name: istio_requests_total
          threshold: 99
      webhooks:
        - name: load-test
          url: http://flagger-loadtester.test/
          timeout: 5s
          metadata:
            cmd: "hey -z 1m -q 10 -c 2 http://myapp.test"

Helm chart structure for mesh deployments:

  templates/
    deployment.yaml           # With version labels
    service.yaml              # Standard service
    virtualservice.yaml       # Istio traffic routing
    destinationrule.yaml      # Subset definitions
    authorizationpolicy.yaml  # Security policies

Best practices: Separate service mesh configuration into optional components (enable with conditionals). Use Helm hooks for mesh injection readiness checks. Monitor with Kiali/Jaeger for visualization.

Which Istio resource defines traffic routing percentages between service versions?
29. How do you implement Helm chart testing with Terratest and other tools?

Chart testing ensures reliability before production deployment. Multiple tools provide different testing approaches.

1. Helm unittest (native Helm testing):

  # tests/deployment_test.yaml
  suite: test deployment
  templates:
    - deployment.yaml
  tests:
    - it: should create deployment with proper labels
      set:
        replicaCount: 3
        image:
          tag: latest
      asserts:
        - isKind:
            of: Deployment
        - hasDocuments:
            count: 1
        - equal:
            path: metadata.labels.app
            value: myapp
        - equal:
            path: spec.replicas
            value: 3
        - matchRegex:
            path: spec.template.spec.containers[0].image
            pattern: "myapp:.*"

Run: helm unittest ./mychart

2. Terratest (Go-based real cluster testing):

  package test

  import (
  	"fmt"
  	"net/http"
  	"testing"

  	"github.com/gruntwork-io/terratest/modules/helm"
  	"github.com/gruntwork-io/terratest/modules/k8s"
  	"github.com/stretchr/testify/assert"
  )

  func TestHelmChart(t *testing.T) {
  	namespaceName := "test-namespace"
  	kubectlOptions := k8s.NewKubectlOptions("", "", namespaceName)
  	helmOptions := &helm.Options{
  		KubectlOptions: kubectlOptions,
  		SetValues: map[string]string{
  			"replicaCount": "3",
  			"image.tag":    "test",
  		},
  	}
  	releaseName := "test-myapp"

  	// Install the chart and clean up afterwards
  	helm.Install(t, helmOptions, "./mychart", releaseName)
  	defer helm.Delete(t, helmOptions, releaseName, true)

  	// Verify the deployment's replica count
  	deployment := k8s.GetDeployment(t, kubectlOptions, releaseName)
  	assert.Equal(t, int32(3), *deployment.Spec.Replicas)

  	// Test connectivity via port-forward
  	pod := k8s.GetPod(t, kubectlOptions, releaseName) // assumes the pod is named after the release
  	tunnel := k8s.NewTunnel(kubectlOptions, k8s.ResourceTypePod, pod.Name, 8080, 80)
  	defer tunnel.Close()
  	tunnel.ForwardPort(t)

  	resp, err := http.Get(fmt.Sprintf("http://%s/health", tunnel.Endpoint()))
  	assert.NoError(t, err)
  	assert.Equal(t, 200, resp.StatusCode)
  }

3. Chart Testing (ct) tool:

  # ct.yaml config
  target-branch: main
  validate-maintainers: false
  chart-dirs:
    - charts
  helm-extra-args: --timeout 300s
  check-version-increment: true

  # Run tests
  ct lint --config ct.yaml
  ct install --config ct.yaml --namespace ct-test

4. Goss integration for validation:

  # goss.yaml
  http:
    http://myapp-service:8080/health:
      status: 200
      body: ["OK"]
      timeout: 1000
  command:
    kubectl get pods -l app=myapp:
      exit-status: 0
      stdout:
        - Running

5. CI/CD testing pipeline (GitHub Actions):

  name: Test Helm Charts
  on: [pull_request]
  jobs:
    test:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3
        - uses: helm/kind-action@v1
        - name: Run chart tests
          run: |
            helm unittest ./charts/*/
            ct install --config ct.yaml

Which tool provides native Helm template testing without a Kubernetes cluster?
30. What are the common Helm anti-patterns and how to avoid them?

Recognizing Helm anti-patterns helps maintain production-grade charts.

1. Anti-pattern: Hardcoding values in templates

  # BAD
  image: nginx:1.21
  replicas: 3

  # GOOD
  image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
  replicas: {{ .Values.replicaCount }}

2. Anti-pattern: Storing secrets in values.yaml

  # BAD - values.yaml
  database_password: "SuperSecret123"

  # GOOD - reference an externally managed Secret
  env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: {{ .Values.dbSecretName }}
          key: password

3. Anti-pattern: Ignoring .helmignore

  # BAD - large files, binaries, and test data packaged into the chart

  # GOOD - .helmignore should exclude:
  .git/
  .gitignore
  *.tgz
  *.swp
  *~
  tests/
  venv/
  __pycache__/
  *.log

4. Anti-pattern: Overly complex conditionals

  # BAD - deeply nested conditionals
  {{- if .Values.features.advanced }}
  {{- if .Values.features.advanced.monitoring }}
  {{- if .Values.features.advanced.monitoring.prometheus }}
  # only reachable here
  ...

  # GOOD - encapsulate the logic in a helper
  {{- define "monitoring.enabled" -}}
  {{- and .Values.features.advanced .Values.features.advanced.monitoring .Values.features.advanced.monitoring.prometheus }}
  {{- end }}

  # include returns a string, so compare explicitly
  {{- if eq (include "monitoring.enabled" .) "true" }}

5. Anti-pattern: Not handling required values

  # BAD - renders an empty string when the value is missing
  host: {{ .Values.database.host }}

  # GOOD - fail fast with a clear message
  host: {{ required "database.host is required" .Values.database.host }}

6. Anti-pattern: Using latest tag for images

  # BAD
  image:
    tag: latest

  # GOOD
  image:
    tag: {{ .Values.image.tag | default "1.2.3" }}

7. Anti-pattern: Ignoring resource limits

  # GOOD - always include requests and limits
  resources:
    limits:
      cpu: {{ .Values.resources.limits.cpu | default "500m" }}
      memory: {{ .Values.resources.limits.memory | default "512Mi" }}
    requests:
      cpu: {{ .Values.resources.requests.cpu | default "250m" }}
      memory: {{ .Values.resources.requests.memory | default "256Mi" }}

8. Anti-pattern: No upgrade strategy defined

  # GOOD
  spec:
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 0

9. Anti-pattern: Chart name collisions. Avoid them by using naming helpers consistently: {{ include "myapp.fullname" . }}

10. Anti-pattern: Missing documentation. Ship a README.md documenting every configurable value, with examples.

Why is using "latest" as image tag an anti-pattern?
31. How do you optimize Helm chart performance for large-scale deployments?

Large-scale Helm usage requires optimization across chart design, rendering, and deployment strategies.

1. Template rendering optimization:

  # Use named templates for repeated logic
  {{- define "myapp.selectorLabels" -}}
  app.kubernetes.io/name: {{ .Chart.Name }}
  app.kubernetes.io/instance: {{ .Release.Name }}
  {{- end }}

  # Avoid deep conditional nesting; fail fast with early validation
  {{- required "A valid .Values.environment is required" .Values.environment }}

2. Reduce rendered manifest size:

  # Compose labels via helpers (nindent handles the newline + indentation)
  labels:
    {{- include "myapp.labels" . | nindent 4 }}

  # Avoid generating empty resources
  {{- if .Values.extraResources }}
  {{ .Values.extraResources | toYaml }}
  {{- end }}

3. Chart size optimization:

  # .helmignore patterns
  *.tar.gz
  *.zip
  *.log
  temp/
  tests/fixtures/
  **/testdata/

Strip comments and unused defaults from values.yaml in CI to keep packaged charts lean.

4. Parallel deployment strategies:

  # Deploy multiple releases in parallel (CI/CD)
  for release in frontend backend database; do
    helm upgrade --install "$release" "./charts/$release" --wait &
  done
  wait

5. Use library charts for common patterns:

  # Single library chart included in all microservices
  dependencies:
    - name: common-lib
      version: 1.2.0
      repository: file://../common-lib

6. Remote chart caching:

  # Cache dependencies locally
  helm dependency update
  # Use cached charts in CI
  helm dependency build --verify

7. Resource request tuning:
# Right-size resource requests based on monitoring data
resources:
  requests:
    cpu: {{ .Values.resources.requests.cpu | default "100m" }}
    memory: {{ .Values.resources.requests.memory | default "128Mi" }}
  limits:
    cpu: {{ .Values.resources.limits.cpu | default "500m" }}
    memory: {{ .Values.resources.limits.memory | default "512Mi" }}

8. Profile deployment performance:
# Time Helm operations
time helm upgrade --install myapp ./chart
# See individual API round-trips with debug output
helm upgrade --install myapp ./chart --debug
# Use --wait judiciously: it blocks until resources are ready and can slow CI
helm upgrade --install myapp ./chart --wait --timeout 5m

9. Split large charts:
- A single chart over ~10MB → split into micro-charts
- Separate data-plane and control-plane charts
- Use helmfile to manage the resulting set of charts together (see the sketch below)
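A minimal helmfile.yaml sketch for that last point (release names, namespaces, and paths are illustrative):

# helmfile.yaml
releases:
  - name: frontend
    namespace: web
    chart: ./charts/frontend
    values:
      - environments/prod/frontend.yaml
  - name: backend
    namespace: api
    chart: ./charts/backend
    needs:
      - web/frontend   # deploy backend only after frontend (namespace/release)

# Apply everything with one command:
helmfile apply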

10. Performance monitoring with Prometheus metrics:
# Track Helm operation duration as a metric, e.g.
helm_operation_duration_seconds{operation="install",status="success"}
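Helm itself does not export this metric; the CI job has to record it. A minimal sketch, assuming a Prometheus Pushgateway reachable at pushgateway:9091 (the metric name above is a self-chosen convention, not a Helm built-in):

#!/bin/sh
start=$(date +%s)
helm upgrade --install myapp ./chart --wait && status=success || status=failure
end=$(date +%s)
# Push the timing to the Pushgateway under job "helm_deploy"
cat <<EOF | curl --data-binary @- http://pushgateway:9091/metrics/job/helm_deploy
# TYPE helm_operation_duration_seconds gauge
helm_operation_duration_seconds{operation="install",status="$status"} $((end - start))
EOF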

What technique reduces repetitive template code in Helm charts?
32. How do you manage Helm RBAC permissions for different team roles?

Implementing least-privilege RBAC for Helm operations requires careful permission design across teams.

1. Role-based access by team:
# Developer role (can deploy to the dev namespace)
# (the legacy "extensions" API group was removed in Kubernetes 1.16)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: helm-developer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["services", "configmaps", "secrets", "persistentvolumeclaims"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: helm-developer-binding
subjects:
  - kind: Group
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: helm-developer
  apiGroup: rbac.authorization.k8s.io

2. Platform team (full cluster access):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: helm-platform-engineer
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - nonResourceURLs: ["/metrics", "/healthz"]
    verbs: ["get"]

3. Read-only auditor:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: helm-auditor
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["helm.cattle.io"]
    resources: ["helmchartconfigs"]
    verbs: ["get", "list"]

4. Service account for CI/CD:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm-cicd
  namespace: pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-cicd-binding
  namespace: pipelines  # RoleBinding is namespaced; scope the grant here
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: helm-cicd-deployer
subjects:
  - kind: ServiceAccount
    name: helm-cicd
    namespace: pipelines
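The helm-cicd-deployer ClusterRole referenced above is not shown; a minimal sketch might grant only the resource kinds the pipeline's charts actually create:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: helm-cicd-deployer
rules:
  - apiGroups: ["", "apps", "batch", "networking.k8s.io"]
    resources: ["deployments", "services", "configmaps", "secrets", "jobs", "ingresses"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

Because it is bound through a RoleBinding, these permissions apply only inside the pipelines namespace.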

5. Namespace isolation with Helm:
# Deploy with namespace-scoped permissions
kubectl create namespace team-a
kubectl create rolebinding helm-deployer-team-a \
  --clusterrole=helm-deployer \
  --serviceaccount=team-a:default \
  --namespace=team-a

6. Fine-grained resource permissions:
# Allow specific operations only
rules:
  - apiGroups: ["apps"]
    resources: ["deployments/scale"]
    verbs: ["get", "patch"]  # allow scaling but not full deployment updates

7. Audit RBAC usage:
# Check effective permissions
kubectl auth can-i create deployments --as=system:serviceaccount:dev:default
kubectl auth can-i get secrets --as=jane.doe
# Audit existing RBAC
kubectl get clusterrole,clusterrolebinding,role,rolebinding -A

Which role typically has permissions to deploy to a single namespace but not cluster-wide?
33. How do you use Helm with Terraform for infrastructure as code integration?

Combining Helm with Terraform enables infrastructure and application deployment in the same IaC workflow.

Terraform Helm provider example:
# providers.tf
terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.9"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.20"
    }
  }
}

# Configure the Kubernetes provider
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# Configure the Helm provider
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

Deploy a Helm chart with Terraform:
# helm_release resource
resource "helm_release" "nginx" {
  name             = "nginx-ingress"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  version          = "4.7.1"
  namespace        = "ingress-nginx"
  create_namespace = true

  values = [
    <<-EOT
    controller:
      replicaCount: 2
      service:
        type: LoadBalancer
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
    EOT
  ]

  set {
    name  = "controller.metrics.enabled"
    value = "true"
  }

  # set_string was removed in provider 2.x; use set with type = "string"
  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-type"
    value = "nlb"
    type  = "string"
  }

  depends_on = [kubernetes_namespace.ingress]
}

# Deploy a local chart
resource "helm_release" "myapp" {
  name      = "myapp"
  chart     = "${path.module}/charts/myapp"
  namespace = "production"
  values    = [file("${path.module}/environments/prod.yaml")]
  timeout   = 300

  lifecycle {
    ignore_changes = [set] # ignore ad-hoc set changes when the values file is authoritative
  }
}

Deploy from an OCI registry:
resource "helm_release" "oci_app" {
  name       = "oci-app"
  repository = "oci://myregistry.azurecr.io/helm"
  chart      = "myapp"
  version    = "1.2.3"
  namespace  = "default"
  verify     = true # verify the provenance signature (chart must be signed)
}

Multiple environments with Terraform workspaces:
# main.tf
locals {
  environment = terraform.workspace
  values_file = "${path.module}/environments/${local.environment}.yaml"
}

resource "helm_release" "myapp" {
  name      = "myapp-${local.environment}"
  chart     = "./myapp"
  namespace = local.environment
  values    = [file(local.values_file)]
}

# Usage:
#   terraform workspace new dev
#   terraform workspace new prod
#   terraform workspace select prod && terraform apply

Inspecting releases from Terraform: the Helm provider does not expose a data source for querying an existing release, so read release state with the Helm CLI (helm list, helm get values). To render a chart without installing it, provider 2.x offers the helm_template data source (a sketch, assuming provider 2.x):

data "helm_template" "nginx" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = "4.7.1"
}

output "rendered_manifests" {
  value = data.helm_template.nginx.manifests # map of rendered manifest documents
}

Best practices: use Terraform state locking with a remote backend, manage secrets with the Vault provider, order chart installs with depends_on, and detect drift with terraform plan.
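For the first point, a backend sketch assuming AWS (bucket and table names are illustrative):

# backend.tf
terraform {
  backend "s3" {
    bucket         = "my-tf-state"
    key            = "helm/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-locks" # DynamoDB table enables state locking
    encrypt        = true
  }
}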

Which Terraform resource manages Helm chart deployments?
34. What are Helm provenance files and how do you sign charts?

Provenance files provide cryptographic verification that Helm charts come from trusted sources and haven't been tampered with.

Generating a GPG key for signing:
# Generate a GPG key (select RSA and RSA, 4096 bits, no expiry)
gpg --full-generate-key
# Export the public key so consumers can verify
gpg --export --armor "Helm Maintainer" > helm-public.key
# Helm's signer reads the legacy GnuPG v1 secret keyring; with modern gpg2,
# export it first:
gpg --export-secret-keys > ~/.gnupg/secring.gpg
# The key name and passphrase are passed at package time via the
# --key, --keyring, and --passphrase-file flags (there are no HELM_KEY_* variables)

Signing a chart during packaging:
# Package and sign the chart (signing uses the secret keyring)
helm package --sign --key "Helm Maintainer" --keyring ~/.gnupg/secring.gpg ./mychart
# Results:
#   mychart-1.2.3.tgz
#   mychart-1.2.3.tgz.prov   # provenance file
# Verify provenance (verification uses the public keyring)
helm verify mychart-1.2.3.tgz
# Verify with a specific keyring
helm verify --keyring ~/.gnupg/pubring.gpg mychart-1.2.3.tgz

Provenance file contents (.prov): the Chart.yaml metadata plus a hash of the packaged archive, wrapped in a PGP signature:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

apiVersion: v1
description: A Helm chart for Kubernetes
name: mychart
version: 1.2.3
...
files:
  mychart-1.2.3.tgz: sha256:abc123...
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAABCgAGBQJfghkLAAoJEL...
-----END PGP SIGNATURE-----

Repository with signed charts:
# Regenerate the index, including .prov files, when publishing
helm repo index --url https://myrepo.github.io/charts --merge index.yaml .
# Client verification on install; note there is no global "always verify"
# setting, so pass --verify on each command
helm install myapp myrepo/mychart --verify

CI/CD signing automation:
# GitHub Actions signing steps
- name: Import GPG key
  run: |
    echo "${{ secrets.GPG_PRIVATE_KEY }}" | gpg --import
    # Helm's signer needs the legacy GnuPG v1 secret keyring format
    gpg --export-secret-keys > ~/.gnupg/secring.gpg
    echo "${{ secrets.GPG_PASSPHRASE }}" > passphrase.txt
- name: Package and sign chart
  run: |
    helm package --sign --key "${{ secrets.GPG_KEY_NAME }}" \
      --keyring ~/.gnupg/secring.gpg \
      --passphrase-file passphrase.txt \
      --destination charts/ \
      ./charts/mychart
- name: Verify signature
  run: |
    helm verify charts/mychart-*.tgz

Trust management:
# Import trusted maintainer keys into your keyring
gpg --import trusted-maintainer.key
helm repo add myrepo https://myrepo.github.io/charts
# Verify charts against a specific keyring when pulling
helm pull --verify --keyring ~/.gnupg/pubring.gpg myrepo/mychart
# Verify that chart dependencies are signed
helm dependency update --verify

Limitations: provenance verifies the chart archive itself, not the contents of external charts pulled in as dependencies. For charts stored in OCI registries, Cosign signatures are a common alternative.
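A sketch of the Cosign route, assuming Helm 3.8+ OCI support and the sigstore cosign CLI (registry name is illustrative):

# Push the packaged chart to an OCI registry
helm push mychart-1.2.3.tgz oci://myregistry.azurecr.io/helm
# Sign the OCI artifact (Helm tags charts with their version)
cosign sign --key cosign.key myregistry.azurecr.io/helm/mychart:1.2.3
# Consumers verify with the public key
cosign verify --key cosign.pub myregistry.azurecr.io/helm/mychart:1.2.3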

What file extension indicates a Helm provenance file?
35. How do you implement custom validation admission webhooks with Helm?

Admission webhooks enforce custom policies on Kubernetes resources. Helm can deploy them but requires special handling for certificate management.

ValidatingWebhookConfiguration with Helm:
# templates/validatingwebhook.yaml
{{- if .Values.webhook.enabled }}
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: {{ include "myapp.fullname" . }}-webhook
  annotations:
    cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "myapp.fullname" . }}-webhook-tls
webhooks:
  - name: validate.myapp.example.com
    clientConfig:
      service:
        name: {{ include "myapp.fullname" . }}-webhook
        namespace: {{ .Release.Namespace }}
        path: /validate
      caBundle: {{ .Values.webhook.caBundle }} # or injected by cert-manager
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["apps"]
        apiVersions: ["v1"]
        resources: ["deployments"]
    failurePolicy: Fail
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5
{{- end }}

Webhook service and deployment:
# templates/webhook-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "myapp.fullname" . }}-webhook
spec:
  selector:
    app: {{ include "myapp.name" . }}
    component: webhook
  ports:
    - port: 443
      targetPort: 8443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}-webhook
spec:
  replicas: 2
  selector:            # required field; must match the pod template labels
    matchLabels:
      app: {{ include "myapp.name" . }}
      component: webhook
  template:
    metadata:
      labels:
        app: {{ include "myapp.name" . }}
        component: webhook
    spec:
      containers:
        - name: webhook
          image: {{ .Values.webhook.image.repository }}:{{ .Values.webhook.image.tag }}
          ports:
            - containerPort: 8443
          volumeMounts:
            - name: webhook-tls
              mountPath: /certs
      volumes:
        - name: webhook-tls
          secret:
            secretName: {{ include "myapp.fullname" . }}-webhook-tls

cert-manager integration for TLS:
# templates/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{ include "myapp.fullname" . }}-webhook-tls
spec:
  secretName: {{ include "myapp.fullname" . }}-webhook-tls
  dnsNames:
    - {{ include "myapp.fullname" . }}-webhook.{{ .Release.Namespace }}.svc
    - {{ include "myapp.fullname" . }}-webhook.{{ .Release.Namespace }}.svc.cluster.local
  issuerRef:
    name: {{ .Values.certManager.issuer.name }}
    kind: ClusterIssuer

Helm hook for certificate readiness:
# templates/ensure-cert.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myapp.fullname" . }}-cert-check
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: check
          image: bitnami/kubectl:latest
          command:
            - sh
            - -c
            - |
              until kubectl get secret {{ include "myapp.fullname" . }}-webhook-tls; do
                sleep 2
              done

Testing the webhook locally:
# Generate a CA for testing
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=webhook" -days 10000 -out ca.crt
# Configure the webhook's caBundle from ca.crt
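To finish the local setup, the webhook server also needs a serving certificate signed by that CA. A sketch, assuming the service is named myapp-webhook in the default namespace (adjust to match the chart) and bash with OpenSSL 1.1.1+ for -addext:

# Issue a serving cert with a SAN (the API server requires SANs, not just CN)
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -subj "/CN=myapp-webhook.default.svc" \
  -addext "subjectAltName=DNS:myapp-webhook.default.svc" -out tls.csr
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out tls.crt \
  -extfile <(printf "subjectAltName=DNS:myapp-webhook.default.svc")
# The webhook's caBundle value is the base64-encoded CA certificate
base64 -w0 ca.crt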

Which Kubernetes resource defines admission webhook configuration?
36. What are the upcoming features in Helm and the roadmap?

Helm continues to evolve with community-driven features. Key roadmap items include:

1. Helm OCI GA improvements (v3.12+)
- Complete, stable OCI registry support
- Cosign integration for signature verification
- Registry fallback mechanisms

2. Helm v4 planning (targeting 2025)
- Remove deprecated features (Tiller remnants, old API versions)
- Improved error messages and debugging
- Better Windows support
- Reduced binary size

3. Enhanced validation
# Available today:
helm lint --strict               # promote lint warnings to failures
# Planned / proposed:
helm template --validate-schema  # native JSON Schema validation
helm verify --cosign             # Cosign signature verification

4. Improved multi-chart management
- Native dependency management without helmfile
- Better atomic operations across charts
- Transactional rollbacks for multiple charts

5. Security enhancements
- SBOM (Software Bill of Materials) generation (e.g. a proposed helm sbom command)
- Vulnerability scanning integration
- Supply chain attestations

6. Performance improvements
- Parallel template rendering
- Lazy loading of dependencies
- Incremental upgrades (only changed resources)

7. Better Kubernetes integration
- Native server-side apply support
- Dynamic client for CRDs
- Improved dry-run capabilities

8. Helm ecosystem growth
- Helm Dashboard GA
- Helm Controller for Kubernetes (GitOps-style, like ArgoCD but Helm-native)
- Enhanced IDE plugins (VS Code, IntelliJ)

Try experimental features:
# Before Helm 3.8, OCI registry support was gated behind an environment
# variable (it is enabled by default in current releases):
export HELM_EXPERIMENTAL_OCI=1
# Other experimental work ships as plugins or pre-release builds rather than
# global feature gates; check the release notes for each version.

Community involvement:
- SIG-Helm meetings (bi-weekly)
- Contributing guide: https://github.com/helm/community
- Roadmap: https://github.com/orgs/helm/projects

Migration planning: most v3 charts should work with v4 with minimal changes. Plan to:
- Update deprecated Kubernetes APIs in templates
- Remove v2-specific features
- Test against the latest v3 release before v4 ships
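For the "deprecated APIs" step, one option (a third-party tool, not part of Helm itself) is Fairwinds' pluto, which scans rendered manifests and in-cluster releases for removed API versions:

# Scan a chart's rendered output against a target Kubernetes version
helm template ./mychart | pluto detect - --target-versions k8s=v1.29.0
# Scan releases already deployed in the cluster
pluto detect-helm -owide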

Which Helm version is currently under active planning for future release?
37. How do you implement Blue-Green and Canary deployments with Helm?

Advanced deployment patterns with Helm require careful release management and service routing.

Blue-Green deployment pattern:
# values.yaml
blue:
  enabled: true
  replicaCount: 3
  image:
    tag: blue-1.0
green:
  enabled: false
  replicaCount: 3
  image:
    tag: green-2.0
service:
  selectorVersion: blue

# templates/deployment-blue.yaml
{{- if .Values.blue.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}-blue
  labels:
    version: blue
spec:
  replicas: {{ .Values.blue.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "myapp.name" . }}
      version: blue
  template:
    metadata:
      labels:
        app: {{ include "myapp.name" . }} # pod labels must match the selector
        version: blue
    spec:
      containers:
        - name: app
          image: {{ .Values.image.repository }}:{{ .Values.blue.image.tag }}
{{- end }}
# templates/deployment-green.yaml mirrors this with the green values

# The Service toggles traffic between the two versions
apiVersion: v1
kind: Service
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  selector:
    app: {{ include "myapp.name" . }}
    version: {{ .Values.service.selectorVersion }}

Blue-Green deployment procedure:
# Deploy blue (current)
helm upgrade --install myapp ./chart \
  --set blue.enabled=true,green.enabled=false,service.selectorVersion=blue
# Deploy green alongside
helm upgrade --install myapp ./chart \
  --set blue.enabled=true,green.enabled=true,service.selectorVersion=blue
# Test green
kubectl port-forward deployment/myapp-green 8080:80
# Switch traffic to green
helm upgrade --install myapp ./chart \
  --set blue.enabled=true,green.enabled=true,service.selectorVersion=green
# Remove blue after verification
helm upgrade --install myapp ./chart \
  --set blue.enabled=false,green.enabled=true,service.selectorVersion=green

Canary deployment with Istio weights:
# templates/virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  hosts:
    - {{ include "myapp.fullname" . }}
  http:
    - route:
        - destination:
            host: {{ include "myapp.fullname" . }}
            subset: stable
          weight: {{ sub 100 .Values.canary.weight }}
        - destination:
            host: {{ include "myapp.fullname" . }}
            subset: canary
          weight: {{ .Values.canary.weight }}

Automated canary deployment script:
#!/bin/bash
WEIGHT=10
STEP=10
MAX_WEIGHT=100
while [ $WEIGHT -le $MAX_WEIGHT ]; do
  helm upgrade myapp ./chart --set canary.weight=$WEIGHT
  sleep 300 # let metrics accumulate for 5 minutes before judging
  # Extract the scalar value from the Prometheus JSON response
  ERROR_RATE=$(curl -s 'http://prometheus:9090/api/v1/query?query=error_rate' \
    | jq -r '.data.result[0].value[1]')
  if [ "$(echo "$ERROR_RATE > 0.01" | bc)" -eq 1 ]; then
    echo "Error rate too high, rolling back"
    helm rollback myapp
    break
  fi
  WEIGHT=$((WEIGHT + STEP))
done

Argo Rollouts with Helm:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  selector:          # required; must match the pod template labels
    matchLabels:
      app: myapp
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {duration: 1m}
        - setWeight: 40
        - pause: {duration: 1m}
        - setWeight: 60
        - pause: {duration: 1m}
        - setWeight: 80
        - pause: {duration: 1m}
        - setWeight: 100
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}

In Blue-Green deployment, what switches traffic between versions?
38. How do you manage Helm charts for stateful applications (databases, Kafka)?

Stateful applications require special handling for persistent storage, ordering, and discovery.

StatefulSet configuration:
# templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "myapp.fullname" . }}
spec:
  serviceName: {{ include "myapp.fullname" . }}-headless
  replicas: {{ .Values.replicaCount }}
  podManagementPolicy: OrderedReady # or Parallel
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0 # for canary updates
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: {{ .Values.persistence.storageClass }}
        resources:
          requests:
            storage: {{ .Values.persistence.size }}

Headless service for discovery:
# templates/service-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "myapp.fullname" . }}-headless
spec:
  clusterIP: None
  selector:
    {{- include "myapp.selectorLabels" . | nindent 4 }}
  ports:
    - port: {{ .Values.service.port }}
      name: app

Pod identity script:
# In the container startup script (rendered via a ConfigMap)
#!/bin/bash
export POD_NAME=$(hostname)
export POD_INDEX=${POD_NAME##*-}
# For clustered apps (Kafka, ZooKeeper)
if [ "$POD_INDEX" -eq 0 ]; then
  # This is the first pod
  /entrypoint.sh --bootstrap
else
  # Wait for the first pod; StatefulSet pod DNS goes through the headless service
  until nslookup {{ include "myapp.fullname" . }}-0.{{ include "myapp.fullname" . }}-headless; do
    sleep 2
  done
  /entrypoint.sh --join {{ include "myapp.fullname" . }}-0.{{ include "myapp.fullname" . }}-headless.${POD_NAMESPACE}.svc.cluster.local
fi

Persistent Volume management:
# Prevent PVC deletion when the release is removed
annotations:
  "helm.sh/resource-policy": keep

# values.yaml
persistence:
  enabled: true
  size: 10Gi
  storageClass: fast
  existingClaim: ""

# Conditional PVC
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "myapp.fullname" . }}-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
{{- end }}

Ordered readiness probes:
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        # For Cassandra: the node must report Up/Normal
        nodetool status | grep "^UN" | grep $(hostname)
  initialDelaySeconds: 30
  periodSeconds: 10

Backup and restore hooks:
# Pre-upgrade backup hook
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-backup
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "1"
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: backup
          image: bitnami/postgresql:latest
          command:
            - sh
            - -c
            - pg_dump -h {{ .Release.Name }}-postgresql -U postgres mydb > /backup/mydb.sql
          volumeMounts:
            - name: backup
              mountPath: /backup
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: backup-pvc

Which Kubernetes resource is designed for stateful applications with persistent storage?
39. How do you implement resource quotas and limit ranges with Helm?

Resource quotas and limit ranges enforce resource constraints at namespace level, critical for multi-tenant clusters.

Resource Quota template:
# templates/resourcequota.yaml
{{- if .Values.resourceQuota.enabled }}
apiVersion: v1
kind: ResourceQuota
metadata:
  name: {{ include "myapp.fullname" . }}-quota
spec:
  hard:
    requests.cpu: {{ .Values.resourceQuota.requests.cpu }}
    requests.memory: {{ .Values.resourceQuota.requests.memory }}
    limits.cpu: {{ .Values.resourceQuota.limits.cpu }}
    limits.memory: {{ .Values.resourceQuota.limits.memory }}
    persistentvolumeclaims: {{ .Values.resourceQuota.pvcs | default "10" }}
    pods: {{ .Values.resourceQuota.pods | default "20" }}
    services: {{ .Values.resourceQuota.services | default "10" }}
    secrets: {{ .Values.resourceQuota.secrets | default "50" }}
    configmaps: {{ .Values.resourceQuota.configmaps | default "50" }}
{{- end }}

Limit Range for default requests:
# templates/limitrange.yaml
{{- if .Values.limitRange.enabled }}
apiVersion: v1
kind: LimitRange
metadata:
  name: {{ include "myapp.fullname" . }}-limits
spec:
  limits:
    - type: Container
      default:
        cpu: {{ .Values.limitRange.default.cpu }}
        memory: {{ .Values.limitRange.default.memory }}
      defaultRequest:
        cpu: {{ .Values.limitRange.defaultRequest.cpu }}
        memory: {{ .Values.limitRange.defaultRequest.memory }}
      max:
        cpu: {{ .Values.limitRange.max.cpu }}
        memory: {{ .Values.limitRange.max.memory }}
      min:
        cpu: {{ .Values.limitRange.min.cpu }}
        memory: {{ .Values.limitRange.min.memory }}
    - type: Pod
      max:
        cpu: {{ .Values.limitRange.podMax.cpu }}
        memory: {{ .Values.limitRange.podMax.memory }}
{{- end }}

Per-namespace quotas with values files:
# environments/dev/values.yaml
resourceQuota:
  enabled: true
  requests:
    cpu: "2"
    memory: "4Gi"
  limits:
    cpu: "4"
    memory: "8Gi"
  pods: 10

# environments/prod/values.yaml
resourceQuota:
  enabled: true
  requests:
    cpu: "10"
    memory: "20Gi"
  limits:
    cpu: "20"
    memory: "40Gi"
  pods: 50

Template validation with quota:
# Check at render time whether the quota would allow new resources
# (lookup returns an object whose pod list lives under .items;
# during helm template, lookup returns an empty map, hence the default)
{{- $existing := lookup "v1" "Pod" .Release.Namespace "" -}}
{{- $currentPods := len (default (list) $existing.items) -}}
{{- $quotaPods := .Values.resourceQuota.pods | int -}}
{{- if ge $currentPods $quotaPods }}
{{- fail "Pod quota would be exceeded" -}}
{{- end }}

Monitoring quota usage:
# Get quota status
kubectl get resourcequota -n mynamespace
# Watch quota during a deployment
kubectl get resourcequota myapp-quota -n mynamespace -w
# Alert when usage approaches the hard limit (see the Prometheus sketch below)
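A possible alert expression, assuming kube-state-metrics is installed (it exports the kube_resourcequota gauge with separate "hard" and "used" series):

# PromQL: fire when any quota resource exceeds 80% of its hard limit
kube_resourcequota{type="used"}
  / ignoring(type) kube_resourcequota{type="hard"} > 0.8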

Multi-tenant quota strategy:
# Shared quota for a team namespace
team-a-quota:
  hard:
    pods: "50"
    requests.cpu: "10"
    requests.memory: 20Gi
# Per-application quotas within the team
app-quota:
  hard:
    pods: "10"
    requests.cpu: "2"

Best practices: Always set LimitRange to provide default resource requests, set ResourceQuota high enough for rolling updates (2x peak usage), test quota exhaustion scenarios, and monitor quota usage with dashboards.

Which Kubernetes resource sets default resource requests for pods without them?