In this post, I will show how to access SMB shares outside the cluster from a Kubernetes Pod. The example backs up the etcd cluster in my Talos Kubernetes cluster to a share, but you can use this approach for any service (like Plex or Jellyfin) that needs access to files on a NAS.

This is part four of my Homelab Kubernetes series.

The Kubernetes project has created a standardized API specification, the Container Storage Interface (CSI), that lets third parties write plugins allowing clusters to interact with file- or block-based storage systems without having to get code merged into the core project.

In this article, we’re going to:

  • Install the SMB CSI Driver
  • Create a StorageClass, PersistentVolume and PersistentVolumeClaim
  • Use a demo Pod to confirm things are working
  • Set up a backups PersistentVolume and PersistentVolumeClaim
  • Create a CronJob that backs up the Talos etcd cluster every night at 1:11 am

Prerequisites

  • A working Kubernetes cluster. I’m using Talos for mine, but regular Kubernetes or k3s clusters will work too. If you need to set up a new cluster, or configure an existing one to use Cilium, read part one of this series.
  • helm & kubectl - if you don’t want to brew install them, install instructions are at helm.sh and in the kubectl documentation.
  • An SMB share on a NAS
  • A separate user on the NAS that the cluster will use to access the share. Don’t just use your main NAS account; you want to be able to restrict what the cluster can do and which shares it can access. You can sanity-check the share and credentials with the smbclient command shown after this list.
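
If you want to verify the share and the new account work before involving Kubernetes, you can test them from any machine that has smbclient installed. The hostname, share name, and username here match the examples later in this post; substitute your own:

# Connect to the share as the cluster's NAS user; smbclient will prompt for the password
smbclient //your.servers.fqdn.or.ip.address/blog-demo -U k8s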

Software Versions

Here are the versions of the software I used while writing this post. Later versions should work, but this is what these instructions were tested with.

Software                        Version
helm                            4.0.1
kubectl                         1.34
kubernetes                      1.34.1
talos                           1.11.5
SMB CSI Driver for Kubernetes   1.19.1

Installation

Install with helm

We’re going to use helm to install the SMB CSI Driver for Kubernetes from kubernetes-csi/csi-driver-smb.

# Add the helm repository, then install the CSI driver
helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version 1.19.1

Confirm pods are running

kubectl --namespace=kube-system get pods --selector="app.kubernetes.io/name=csi-driver-smb"

You should see one copy of the csi-smb-controller pod, and one csi-smb-node pod for each node in your cluster. On my main homelab Talos cluster it looks like this:

NAME                                  READY   STATUS    RESTARTS        AGE
csi-smb-controller-66588dccff-4lfkt   4/4     Running   3 (7d6h ago)    7d16h
csi-smb-node-5q22s                    3/3     Running   3 (7d16h ago)   7d16h
csi-smb-node-hzrss                    3/3     Running   0               6d5h
csi-smb-node-ms4gh                    3/3     Running   0               6d6h
csi-smb-node-nhk84                    3/3     Running   0               7d16h
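
You can also confirm the driver registered itself with the cluster. The exact output will vary, but you should see smb.csi.k8s.io listed:

kubectl get csidrivers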

Configuration

Create a namespace

We’re going to put all the resources used in this post into a separate backups namespace, so create it now:

kubectl create namespace backups

Create a secret

We need to store login credentials for the NAS we’re going to connect to. We’ll use sops (see part three - Secret Management with SOPS for install instructions) to create a secret containing the username and password.

Using SOPS

# backups-smb-sopssecret.yaml
apiVersion: isindir.github.com/v1alpha3
kind: SopsSecret
metadata:
    name: backups-smb-sopssecret
    namespace: backups
spec:
    secretTemplates:
        - name: backups-smb-secret
          stringData:
            username: k8s
            password: connect-to-nas

Encrypt it, then apply it

sops encrypt -i backups-smb-sopssecret.yaml && \
  kubectl apply -f backups-smb-sopssecret.yaml
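
Once the sops-secrets-operator has processed the SopsSecret, you should see the generated backups-smb-secret in the backups namespace:

kubectl --namespace=backups get secret backups-smb-secret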

Insecure Alternative

Alternatively, we can define the login credentials in a plain Secret manifest. To do that, we specify the username and password as stringData fields so we don’t have to base64-encode them ourselves. Keep in mind that this leaves the credentials unencrypted in the manifest file, which is why I prefer the SOPS approach above.

# insecure-secret.yaml
apiVersion: v1
kind: Secret
metadata:
    name: backups-smb-secret
    namespace: backups
type: Opaque
stringData:
    username: k8s
    password: your-password-here

and add it by running kubectl apply -f insecure-secret.yaml

Using the SMB Storage Class

Now that we have our server credentials in a secret, we can set up a StorageClass that connects to our server.

# smb-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb-csi
  # Uncomment these two lines to make this the default storage class
  # for your cluster
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: smb.csi.k8s.io
parameters:
  source: "//your.servers.fqdn.or.ip.address/blog-demo"  # Define Samba share
  csi.storage.k8s.io/provisioner-secret-name: "backups-smb-secret"
  csi.storage.k8s.io/provisioner-secret-namespace: "backups"
  createSubDir: "true" # Creates subdirectories for each PersistentVolumeClaim
  csi.storage.k8s.io/node-stage-secret-name: "backups-smb-secret" # Define Samba credential secret
  csi.storage.k8s.io/node-stage-secret-namespace: "backups" # Define Samba credential secret namespace
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - mfsymlinks
  - vers=3.0  # Define Samba version
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
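
Apply it, then confirm the StorageClass exists:

kubectl apply -f smb-storage-class.yaml
kubectl get storageclass smb-csi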

Create a PersistentVolumeClaim

This manifest will create a PVC with a 1Gi quota

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smb-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: smb-csi
  resources:
    requests:
      storage: 1Gi

Create a Pod that uses your PVC

apiVersion: v1
kind: Pod
metadata:
  name: smb-test-pod
spec:
  containers:
  - name: app
    image: busybox
    # talos requires resources specified instead of letting k8s YOLO them
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "64Mi"
    command:
    - /bin/sh
    - -c
    - |
      echo 'Demo smb-csi' > /mnt/smb/csi-demo.txt
      date >> /mnt/smb/csi-demo.txt
      echo 'Sleeping 300 seconds so it stays visible in kubectl get pods'
      sleep 300
    volumeMounts:
    - mountPath: "/mnt/smb"
      name: smb-volume
  volumes:
  - name: smb-volume
    persistentVolumeClaim:
      claimName: smb-pvc

You can now see the PVC and Pod in the cluster:

kubectl get pod,pvc
NAME               READY   STATUS    RESTARTS   AGE
pod/smb-test-pod   1/1     Running   0          34s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/smb-pvc      Bound    pvc-5e3acec4-af9e-4cbd-8613-10c64ad7ebaa   1Gi        RWX            smb-csi        <unset>                 36s

And if I log into the NAS, I can see the PVC’s subdirectory in my blog-demo share:

$ cd /volume2/blog-demo
$ ls -lah
Permissions Size User  Date Modified Name
drwxrwx---     - admin 20 Jan 00:30  #recycle
drwxrwxrwx@    - root   6 Jan 23:22  @eaDir
drwxrwxrwx     - talos 25 Jan 13:51  pvc-5e3acec4-af9e-4cbd-8613-10c64ad7ebaa
$ cat pvc-5e3acec4-af9e-4cbd-8613-10c64ad7ebaa/csi-demo.txt
Demo smb-csi
Sun Jan 25 20:56:51 UTC 2026
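
When you’re done with the demo, clean up the test resources. Because the smb-csi StorageClass uses reclaimPolicy: Delete, deleting the PVC will also remove its subdirectory on the share (exactly the behavior we guard against in the backups PersistentVolume below):

kubectl delete pod smb-test-pod
kubectl delete pvc smb-pvc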

Practical Usage: Back up Talos’ etcd cluster

Now that we’ve confirmed our configuration can connect to our NAS, let’s set up backups for the etcd cluster. These examples are Talos-specific, but they’re still a good illustration of using SMB storage from cluster CronJobs.

Set up the PersistentVolume and PersistentVolumeClaim to use for backups

First, create a PersistentVolume that connects to our backups share.

IMPORTANT: You must set persistentVolumeReclaimPolicy to Retain so the CSI driver doesn’t delete the PVC’s remote directory when the PVC is deleted; otherwise we could accidentally wipe our backups volume!

Since we’re doing static provisioning, we also need to set storageClassName to "" in both the PersistentVolume and the PersistentVolumeClaim that uses it.

# backups.smb.setup.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-backups-share
spec:
  capacity:
    storage: 10Gi # Set an appropriate size
  accessModes:
    - ReadWriteMany # Allows the volume to be mounted by many nodes
  # IMPORTANT: Set persistentVolumeReclaimPolicy to Retain so the CSI doesn't
  # delete the pvc's remote directory if we delete the pvc or we could
  # accidentally wipe our backups volume!
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "" # Must be an empty string for static provisioning
  csi:
    driver: smb.csi.k8s.io
    readOnly: false
    volumeHandle: k8s-backups-smb-unseen # A unique name for the volume handle
    volumeAttributes:
      source: "\\\\unseen.miniclusters.rocks\\k8s-backups" # Use escaped backslashes or forward slashes
    nodeStageSecretRef:
      name: backups-smb-secret
      namespace: backups # The namespace where you created the secret
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-backups-share
  namespace: backups
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "" # Must match the empty string in the PV
  volumeName: pv-backups-share
---
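
Apply the manifest and confirm the claim bound to the volume:

kubectl apply -f backups.smb.setup.yaml
kubectl --namespace=backups get pv,pvc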

Create a talosconfig secret

The backup script needs to run talosctl to extract a backup from the cluster’s etcd, so let’s create a secret from your talosconfig file (run this from the directory that contains it):

kubectl create secret generic backups-talosconfig \
    --from-file talosconfig \
    --namespace backups
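
You can verify the secret was created and contains a talosconfig key (kubectl uses the filename as the key name when you use --from-file):

kubectl --namespace=backups describe secret backups-talosconfig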

Our CronJob will mount that secret into the backup pod as a file at /etc/talos/talosconfig.

Create a CronJob

We’re going to use my unixorn/talosctl-backup image to run my tal-backup-etcd script to back up etcd. Source for both the Docker image and tal-backup-etcd can be found on GitHub at unixorn/talosctl-etcd-backups.

By default, after a successful backup, tal-backup-etcd deletes any backups in the backup directory that are more than $KEEP_DAYS days old.

# etcd-backup.cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-backup-void-etcd
  namespace: backups
spec:
  # Standard cron syntax: minute hour day-of-month month day-of-week
  schedule: "11 1 * * *" 
  timeZone: "America/Denver"
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 10
  concurrencyPolicy: Forbid # Prevents a new backup if the old one is still running
  jobTemplate:
    spec:
      backoffLimit: 4
      ttlSecondsAfterFinished: 604800 # Keep jobs for a week so I can examine their logs
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: app
            image: unixorn/talosctl-backup:latest
            env:
            - name: TZ
              value: "America/Denver"
            - name: BACKUP_D
              value: /mnt/backups/etcd-backups
            - name: KEEP_DAYS
              value: "30"
            - name: PRUNE_OK
              value: "true"
            - name: TALOSCONFIG
              value: "/etc/talos/talosconfig"
            - name: CONTROLPLANE_IP
              value: 10.9.8.7
            resources:
              requests:
                memory: "64Mi"
                cpu: "250m"
              limits:
                memory: "64Mi"
            command: [ "sh", "-c", "/usr/local/bin/tal-backup-etcd ; ls -lah /mnt/backups/etcd-backups" ]
            volumeMounts:
            - name: backups-volume
              mountPath: "/mnt/backups"
            - name: talosconfig
              mountPath: "/etc/talos"
              readOnly: true      
          volumes:
          - name: backups-volume
            persistentVolumeClaim:
              claimName: pvc-backups-share
          - name: talosconfig
            secret:
              secretName: backups-talosconfig

You can now install the CronJob with kubectl apply -f etcd-backup.cronjob.yaml
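
Once it’s applied, you can check the schedule and see when it last ran:

kubectl --namespace=backups get cronjobs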

Bonus Job manifest for immediate backups

Having a cluster CronJob is great, but you probably don’t want to wait until 1 am to see if it works. It’s also handy to be able to run a backup immediately if you’re planning to do an experiment that might break your etcd.

Here’s a Job you can use to test or run backups at arbitrary times.

# immediate-backup.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: backups-void-etcd
  namespace: backups
spec:
  ttlSecondsAfterFinished: 604800 # Keep results around for a week
  backoffLimit: 4
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: app
        image: unixorn/talosctl-backup:latest
        env:
        - name: TZ
          value: "America/Denver"
        - name: BACKUP_D
          value: /mnt/backups/etcd-backups
        - name: KEEP_DAYS
          value: "30"
        - name: PRUNE_OK
          value: "true"
        - name: TALOSCONFIG
          value: "/etc/talosconfig/talosconfig"
        - name: CONTROLPLANE_IP
          value: 10.9.8.7
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "64Mi"
        command: [ "sh", "-c", "/usr/local/bin/tal-backup-etcd ; ls -lah /mnt/backups/etcd-backups" ]
        volumeMounts:
        - name: backups-volume
          mountPath: "/mnt/backups"
        - name: talosconfig
          mountPath: "/etc/talosconfig"
          readOnly: true      
      volumes:
      - name: backups-volume
        persistentVolumeClaim:
          claimName: pvc-backups-share
      - name: talosconfig
        secret:
          secretName: backups-talosconfig

Use kubectl apply -f immediate-backup.yaml to run it.
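
You can watch the Job and read its logs while it runs (the job name comes from the manifest above):

kubectl --namespace=backups get jobs
kubectl --namespace=backups logs job/backups-void-etcd --follow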

Summary

You should now have:

  • Installed the SMB CSI driver into your cluster
  • Created a StorageClass that stores files on a NAS
  • Created a Pod that shows how to keep a pod’s work files on an SMB share
  • Created a PersistentVolume that connects to a backups share and does not delete files after the PVCs using it have been deleted from your cluster
  • Created a PersistentVolumeClaim that backup jobs can use to write backups to your SMB server
  • Created a CronJob and a Job that you can use to back up your etcd cluster both via cron and at arbitrary times.
  • Backed up your etcd