<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:hashnode="https://hashnode.com/rss"><channel><title><![CDATA[Technologist Creative]]></title><description><![CDATA[Technologist Creative]]></description><link>https://technologistcreative.hashnode.dev</link><generator>RSS for Node</generator><lastBuildDate>Tue, 03 Dec 2024 07:07:36 GMT</lastBuildDate><atom:link href="https://technologistcreative.hashnode.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="next" href="https://technologistcreative.hashnode.dev/rss.xml?page=2"/><atom:link rel="previous" href="https://technologistcreative.hashnode.dev/rss.xml"/><item><title><![CDATA[Host device administration on a Kubernetes node with no SSH access]]></title><description><![CDATA[Learn how to administer a host device on a Kubernetes node without SSH access using privileged pods and an Ubuntu container]]></description><link>https://technologistcreative.hashnode.dev/host-device-administration-on-a-kubernetes-node-with-no-ssh-access</link><guid isPermaLink="true">https://technologistcreative.hashnode.dev/host-device-administration-on-a-kubernetes-node-with-no-ssh-access</guid><category><![CDATA[piraeus]]></category><category><![CDATA[wipefs]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[talos-linux]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Rob Rakaric]]></dc:creator><pubDate>Wed, 08 May 2024 20:36:46 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;p&gt;I&apos;ve recently been building a Kubernetes lab environment to learn some AI and machine learning concepts on. The OS I chose was Talos Linux. Talos is an immutable OS that has no SSH access. When it came time to configure storage, I was in a bit of a pickle. 
After some research, I was really pleased to find that disk operations can be done through a K8s pod fairly easily. There are a few considerations, though.&lt;/p&gt;&lt;h1 id=&quot;heading-environment&quot;&gt;environment&lt;/h1&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Talos Linux 1.7.0&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Kubernetes 1.29.4&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;kubectl installed locally&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h1 id=&quot;heading-make-sure-you-can-run-privileged-pods&quot;&gt;make sure you can run privileged pods&lt;/h1&gt;&lt;p&gt;Since we&apos;re going to be working with the host system, you&apos;ll want to be able to run privileged pods.&lt;/p&gt;&lt;h1 id=&quot;heading-storage-with-piraeus&quot;&gt;storage with Piraeus&lt;/h1&gt;&lt;p&gt;I wanted to use an existing NVMe disk in my lab&apos;s single worker node for &lt;a target=&quot;_blank&quot; href=&quot;https://piraeus.io/&quot;&gt;Piraeus&lt;/a&gt; storage, since it seems like a really simple way to run storage in an on-premises Kubernetes environment. I followed the &lt;a target=&quot;_blank&quot; href=&quot;https://www.talos.dev/v1.7/kubernetes-guides/configuration/storage/#piraeus--linstor&quot;&gt;Talos Linux documentation for configuring a storage pool&lt;/a&gt;, but there was an issue. 
I already had a partition on the NVMe device that I wanted to use.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl linstor physical-storage create-device-pool --pool-name nvme_lvm_pool LVM talos-6o3-rkz /dev/nvme0n1 --storage-pool nvme_pool
ERROR:
Description:
    (Node: &lt;span class=&quot;hljs-string&quot;&gt;&apos;talos-6o3-rkz&apos;&lt;/span&gt;) Failed to pvcreate on device: /dev/nvme0n1
Details:
    Command &lt;span class=&quot;hljs-string&quot;&gt;&apos;pvcreate --config &apos;&lt;/span&gt;devices { filter=[&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;&apos;a|/dev/nvme0n1|&apos;&quot;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;,&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;&apos;r|.*|&apos;&quot;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;] }&lt;span class=&quot;hljs-string&quot;&gt;&apos; /dev/nvme0n1&apos;&lt;/span&gt; returned with exitcode 5.
    Standard out:
    Error message:
      Cannot use /dev/nvme0n1: device is partitioned&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;My options are limited with no SSH access to the box, and no evident way to do destructive operations on a disk with &lt;code&gt;talosctl&lt;/code&gt;, Talos Linux&apos;s CLI. I could flash a &lt;code&gt;gparted&lt;/code&gt; live USB, boot my worker node with that, and easily erase the partition. However, I wanted to accomplish this through Kubernetes instead. 
Here&apos;s what I did:&lt;/p&gt;&lt;h1 id=&quot;heading-introduce-a-namespace-with-privileged-annotations&quot;&gt;introduce a namespace with privileged labels&lt;/h1&gt;&lt;p&gt;The following creates a namespace resource called &lt;code&gt;disk-utilities&lt;/code&gt; in your cluster where pods can run privileged.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply --filename -
apiVersion: v1
kind: Namespace
metadata:
  name: disk-utilities
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
EOF&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-spin-up-a-pod-using-an-ubuntu-container&quot;&gt;spin up a pod using an ubuntu container&lt;/h1&gt;&lt;p&gt;Really, you can do this with any image that contains a number of disk utilities, but I found &lt;code&gt;wipefs&lt;/code&gt; easy to use for this use case.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply --filename -
apiVersion: v1
kind: Pod
metadata:
  name: disk-partitioner
  namespace: disk-utilities
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    securityContext:
      privileged: &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;
    &lt;span class=&quot;hljs-built_in&quot;&gt;command&lt;/span&gt;: [&lt;span class=&quot;hljs-string&quot;&gt;&quot;sleep&quot;&lt;/span&gt;, &lt;span class=&quot;hljs-string&quot;&gt;&quot;infinity&quot;&lt;/span&gt;]
    volumeMounts:
    - name: dev
      mountPath: /dev
  volumes:
  - name: dev
    hostPath:
      path: /dev
  nodeSelector:
    kubernetes.io/hostname: {{your_hostname_goes_here}}
EOF&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Two important things to note here:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;mounting the &lt;code&gt;/dev&lt;/code&gt; directory from the host into the container, using a &lt;code&gt;volume&lt;/code&gt; and &lt;code&gt;volumeMount&lt;/code&gt;. 
That&apos;s how we associate the container with the host&apos;s devices!&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;make sure you select the correct host using &lt;code&gt;nodeSelector&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I ended up using the node label &lt;code&gt;kubernetes.io/hostname&lt;/code&gt; to ensure my pod spins up on the correct host. To list out your node labels, simply:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl get nodes --show-labels&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You don&apos;t have to use &lt;code&gt;kubernetes.io/hostname&lt;/code&gt;, but make sure whichever label you use is &lt;em&gt;unique to the host&lt;/em&gt;!&lt;/p&gt;&lt;h1 id=&quot;heading-exec-into-the-container&quot;&gt;exec into the container&lt;/h1&gt;&lt;p&gt;Next, you&apos;ll exec into the container and &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/&quot;&gt;launch a shell&lt;/a&gt; (&lt;code&gt;/bin/bash&lt;/code&gt;).&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl &lt;span class=&quot;hljs-built_in&quot;&gt;exec&lt;/span&gt; --namespace disk-utilities -it disk-partitioner -- /bin/bash&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-list-block-storage-devices&quot;&gt;list block storage devices&lt;/h1&gt;&lt;p&gt;Use &lt;a target=&quot;_blank&quot; href=&quot;https://man7.org/linux/man-pages/man8/lsblk.8.html&quot;&gt;lsblk&lt;/a&gt; to find the device you want to erase.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;root@disk-partitioner:/&lt;span class=&quot;hljs-comment&quot;&gt;# lsblk&lt;/span&gt;
Outputs list of devices&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-erase-disk-signatures&quot;&gt;erase disk signatures&lt;/h1&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;wipefs --all /dev/{your_device_name_from_lsblk}&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-exit-container&quot;&gt;exit container&lt;/h1&gt;&lt;pre&gt;&lt;code 
class=&quot;lang-bash&quot;&gt;&lt;span class=&quot;hljs-built_in&quot;&gt;exit&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now I can go back and provision my device pool!&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl linstor physical-storage create-device-pool --pool-name nvme_lvm_pool LVM talos-6o3-rkz /dev/nvme0n1 --storage-pool nvme_pool
SUCCESS:
    (talos-6o3-rkz) PV &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; device &lt;span class=&quot;hljs-string&quot;&gt;&apos;/dev/nvme0n1&apos;&lt;/span&gt; created.
SUCCESS:
    (talos-6o3-rkz) VG &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; devices [/dev/nvme0n1] with name &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_lvm_pool&apos;&lt;/span&gt; created.
SUCCESS:
    Successfully &lt;span class=&quot;hljs-built_in&quot;&gt;set&lt;/span&gt; property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_pool&apos;&lt;/span&gt; on node &lt;span class=&quot;hljs-string&quot;&gt;&apos;talos-6o3-rkz&apos;&lt;/span&gt; registered.
Details:
    Storage pool &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_pool&apos;&lt;/span&gt; on node &lt;span class=&quot;hljs-string&quot;&gt;&apos;talos-6o3-rkz&apos;&lt;/span&gt; UUID is: 7cde04eb-b266-4a59-8711-6731162b9f76
SUCCESS:
    (talos-6o3-rkz) Changes applied to storage pool &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_pool&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Linstor (the underlying storage engine used by Piraeus) is happy, since it can use the newly de-partitioned device.&lt;/p&gt;&lt;h1 id=&quot;heading-cleaning-up&quot;&gt;cleaning up&lt;/h1&gt;&lt;p&gt;To clean up, simply delete the namespace that was created. 
This will delete the pod that was created in the namespace, preventing the namespace from being hijacked for privileged operations by a would-be attacker.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl delete namespaces disk-utilities&lt;/code&gt;&lt;/pre&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;p&gt;I&apos;ve recently been building a Kubernetes lab environment to learn some AI and machine learning concepts on. The OS I chose was Talos Linux. Talos is an immutable OS that has no SSH access. When it came time to configure storage, I was in a bit of a pickle. After some research, I was really pleased to find that disk operations can be done through a K8s pod fairly easily. There are a few considerations, though.&lt;/p&gt;&lt;h1 id=&quot;heading-environment&quot;&gt;environment&lt;/h1&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Talos Linux 1.7.0&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Kubernetes 1.29.4&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;kubectl installed locally&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h1 id=&quot;heading-make-sure-you-can-run-privileged-pods&quot;&gt;make sure you can run privileged pods&lt;/h1&gt;&lt;p&gt;Since we&apos;re going to be working with the host system, you&apos;ll want to be able to run privileged pods.&lt;/p&gt;&lt;h1 id=&quot;heading-storage-with-piraeus&quot;&gt;storage with Piraeus&lt;/h1&gt;&lt;p&gt;I wanted to use an existing NVMe disk in my lab&apos;s single worker node for &lt;a target=&quot;_blank&quot; href=&quot;https://piraeus.io/&quot;&gt;Piraeus&lt;/a&gt; storage, since it seems like a really simple way to run storage in an on-premises Kubernetes environment. I followed the &lt;a target=&quot;_blank&quot; href=&quot;https://www.talos.dev/v1.7/kubernetes-guides/configuration/storage/#piraeus--linstor&quot;&gt;Talos Linux documentation for configuring a storage pool&lt;/a&gt;, but there was an issue. 
I already had a partition on the NVMe device that I wanted to use.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl linstor physical-storage create-device-pool --pool-name nvme_lvm_pool LVM talos-6o3-rkz /dev/nvme0n1 --storage-pool nvme_pool
ERROR:
Description:
    (Node: &lt;span class=&quot;hljs-string&quot;&gt;&apos;talos-6o3-rkz&apos;&lt;/span&gt;) Failed to pvcreate on device: /dev/nvme0n1
Details:
    Command &lt;span class=&quot;hljs-string&quot;&gt;&apos;pvcreate --config &apos;&lt;/span&gt;devices { filter=[&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;&apos;a|/dev/nvme0n1|&apos;&quot;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;,&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&quot;&apos;r|.*|&apos;&quot;&lt;/span&gt;&lt;span class=&quot;hljs-string&quot;&gt;&apos;&quot;&apos;&lt;/span&gt;] }&lt;span class=&quot;hljs-string&quot;&gt;&apos; /dev/nvme0n1&apos;&lt;/span&gt; returned with exitcode 5.
    Standard out:
    Error message:
      Cannot use /dev/nvme0n1: device is partitioned&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;My options are limited with no SSH access to the box, and no evident way to do destructive operations on a disk with &lt;code&gt;talosctl&lt;/code&gt;, Talos Linux&apos;s CLI. I could flash a &lt;code&gt;gparted&lt;/code&gt; live USB, boot my worker node with that, and easily erase the partition. However, I wanted to accomplish this through Kubernetes instead. 
Here&apos;s what I did:&lt;/p&gt;&lt;h1 id=&quot;heading-introduce-a-namespace-with-privileged-annotations&quot;&gt;introduce a namespace with privileged labels&lt;/h1&gt;&lt;p&gt;The following creates a namespace resource called &lt;code&gt;disk-utilities&lt;/code&gt; in your cluster where pods can run privileged.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply --filename -
apiVersion: v1
kind: Namespace
metadata:
  name: disk-utilities
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
EOF&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-spin-up-a-pod-using-an-ubuntu-container&quot;&gt;spin up a pod using an ubuntu container&lt;/h1&gt;&lt;p&gt;Really, you can do this with any image that contains a number of disk utilities, but I found &lt;code&gt;wipefs&lt;/code&gt; easy to use for this use case.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply --filename -
apiVersion: v1
kind: Pod
metadata:
  name: disk-partitioner
  namespace: disk-utilities
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    securityContext:
      privileged: &lt;span class=&quot;hljs-literal&quot;&gt;true&lt;/span&gt;
    &lt;span class=&quot;hljs-built_in&quot;&gt;command&lt;/span&gt;: [&lt;span class=&quot;hljs-string&quot;&gt;&quot;sleep&quot;&lt;/span&gt;, &lt;span class=&quot;hljs-string&quot;&gt;&quot;infinity&quot;&lt;/span&gt;]
    volumeMounts:
    - name: dev
      mountPath: /dev
  volumes:
  - name: dev
    hostPath:
      path: /dev
  nodeSelector:
    kubernetes.io/hostname: {{your_hostname_goes_here}}
EOF&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Two important things to note here:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;mounting the &lt;code&gt;/dev&lt;/code&gt; directory from the host into the container, using a &lt;code&gt;volume&lt;/code&gt; and &lt;code&gt;volumeMount&lt;/code&gt;. 
That&apos;s how we associate the container with the host&apos;s devices!&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;make sure you select the correct host using &lt;code&gt;nodeSelector&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;I ended up using the node label &lt;code&gt;kubernetes.io/hostname&lt;/code&gt; to ensure my pod spins up on the correct host. To list out your node labels, simply:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl get nodes --show-labels&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You don&apos;t have to use &lt;code&gt;kubernetes.io/hostname&lt;/code&gt;, but make sure whichever label you use is &lt;em&gt;unique to the host&lt;/em&gt;!&lt;/p&gt;&lt;h1 id=&quot;heading-exec-into-the-container&quot;&gt;exec into the container&lt;/h1&gt;&lt;p&gt;Next, you&apos;ll exec into the container and &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/&quot;&gt;launch a shell&lt;/a&gt; (&lt;code&gt;/bin/bash&lt;/code&gt;).&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl &lt;span class=&quot;hljs-built_in&quot;&gt;exec&lt;/span&gt; --namespace disk-utilities -it disk-partitioner -- /bin/bash&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-list-block-storage-devices&quot;&gt;list block storage devices&lt;/h1&gt;&lt;p&gt;Use &lt;a target=&quot;_blank&quot; href=&quot;https://man7.org/linux/man-pages/man8/lsblk.8.html&quot;&gt;lsblk&lt;/a&gt; to find the device you want to erase.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;root@disk-partitioner:/&lt;span class=&quot;hljs-comment&quot;&gt;# lsblk&lt;/span&gt;
Outputs list of devices&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-erase-disk-signatures&quot;&gt;erase disk signatures&lt;/h1&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;wipefs --all /dev/{your_device_name_from_lsblk}&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&quot;heading-exit-container&quot;&gt;exit container&lt;/h1&gt;&lt;pre&gt;&lt;code 
class=&quot;lang-bash&quot;&gt;&lt;span class=&quot;hljs-built_in&quot;&gt;exit&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now I can go back and provision my device pool!&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl linstor physical-storage create-device-pool --pool-name nvme_lvm_pool LVM talos-6o3-rkz /dev/nvme0n1 --storage-pool nvme_pool
SUCCESS:
    (talos-6o3-rkz) PV &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; device &lt;span class=&quot;hljs-string&quot;&gt;&apos;/dev/nvme0n1&apos;&lt;/span&gt; created.
SUCCESS:
    (talos-6o3-rkz) VG &lt;span class=&quot;hljs-keyword&quot;&gt;for&lt;/span&gt; devices [/dev/nvme0n1] with name &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_lvm_pool&apos;&lt;/span&gt; created.
SUCCESS:
    Successfully &lt;span class=&quot;hljs-built_in&quot;&gt;set&lt;/span&gt; property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_pool&apos;&lt;/span&gt; on node &lt;span class=&quot;hljs-string&quot;&gt;&apos;talos-6o3-rkz&apos;&lt;/span&gt; registered.
Details:
    Storage pool &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_pool&apos;&lt;/span&gt; on node &lt;span class=&quot;hljs-string&quot;&gt;&apos;talos-6o3-rkz&apos;&lt;/span&gt; UUID is: 7cde04eb-b266-4a59-8711-6731162b9f76
SUCCESS:
    (talos-6o3-rkz) Changes applied to storage pool &lt;span class=&quot;hljs-string&quot;&gt;&apos;nvme_pool&apos;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Linstor (the underlying storage engine used by Piraeus) is happy, since it can use the newly de-partitioned device.&lt;/p&gt;&lt;h1 id=&quot;heading-cleaning-up&quot;&gt;cleaning up&lt;/h1&gt;&lt;p&gt;To clean up, simply delete the namespace that was created. 
This will delete the pod that was created in the namespace, preventing the namespace from being hijacked for privileged operations by a would-be attacker.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl delete namespaces disk-utilities&lt;/code&gt;&lt;/pre&gt;]]&gt;</hashnode:content></item><item><title><![CDATA[Using pi-hole as your external-dns provider in Kubernetes]]></title><description><![CDATA[Use Pi-Hole with external-dns in Kubernetes to map DNS records to your homelab cluster's ingress IP effortlessly]]></description><link>https://technologistcreative.hashnode.dev/using-pi-hole-as-your-external-dns-provider-in-kubernetes</link><guid isPermaLink="true">https://technologistcreative.hashnode.dev/using-pi-hole-as-your-external-dns-provider-in-kubernetes</guid><category><![CDATA[Pi-Hole]]></category><category><![CDATA[external-dns]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Homelab]]></category><dc:creator><![CDATA[Rob Rakaric]]></dc:creator><pubDate>Wed, 01 May 2024 20:30:09 GMT</pubDate><content:encoded>&lt;![CDATA[&lt;h1 id=&quot;heading-environment-assumptions&quot;&gt;environment assumptions&lt;/h1&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;A locally-running Kubernetes cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Pi-Hole on the network configured as the primary DNS&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h1 id=&quot;heading-what-is-external-dns&quot;&gt;what is external-dns?&lt;/h1&gt;&lt;p&gt;When you build an &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/ingress/&quot;&gt;ingress&lt;/a&gt; (which is essentially a layer-7 host and path-based load-balancer) in Kubernetes to bring web traffic to a cluster, you specify a &lt;code&gt;host&lt;/code&gt; name. This creates a load-balancer entry in whatever load balancer you&apos;re using. If you&apos;re in public cloud, Kubernetes will call cloud load balancer APIs. 
In my home lab setup, I&apos;m using &lt;a target=&quot;_blank&quot; href=&quot;https://metallb.universe.tf&quot;&gt;&lt;code&gt;metallb&lt;/code&gt;&lt;/a&gt; for this purpose. The ingress below listens for requests to &lt;code&gt;openweb-ui.lan&lt;/code&gt; and sends them to the &lt;code&gt;http&lt;/code&gt; service (a web UI) called &lt;code&gt;openweb-ui&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Ingress&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;open-webui&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;namespace:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;openweb-ui&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;ingressClassName:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;host:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;openweb-ui.lan&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;http:&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;paths:&lt;/span&gt;
      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;backend:&lt;/span&gt;
          &lt;span class=&quot;hljs-attr&quot;&gt;service:&lt;/span&gt;
            &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;open-webui&lt;/span&gt;
            &lt;span class=&quot;hljs-attr&quot;&gt;port:&lt;/span&gt;
              &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;http&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;path:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;/&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;pathType:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Prefix&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It reaches out to &lt;code&gt;metallb&lt;/code&gt; to assign an IP address from a reserved pool. The address below, &lt;code&gt;192.168.126.50&lt;/code&gt;, was handed out by &lt;code&gt;metallb&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1714591746843/f279ef68-d3be-4d02-8a4e-10669c1cd0a9.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;Now that that&apos;s in place, devices on my network need to know that this hostname resolves to the &lt;code&gt;ingress ip&lt;/code&gt;. That&apos;s where &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/kubernetes-sigs/external-dns&quot;&gt;&lt;code&gt;external-dns&lt;/code&gt;&lt;/a&gt; comes in. Typically, &lt;code&gt;external-dns&lt;/code&gt; is leveraged to write DNS records to providers like Cloudflare, Route53, etc. 
However, it also supports writing DNS records to your locally-installed &lt;code&gt;Pi-Hole&lt;/code&gt;!&lt;/p&gt;&lt;h1 id=&quot;heading-what-is-pi-hole&quot;&gt;what is pi-hole?&lt;/h1&gt;&lt;p&gt;It&apos;s no secret that tech workers are amongst the biggest users of ad-blocking software, and one of the major players in this space is the venerable &lt;a target=&quot;_blank&quot; href=&quot;https://pi-hole.net&quot;&gt;Pi-Hole&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;If you don&apos;t know what Pi-Hole is, it&apos;s a piece of software originally designed to run on a &lt;a target=&quot;_blank&quot; href=&quot;https://www.raspberrypi.com&quot;&gt;Raspberry Pi&lt;/a&gt; (although it now runs anywhere, even in Docker and Kubernetes) that blocks ads at the DNS level. You configure the Pi-Hole to be your DNS provider. When an app or website calls out to a domain associated with the configured &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/topics/pihole-blocklists&quot;&gt;block list(s)&lt;/a&gt;, Pi-Hole simply responds that the advertising domain is unresolvable, and content from the advertising domain is not shown. All the while, the useful, good content you were looking for is displayed!&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1714590822512/15ae8897-d37a-4152-9844-0106c916b866.png&quot; alt=&quot;image showing a list of network resources from Safari Web Inspector, showing an advertising domain not loaded.&quot; class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;Notice in the above image that &lt;code&gt;analytics.js&lt;/code&gt; from &lt;code&gt;google-analytics.com&lt;/code&gt; isn&apos;t loading, thanks to Pi-Hole!&lt;/p&gt;&lt;p&gt;However, the feature of Pi-Hole I&apos;ll be discussing in this article is &lt;code&gt;Local DNS&lt;/code&gt;. 
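Under the hood, each Local DNS entry is just a hostname-to-IP mapping. As far as I know, Pi-Hole v5 keeps these entries in a hosts-style file at &lt;code&gt;/etc/pihole/custom.list&lt;/code&gt;, along the lines of:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;# /etc/pihole/custom.list -- &amp;lt;ip address&amp;gt; &amp;lt;hostname&amp;gt;
192.168.126.50 openweb-ui.lan&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;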
We&apos;ll be using it to map a DNS record to the ingress IP on a homelab Kubernetes cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1714592083142/e105a185-3b4b-4ad4-930b-2cc071425672.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;h1 id=&quot;heading-installing-external-dns&quot;&gt;installing external-dns&lt;/h1&gt;&lt;p&gt;The Helm chart is the best way to install external-dns.&lt;/p&gt;&lt;h2 id=&quot;heading-required-information&quot;&gt;required information&lt;/h2&gt;&lt;p&gt;Collect the following:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The IP address of your &lt;code&gt;Pi-Hole&lt;/code&gt; instance (referenced as &lt;code&gt;piholeipaddress&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;admin&lt;/code&gt; credentials for your &lt;code&gt;Pi-Hole&lt;/code&gt; instance (referenced as &lt;code&gt;piholeadminpassword&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-add-and-update-repo&quot;&gt;add and update repo&lt;/h2&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-create-a-secret-for-pi-hole&quot;&gt;create a secret for pi-hole&lt;/h2&gt;&lt;p&gt;In this step, we&apos;ll create a Kubernetes &lt;code&gt;Secret&lt;/code&gt; for authenticating to the &lt;code&gt;Pi-Hole&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl create secret generic pihole-password \
--namespace external-dns \
--from-literal EXTERNAL_DNS_PIHOLE_PASSWORD={{piholeadminpassword}}&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-create-valuesyaml&quot;&gt;create values.yaml&lt;/h2&gt;&lt;p&gt;Next, create a &lt;code&gt;values.yaml&lt;/code&gt; to pass configuration values to the Helm chart.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-comment&quot;&gt;# https://github.com/kubernetes-sigs/external-dns/blob/master/charts/external-dns/README.md#values&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;provider:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;pihole&lt;/span&gt;
&lt;span class=&quot;hljs-comment&quot;&gt;# https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/pihole.md#arguments&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;env:&lt;/span&gt; &lt;span class=&quot;hljs-comment&quot;&gt;# configure your Pi-Hole password and IP address&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;EXTERNAL_DNS_PIHOLE_PASSWORD&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;valueFrom:&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;secretKeyRef:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;pihole-password&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;key:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;EXTERNAL_DNS_PIHOLE_PASSWORD&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;EXTERNAL_DNS_PIHOLE_SERVER&lt;/span&gt;
    &lt;span class=&quot;hljs-comment&quot;&gt;# make sure NOT to put a trailing slash, as external-dns adds its own&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;value:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;http://{{piholeipaddress}}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-install&quot;&gt;install&lt;/h2&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;helm upgrade --install external-dns external-dns/external-dns \
--namespace external-dns \
--create-namespace \
--values \
values.yaml&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-make-sure-its-working&quot;&gt;make sure it&apos;s working&lt;/h2&gt;&lt;p&gt;If you check the deployment logs for &lt;code&gt;external-dns&lt;/code&gt;, you&apos;ll see that &lt;code&gt;external-dns&lt;/code&gt; has been hard at work creating a DNS record for your &lt;code&gt;ingress&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl logs --namespace external-dns deployments/external-dns external-dns&lt;/code&gt;&lt;/pre&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:14:10Z&quot;&lt;/span&gt; level=info msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;add openweb-ui.lan IN A -&amp;gt; 192.168.126.50&quot;&lt;/span&gt;
time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:14:10Z&quot;&lt;/span&gt; level=warning msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;Skipping unsupported endpoint openweb-ui.lan TXT \&quot;heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/openweb-ui/open-webui\&quot;&quot;&lt;/span&gt;
time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:14:10Z&quot;&lt;/span&gt; level=warning msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;Skipping unsupported endpoint a-openweb-ui.lan TXT \&quot;heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/openweb-ui/open-webui\&quot;&quot;&lt;/span&gt;
time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:15:10Z&quot;&lt;/span&gt; level=info msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;All records are already up to date&quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;That&apos;s it!&lt;/p&gt;&lt;p&gt;If you have trouble, make sure you take out any trailing slashes on the &lt;code&gt;EXTERNAL_DNS_PIHOLE_SERVER&lt;/code&gt; environment variable, and make sure the IP is correct! 
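A quick sanity check is to query the Pi-Hole directly, swapping in your own hostname and Pi-Hole address:&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;dig openweb-ui.lan @{{piholeipaddress}} +short&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If the record was written, this should return the ingress IP (&lt;code&gt;192.168.126.50&lt;/code&gt; in my case). 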
Don&apos;t ask me how I spent like 20 minutes wondering why &lt;code&gt;external-dns&lt;/code&gt; wouldn&apos;t connect.&lt;/p&gt;]]&gt;</content:encoded><hashnode:content>&lt;![CDATA[&lt;h1 id=&quot;heading-environment-assumptions&quot;&gt;environment assumptions&lt;/h1&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;A locally-running Kubernetes cluster&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Pi-Hole on the network, configured as the primary DNS server&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h1 id=&quot;heading-what-is-external-dns&quot;&gt;what is external-dns?&lt;/h1&gt;&lt;p&gt;When you build an &lt;a target=&quot;_blank&quot; href=&quot;https://kubernetes.io/docs/concepts/services-networking/ingress/&quot;&gt;ingress&lt;/a&gt; (which is essentially a layer-7, host- and path-based load balancer) in Kubernetes to bring web traffic to a cluster, you specify a &lt;code&gt;host&lt;/code&gt; name. This creates a load-balancer entry in whatever load balancer you&apos;re using. If you&apos;re in a public cloud, Kubernetes will call your cloud provider&apos;s load balancer APIs. In my home lab setup, I&apos;m using &lt;a target=&quot;_blank&quot; href=&quot;https://metallb.universe.tf&quot;&gt;&lt;code&gt;metallb&lt;/code&gt;&lt;/a&gt; for this purpose. 
The ingress below listens for requests to &lt;code&gt;openweb-ui.lan&lt;/code&gt; and sends them to the &lt;code&gt;http&lt;/code&gt; service (a web UI) called &lt;code&gt;open-webui&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-attr&quot;&gt;apiVersion:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;kind:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Ingress&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;metadata:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;open-webui&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;namespace:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;openweb-ui&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;spec:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;ingressClassName:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;nginx&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;rules:&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;host:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;openweb-ui.lan&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;http:&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;paths:&lt;/span&gt;
      &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;backend:&lt;/span&gt;
          &lt;span class=&quot;hljs-attr&quot;&gt;service:&lt;/span&gt;
            &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;open-webui&lt;/span&gt;
            &lt;span class=&quot;hljs-attr&quot;&gt;port:&lt;/span&gt;
              &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;http&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;path:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;/&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;pathType:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;Prefix&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It reaches out to &lt;code&gt;metallb&lt;/code&gt; to assign an IP address from a reserved pool. The address below, &lt;code&gt;192.168.126.50&lt;/code&gt;, was handed out by &lt;code&gt;metallb&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1714591746843/f279ef68-d3be-4d02-8a4e-10669c1cd0a9.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;Now that that&apos;s in place, devices on my network need to know that this hostname resolves to the &lt;code&gt;ingress ip&lt;/code&gt;. That&apos;s where &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/kubernetes-sigs/external-dns&quot;&gt;&lt;code&gt;external-dns&lt;/code&gt;&lt;/a&gt; comes in. Typically, &lt;code&gt;external-dns&lt;/code&gt; is leveraged to write DNS records to providers like Cloudflare, Route53, etc. However, it also supports writing DNS records to your locally-installed &lt;code&gt;Pi-Hole&lt;/code&gt;!&lt;/p&gt;&lt;h1 id=&quot;heading-what-is-pi-hole&quot;&gt;what is pi-hole?&lt;/h1&gt;&lt;p&gt;It&apos;s no secret that tech workers are amongst the biggest users of ad-blocking software, and one of the major players in this space is the venerable &lt;a target=&quot;_blank&quot; href=&quot;https://pi-hole.net&quot;&gt;Pi-Hole&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;If you don&apos;t know what Pi-Hole is, it&apos;s a piece of software originally designed to run on a &lt;a target=&quot;_blank&quot; href=&quot;https://www.raspberrypi.com&quot;&gt;Raspberry Pi&lt;/a&gt; (although it runs anywhere now, even in Docker and Kubernetes) that blocks ads at the DNS level. You configure the Pi-Hole to be your DNS provider. 
When an app or website calls out to a domain associated with the configured &lt;a target=&quot;_blank&quot; href=&quot;https://github.com/topics/pihole-blocklists&quot;&gt;block list(s)&lt;/a&gt;, Pi-Hole simply responds that the advertising domain is unresolvable, and content from that domain is not shown. All the while, the useful content you were looking for is displayed!&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1714590822512/15ae8897-d37a-4152-9844-0106c916b866.png&quot; alt=&quot;image showing a list of network resources from Safari Web Inspector, showing an advertising domain not loaded.&quot; class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;p&gt;Notice in the above image that &lt;code&gt;analytics.js&lt;/code&gt; from &lt;code&gt;google-analytics.com&lt;/code&gt; isn&apos;t loading, thanks to Pi-Hole!&lt;/p&gt;&lt;p&gt;However, the feature of Pi-Hole I&apos;ll be discussing in this article is &lt;code&gt;Local DNS&lt;/code&gt;. 
We&apos;ll be using it to map a DNS record to the ingress IP on a homelab Kubernetes cluster.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;https://cdn.hashnode.com/res/hashnode/image/upload/v1714592083142/e105a185-3b4b-4ad4-930b-2cc071425672.png&quot; alt class=&quot;image--center mx-auto&quot; /&gt;&lt;/p&gt;&lt;h1 id=&quot;heading-installing-external-dns&quot;&gt;installing external-dns&lt;/h1&gt;&lt;p&gt;The Helm chart is the best way to install external-dns.&lt;/p&gt;&lt;h2 id=&quot;heading-required-information&quot;&gt;required information&lt;/h2&gt;&lt;p&gt;Collect the following:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The IP address of your &lt;code&gt;Pi-Hole&lt;/code&gt; instance (referenced as &lt;code&gt;piholeipaddress&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;code&gt;admin&lt;/code&gt; credentials for your &lt;code&gt;Pi-Hole&lt;/code&gt; instance (referenced as &lt;code&gt;piholeadminpassword&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 id=&quot;heading-add-and-update-repo&quot;&gt;add and update repo&lt;/h2&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-create-a-secret-for-pi-hole&quot;&gt;create a secret for pi-hole&lt;/h2&gt;&lt;p&gt;In this step, we&apos;ll create a Kubernetes &lt;code&gt;Secret&lt;/code&gt; for authenticating to the &lt;code&gt;Pi-Hole&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl create secret generic pihole-password \
--namespace external-dns \
--from-literal EXTERNAL_DNS_PIHOLE_PASSWORD={{piholeadminpassword}}&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-create-valuesyaml&quot;&gt;create values.yaml&lt;/h2&gt;&lt;p&gt;Next, create a &lt;code&gt;values.yaml&lt;/code&gt; to pass configuration values to the Helm chart.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-yaml&quot;&gt;&lt;span class=&quot;hljs-comment&quot;&gt;# 
https://github.com/kubernetes-sigs/external-dns/blob/master/charts/external-dns/README.md#values&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;provider:&lt;/span&gt;
  &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;pihole&lt;/span&gt;
&lt;span class=&quot;hljs-comment&quot;&gt;# https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/pihole.md#arguments&lt;/span&gt;
&lt;span class=&quot;hljs-attr&quot;&gt;env:&lt;/span&gt; &lt;span class=&quot;hljs-comment&quot;&gt;# configure your Pi-Hole password and IP address&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;EXTERNAL_DNS_PIHOLE_PASSWORD&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;valueFrom:&lt;/span&gt;
      &lt;span class=&quot;hljs-attr&quot;&gt;secretKeyRef:&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;pihole-password&lt;/span&gt;
        &lt;span class=&quot;hljs-attr&quot;&gt;key:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;EXTERNAL_DNS_PIHOLE_PASSWORD&lt;/span&gt;
  &lt;span class=&quot;hljs-bullet&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;hljs-attr&quot;&gt;name:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;EXTERNAL_DNS_PIHOLE_SERVER&lt;/span&gt;
    &lt;span class=&quot;hljs-comment&quot;&gt;# make sure NOT to put a trailing slash, as external-dns adds its own&lt;/span&gt;
    &lt;span class=&quot;hljs-attr&quot;&gt;value:&lt;/span&gt; &lt;span class=&quot;hljs-string&quot;&gt;http://{{piholeipaddress}}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-install&quot;&gt;install&lt;/h2&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;helm upgrade --install external-dns external-dns/external-dns \
--namespace external-dns \
--create-namespace \
--values values.yaml&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&quot;heading-make-sure-its-working&quot;&gt;make sure it&apos;s working&lt;/h2&gt;&lt;p&gt;If you check the deployment logs for &lt;code&gt;external-dns&lt;/code&gt;, you&apos;ll see that &lt;code&gt;external-dns&lt;/code&gt; has been hard at work creating a DNS record for your &lt;code&gt;ingress&lt;/code&gt;.&lt;/p&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;kubectl logs --namespace external-dns deployments/external-dns external-dns&lt;/code&gt;&lt;/pre&gt;&lt;pre&gt;&lt;code class=&quot;lang-bash&quot;&gt;time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:14:10Z&quot;&lt;/span&gt; level=info msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;add openweb-ui.lan IN A -&amp;gt; 192.168.126.50&quot;&lt;/span&gt;
time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:14:10Z&quot;&lt;/span&gt; level=warning msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;Skipping unsupported endpoint openweb-ui.lan TXT \&quot;heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/openweb-ui/open-webui\&quot;&quot;&lt;/span&gt;
time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:14:10Z&quot;&lt;/span&gt; level=warning msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;Skipping unsupported endpoint a-openweb-ui.lan TXT \&quot;heritage=external-dns,external-dns/owner=default,external-dns/resource=ingress/openweb-ui/open-webui\&quot;&quot;&lt;/span&gt;
time=&lt;span class=&quot;hljs-string&quot;&gt;&quot;2024-05-01T20:15:10Z&quot;&lt;/span&gt; level=info msg=&lt;span class=&quot;hljs-string&quot;&gt;&quot;All records are already up to date&quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;That&apos;s it!&lt;/p&gt;&lt;p&gt;If you have trouble, again, make sure you take out any trailing slashes on the &lt;code&gt;EXTERNAL_DNS_PIHOLE_SERVER&lt;/code&gt; environment variable. And also make sure the IP is correct! 
Don&apos;t ask me how I spent like 20 minutes wondering why &lt;code&gt;external-dns&lt;/code&gt; wouldn&apos;t connect.&lt;/p&gt;]]&gt;</hashnode:content><hashnode:coverImage>https://cdn.hashnode.com/res/hashnode/image/upload/v1714595272003/13ad7284-23a3-4d4f-a3ac-09f77482b524.png</hashnode:coverImage></item></channel></rss>