Jbn1233

$ kubectl get svc echo3-service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
echo3-service   ClusterIP   10.96.179.217   <none>        80/TCP    5d17h
$ kubectl get endpoints echo3-service
NAME            ENDPOINTS                               AGE
echo3-service   10.169.198.255:8080,10.169.4.216:8080   5d17h
$ kubectl exec -it calico-node-2vnrk -c calico-node -n kube-system -- calico-node -bpf nat dump | grep -A 5 10.96.179.217
10.96.179.217 port 80 proto 6 id 116 count 2 local 0
        116:0 10.169.198.255:8080
        116:1 10.169.4.216:8080
192.168.0.13 port 32313 proto 6 id 24 count 1 local 1
        24:0 192.168.0.13:443
10.96.0.1 port 443 proto 6 id 5 count 1 local 0

No more kube-proxy.
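
For reference, the eBPF dataplane that produces these NAT entries is usually switched on by patching Felix and then side-lining kube-proxy. A sketch, assuming a manifest-based Calico install with calicoctl available and the kubernetes-services-endpoint ConfigMap already pointing Calico at the API server:

# Enable Calico's eBPF dataplane (sketch, not the full procedure)
calicoctl patch felixconfiguration default --patch='{"spec": {"bpfEnabled": true}}'
# Keep kube-proxy off the nodes so it stops programming service rules
kubectl patch ds -n kube-system kube-proxy -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": "true"}}}}}'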

That's all.

--

--

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
  name: kong
spec:
  controller: konghq.com/ingress-controller

The important part is the annotation ingressclass.kubernetes.io/is-default-class: "true".
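
With a default class set, an Ingress that omits ingressClassName is picked up by Kong automatically. A minimal sketch, reusing the echo3-service from the first note (the host name is just an example):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo3
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo3-service
            port:
              number: 80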

That's all.

--

--

To run Longhorn storage only on specific nodes:

  • Download the YAML manifest and update the Longhorn settings ConfigMap with create-default-disk-labeled-nodes=true
  • Apply the YAML
  • Add the label node.longhorn.io/create-default-disk=true to your dedicated storage nodes (see the sketch after this list). Do nothing on the other nodes.
  • Done
Nodes 001–003 are the storage nodes and the others are compute nodes.
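
A sketch of the labeling step; the node names are placeholders based on the description above:

# Label only the dedicated storage nodes so Longhorn creates default disks there
kubectl label nodes node001 node002 node003 node.longhorn.io/create-default-disk=true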

For better results, you may also apply a taint to the Longhorn storage nodes so regular workloads stay off them.
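
A sketch with a hypothetical taint key; Longhorn itself then needs a matching entry in its Taint Toleration setting:

# Hypothetical taint key/value; mirror it in Longhorn's taint-toleration setting
kubectl taint nodes node001 node002 node003 storage=longhorn:NoSchedule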

That’s all

--

--

Sometimes gathering facts takes a long time.

Try this in ansible.cfg:

[defaults]
gather_subset = min,network,!hardware
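
The same trimming can also be done per play; a minimal playbook sketch (host group and task are placeholders):

- hosts: all
  gather_facts: true
  gather_subset:
    - min
    - network
    - "!hardware"
  tasks:
    # Network facts are still available even with hardware facts skipped
    - ansible.builtin.debug:
        var: ansible_default_ipv4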

Next, it looks weird, but this helps me avoid dropped sessions and host-key prompts:

ansible_ssh_common_args="-o ServerAliveInterval=15 -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
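
One place this can live is the inventory as a group variable; a sketch with placeholder hosts:

[lab]
s201 ansible_host=192.168.0.201
s202 ansible_host=192.168.0.202

[lab:vars]
ansible_ssh_common_args='-o ServerAliveInterval=15 -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'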

Hope this helps.

--

--

Velero is a great tool for full backup/restore of your k8s cluster, but sometimes you need a clean reinstall. This is how:

#!/bin/bash
# Remove the existing install plus any leftover ResticRepository objects
velero uninstall
kubectl delete ns velero
kubectl delete ResticRepository -n velero $(kubectl get ResticRepository -n velero -o jsonpath='{.items[*].metadata.name}')
# Reinstall against an S3-compatible (MinIO) backup location and follow the logs
velero install \
-n velero \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file ./credentials-velero \
--default-volumes-to-restic=true \
--use-volume-snapshots=true \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=https://velero.myvelero.io \
--use-restic --wait -n velero ; kubectl logs -f deployment/velero -n velero

Verify the backup storage location:

kubectl get BackupStorageLocation default -n velero  -o yaml
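
Optionally, a quick smoke test once the location reports as Available (the namespace is just an example):

velero backup create test-backup --include-namespaces default --wait
velero backup describe test-backup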

Done.

--

--

VirtualBox is my best friend for Ansible testing, and I need to re-create VMs all the time. This is how:

cd C:\Program Files\Oracle\VirtualBox
VBoxManage clonevm s201 --name="s201c1" --snapshot s1 --options=keepallmacs,Link --register
VBoxManage clonevm s202 --name="s202c1" --snapshot s1 --options=keepallmacs,Link --register
VBoxManage clonevm s203 --name="s203c1" --snapshot s1 --options=keepallmacs,Link --register
VBoxManage clonevm w204 --name="w204c1" --snapshot s1 --options=keepallmacs,Link --register
VBoxManage startvm s201c1 --type headless
VBoxManage startvm s202c1 --type headless
VBoxManage startvm s203c1 --type headless
VBoxManage startvm w204c1 --type headless
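
When a test round is over, the linked clones can be torn down before re-cloning; a sketch using the same VM names:

REM Power off and delete a linked clone (repeat for s202c1, s203c1, w204c1)
VBoxManage controlvm s201c1 poweroff
VBoxManage unregistervm s201c1 --delete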

That's all.

--

--


Very short and simple notes for CKA/SRE; they may not work in your environment.