Partial Internet Problems with Starlink

If eBay, the PlayStation Network, and many other services suddenly refuse access, the cause may be a poor reputation of the IP address you use on the Internet. In my case, thanks to carrier-grade NAT (CGNAT), that address is shared with many other customers of the ISP.
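Whether one is actually behind CGNAT can often be told from the address itself: RFC 6598 reserves 100.64.0.0/10 for carrier-grade NAT. A minimal sketch (the sample addresses are illustrative):

```shell
# Sketch: test whether an IPv4 address falls into the CGNAT range
# 100.64.0.0/10 (RFC 6598), i.e. 100.64.x.x through 100.127.x.x.
in_cgnat_range() {
  ip="$1"
  first=${ip%%.*}
  second=${ip#*.}; second=${second%%.*}
  [ "$first" -eq 100 ] && [ "$second" -ge 64 ] && [ "$second" -le 127 ]
}

in_cgnat_range "100.72.13.5" && echo "CGNAT address"   # → CGNAT address
in_cgnat_range "203.0.113.7" || echo "public address"  # → public address
```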

But let's take a look:
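A DNS lookup of one of the affected services reveals which CDN fronts it; a sketch (hostname and the exact CNAME chain are illustrative and change over time):

```shell
# Sketch: follow a service's CNAME chain to spot the CDN behind it.
# Requires network access; www.ebay.com is just an example.
dig +noall +answer www.ebay.com
```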

So this is quite clearly the Akamai CDN, which is used by many services. The reputation of your own IP address can be checked at https://www.akamai.com/us/en/clientrep-lookup/. And sure enough:

AREDN: No Channel -2 on the MikroTik RB912UAG-2HPnD

On my RB912UAG-2HPnD, the frequencies/channels for 13 cm stubbornly refused to show up; this could be fixed as follows:

Telnet to the gateway (user root), then extend the file /www/cgi-bin/perlfunc.pm as follows:

...
    },
    'Mikrotik RouterBOARD RB912UAG-2HPnD' => {
      'name'            => 'Mikrotik RouterBOARD RB912UAG-2HPnD',
      'comment'         => '',
      'supported'       => '1',
      'maxpower'        => '30',
      'pwroffset'       => '0',
      'usechains'       => 1,
      'rfband'          => '2400',
    },
...

Afterwards the channels are visible. This should work the same way for any other device supported by the firmware; the matching board name can be taken from the support file.
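The board names the firmware knows are listed as 'name' entries in perlfunc.pm; a small extraction sketch (run here against a sample fragment instead of the device's /www/cgi-bin/perlfunc.pm):

```shell
# Sketch: list the board names defined in a perlfunc.pm-style hardware table.
# A sample fragment stands in for /www/cgi-bin/perlfunc.pm on the device.
cat > /tmp/perlfunc_sample.pm <<'EOF'
    'Mikrotik RouterBOARD RB912UAG-2HPnD' => {
      'name'            => 'Mikrotik RouterBOARD RB912UAG-2HPnD',
      'supported'       => '1',
      'rfband'          => '2400',
    },
EOF
# Extract every 'name' value:
sed -n "s/.*'name' *=> *'\(.*\)',/\1/p" /tmp/perlfunc_sample.pm
```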

Kubernetes Upgrade v1.9.4 -> v1.9.6 on the Raspberry Pi Cluster

Download kubeadm:
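A sketch of the download, assuming the official release binary for ARM (the URL follows the historical kubernetes-release bucket layout; verify before use):

```shell
# Fetch the kubeadm v1.9.6 binary for ARM (Raspberry Pi) and install it.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.6/bin/linux/arm/kubeadm
chmod +x kubeadm
sudo mv kubeadm /usr/bin/kubeadm
```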

Check that the version matches:
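Presumably something along the lines of:

```shell
# The freshly installed binary should report v1.9.6.
kubeadm version
```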

Roll out Kubernetes v1.9.6:
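The listing that follows is the output of kubeadm's planning step:

```shell
# Inspect the cluster and show which upgrades are available.
kubeadm upgrade plan
```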

[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.9.4
[upgrade/versions] kubeadm version: v1.9.6
[upgrade/versions] Latest stable version: v1.9.6
[upgrade/versions] Latest version in the v1.9 series: v1.9.6

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT      AVAILABLE
Kubelet     5 x v1.9.4   v1.9.6

Upgrade to the latest version in the v1.9 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.9.4    v1.9.6
Controller Manager   v1.9.4    v1.9.6
Scheduler            v1.9.4    v1.9.6
Kube Proxy           v1.9.4    v1.9.6
Kube DNS             1.14.7    1.14.7
Etcd                 3.1.11    3.1.11

You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.9.6
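The second transcript is then the result of running exactly that command:

```shell
# Apply the control-plane upgrade.
kubeadm upgrade apply v1.9.6
```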


[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.9.6"
[upgrade/versions] Cluster version: v1.9.4
[upgrade/versions] kubeadm version: v1.9.6
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.9.6"...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests443390479"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests443390479/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests443390479/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests443390479/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests090219554/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests090219554/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 0 Pods for label selector component=kube-controller-manager
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests090219554/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.9.6". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.

In the last step, the packages are updated on the master and the nodes:
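A sketch for a single machine, assuming the packages come from the official Kubernetes apt repository (which at the time pinned versions as 1.9.6-00):

```shell
# Update the Kubernetes packages to v1.9.6 on one machine.
apt-get update
apt-get install -y kubeadm=1.9.6-00 kubelet=1.9.6-00 kubectl=1.9.6-00
```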

If you are feeling really brave, you can upgrade all the nodes in a single pass:
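A sketch of that one-pass variant, assuming password-less SSH as root to the nodes (hostnames as in the cluster listing; with DRY_RUN=echo the commands are only printed):

```shell
# Upgrade kubelet/kubectl on all worker nodes in one go.
DRY_RUN=echo   # set to empty to actually execute
for node in kubenode1 kubenode2 kubenode3 kubenode4; do
  $DRY_RUN ssh "$node" "apt-get update && apt-get install -y kubelet=1.9.6-00 kubectl=1.9.6-00"
done
```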

The cluster should now be running v1.9.6:
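The listing comes from the usual node overview:

```shell
# Show all nodes with status and kubelet version.
kubectl get nodes
```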


NAME          STATUS     ROLES     AGE   VERSION
kubemaster1   Ready      master    8d    v1.9.6
kubenode1     Ready      <none>    8d    v1.9.6
kubenode2     Ready      <none>    8d    v1.9.6
kubenode3     NotReady   <none>    8d    v1.9.6
kubenode4     NotReady   <none>    6d    v1.9.6

Done.

EBS Volumes on M5/C5 EC2 Instances

When trying to mount EBS volumes under Kubernetes 1.8.3, Kube unfortunately bails out with "timeout expired waiting for volumes to attach/mount for pod xyz". The cause is that on C5 and M5 instances, EBS volumes are exposed as NVMe block devices: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html
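On such an instance, the difference is immediately visible in the block-device listing; a sketch (device names illustrative):

```shell
# On M5/C5 (Nitro) instances, attached EBS volumes appear as /dev/nvme*n1
# devices instead of the classic /dev/xvd* names.
lsblk -d -o NAME,SIZE,TYPE
```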

As things currently stand, support for this unfortunately only arrives with Kubernetes 1.9: https://github.com/kubernetes/kubernetes/pull/56607