The PR adding weave-net as a network plugin for RKE (Rancher Kubernetes Engine) has been merged! In this post I'd like to use it to deploy a hybrid k8s cluster.

Note: it is not yet available in the current v0.0.8-dev; the weave network plugin is expected to be included in the next release.

The goal this time is for a container running on a public node and a container running on a private node to be able to communicate with each other.

The Kubernetes cluster will be kept simple: one master node (public) and one worker node (private). Here is a picture of the setup.

[Figure: cluster layout (one public master node, one private worker node)]

Prerequisites

For the initial setup of RKE and the nodes, please see this post.

Ports used

The ports used this time are listed below. Open them on the public node as needed (one way to do this is sketched after the list); nothing in particular needs to be opened on the private node.

  • k8s => TCP: 6443, 10250, 10255, 30000 - 32767
  • weave-net => TCP: 6783, UDP: 6783-6784
  • etcd => TCP: 2379 - 2380
  • ssh => TCP: 22
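
As one way of opening these on the public node, here is a minimal sketch assuming an Ubuntu host with ufw; if you are on a cloud provider, the equivalent security-group rules work just as well.

# example only: allow the ports listed above with ufw (make sure SSH is allowed before enabling)
sudo ufw allow 22/tcp
sudo ufw allow 6443/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10255/tcp
sudo ufw allow 30000:32767/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 6783/tcp
sudo ufw allow 6783:6784/udp
sudo ufw enable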

Creating the configuration file

Once RKE and the nodes are ready, create the cluster configuration file by running rke config and answering the prompts. We will proceed using the node information below as an example.

[Figure: example node information used in this walkthrough]

Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: 
Number of Hosts [3]: 2
SSH Address of host (1) [none]: 54.65.229.228
SSH Private Key Path of host (54.65.229.228) [none]: 
SSH Private Key of host (54.65.229.228) [none]: 
SSH User of host (54.65.229.228) [ubuntu]: 
Is host (54.65.229.228) a control host (y/n)? [y]: y
Is host (54.65.229.228) a worker host (y/n)? [n]: n
Is host (54.65.229.228) an Etcd host (y/n)? [n]: y
Override Hostname of host (54.65.229.228) [none]: 
Internal IP of host (54.65.229.228) [none]: 
Docker socket path on host (54.65.229.228) [/var/run/docker.sock]: 
SSH Address of host (2) [none]: 172.16.0.25
SSH Private Key Path of host (172.16.0.25) [none]: 
SSH Private Key of host (172.16.0.25) [none]: 
SSH User of host (172.16.0.25) [ubuntu]: 
Is host (172.16.0.25) a control host (y/n)? [y]: n
Is host (172.16.0.25) a worker host (y/n)? [n]: y
Is host (172.16.0.25) an Etcd host (y/n)? [n]: n
Override Hostname of host (172.16.0.25) [none]: 
Internal IP of host (172.16.0.25) [none]: 
Docker socket path on host (172.16.0.25) [/var/run/docker.sock]: 
Network Plugin Type [flannel]: weave
Authentication Strategy [x509]: 
Etcd Docker Image [quay.io/coreos/etcd:latest]: 
Kubernetes Docker image [rancher/k8s:v1.8.3-rancher2]: 
Cluster domain [cluster.local]: 
Service Cluster IP Range [10.233.0.0/18]: 
Cluster Network CIDR [10.233.64.0/18]: 
Cluster DNS Service IP [10.233.0.3]: 
Infra Container image [gcr.io/google_containers/pause-amd64:3.0]: 

Editing the configuration file

We would like to run rke up right away, but the generated configuration file (cluster.yml) needs one more edit.
Add the master node's public IP as advertise-address under extra_args of kube-api, as shown below, so that components on the private network reach the API server at the public IP rather than its default internal interface address.

kube-api:
  image: rancher/k8s:v1.8.3-rancher2
  extra_args: 
    advertise-address: 54.65.229.228
  service_cluster_ip_range: 10.233.0.0/18
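
Once the cluster is up (after rke up below), one way to confirm this setting took effect is to look at the endpoint published for the in-cluster kubernetes service; it should list the public IP:

$ kubectl --kubeconfig .kube_config_cluster.yml get endpoints kubernetes
# the ENDPOINTS column should show 54.65.229.228:6443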

That's all for the edits. The configuration file as a whole now looks like this.

# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 54.65.229.228
  internal_address: ""
  role:
  - controlplane
  - etcd
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
- address: 172.16.0.25
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: ubuntu
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
services:
  etcd:
    image: quay.io/coreos/etcd:latest
    extra_args: {}
  kube-api:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: 
      advertise-address: 54.65.229.228
    service_cluster_ip_range: 10.233.0.0/18
  kube-controller:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
    cluster_cidr: 10.233.64.0/18
    service_cluster_ip_range: 10.233.0.0/18
  scheduler:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
  kubelet:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
    cluster_domain: cluster.local
    infra_container_image: gcr.io/google_containers/pause-amd64:3.0
    cluster_dns_server: 10.233.0.3
  kubeproxy:
    image: rancher/k8s:v1.8.3-rancher2
    extra_args: {}
network:
  plugin: weave
  options: {}
auth:
  strategy: x509
  options: {}
addons: ""
system_images: {}
ssh_key_path: ~/.ssh/id_rsa

Deploy!

Everything is ready, so let's deploy!

$ rke up
INFO[0000] Building Kubernetes cluster                  
INFO[0000] [ssh] Setup tunnel for host [54.65.229.228]  
INFO[0000] [ssh] Setup tunnel for host [172.16.0.25]    
INFO[0000] [certificates] Generating kubernetes certificates 
INFO[0000] [certificates] Generating CA kubernetes certificates 
INFO[0000] [certificates] Generating Kubernetes API server certificates 
INFO[0000] [certificates] Generating Kube Controller certificates 
INFO[0001] [certificates] Generating Kube Scheduler certificates 
INFO[0001] [certificates] Generating Kube Proxy certificates 
INFO[0001] [certificates] Generating Node certificate   
INFO[0001] [certificates] Generating admin certificates and kubeconfig 
INFO[0001] [reconcile] Reconciling cluster state        
INFO[0001] [reconcile] This is newly generated cluster  
INFO[0001] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0017] Successfully Deployed local admin kubeconfig at [./.kube_config_cluster.yml] 
INFO[0017] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0017] [etcd] Building up Etcd Plane..              
INFO[0017] [etcd] Pulling Image on host [54.65.229.228] 
INFO[0019] [etcd] Successfully pulled [etcd] image on host [54.65.229.228] 
INFO[0019] [etcd] Successfully started [etcd] container on host [54.65.229.228] 
INFO[0019] [etcd] Successfully started Etcd Plane..     
INFO[0019] [controlplane] Building up Controller Plane.. 
INFO[0022] [controlplane] Pulling Image on host [54.65.229.228] 
INFO[0024] [controlplane] Successfully pulled [kube-api] image on host [54.65.229.228] 
INFO[0024] [controlplane] Successfully started [kube-api] container on host [54.65.229.228] 
INFO[0024] [controlplane] Pulling Image on host [54.65.229.228] 
INFO[0026] [controlplane] Successfully pulled [kube-controller] image on host [54.65.229.228] 
INFO[0027] [controlplane] Successfully started [kube-controller] container on host [54.65.229.228] 
INFO[0027] [controlplane] Pulling Image on host [54.65.229.228] 
INFO[0029] [controlplane] Successfully pulled [scheduler] image on host [54.65.229.228] 
INFO[0029] [controlplane] Successfully started [scheduler] container on host [54.65.229.228] 
INFO[0029] [controlplane] Successfully started Controller Plane.. 
INFO[0029] [worker] Building up Worker Plane..          
INFO[0029] [sidekick] Sidekick container already created on host [54.65.229.228] 
INFO[0029] [worker] Pulling Image on host [54.65.229.228] 
INFO[0031] [worker] Successfully pulled [kubelet] image on host [54.65.229.228] 
INFO[0032] [worker] Successfully started [kubelet] container on host [54.65.229.228] 
INFO[0032] [worker] Pulling Image on host [54.65.229.228] 
INFO[0034] [worker] Successfully pulled [kube-proxy] image on host [54.65.229.228] 
INFO[0034] [worker] Successfully started [kube-proxy] container on host [54.65.229.228] 
INFO[0034] [worker] Pulling Image on host [172.16.0.25] 
INFO[0036] [worker] Successfully pulled [nginx-proxy] image on host [172.16.0.25] 
INFO[0037] [worker] Successfully started [nginx-proxy] container on host [172.16.0.25] 
INFO[0039] [worker] Pulling Image on host [172.16.0.25] 
INFO[0041] [worker] Successfully pulled [kubelet] image on host [172.16.0.25] 
INFO[0041] [worker] Successfully started [kubelet] container on host [172.16.0.25] 
INFO[0041] [worker] Pulling Image on host [172.16.0.25] 
INFO[0044] [worker] Successfully pulled [kube-proxy] image on host [172.16.0.25] 
INFO[0044] [worker] Successfully started [kube-proxy] container on host [172.16.0.25] 
INFO[0044] [worker] Successfully started Worker Plane.. 
INFO[0044] [certificates] Save kubernetes certificates as secrets 
INFO[0062] [certificates] Successfuly saved certificates as kubernetes secret [k8s-certs] 
INFO[0062] [state] Saving cluster state to Kubernetes   
INFO[0062] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state 
INFO[0062] [network] Setting up network plugin: weave   
INFO[0062] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0062] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin 
INFO[0062] [addons] Executing deploy job..              
INFO[0067] [addons] Setting up KubeDNS                  
INFO[0067] [addons] Saving addon ConfigMap to Kubernetes 
INFO[0067] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon 
INFO[0067] [addons] Executing deploy job..              
INFO[0072] [addons] KubeDNS deployed successfully..     
INFO[0072] [addons] Setting up user addons..            
INFO[0072] [addons] No user addons configured..         
INFO[0072] Finished building Kubernetes cluster successfully 

Checking the cluster

The deployment is complete, so let's check the nodes with kubectl.

$ kubectl --kubeconfig .kube_config_cluster.yml get nodes -o wide
NAME            STATUS    AGE       VERSION           EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION
172.16.0.25     Ready     9m        v1.8.3-rancher1   <none>        Ubuntu 16.04.1 LTS   4.4.0-87-generic
54.65.229.228   Ready     10m       v1.8.3-rancher1   <none>        Ubuntu 16.04.1 LTS   4.4.0-1043-aws

Both nodes show Ready, so the cluster has come up successfully.
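
As an aside, you can export KUBECONFIG instead of passing --kubeconfig to every command, and it does not hurt to confirm that the weave-net pods came up on both nodes (commands only; output omitted here):

$ export KUBECONFIG=$PWD/.kube_config_cluster.yml
$ kubectl get pods --namespace kube-system -o wide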

Installing weave-scope

Since we are using weave-net anyway, let's also install weave-scope to visualize how the network and containers are connected.
As described here, install it into the cluster with kubectl apply.

$ kubectl --kubeconfig .kube_config_cluster.yml apply --namespace kube-system -f \
"https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')&k8s-service-type=NodePort"

serviceaccount "weave-scope" configured
clusterrole "weave-scope" configured
clusterrolebinding "weave-scope" configured
deployment "weave-scope-app" configured
service "weave-scope-app" configured
daemonset "weave-scope-agent" configured

Adding k8s-service-type=NodePort as a query parameter exposes the app as a NodePort service. Checking the services in the kube-system namespace (kubectl get svc --namespace kube-system) shows the following.

NAME              CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns          10.233.0.3    <none>        53/UDP,53/TCP   37m
weave-scope-app   10.233.0.15   <nodes>       80:31128/TCP    58s
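
If you would rather pick up the assigned NodePort programmatically instead of reading it off the table, something like the following should work, assuming the service name and namespace shown above:

$ kubectl --kubeconfig .kube_config_cluster.yml get svc weave-scope-app --namespace kube-system \
  -o jsonpath='{.spec.ports[0].nodePort}'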

In this case, opening http://172.16.0.25:31128 brings up the weave-scope Web UI shown below.

[Figure: weave-scope Web UI]
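
Since kube-proxy also runs on the master and the NodePort range 30000-32767 is already open there, the UI should in principle be reachable via the public node as well; a quick, hedged check:

$ curl -sI http://54.65.229.228:31128
# expect an HTTP response from weave-scope-app if the NodePort is reachable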

Creating a Deployment for the connectivity test

The goal is for "containers on the public node and containers on the private node to be able to communicate with each other", so let's create a busybox Deployment with replicas: 2 for the connectivity test.

$ vi busybox.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: busybox-deployment
  labels:
    app: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        stdin: true
        tty: true

Apply it, and a container is deployed to each of the two nodes.

$ kubectl --kubeconfig .kube_config_cluster.yml apply -f busybox.yaml
deployment "busybox-deployment" created
$ kubectl --kubeconfig .kube_config_cluster.yml get pod -o wide
NAME                                  READY     STATUS    RESTARTS   AGE       IP            NODE
busybox-deployment-85797d4f9b-9qfjj   1/1       Running   0          20s       10.233.96.3   54.65.229.228
busybox-deployment-85797d4f9b-sb2vc   1/1       Running   0          20s       10.233.64.3   172.16.0.25

Connectivity check

Let's open a shell into the containers from the weave-scope Web UI and ping in both directions (a command-line alternative is sketched after the screenshots).
You can see the busybox containers at the lower left.

[Figure: weave-scope view showing the busybox containers at the lower left]

  1. Ping from the container on the private node to the container on the public node

[Figure: ping from the private-node container to the public-node container]

  2. Ping from the container on the public node to the container on the private node

[Figure: ping from the public-node container to the private-node container]
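
The same round-trip check can also be run from the command line with kubectl exec, using the pod names and IPs from the get pod output above (your pod names will of course differ):

$ kubectl --kubeconfig .kube_config_cluster.yml exec busybox-deployment-85797d4f9b-sb2vc -- ping -c 3 10.233.96.3
$ kubectl --kubeconfig .kube_config_cluster.yml exec busybox-deployment-85797d4f9b-9qfjj -- ping -c 3 10.233.64.3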

This shows how easily a hybrid k8s cluster can be built with RKE and weave! Setups such as using the public node as a gateway and the private nodes as compute resources are also relatively easy to achieve, which opens up a lot of possibilities.
