This is a step-by-step guide on setting up Kubernetes on Scaleway bare-metal ARM and x86-64. My main reason for working on this project was that I wanted to automate the creation of test environments for OpenFaaS and Weave Net on ARM. I was looking for an affordable way to run integration tests, and after trying out several cloud providers I settled on Scaleway. Scaleway is a French cloud provider that offers bare-metal ARM and x86-64 servers at affordable prices. Using the Terraform Scaleway provider and kubeadm, you can have a fully functional Kubernetes cluster up and running in about ten minutes.

## Initial setup

Clone the repository and install the dependencies:

```bash
$ git clone https://github.com/stefanprodan/k8s-scw-baremetal.git
$ cd k8s-scw-baremetal
$ terraform init
```

Note that you'll need Terraform v0.10 or newer to run this project.

Before running the project you'll have to create an access token for Terraform to connect to the Scaleway API. Using the token and your access key, create two environment variables:

```bash
$ export SCALEWAY_ORGANIZATION="<ACCESS-KEY>"
$ export SCALEWAY_TOKEN="<ACCESS-TOKEN>"
```

## Usage

Create an ARMv7 bare-metal Kubernetes cluster with one master and two nodes:

```bash
$ terraform workspace new arm

$ terraform apply \
 -var region=par1 \
 -var arch=arm \
 -var server_type=C1 \
 -var nodes=2 \
 -var weave_passwd=ChangeMe \
 -var k8s_version=stable-1.9 \
 -var docker_version=17.03.0~ce-0~ubuntu-xenial
```

This will do the following:

- reserves a public IP for each server
- provisions three bare-metal servers with Ubuntu 16.04.1 LTS
- connects to the master server via SSH and installs the Docker CE and kubeadm armhf apt packages
- runs kubeadm init on the master server and configures kubectl
- downloads the kubectl admin config file to your local machine and replaces the private IP with the public one
- creates a Kubernetes secret with the Weave Net password
- installs Weave Net with the encrypted overlay
- installs cluster add-ons (Kubernetes dashboard, metrics server and Heapster)
- starts the worker nodes in parallel and installs Docker CE and kubeadm
- joins the worker nodes to the cluster using the kubeadm token obtained from the master

Scale up by increasing the number of nodes:

```bash
$ terraform apply -var nodes=3
```

Tear down the whole infrastructure with:

```bash
$ terraform destroy -force
```

Create an AMD64 bare-metal Kubernetes cluster with one master and one node:

```bash
$ terraform workspace new amd64

$ terraform apply \
 -var region=par1 \
 -var arch=x86_64 \
 -var server_type=C2S \
 -var nodes=1 \
 -var weave_passwd=ChangeMe \
 -var k8s_version=stable-1.9 \
 -var docker_version=17.03.0~ce-0~ubuntu-xenial
```

## Remote control

After applying the Terraform plan you'll see several output variables, such as the master public IP, the kubeadm join command and the kubectl admin config of the current workspace.

In order to run kubectl commands against the Scaleway cluster, you can use the kubectl_config output variable.

Check if Heapster works:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) top nodes

NAME           CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
arm-master-1   655m         16%       873Mi           45%
arm-node-1     147m         3%        618Mi           32%
arm-node-2     101m         2%        584Mi           30%
```

The kubectl config file format is `<WORKSPACE>.conf`, as in `arm.conf` or `amd64.conf`.

In order to access the dashboard you'll need to find its cluster IP:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 -n kube-system get svc --selector=k8s-app=kubernetes-dashboard

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.107.37.220   <none>        80/TCP    6m
```

Open a SSH tunnel:

```bash
$ ssh -L 8888:10.107.37.220:80 root@<MASTER-PUBLIC-IP>
```

Now you can access the dashboard on your computer at `http://localhost:8888`.

## Expose services outside the cluster

Since we're running on bare-metal and Scaleway doesn't offer a load balancer, the easiest way to expose applications outside of Kubernetes is with a NodePort service: Kubernetes opens the same port on every node and routes traffic from that port to the service, so the app is reachable on any node's public IP.
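To make the NodePort idea concrete, here is roughly what such a service manifest looks like. This is a sketch, not the exact file from the k8s-podinfo repo we'll apply below; the port numbers simply mirror the service we'll inspect in a moment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo-nodeport
spec:
  type: NodePort
  selector:
    app: podinfo
  ports:
  - port: 9898        # service port inside the cluster
    targetPort: 9898  # container port serving the app
    nodePort: 31190   # opened on every node; must fall in the default 30000-32767 range
```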
Let's deploy the podinfo app in the default namespace. Podinfo has a multi-arch Docker image and it will work on arm, arm64 or amd64.

Create the podinfo NodePort service:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 apply -f https://raw.githubusercontent.com/stefanprodan/k8s-podinfo/master/deploy/auto-scaling/podinfo-svc-nodeport.yaml

service "podinfo-nodeport" created
```

Create the podinfo deployment:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 apply -f https://raw.githubusercontent.com/stefanprodan/k8s-podinfo/master/deploy/auto-scaling/podinfo-dep.yaml

deployment "podinfo" created
```

Inspect the podinfo service to obtain the port number:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 get svc --selector=app=podinfo

NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
podinfo-nodeport   NodePort   10.104.132.14   <none>        9898:31190/TCP   3m
```

You can access podinfo at `http://<MASTER_PUBLIC_IP>:31190` or using curl:

```bash
$ curl http://$(terraform output k8s_master_public_ip):31190

runtime:
  arch: arm
  max_procs: "4"
  num_cpu: "4"
  num_goroutine: "12"
  os: linux
  version: go1.9.2
labels:
  app: podinfo
  pod-template-hash: "1847780700"
annotations:
  kubernetes.io/config.seen: 2018-01-08T00:39:45.580597397Z
  kubernetes.io/config.source: api
environment:
  HOME: /root
  HOSTNAME: podinfo-5d8ccd4c44-zrczc
  KUBERNETES_PORT: tcp://10.96.0.1:443
  KUBERNETES_PORT_443_TCP: tcp://10.96.0.1:443
  KUBERNETES_PORT_443_TCP_ADDR: 10.96.0.1
  KUBERNETES_PORT_443_TCP_PORT: "443"
  KUBERNETES_PORT_443_TCP_PROTO: tcp
  KUBERNETES_SERVICE_HOST: 10.96.0.1
  KUBERNETES_SERVICE_PORT: "443"
  KUBERNETES_SERVICE_PORT_HTTPS: "443"
  PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
externalIP:
  IPv4: 163.172.139.112
```

## OpenFaaS

You can deploy OpenFaaS on Kubernetes with Helm or by using the YAML files from the faas-netes repository.

Clone the faas-netes repo:

```bash
$ git clone https://github.com/openfaas/faas-netes
$ cd faas-netes
```

Deploy OpenFaaS for ARM:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 apply -f ./namespaces.yml,./yaml_armhf
```

Deploy OpenFaaS for AMD64:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 apply -f ./namespaces.yml,./yaml
```

You can access the OpenFaaS gateway at `http://<MASTER_PUBLIC_IP>:31112`.

## Horizontal Pod Autoscaling

Starting with Kubernetes 1.9, kube-controller-manager is configured by default with `horizontal-pod-autoscaler-use-rest-clients`. In order to use HPA we need to install the metrics server to enable the new metrics API used by HPA v2. Both Heapster and the metrics server were deployed from Terraform when the master node was provisioned.

The metrics server collects resource usage data from each node using the Kubelet Summary API.
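If you're curious about the raw per-node data behind those aggregates, the kubelet's Summary API can be queried through the API server's node proxy. This is a sketch, assuming your admin credentials are allowed to proxy to nodes; the node name is one from the ARM cluster above:

```bash
# Query the kubelet Summary API on arm-node-1 via the API server proxy
# and print just the node-level CPU stats.
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 get --raw /api/v1/nodes/arm-node-1/proxy/stats/summary | jq '.node.cpu'
```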
Check if the metrics server is running:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq
```

```json
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "arm-master-1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/arm-master-1",
        "creationTimestamp": "2018-01-08T15:17:09Z"
      },
      "timestamp": "2018-01-08T15:17:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "384m",
        "memory": "935792Ki"
      }
    },
    {
      "metadata": {
        "name": "arm-node-1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/arm-node-1",
        "creationTimestamp": "2018-01-08T15:17:09Z"
      },
      "timestamp": "2018-01-08T15:17:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "130m",
        "memory": "649020Ki"
      }
    },
    {
      "metadata": {
        "name": "arm-node-2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/arm-node-2",
        "creationTimestamp": "2018-01-08T15:17:09Z"
      },
      "timestamp": "2018-01-08T15:17:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "120m",
        "memory": "614180Ki"
      }
    }
  ]
}
```

Let's define a HPA that will maintain a minimum of two replicas and will scale up to ten if the CPU average goes over 80% or if the memory goes over 200Mi:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 200Mi
```

Apply the podinfo HPA:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) \
 apply -f https://raw.githubusercontent.com/stefanprodan/k8s-podinfo/master/deploy/auto-scaling/podinfo-hpa.yaml

horizontalpodautoscaler "podinfo" created
```

After a couple of seconds the HPA controller will contact the metrics server and fetch the CPU and memory usage:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) get hpa

NAME      REFERENCE            TARGETS                      MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   2826240 / 200Mi, 15% / 80%   2         10        2          5m
```

In order to increase the CPU usage we could run a load test with hey:

```bash
# install hey
go get -u github.com/rakyll/hey

# do 10K requests rate limited at 10 QPS per worker, with 5 workers
hey -n 10000 -q 10 -c 5 http://$(terraform output k8s_master_public_ip):31190
```

You can monitor the autoscaler events with:

```bash
$ kubectl --kubeconfig $(terraform output kubectl_config) describe hpa

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  7m    horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  3m    horizontal-pod-autoscaler  New size: 8; reason: cpu resource utilization (percentage of request) above target
```

After the load test finishes, the autoscaler will remove replicas until the deployment reaches the initial replica count:

```
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  20m   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  16m   horizontal-pod-autoscaler  New size: 8; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  12m   horizontal-pod-autoscaler  New size: 10; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  6m    horizontal-pod-autoscaler  New size: 2; reason: All metrics below target
```
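Those rescale steps follow the HPA controller's documented scaling formula. As a quick sanity check, here is the arithmetic behind the first event; the ~160% average utilization is an illustrative assumption, not a value measured during the test:

```
desiredReplicas = ceil( currentReplicas * currentMetricValue / targetMetricValue )

# with 2 replicas at ~160% average CPU utilization against the 80% target:
#   ceil( 2 * 160 / 80 ) = 4   -> matches the "New size: 4" event above
```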
## Conclusions

Thanks to kubeadm and Terraform, bootstrapping a Kubernetes cluster on bare-metal can be done with a single command, and it takes just ten minutes to have a fully functional setup. If you have any suggestions on improving this guide, please submit an issue or PR on GitHub at stefanprodan/k8s-scw-baremetal. Contributions are more than welcome!