This tutorial shows you how to package a web application into a Docker container image and run that container image on a Google Kubernetes Engine (GKE) cluster. You then deploy the web application as a load-balanced set of replicas that can scale to the needs of your users.

## Objectives

- Package a sample web application into a Docker image
- Upload the Docker image to Artifact Registry
- Create a GKE cluster
- Deploy the sample app to the cluster
- Manage autoscaling for the deployment
- Expose the sample app to the internet
- Deploy a new version of the sample app

## Costs

This tutorial uses billable components of Google Cloud. To generate a cost estimate based on your projected usage, use the Pricing Calculator.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Clean up.

## Before you begin

Take the following steps to enable the Kubernetes Engine API:

- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
- Enable the Artifact Registry and Google Kubernetes Engine APIs.

### Option A: Use Cloud Shell

You can follow this tutorial using Cloud Shell, which comes preinstalled with the gcloud, docker, and kubectl command-line tools used in this tutorial. If you use Cloud Shell, you don't need to install these command-line tools on your workstation.

To use Cloud Shell:

- Go to the Google Cloud console.
- Click the Activate Cloud Shell button at the top of the Google Cloud console window.

A Cloud Shell session opens inside a new frame at the bottom of the Google Cloud console and displays a command-line prompt.

### Option B: Use command-line tools locally

If you prefer to follow this tutorial on your workstation, follow these steps to install the necessary tools:

- Install the Google Cloud CLI.
- Using the gcloud CLI, install the Kubernetes command-line tool. kubectl is used to communicate with Kubernetes, the cluster orchestration system of GKE clusters:

  ```
  gcloud components install kubectl
  ```

- Install Docker Community Edition (CE) on your workstation. You use this to build a container image for the application.
- Install the Git source control tool to fetch the sample application from GitHub.

## Creating a repository

In this tutorial, you store an image in Artifact Registry and deploy it from the registry. Artifact Registry is the recommended container registry on Google Cloud. For this quickstart, you create a repository named hello-repo.

Set the PROJECT_ID environment variable to your Google Cloud project ID (PROJECT_ID). You'll use this environment variable when you build the container image and push it to your repository.

```
export PROJECT_ID=PROJECT_ID
```

Confirm that the PROJECT_ID environment variable has the correct value:

```
echo $PROJECT_ID
```

Set your project ID for the Google Cloud CLI:

```
gcloud config set project $PROJECT_ID
```

Output:

```
Updated property [core/project].
```

Create the hello-repo repository with the following command:

```
gcloud artifacts repositories create hello-repo \
    --repository-format=docker \
    --location=REGION \
    --description="Docker repository"
```

Replace REGION with a region for the repository, such as us-west1. To see a list of available locations, run the following command:

```
gcloud artifacts locations list
```

## Building the container image

In this tutorial, you deploy a sample web application called hello-app, a web server written in Go that responds to all requests with the message Hello, World! on port 8080.

GKE accepts Docker images as the application deployment format. Before deploying hello-app to GKE, you must package the hello-app source code as a Docker image.

To build a Docker image, you need source code and a Dockerfile. A Dockerfile contains instructions on how the image is built.

Download the hello-app source code and Dockerfile by running the following commands:

```
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/hello-app
```

Build and tag the Docker image for hello-app:

```
docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 .
```

This command instructs Docker to build the image using the Dockerfile in the current directory, save it to your local environment, and tag it with a name such as us-west1-docker.pkg.dev/my-project/hello-repo/hello-app:v1. The image is pushed to Artifact Registry in the next section.

- The PROJECT_ID variable associates the container image with the hello-repo repository in your Google Cloud project.
- The us-west1-docker.pkg.dev prefix refers to Artifact Registry, the regional host for your repository.

Run the docker images command to verify that the build was successful:

```
docker images
```

Output:

```
REPOSITORY                                                TAG   IMAGE ID       CREATED          SIZE
us-west1-docker.pkg.dev/my-project/hello-repo/hello-app   v1    25cfadb1bf28   10 seconds ago   54 MB
```

## Running your container locally (optional)

Test your container image using your local Docker engine:

```
docker run --rm -p 8080:8080 REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
```

If you're using Cloud Shell, click the Web Preview button and then select the 8080 port number. GKE opens the preview URL on its proxy service in a new browser window.

Otherwise, open a new terminal window (or a Cloud Shell tab) and run the following command to verify that the container works and responds to requests with "Hello, World!":

```
curl http://localhost:8080
```

After you've seen a successful response, you can shut down the container by pressing Ctrl+C in the tab where the docker run command is running.

## Pushing the Docker image to Artifact Registry

You must upload the container image to a registry so that your GKE cluster can download and run the container image. In this tutorial, you store your container in Artifact Registry.

Configure the Docker command-line tool to authenticate to Artifact Registry:

```
gcloud auth configure-docker REGION-docker.pkg.dev
```

Push the Docker image that you just built to the repository:

```
docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
```

## Creating a GKE cluster

Now that the Docker image is stored in Artifact Registry, create a GKE cluster to run hello-app. A GKE cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers GKE.

### Cloud Shell

Set your Compute Engine zone or region. Depending on the mode of operation that you choose to use in GKE, specify a default zone or region. If you use Standard mode, your cluster is zonal (for this tutorial), so set your default compute zone. If you use Autopilot mode, your cluster is regional, so set your default compute region. Choose the zone or region that is closest to the Artifact Registry repository you created.
Standard cluster, such as us-west1-a:

```
gcloud config set compute/zone COMPUTE_ZONE
```

Autopilot cluster, such as us-west1:

```
gcloud config set compute/region COMPUTE_REGION
```

Create a cluster named hello-cluster.

Standard cluster:

```
gcloud container clusters create hello-cluster
```

Autopilot cluster:

```
gcloud container clusters create-auto hello-cluster
```

It takes a few minutes for your GKE cluster to be created and health-checked.

After the command completes, run the following command to see the cluster's three Nodes:

```
kubectl get nodes
```

Output:

```
NAME                                           STATUS   AGE   VERSION
gke-hello-cluster-default-pool-229c0700-cbtd   Ready    92s   v1.18.12-gke.1210
gke-hello-cluster-default-pool-229c0700-fc5j   Ready    91s   v1.18.12-gke.1210
gke-hello-cluster-default-pool-229c0700-n9l7   Ready    92s   v1.18.12-gke.1210
```

### Console

- Go to the Google Kubernetes Engine page in the Google Cloud console.
- Click Create.
- Choose Standard or Autopilot mode and click Configure.
- In the Name field, enter the name hello-cluster.
- Select a zone or region:
  - Standard cluster: Under Location type, select Zonal and then select a Compute Engine zone from the Zone drop-down list, such as us-west1-a.
  - Autopilot cluster: Select a Compute Engine region from the Region drop-down list, such as us-west1.
- Click Create. This creates a GKE cluster.
- Wait for the cluster to be created. When the cluster is ready, a green check mark appears next to the cluster name.

## Deploying the sample app to GKE

You are now ready to deploy the Docker image you built to your GKE cluster.

Kubernetes represents applications as Pods, which are scalable units holding one or more containers. The Pod is the smallest deployable unit in Kubernetes. Usually, you deploy Pods as a set of replicas that can be scaled and distributed together across your cluster. One way to deploy a set of replicas is through a Kubernetes Deployment.

In this section, you create a Kubernetes Deployment to run hello-app on your cluster. This Deployment has replicas (Pods); each Deployment Pod contains only one container: the hello-app Docker image.

You also create a HorizontalPodAutoscaler resource that scales the number of Pods from 3 to a number between 1 and 5, based on CPU load.

### Cloud Shell

Ensure that you are connected to your GKE cluster:

```
gcloud container clusters get-credentials hello-cluster --zone COMPUTE_ZONE
```

Create a Kubernetes Deployment for your hello-app Docker image:

```
kubectl create deployment hello-app --image=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
```

Set the baseline number of Deployment replicas to 3:

```
kubectl scale deployment hello-app --replicas=3
```

Create a HorizontalPodAutoscaler resource for your Deployment:

```
kubectl autoscale deployment hello-app --cpu-percent=80 --min=1 --max=5
```

To see the Pods created, run the following command:

```
kubectl get pods
```

Output:

```
NAME                         READY   STATUS    RESTARTS   AGE
hello-app-784d7569bc-hgmpx   1/1     Running   0          10s
hello-app-784d7569bc-jfkz5   1/1     Running   0          10s
hello-app-784d7569bc-mnrrl   1/1     Running   0          15s
```

### Console

- Go to the Workloads page in the Google Cloud console.
- Click Deploy.
- In the Specify container section, select Existing container image.
- In the Image path field, click Select.
- In the Select container image pane, select the hello-app image you pushed to Artifact Registry and click Select.
- In the Container section, click Done, then click Continue.
- In the Configuration section, under Labels, enter app for Key and hello-app for Value.
- Under Configuration YAML, click View YAML. This opens a YAML configuration file representing the two Kubernetes API resources about to be deployed into your cluster: one Deployment, and one HorizontalPodAutoscaler for that Deployment (a sketch of these two manifests follows this list).
- Click Close, then click Deploy.
- When the Deployment Pods are ready, the Deployment details page opens. Under Managed pods, note the three running Pods for the hello-app Deployment.
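Whether you use the kubectl commands above or the console flow, the objects that end up in your cluster are a Deployment and a HorizontalPodAutoscaler. The following is a minimal sketch of what those two manifests might look like; it is not the exact YAML that GKE or kubectl generates (which includes additional defaulted fields and metadata), and the REGION and PROJECT_ID placeholders are the same ones used in the commands above.

```yaml
# Sketch only: approximates the Deployment and HorizontalPodAutoscaler
# created by the kubectl commands above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  labels:
    app: hello-app
spec:
  replicas: 3                            # baseline set by `kubectl scale`
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        # REGION and PROJECT_ID are placeholders, as in the commands above.
        image: REGION-docker.pkg.dev/PROJECT_ID/hello-repo/hello-app:v1
        ports:
        - containerPort: 8080
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app
  minReplicas: 1                         # --min=1
  maxReplicas: 5                         # --max=5
  targetCPUUtilizationPercentage: 80     # --cpu-percent=80
```

If you prefer a declarative workflow, you could save manifests like these to a file and create the same objects with kubectl apply -f instead of the imperative commands.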
## Exposing the sample app to the internet

While Pods do have individually assigned IP addresses, those IPs can only be reached from inside your cluster. Also, GKE Pods are designed to be ephemeral, starting or stopping based on scaling needs. And when a Pod crashes due to an error, GKE automatically redeploys that Pod, assigning a new Pod IP address each time.

What this means is that for any Deployment, the set of IP addresses corresponding to the active set of Pods is dynamic. You need a way to 1) group Pods together into one static hostname, and 2) expose a group of Pods outside the cluster, to the internet.

Kubernetes Services solve both of these problems. Services group Pods into one static IP address, reachable from any Pod inside the cluster. GKE also assigns a DNS hostname to that static IP, for example hello-app.default.svc.cluster.local.

The default Service type in GKE is called ClusterIP, where the Service gets an IP address reachable only from inside the cluster. To expose a Kubernetes Service outside the cluster, create a Service of type LoadBalancer. This type of Service spawns an external load balancer IP for a set of Pods, reachable through the internet.

In this section, you expose the hello-app Deployment to the internet using a Service of type LoadBalancer.

### Cloud Shell

Use the kubectl expose command to generate a Kubernetes Service for the hello-app Deployment:

```
kubectl expose deployment hello-app --name=hello-app-service --type=LoadBalancer --port 80 --target-port 8080
```

Here, the --port flag specifies the port number configured on the load balancer, and the --target-port flag specifies the port number that the hello-app container is listening on.

Run the following command to get the Service details for hello-app-service:

```
kubectl get service
```

Output:

```
NAME                CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-app-service   10.3.251.122   203.0.113.0   80:30877/TCP   10s
```

Copy the EXTERNAL-IP address to the clipboard (for instance: 203.0.113.0).

### Console

- Go to the Workloads page in the Google Cloud console.
- Click hello-app.
- From the Deployment details page, click Actions > Expose.
- In the Expose dialog, set the Target port to 8080. This is the port the hello-app container listens on.
- From the Service type drop-down list, select Load balancer.
- Click Expose to create a Kubernetes Service for hello-app.
- When the load balancer is ready, the Service details page opens. Scroll down to the External endpoints field, and copy the IP address.

Now that the hello-app Pods are exposed to the internet through a Kubernetes Service, you can open a new browser tab and navigate to the Service IP address you copied to the clipboard. A Hello, World! message appears, along with a Hostname field. The Hostname corresponds to one of the three hello-app Pods serving your HTTP request to your browser.
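For reference, the Service created above (with kubectl expose or the console's Expose dialog) is roughly equivalent to the following manifest. This is a sketch that assumes the Deployment's Pods carry the app: hello-app label, as created by kubectl create deployment; the object actually stored in the cluster contains additional fields.

```yaml
# Sketch only: approximates the Service created by `kubectl expose` above.
apiVersion: v1
kind: Service
metadata:
  name: hello-app-service
spec:
  type: LoadBalancer        # provisions an external load balancer with a public IP
  selector:
    app: hello-app          # routes traffic to the hello-app Deployment's Pods
  ports:
  - port: 80                # --port: port exposed on the load balancer
    targetPort: 8080        # --target-port: port the container listens on
```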
## Deploying a new version of the sample app

In this section, you upgrade hello-app to a new version by building and deploying a new Docker image to your GKE cluster.

GKE's rolling update feature lets you update your Deployments without downtime. During a rolling update, your GKE cluster incrementally replaces the existing hello-app Pods with Pods containing the Docker image for the new version. During the update, your load balancer service routes traffic only into available Pods.

Return to Cloud Shell, where you have cloned the hello-app source code and Dockerfile. Update the function hello() in the main.go file to report the new version 2.0.0.

Build and tag a new hello-app Docker image:

```
docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 .
```

Push the image to Artifact Registry:

```
docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2
```

Now you're ready to update your hello-app Kubernetes Deployment to use the new Docker image.

### Cloud Shell

Apply a rolling update to the existing hello-app Deployment with an image update using the kubectl set image command:

```
kubectl set image deployment/hello-app hello-app=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2
```
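The kubectl set image command performs the rolling update by rewriting the container image recorded in the Deployment's Pod template. Conceptually, it edits the fragment sketched below; you could achieve the same result by editing the Deployment manifest and re-applying it. Surrounding fields are omitted, and the placeholders match the commands above.

```yaml
# Sketch: the fragment of the hello-app Deployment that `kubectl set image` rewrites.
# Changing this field is what triggers the rolling update from v1 to v2.
spec:
  template:
    spec:
      containers:
      - name: hello-app
        image: REGION-docker.pkg.dev/PROJECT_ID/hello-repo/hello-app:v2   # previously :v1
```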
Watch as the Pods running the v1 image stop and new Pods running the v2 image start:

```
watch kubectl get pods
```

Output:

```
NAME                        READY   STATUS    RESTARTS   AGE
hello-app-89dc45f48-5bzqp   1/1     Running   0          2m42s
hello-app-89dc45f48-scm66   1/1     Running   0          2m40s
```

In a separate tab, navigate again to the hello-app-service external IP. You should now see the Version set to 2.0.0.

### Console

- Go to the Workloads page in the Google Cloud console.
- Click hello-app.
- On the Deployment details page, click Actions > Rolling update.
- In the Rolling update dialog, set the Image of hello-app field to REGION-docker.pkg.dev/PROJECT_ID/hello-repo/hello-app:v2.
- Click Update.
- On the Deployment details page, inspect the Active Revisions section. You should now see two revisions, 1 and 2. Revision 1 corresponds to the initial Deployment you created earlier. Revision 2 is the rolling update you just started.
- After a few moments, refresh the page. Under Managed pods, all of the replicas of hello-app now correspond to Revision 2.
- In a separate tab, navigate again to the Service IP address you copied. The Version should be 2.0.0.

## Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the Service: This deallocates the Cloud Load Balancer created for your Service:

```
kubectl delete service hello-app-service
```

Delete the cluster: This deletes the resources that make up the cluster, such as the compute instances, disks, and network resources:

```
gcloud container clusters delete hello-cluster --zone COMPUTE_ZONE
```

Delete your container images: This deletes the Docker images you pushed to Artifact Registry:

```
gcloud artifacts docker images delete REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 \
    --delete-tags --quiet

gcloud artifacts docker images delete REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 \
    --delete-tags --quiet
```

## What's next

- Learn about Pricing for GKE and use the Pricing Calculator to estimate costs.
- Read the Load Balancers tutorial, which demonstrates advanced load balancing configurations for web applications.
- Configure a static IP and domain name for your application.
- Explore other Kubernetes Engine tutorials.
- Explore reference architectures, diagrams, tutorials, and best practices about Google Cloud in the Cloud Architecture Center.

## Try it for yourself

If you're new to Google Cloud, create an account to evaluate how GKE performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.

Try GKE free