This tutorial shows you how to package a web application in a Docker container image, and run that container image on a Google Kubernetes Engine (GKE) cluster. Then, you deploy the web application as a load-balanced set of replicas that can scale to the needs of your users.

## Objectives
- Package a sample web application into a Docker image

- Upload the Docker image to Artifact Registry

- Create a GKE cluster

- Deploy the sample app to the cluster

- Manage autoscaling for the deployment

- Expose the sample app to the internet

- Deploy a new version of the sample app

## Costs
This tutorial uses billable components of Google Cloud, including Google Kubernetes Engine and Artifact Registry. To generate a cost estimate based on your projected usage, use the pricing calculator.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Clean up.

## Before you begin
Take the following steps to enable the Kubernetes Engine API:
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.

- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.

- Enable the Artifact Registry and Google Kubernetes Engine APIs.


### Option A: Use Cloud Shell

You can follow this tutorial using Cloud Shell, which comes preinstalled with the `gcloud`, `docker`, and `kubectl` command-line tools used in this tutorial. If you use Cloud Shell, you don't need to install these command-line tools on your workstation.

To use Cloud Shell:

- Go to the Google Cloud console.

- Click the Activate Cloud Shell button at the top of the Google Cloud console window. A Cloud Shell session opens inside a new frame at the bottom of the Google Cloud console and displays a command-line prompt.

### Option B: Use command-line tools locally

If you prefer to follow this tutorial on your workstation, follow these steps to install the necessary tools.

- Install the Google Cloud CLI.

- Using the gcloud CLI, install the Kubernetes command-line tool. `kubectl` is used to communicate with Kubernetes, the cluster orchestration system of GKE clusters:

  ```
  gcloud components install kubectl
  ```

- Install Docker Community Edition (CE) on your workstation. You use this to build a container image for the application.

- Install the Git source control tool to fetch the sample application from GitHub.

## Create a repository
In this tutorial, you store an image in Artifact Registry and deploy it from the registry. Artifact Registry is the recommended container registry on Google Cloud. For this quickstart, you'll create a repository named `hello-repo`.

Set the `PROJECT_ID` environment variable to your Google Cloud project ID (PROJECT_ID). You'll use this environment variable when you build the container image and push it to your repository.

```
export PROJECT_ID=PROJECT_ID
```
Confirm that the `PROJECT_ID` environment variable has the correct value:

```
echo $PROJECT_ID
```
Set your project ID for the Google Cloud CLI:

```
gcloud config set project $PROJECT_ID
```

Output:

```
Updated property [core/project]
```

Create the `hello-repo` repository with the following command:

```
gcloud artifacts repositories create hello-repo \
    --repository-format=docker \
    --location=REGION \
    --description="Docker repository"
```

Replace REGION with a region for the repository, such as us-west1. To see a list of available locations, run the following command:

```
gcloud artifacts locations list
```
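Optionally, you can confirm that the repository was created before pushing any images. This verification step isn't part of the original tutorial; it assumes the same REGION placeholder used above:

```
gcloud artifacts repositories describe hello-repo --location=REGION
```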
## Building the container image
In this tutorial, you deploy a sample web application called `hello-app`, a web server written in Go that responds to all requests with the message Hello, World! on port 8080.

GKE accepts Docker images as the application deployment format.

Before deploying `hello-app` to GKE, you must package the `hello-app` source code as a Docker image.

To build a Docker image, you need source code and a Dockerfile. A Dockerfile contains instructions on how the image is built.

Download the `hello-app` source code and Dockerfile by running the following commands:

```
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/hello-app
```
Build and tag the Docker image for `hello-app`:

```
docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 .
```

This command instructs Docker to build the image using the Dockerfile in the current directory, save it to your local environment, and tag it with a name, such as us-west1-docker.pkg.dev/my-project/hello-repo/hello-app:v1. The image is pushed to Artifact Registry in the next section.

- The `PROJECT_ID` variable associates the container image with the `hello-repo` repository in your Google Cloud project.

- The `us-west1-docker.pkg.dev` prefix refers to Artifact Registry, the regional host for your repository.

Run the `docker images` command to verify that the build was successful:

```
docker images
```
Output:

```
REPOSITORY                                                 TAG   IMAGE ID       CREATED          SIZE
us-west1-docker.pkg.dev/my-project/hello-repo/hello-app    v1    25cfadb1bf28   10 seconds ago   54 MB
```
## Running your container locally (optional)
Test your container image using your local Docker engine:
```
docker run --rm -p 8080:8080 REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
```
If you're using Cloud Shell, click the Web Preview button and then select the 8080 port number. GKE opens the preview URL on its proxy service in a new browser window.

Otherwise, open a new terminal window (or a Cloud Shell tab) and run the following command to verify that the container works and responds to requests with "Hello, World!":

```
curl http://localhost:8080
```
After you've seen a successful response, you can shut down the container by pressing Ctrl+C in the tab where the `docker run` command is running.

## Pushing the Docker image to Artifact Registry
You must upload the container image to a registry so that your GKE cluster can download and run the container image. In this tutorial, you will store your container in Artifact Registry.

Configure the Docker command-line tool to authenticate to Artifact Registry:
```
gcloud auth configure-docker REGION-docker.pkg.dev
```
Push the Docker image that you just built to the repository:
```
docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
```
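If you want to confirm that the push succeeded, you can list the images stored in the repository. This is an optional check, not one of the original steps, and it assumes the PROJECT_ID environment variable is still set:

```
gcloud artifacts docker images list REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo
```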
## Creating a GKE cluster
Now that the Docker image is stored in Artifact Registry, create a GKE cluster to run `hello-app`. A GKE cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers GKE.

### Cloud Shell

Set your Compute Engine zone or region. Depending on the mode of operation that you choose to use in GKE, specify a default zone or region. If you use the Standard mode, your cluster is zonal (for this tutorial), so set your default compute zone. If you use the Autopilot mode, your cluster is regional, so set your default compute region. Choose a zone or region that is closest to the Artifact Registry repository you created.

- Standard cluster, such as us-west1-a:

  ```
  gcloud config set compute/zone COMPUTE_ZONE
  ```

- Autopilot cluster, such as us-west1:

  ```
  gcloud config set compute/region COMPUTE_REGION
  ```
Create a cluster named `hello-cluster`:

- Standard cluster:

  ```
  gcloud container clusters create hello-cluster
  ```

- Autopilot cluster:

  ```
  gcloud container clusters create-auto hello-cluster
  ```

It takes a few minutes for your GKE cluster to be created and health-checked.

After the command completes, run the following command to see the cluster's three Nodes:

```
kubectl get nodes
```

Output:

```
NAME                                           STATUS   ROLES    AGE   VERSION
gke-hello-cluster-default-pool-229c0700-cbtd   Ready    <none>   92s   v1.18.12-gke.1210
gke-hello-cluster-default-pool-229c0700-fc5j   Ready    <none>   91s   v1.18.12-gke.1210
gke-hello-cluster-default-pool-229c0700-n9l7   Ready    <none>   92s   v1.18.12-gke.1210
```
### Console

- Go to the Google Kubernetes Engine page in the Google Cloud console.

- Click Create.

- Choose Standard or Autopilot mode and click Configure.

- In the Name field, enter the name hello-cluster.

- Select a zone or region:

  - Standard cluster: Under Location type, select Zonal, and then select a Compute Engine zone from the Zone drop-down list, such as us-west1-a.

  - Autopilot cluster: Select a Compute Engine region from the Region drop-down list, such as us-west1.

- Click Create. This creates a GKE cluster.

- Wait for the cluster to be created. When the cluster is ready, a green check mark appears next to the cluster name.

## Deploying the sample app to GKE
You are now ready to deploy the Docker image you built to your GKE cluster.

Kubernetes represents applications as Pods, which are scalable units holding one or more containers. The Pod is the smallest deployable unit in Kubernetes. Usually, you deploy Pods as a set of replicas that can be scaled and distributed together across your cluster. One way to deploy a set of replicas is through a Kubernetes Deployment.

In this section, you create a Kubernetes Deployment to run `hello-app` on your cluster. This Deployment has replicas (Pods). One Deployment Pod contains only one container: the `hello-app` Docker image.

You also create a HorizontalPodAutoscaler resource that scales the number of Pods from 3 to a number between 1 and 5, based on CPU load.

### Cloud Shell

Ensure that you are connected to your GKE cluster:

```
gcloud container clusters get-credentials hello-cluster --zone COMPUTE_ZONE
```
Create a Kubernetes Deployment for your `hello-app` Docker image:

```
kubectl create deployment hello-app --image=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
```
Set the baseline number of Deployment replicas to 3:

```
kubectl scale deployment hello-app --replicas=3
```
Create a HorizontalPodAutoscaler resource for your Deployment:

```
kubectl autoscale deployment hello-app --cpu-percent=80 --min=1 --max=5
```
To see the Pods created, run the following command:
```
kubectl get pods
```

Output:

```
NAME                         READY   STATUS    RESTARTS   AGE
hello-app-784d7569bc-hgmpx   1/1     Running   0          10s
hello-app-784d7569bc-jfkz5   1/1     Running   0          10s
hello-app-784d7569bc-mnrrl   1/1     Running   0          15s
```
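You can also inspect the autoscaler you just created. This optional check isn't part of the original tutorial, but it shows the CPU target and replica bounds the HorizontalPodAutoscaler is working with:

```
kubectl get hpa hello-app
```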
### Console

- Go to the Workloads page in the Google Cloud console.

- Click Deploy.

- In the Specify container section, select Existing container image.

- In the Image path field, click Select.

- In the Select container image pane, select the hello-app image you pushed to Artifact Registry and click Select.

- In the Container section, click Done, then click Continue.

- In the Configuration section, under Labels, enter app for Key and hello-app for Value.

- Under Configuration YAML, click View YAML. This opens a YAML configuration file representing the two Kubernetes API resources about to be deployed into your cluster: one Deployment, and one HorizontalPodAutoscaler for that Deployment.

- Click Close, then click Deploy.

- When the Deployment Pods are ready, the Deployment details page opens.

- Under Managed pods, note the three running Pods for the hello-app Deployment.

## Exposing the sample app to the internet
While Pods do have individually assigned IP addresses, those IPs can only be reached from inside your cluster. Also, GKE Pods are designed to be ephemeral, starting or stopping based on scaling needs. And when a Pod crashes due to an error, GKE automatically redeploys that Pod, assigning a new Pod IP address each time.

What this means is that for any Deployment, the set of IP addresses corresponding to the active set of Pods is dynamic. We need a way to 1) group Pods together into one static hostname, and 2) expose a group of Pods outside the cluster, to the internet.

Kubernetes Services solve both of these problems.

Services group Pods into one static IP address, reachable from any Pod inside the cluster. GKE also assigns a DNS hostname to that static IP, for example, hello-app.default.svc.cluster.local.

The default Service type in GKE is called ClusterIP, where the Service gets an IP address reachable only from inside the cluster. To expose a Kubernetes Service outside the cluster, create a Service of type LoadBalancer. This type of Service spawns an External Load Balancer IP for a set of Pods, reachable through the internet.

In this section, you expose the `hello-app` Deployment to the internet using a Service of type LoadBalancer.

### Cloud Shell

Use the `kubectl expose` command to generate a Kubernetes Service for the `hello-app` deployment:

```
kubectl expose deployment hello-app --name=hello-app-service --type=LoadBalancer --port 80 --target-port 8080
```
Here, the `--port` flag specifies the port number configured on the Load Balancer, and the `--target-port` flag specifies the port number that the `hello-app` container is listening on.

Run the following command to get the Service details for `hello-app-service`:

```
kubectl get service
```

Output:

```
NAME                CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-app-service   10.3.251.122   203.0.113.0   80:30877/TCP   10s
```
Copy the EXTERNAL_IP address to the clipboard (for instance: 203.0.113.0).

### Console

- Go to the Workloads page in the Google Cloud console.

- Click hello-app.

- From the Deployment details page, click Actions > Expose.

- In the Expose dialog, set the Target port to 8080. This is the port the hello-app container listens on.

- From the Service type drop-down list, select Load balancer.

- Click Expose to create a Kubernetes Service for hello-app.

- When the Load Balancer is ready, the Service details page opens.

- Scroll down to the External endpoints field, and copy the IP address.

Now that the `hello-app` Pods are exposed to the internet through a Kubernetes Service, you can open a new browser tab and navigate to the Service IP address you copied to the clipboard. A Hello, World! message appears, along with a Hostname field. The Hostname corresponds to one of the three `hello-app` Pods serving your HTTP request to your browser.
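If you prefer the command line, you can run the same check with curl. This is a sketch rather than an original tutorial step; EXTERNAL_IP stands for the address you copied, and the exact response fields depend on the sample app:

```
curl http://EXTERNAL_IP
```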

## Deploying a new version of the sample app
In this section, you upgrade `hello-app` to a new version by building and deploying a new Docker image to your GKE cluster.

GKE's rolling update feature lets you update your Deployments without downtime. During a rolling update, your GKE cluster incrementally replaces the existing `hello-app` Pods with Pods containing the Docker image for the new version.

During the update, your load balancer service routes traffic only into available Pods.

Return to Cloud Shell, where you have cloned the `hello-app` source code and Dockerfile. Update the `hello()` function in the `main.go` file to report the new version, 2.0.0.

Build and tag a new `hello-app` Docker image:

```
docker build -t REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 .
```

Push the image to Artifact Registry:

```
docker push REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2
```
Now you're ready to update your `hello-app` Kubernetes Deployment to use a new Docker image.

### Cloud Shell

Apply a rolling update to the existing `hello-app` Deployment with an image update using the `kubectl set image` command:

```
kubectl set image deployment/hello-app hello-app=REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2
```
Watch the Pods running the v1 image stop, and new Pods running the v2 image start:

```
watch kubectl get pods
```

Output:

```
NAME                        READY   STATUS    RESTARTS   AGE
hello-app-89dc45f48-5bzqp   1/1     Running   0          2m42s
hello-app-89dc45f48-scm66   1/1     Running   0          2m40s
```
In a separate tab, navigate again to the `hello-app-service` External IP. You should now see the Version set to 2.0.0.
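Another optional way to follow the rollout, not shown in the original steps, is kubectl's built-in rollout tracking, which blocks until the Deployment finishes updating:

```
kubectl rollout status deployment/hello-app
```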

### Console

- Go to the Workloads page in the Google Cloud console.

- Click hello-app.

- On the Deployment details page, click Actions > Rolling update.

- In the Rolling update dialog, set the Image of hello-app field to REGION-docker.pkg.dev/PROJECT_ID/hello-repo/hello-app:v2.

- Click Update.

- On the Deployment details page, inspect the Active Revisions section. You should now see two Revisions, 1 and 2. Revision 1 corresponds to the initial Deployment you created earlier. Revision 2 is the rolling update you just started.

- After a few moments, refresh the page. Under Managed pods, all of the replicas of hello-app now correspond to Revision 2.

- In a separate tab, navigate again to the Service IP address you copied. The Version should be 2.0.0.

## Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the Service: This deallocates the Cloud Load Balancer created for your Service:

```
kubectl delete service hello-app-service
```

Delete the cluster: This deletes the resources that make up the cluster, such as the compute instances, disks, and network resources:

```
gcloud container clusters delete hello-cluster --zone COMPUTE_ZONE
```

Delete your container images: This deletes the Docker images you pushed to Artifact Registry:

```
gcloud artifacts docker images delete \
    REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 \
    --delete-tags --quiet

gcloud artifacts docker images delete \
    REGION-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 \
    --delete-tags --quiet
```
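If you also want to remove the repository itself, you can delete it as well. This extra step isn't in the original clean-up list; it assumes the hello-repo repository and REGION placeholder from earlier:

```
gcloud artifacts repositories delete hello-repo --location=REGION
```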
## What's next
- Learn about Pricing for GKE and use the Pricing Calculator to estimate costs.

- Read the Load Balancers tutorial, which demonstrates advanced load balancing configurations for web applications.

- Configure a static IP and domain name for your application.

- Explore other Kubernetes Engine tutorials.

- Explore reference architectures, diagrams, tutorials, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.

## Try it for yourself
If you're new to Google Cloud, create an account to evaluate how GKE performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Try GKE free.