Manage Kubernetes in Google Cloud: Challenge Lab
Technology Apr 11, 2026


Lab Link: https://www.skills.google/course_templates/783/labs/612117

Challenge scenario

You were onboarded at Cymbal Shops just a few months ago. You have spent a lot of time working with containers in Docker and Artifact Registry and have learned the ropes of managing new and existing deployments on GKE. You've had practice updating manifests as well as scaling, monitoring, and debugging applications running on your clusters.

Your team would like you to start managing their Kubernetes deployments to ensure smooth rollouts and application updates to their new e-commerce website. Before you switch to this new role, the development team would like you to demonstrate your new skills. As part of this demonstration, they have a list of tasks they would like to see you do in an allotted period of time in a sandbox environment.

Preparation

1. Open Cloud Shell.

2. Copy the block below into a text editor (like Notepad), replace the bracketed [VALUES] with your lab's specific details, and then paste it into Cloud Shell. (The values can be found highlighted in yellow in the lab instructions.)

# The values can be found highlighted in yellow in the lab instructions.
export CLUSTER_NAME="[YOUR_CLUSTER_NAME]"
export ZONE="[YOUR_ZONE]"
export NAMESPACE="[YOUR_NAMESPACE]"
export INTERVAL="[YOUR_INTERVAL]"       # E.g., 15s or 30s
export REPO_NAME="[YOUR_REPO_NAME]"
export SERVICE_NAME="[YOUR_SERVICE_NAME]"

# These are set automatically
export PROJECT_ID=$(gcloud config get-value project)
export REGION="${ZONE%-*}"
gcloud config set compute/zone $ZONE
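
As a side note, the REGION line above relies on shell parameter expansion: `${ZONE%-*}` strips the shortest trailing `-<suffix>` from the zone name, leaving the region. A standalone sketch with an example value:

```shell
# Example only: substitute your lab's actual zone.
ZONE="us-central1-a"
REGION="${ZONE%-*}"   # removes the shortest trailing "-..." match
echo "$REGION"        # prints: us-central1
```

This is why the zone must be a full zone name (region plus a one-letter suffix); passing a bare region would strip too much.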

Task 1: Create a GKE Cluster

We will build a cluster with specific sizing, release channels, and autoscaling configured.

1. Run the following command to create the cluster. (This will take 5-8 minutes to complete.)

gcloud container clusters create $CLUSTER_NAME \
   --zone=$ZONE \
   --release-channel=regular \
   --cluster-version=latest \
   --enable-autoscaling \
   --num-nodes=3 \
   --min-nodes=2 \
   --max-nodes=6 \
   --no-enable-ip-alias

2. Once the cluster is built, authenticate kubectl to it.

gcloud container clusters get-credentials $CLUSTER_NAME --zone=$ZONE
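
To confirm kubectl is now pointed at the new cluster, we can list the nodes and check how many are Ready. The awk helper below is our own small parsing sketch for the tabular output, not part of the lab:

```shell
# Count rows whose STATUS column (field 2) is "Ready",
# skipping the header line of `kubectl get nodes` output.
count_ready_nodes() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# On the live cluster (should print 3 right after creation):
#   kubectl get nodes | count_ready_nodes
```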

Task 2: Enable Managed Prometheus

We need to enable managed metrics, create the designated namespace, and deploy a Prometheus sample application with a Pod Monitoring resource.

1. Enable Managed Prometheus on the cluster.

gcloud container clusters update $CLUSTER_NAME --zone=$ZONE --enable-managed-prometheus

2. Create the required Kubernetes namespace.

kubectl create ns $NAMESPACE

3. Create and apply the prometheus-app deployment. We will build the files directly in Cloud Shell.

cat <<EOF > prometheus-app.yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: prometheus-test
     labels:
       app: prometheus-test
   spec:
     selector:
       matchLabels:
         app: prometheus-test
     replicas: 3
     template:
       metadata:
         labels:
           app: prometheus-test
       spec:
         nodeSelector:
           kubernetes.io/os: linux
           kubernetes.io/arch: amd64
         containers:
         - image: nilebox/prometheus-example-app:latest
           name: prometheus-test
           ports:
           - name: metrics
             containerPort: 1234
           command:
           - "/main"
           - "--process-metrics"
           - "--go-metrics"
EOF
kubectl -n $NAMESPACE apply -f prometheus-app.yaml

4. Now apply the Pod Monitoring definition.

cat <<EOF > pod-monitoring.yaml
   apiVersion: monitoring.googleapis.com/v1alpha1
   kind: PodMonitoring
   metadata:
     name: prometheus-test
     labels:
       app.kubernetes.io/name: prometheus-test
   spec:
     selector:
       matchLabels:
         app: prometheus-test
     endpoints:
     - port: metrics
       interval: $INTERVAL
EOF
kubectl -n $NAMESPACE apply -f pod-monitoring.yaml
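
One detail worth noting in both heredocs above: the EOF delimiter is unquoted, so the shell expands variables like $INTERVAL while writing the file. A minimal demonstration of that behavior:

```shell
INTERVAL="30s"

# Unquoted delimiter: $INTERVAL is expanded at write time.
cat <<EOF > /tmp/demo-interval.yaml
interval: $INTERVAL
EOF

cat /tmp/demo-interval.yaml   # prints: interval: 30s
# A quoted delimiter (<<'EOF') would write the literal text $INTERVAL instead.
```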

Task 3: Deploy an Application (with errors)

The lab provides a set of files in a Cloud Storage bucket; they contain an intentional error that we will need to deploy and later fix.

1. Copy the files from the lab's storage bucket and navigate into the folder.

gsutil cp -r gs://spls/gsp510/hello-app/ .
cd ~/hello-app

2. Apply the broken manifest.

kubectl -n $NAMESPACE apply -f manifests/helloweb-deployment.yaml

Note: If we run kubectl get pods -n $NAMESPACE, we will see the pods failing with errors. This is intentional and sets up the next task!
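
A quick way to spot the failing pods is to filter the `kubectl get pods` output for anything that is not Running. The filter below is our own small helper, not part of the lab:

```shell
# Print pod name and status for rows whose STATUS (field 3)
# is neither Running nor Completed; skips the header line.
failing_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1, $3 }'
}

# On the live cluster:
#   kubectl get pods -n $NAMESPACE | failing_pods
# Expect statuses like ImagePullBackOff or ErrImagePull here.
```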

Task 4: Create Logs-Based Metrics and Alerting

We must find the error pattern in Cloud Logging and create an alert based on it.

1. In the Cloud Console search bar, type Logging and click Logs Explorer.

2. Click the Create Metric button (under the Actions menu).

3. Configure the metric parameters:

* Metric Type: Counter

* Log metric name: pod-image-errors (or whatever specific metric name the lab instructions ask for).

4. Under the Filter selection area, paste the following query to catch the Kubernetes warnings.

resource.type="k8s_pod" severity=WARNING

5. Click Create Metric.

6. Now, go to Monitoring > Alerting in the left menu.

7. Click + Create Policy.

8. Select the metric you just made. Search for pod-image-errors (don't forget to uncheck Active to find the metric name); it will appear under logging/user/pod-image-errors. Click Apply.

9. Under Transform data, set Rolling window to 10 min, Rolling window function to count, and Across time series to sum (check that these values match your lab's specifics).

10. Under Condition (Configure alert trigger), set Threshold position to Above threshold and Threshold value to 0. Click Next.

11. Disable the notification channel option and click Next.

12. Give the alert policy the exact name required by the lab (e.g., Pod Error Alert) and click Create Policy.

Task 5: Update and Redeploy

Now we must fix the broken deployment by correcting the image name in the YAML file.

1. Back in Cloud Shell, delete the broken deployment first.

kubectl delete deployments helloweb -n $NAMESPACE

2. Open the deployment file in the text editor.

nano manifests/helloweb-deployment.yaml

3. Locate the image: line. It currently has a typo. Fix it so it points to the correct sample image (the lab will state what it should be, typically us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0).

4. Save and exit (Ctrl+O, Enter, Ctrl+X).
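
If you prefer a non-interactive edit over nano, a sed substitution works too. The broken tag below is only a stand-in; use whatever typo your lab's manifest actually contains:

```shell
# Stand-in for manifests/helloweb-deployment.yaml with a hypothetical bad tag.
cat <<'EOF' > /tmp/demo-deployment.yaml
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
EOF

# Swap the bad reference for the expected one (assumed hello-app:1.0 here).
sed -i 's|hello-app:2.0|hello-app:1.0|' /tmp/demo-deployment.yaml
grep 'image:' /tmp/demo-deployment.yaml
```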

5. Redeploy the fixed application.

kubectl -n $NAMESPACE apply -f manifests/helloweb-deployment.yaml

Task 6: Containerize and Deploy New Code

Finally, we must build a Version 2 (v2) image, push it to your Artifact Registry, perform a rolling update on the cluster, and expose it via a LoadBalancer.

1. Authenticate Docker to your region's Artifact Registry.

gcloud auth configure-docker $REGION-docker.pkg.dev --quiet

2. Build the new Docker image directly from the ~/hello-app folder.

docker build -t $REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME/hello-app:v2 .

3. Push the newly built image to the pre-created Artifact Registry repository.

docker push $REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME/hello-app:v2

4. Perform a rolling update on your running deployment to swap it to v2.

kubectl set image deployment/helloweb -n $NAMESPACE hello-app=$REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME/hello-app:v2
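
If `kubectl set image` complains about an invalid image reference, an empty variable is the usual culprit. Assembling and echoing the reference on its own (example values shown) makes that easy to catch:

```shell
# Example values; in the lab these come from the Preparation exports.
REGION="us-central1"
PROJECT_ID="my-project"
REPO_NAME="my-repo"

IMAGE="$REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME/hello-app:v2"
echo "$IMAGE"   # us-central1-docker.pkg.dev/my-project/my-repo/hello-app:v2
```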

5. Finally, expose the deployment using a LoadBalancer service to make it publicly accessible.

kubectl expose deployment helloweb \
   -n $NAMESPACE \
   --name=$SERVICE_NAME \
   --type=LoadBalancer \
   --target-port 8080 \
   --port 8080

Once we run this final command, Kubernetes will provision an External IP. We can monitor its creation by running kubectl get services -n $NAMESPACE.
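
The external IP usually shows as pending for a minute or two. Rather than re-running the get services command by hand, a small retry helper (our own sketch, not a lab requirement) can poll until a value appears:

```shell
# Retry a command up to $1 times, one second apart, until it prints
# non-empty output; echo that output and succeed, else fail.
wait_for_output() {
  local attempts=$1 out
  shift
  for ((i = 1; i <= attempts; i++)); do
    out=$("$@" 2>/dev/null)
    if [ -n "$out" ]; then
      echo "$out"
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage against the lab's service:
#   wait_for_output 120 kubectl get service "$SERVICE_NAME" -n "$NAMESPACE" \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```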

Congratulations! You have completed the lab.
