Deploy Microservice Applications in GKE

How to Deploy a Microservice Application in a GKE Cluster


Introduction

After setting up your Kubernetes cluster in Google Kubernetes Engine (GKE), the next step is to deploy your microservice applications to it. This blog will guide you through deploying an Nginx-based microservice in your GKE cluster using Cloud Shell commands.

Step-by-Step Guide to Deploying a Microservice Application in GKE

If you haven’t created a GKE cluster yet, please refer to my previous blog for step-by-step instructions. Now, let’s get started with deploying your microservice.


1. Access Your GKE Cluster

  • Navigate to the Google Kubernetes Engine dashboard.
  • Locate your cluster (e.g., my-cluster) and click the CONNECT button.
  • Next, click “RUN IN CLOUD SHELL” to open the Cloud Shell console.

2. Connect to Your Cluster

  • Copy and paste the connection command from the GKE dashboard into the Cloud Shell.
  • After running the command, you will see:
    kubeconfig entry generated for my-cluster.
  • The cluster configuration file is stored at:
    ~/.kube/config
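If you prefer the terminal over the CONNECT button, the same kubeconfig entry can be generated directly with gcloud. This is a sketch; the cluster name and zone here are assumptions, so substitute your own values:

```shell
# Generate a kubeconfig entry for the cluster (name and zone are placeholders)
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Confirm that kubectl is now pointing at the GKE cluster
kubectl config current-context
```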

3. Deploy a Microservice Application in GKE

Now that you are connected to your cluster, you can deploy an Nginx-based microservice. Follow the steps below:

Step 1: Create a Deployment

A deployment ensures that your application runs as expected. Execute the following command to create an Nginx deployment:

kubectl create deployment nginx --image=nginx

To check if the pods are running, execute:

kubectl get pods

Alternatively, go to the Workloads section in the GCP console and verify the pod status.

  • Navigate to Workloads in Kubernetes Engine to see the pod status.
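The imperative command above also has a declarative equivalent. The manifest below is a sketch of roughly what `kubectl create deployment nginx --image=nginx` produces (the `app: nginx` label and single replica reflect that command's defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Saving this as a file and running `kubectl apply -f` on it gives you a version-controllable alternative to the one-off command.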

4. Expose the Deployment

To make the microservice accessible, you need to expose the deployment as a service:

Run the following command to expose it:

kubectl expose deployment nginx --type=LoadBalancer --port=80

To verify the service status, use:

kubectl get service
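As with the deployment, the `kubectl expose` command can be expressed declaratively. This manifest is a sketch of the Service that `kubectl expose deployment nginx --type=LoadBalancer --port=80` creates:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

The `selector` matches the `app: nginx` label on the pods, and GKE provisions an external load balancer for the `LoadBalancer` type.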

5. Test the Microservice

Once the service is running, you can check if the microservice is accessible using the curl command:

curl <EXTERNAL-IP>

Replace <EXTERNAL-IP> with the external IP address of your service, which you can find in the service details.
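Instead of copying the IP by hand, you can read it straight from the service object. This is a sketch; the external IP may take a minute or two to be provisioned, so the field can be empty at first:

```shell
# Extract the LoadBalancer's external IP from the service status
EXTERNAL_IP=$(kubectl get service nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Request the Nginx welcome page from the microservice
curl "http://${EXTERNAL_IP}"
```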


6. Set Up Horizontal Autoscaling

To handle increased traffic, enable autoscaling for your deployment. Use the following command:

kubectl autoscale deployment nginx --max=2 --cpu-percent=70

This creates a HorizontalPodAutoscaler that targets 70% average CPU utilization: when usage rises above that target, it scales the deployment up to a maximum of two pods, and scales back down when the load subsides.
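The same autoscaler can be written as a manifest. The following is a sketch using the `autoscaling/v2` API (the `minReplicas: 1` value reflects the default that `kubectl autoscale` applies when `--min` is not given):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

You can inspect the autoscaler's current state with `kubectl get hpa`.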


Conclusion
In this guide, we successfully deployed an Nginx-based microservice to a GKE cluster. Additionally, we exposed the deployment, verified its accessibility, and configured horizontal autoscaling to handle traffic spikes.

In the next blog, we will explore advanced features and best practices for using Google Kubernetes Engine (GKE). Stay tuned for more insights!
