Application Deployment, Debugging, and Performance on GCP
17 March 2021
In this article on application deployment, debugging, and performance, we will cover topics that every developer needs to know in order to increase release velocity and build highly reliable applications.
You will learn how to create repeatable and reliable deployments by treating infrastructure as code, and you will be introduced to the principles of continuous integration and delivery.
You will also learn how to use Container Builder, Container Registry, and Deployment Manager to automatically create application images and stand up GCP infrastructure.
Application Deployment Using Container Builder, Container Registry, and Deployment Manager
A container image for your application is a complete package that contains the application binary and all the software required for the application to run. When you deploy the same container image to your development, test, and production environments, you can be confident that your application will behave exactly the same way in each of them.
Google Cloud Container Builder is a fully managed service that lets you set up build pipelines to create a Docker container image for your application and automatically push the built image to Google Cloud Container Registry.
In Container Builder, you can create a build trigger that is executed based on the trigger type, and a build configuration file that specifies the steps in the build pipeline.
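As an illustration, a minimal build configuration file (commonly named cloudbuild.yaml; the image name here is a placeholder, not from the original text) might look like this:

```yaml
steps:
# Build the Docker image from the Dockerfile in the repository root.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-web-server', '.']

# Images listed here are pushed to Container Registry after the build succeeds.
images:
- 'gcr.io/$PROJECT_ID/my-web-server'
```

Each step runs in its own container (here the docker builder image), and `$PROJECT_ID` is substituted automatically at build time.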
Continuous Integration Pipeline:
Code → Build → Store artifacts in Container Registry → Deploy → Test
Code (feature branch) in Cloud Source Repositories, GitHub, or Bitbucket → Build using Container Builder, Jenkins, or CircleCI → Deploy using Deployment Manager, Spinnaker, or Chef → Test.
After all tests pass, merge the code from the feature branch into the master branch. The build process then builds new application images using Container Builder, and the deployment system (for example, Spinnaker) deploys the artifacts to your cloud environment.
Continuous Delivery Pipeline:
Code → Build → Store artifacts in Container Registry → Deploy → Test → Release → Monitor
Code (feature branch) in Cloud Source Repositories, GitHub, or Bitbucket → Build using Container Builder, Jenkins, or CircleCI → Deploy to staging using Deployment Manager, Spinnaker, or Chef → Test (after all tests pass, merge the code from the feature branch into the master branch) → Release to production (canary, blue/green) → Monitor (Stackdriver)
Use Deployment Manager to launch GCP resources. It is used to create infrastructure, manage deployed infrastructure, and delete deployments.
```shell
# deployment.yaml is a YAML configuration file
nano deployment.yaml
gcloud deployment-manager deployments create my-deployment --config deployment.yaml
gcloud deployment-manager deployments delete my-deployment
```
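The contents of deployment.yaml are not shown above; a minimal sketch that creates a single Compute Engine instance (the resource name, zone, and machine type are illustrative assumptions) could be:

```yaml
resources:
- name: my-vm                      # illustrative resource name
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```

Because the configuration is declarative, Deployment Manager can create, update, or delete the whole deployment as a unit.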
Container Builder and Container Registry:
Here, we will see how Docker, Google Kubernetes Engine, Container Builder, Container Registry, and Compute Engine work together.
To do this, we will write a simple web application in Node.js, run it on Google Cloud Shell, containerize the code using Docker, push the container image to Container Registry, deploy the container to Container Engine/GKE, and expose it via a load balancer. We will then automate the container build using Container Builder, and automate the deployment using a deployment file.
Containerize the code using Docker and push the image to Container Registry (commands involved):
```shell
nano app.js
node app.js

# Copy app.js from Cloud Shell to the VM
gcloud compute scp app.js vm-machine:~/

nano Dockerfile

# Build the Docker image
docker build -t my-web-server .
docker run -d -p 8080:8080 my-web-server

# View the currently running containers
docker ps
docker stop <container-id>

# Tag the image and push it to Container Registry
docker tag my-web-server gcr.io/<project-id>/<docker-image-name>
gcloud docker -- push gcr.io/<project-id>/<docker-image-name>

# Or combine the build and push steps into one with Container Builder
gcloud container builds submit -t gcr.io/<project-id>/my-cb-web-server .
```
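The Dockerfile edited above is not shown either; a minimal sketch for the Node.js app (the base image tag is an illustrative assumption) could be:

```dockerfile
# Base image tag is illustrative; pick the Node.js version your app targets.
FROM node:14-slim
WORKDIR /app
COPY app.js .
# The app listens on 8080, matching the -p 8080:8080 mapping above.
EXPOSE 8080
CMD ["node", "app.js"]
```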
Deploy the container to container engine/GKE:
```shell
kubectl run my-web-server-gke --image gcr.io/<project-id>/my-cb-web-server --port=8080
kubectl get pods
kubectl get deployments
```
Set up a load balancer to GKE:
```shell
kubectl expose deployment my-web-server-gke --type=LoadBalancer --port=8080 --target-port=8080
kubectl get services
# Copy the external IP of the load balancer and browse to it
kubectl get pods
kubectl scale deployment my-web-server-gke --replicas=3
```
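The imperative kubectl commands above can also be expressed declaratively; a rough manifest equivalent (the labels are assumptions introduced here) is:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-server-gke
spec:
  replicas: 3                      # same effect as kubectl scale --replicas=3
  selector:
    matchLabels:
      app: my-web-server-gke
  template:
    metadata:
      labels:
        app: my-web-server-gke
    spec:
      containers:
      - name: my-web-server
        image: gcr.io/<project-id>/my-cb-web-server
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-server-gke
spec:
  type: LoadBalancer               # same effect as kubectl expose --type=LoadBalancer
  selector:
    app: my-web-server-gke
  ports:
  - port: 8080
    targetPort: 8080
```

Storing a manifest like this in source control is what makes the deployment repeatable, in keeping with the infrastructure-as-code theme above.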
Introduction to Execution Environments in GCP
GCP provides a range of execution environments for running your application:
- Google Compute Engine (Highly customizable)
- Google Kubernetes Engine
- GAE Flexible Environment
- Google Cloud Functions
- Google Cloud Dataflow (fully managed)
Cloud Dataflow is a serverless execution engine or runner for executing parallel data processing pipelines that are developed using Apache Beam SDKs. Cloud Dataflow supports pipeline development by using Python and Java APIs in the Apache Beam SDK.
With Cloud Functions, you can develop applications that are event-driven, serverless, and highly scalable. A Cloud Function is a lightweight microservice that enables you to integrate application components and data sources. Cloud Functions are ideal for microservices that require a small piece of code to quickly process data related to an event.
You can also use Cloud Functions to process IoT streaming data or other application messages that are published to a Cloud Pub/Sub topic. Cloud Functions can serve as Webhooks.
App Engine Flexible Environment:
The App Engine flexible environment is an excellent option for deploying web applications, backends for mobile applications, HTTP APIs, and internal business applications.
App Engine flexible environment provides default settings for infrastructure components. You can customize settings such as network, subnetwork, port forwarding, and instance tags. SSH access to the VM instance in the flexible environment is disabled by default. You can enable root access to the underlying VM instances.
App Engine flexible environment runs a Docker image for your application. If needed, you can generate the Docker image of the application and run it on other container-based environments such as Google Container Engine.
You can go from code to production with a single command. After you develop your application, you can deploy it to your test, staging or production environment with a single command, gcloud app deploy.
When you run this command, the App Engine flexible environment automatically uploads your source code to Cloud Storage, builds a Docker image of your application with the runtime environment, and pushes the image to Google Container Registry.
gcloud app deploy → Source code (Cloud Storage) → Docker build (Container Builder) → Docker push (Container Registry)
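The deployment reads its settings from an app.yaml file in your project; a minimal sketch for a Node.js app on the flexible environment (the scaling values are illustrative assumptions) could be:

```yaml
# app.yaml - minimal App Engine flexible environment configuration (illustrative)
runtime: nodejs
env: flex

# Optional: control autoscaling; values here are examples only.
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 10
```

With this file in place, `gcloud app deploy` handles the upload, build, and rollout described above.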
App Engine’s flexible environment, behind the scene:
- App Engine sets up a load balancer and runs your application in three zones to ensure that your application is always up and running.
- App Engine launches and auto-scales Google Compute Engine instances to run your application and ensure that your application can scale up or down, depending on traffic volume.
- App Engine also sets up other services that are crucial for application monitoring and management such as monitoring, logging, error reporting, health checks, and SSL.
- App Engine flexible environment enables you to quickly deploy your application without manual infrastructure set-up.
- It automatically applies critical backward compatible updates and security updates to the underlying operating system. If you need more control, you can directly SSH to the VM instances to work with custom runtimes and more.
- With the App Engine flexible environment, you can deploy safely with zero downtime.
- App Engine flexible environment enables you to perform canary testing, where you can verify a production release before serving any external traffic to your application.
- You can perform A/B testing of your application and deploy updates by easily splitting traffic between the current version and the new version. After you verify that the new version works, you can migrate all traffic to the new version without any downtime.
- App Engine flexible environment is ideal for highly scalable, web-focused applications. You can use it for HTTP or HTTPS request-response applications that expose public endpoints.
- You can also implement CI/CD pipelines that use Jenkins or Spinnaker to deploy applications to App Engine.
When is an App Engine flexible environment not the right choice for your application?
- Consider other compute environments such as GKE or Compute Engine if your application needs to support network protocols other than HTTP or HTTPS. Also note that you cannot write data to persistent disks from App Engine; the flexible environment lets you add tmpfs volumes, but those files exist in memory only.
- Currently, the App Engine flexible environment is not ideal for applications with spiky or very low traffic, because at least two instances run at all times to serve traffic. If your application has spiky or very low traffic, the App Engine standard environment, another flavor of App Engine, may be a better option.
- Applications that run on an App Engine standard environment though must use App Engine standard APIs. App Engine standard APIs are supported only in the App Engine standard environment. So if you build an app using App Engine standard APIs, you cannot run it on another platform such as App Engine flexible environment, GKE, or Compute Engine.
- Applications that are developed using Google Cloud client libraries can be moved to other computing environments such as Cloud Functions, Container Engine, or Compute Engine if the needs of the application change.
Stackdriver, a Multi-cloud service
- Error reporting: Error notification, Error dashboard
- Debugger: Production debug snapshot, Conditional snapshot, IDE integration
- Logging: Platform, system, and app logs, log search/view/filter, logs based metrics
- Monitoring: Platform, system and app metrics, Uptime/health check, dashboard, alert
- Trace: Latency reporting, Per-URL latency sampling
Monitoring and Tuning Performance
We can create dashboards to view metrics for our application. Create dashboards that include the four golden signals:
- Latency is the amount of time it takes to serve a request. For example, an HTTP 500 error that occurs due to a loss of connection to a database or another backend service might be served very quickly. But because an HTTP 500 error indicates a failed request, including it in your overall latency can result in misleading metrics, so track error latency separately from the latency of successful requests.
- Traffic measures how much demand is placed on your system. It is measured as a system-specific metric; for example, web server traffic is measured as the number of HTTP or HTTPS requests per second, while traffic to a NoSQL database is measured as the number of read or write operations per second.
- Errors indicate the number of failed requests. The criterion for failure might be an explicit error such as an HTTP 500, a successful HTTP 200 response with incorrect content, or a policy violation; for example, if your application promises a response time of one second but some requests take longer than a second.
- Saturation indicates how full your application is: which resources are being stretched and are approaching their target capacity. Systems can degrade in performance before they reach 100% utilization, so set utilization targets carefully.
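To make the latency caveat concrete, here is a small sketch (the record fields and function names are illustrative, not from any GCP library) that computes request latency percentiles separately for successful and failed requests:

```javascript
// Compute the p-th percentile of a list of latencies (nearest-rank method).
function percentile(latenciesMs, p) {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Split requests by outcome so a burst of fast 500s cannot mask slow successes.
function latencyByOutcome(requests) {
  const ok = requests.filter(r => r.status < 500).map(r => r.latencyMs);
  const errors = requests.filter(r => r.status >= 500).map(r => r.latencyMs);
  return {
    okP95: ok.length ? percentile(ok, 95) : null,
    errorP95: errors.length ? percentile(errors, 95) : null,
  };
}
```

A dashboard that charts okP95 and errorP95 as separate series avoids the misleading blended latency described above.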
Identifying and Troubleshooting Performance Issues
Performance issues can surface at multiple watch points. Review metrics related to incoming requests, and check areas such as the ones shown here. Review the design and implementation of your web pages; you can use PageSpeed Insights to find issues such as missing caching headers, missing compression, too many HTTP browser requests, slow DNS responses, and a lack of minification.
Review application code and logs:
- Application error ( HTTP error and other exceptions)
- Runtime code generation (Aspect-oriented programming)
- Static Resources (Static web pages, images)
- Caching (Database retrieval and computation)
- One-at-a-Time Retrieval (multiple serial requests)
- Error-Handling (Exponential backoff)
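As a hedged sketch of the exponential backoff pattern mentioned above (the function name, retry limit, and delays are assumptions, not a GCP API):

```javascript
// Retry an async operation with exponential backoff plus random jitter.
async function withBackoff(operation, maxRetries = 5, baseDelayMs = 100) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // give up after maxRetries retries
      // Delay doubles each attempt (100ms, 200ms, 400ms, ...) plus jitter,
      // so many failing clients do not retry in lockstep.
      const delay = baseDelayMs * 2 ** attempt + Math.random() * baseDelayMs;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping calls to flaky backends in a helper like this prevents a transient outage from turning into a retry storm.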
Hence, using the different GCP services, we can deploy our web applications to different execution environments depending on the nature of the application, and with GCP's tools we can debug application code, identify issues, and troubleshoot them.