What is Cloud Memorystore?
27 October 2020
What is Cloud Memorystore?
Cloud Memorystore is a fully managed Redis service on Google Cloud. Applications running on Google Cloud can achieve significant performance gains by leveraging this highly scalable, available, and secure Redis infrastructure without the burden of managing complex Redis deployments. Using the import/export feature, you can lift and shift your applications from open source Redis to Memorystore without any code changes. Because existing client libraries and tooling continue to work, there is nothing new to learn.
It uses VPC networks to keep your instances off the public internet and comes with IAM integration, all designed to protect your data.
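As a minimal sketch of getting started (the instance name and tier below are illustrative placeholders, not part of the deployment described later), you can create a basic instance, look up its private IP, and connect with redis-cli from a Compute Engine VM on the same VPC network:
$ gcloud redis instances create my-redis --size=1 --region=asia-northeast1 --tier=BASIC
$ gcloud redis instances describe my-redis --region=asia-northeast1 --format='value(host)'
$ redis-cli -h <instance-host> -p 6379 PING
The describe command prints the instance's private IP address, which takes the place of <instance-host> in the redis-cli call.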
Pros of Cloud Memorystore
- Google handles administrative tasks for Redis instances such as hardware provisioning, setup and configuration management, software patching, failover, and monitoring, all of which demand considerable effort from operators who simply want to use Redis as an in-memory store or a cache.
- It is highly available. The Standard tier provides fully managed replication and automatic failover, with the replica kept in a different zone from the primary.
- It is scalable. The memory provisioned for a Redis instance can be resized on demand, and network throughput scales with the instance size (see the example after this list).
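For instance, growing an existing instance to 5 GB is a single command (the instance name here is a placeholder):
$ gcloud redis instances update my-redis --size=5 --region=asia-northeast1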
Cons of Cloud Memorystore
- Some functionality is still not available, such as Redis Cluster mode and backup/restore.
- There is no choice in the number of replicas. Cloud Memorystore for Redis offers a single master/replica configuration, with the master and replica spread across zones; by definition there is only one replica.
How to deploy Redis on GCP using Cloud Memorystore?
Below are the commands to execute in Cloud Shell to set up multiple Cloud Memorystore for Redis instances behind Twemproxy and an internal load balancer.
- Create nine new Cloud Memorystore for Redis instances in the asia-northeast1 region.
$ for i in {1..9}; do gcloud redis instances create redis${i} --size=1 --region=asia-northeast1 --tier=STANDARD; done
- Create the Twemproxy configuration and Dockerfile for the container.
$ mkdir twemproxy
$ cd twemproxy
$ cat <<EOF > nutcracker.yml
alpha:
  listen: 0.0.0.0:26379
  hash: fnv1a_64
  distribution: ketama
  timeout: 1000
  backlog: 512
  preconnect: true
  redis: true
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 2
  servers:
EOF
$ gcloud redis instances list --region=asia-northeast1 | awk '{ printf "   - %s:%s:1\n", $5, $6 }' | tail -n +2 >> nutcracker.yml
$ cat <<EOF > Dockerfile
FROM gliderlabs/alpine:3.3
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add --update twemproxy
EXPOSE 26379
COPY nutcracker.yml /etc/nutcracker/
ENTRYPOINT ["/usr/sbin/nutcracker"]
CMD ["-c", "/etc/nutcracker/nutcracker.yml"]
EOF
- Build a Docker image with Twemproxy.
$ gcloud builds submit --tag gcr.io/<your-project>/twemproxy
NOTE: Replace <your-project> with your project ID.
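Optionally, you can confirm that the image was built and pushed to Container Registry (this check is an addition, not part of the original steps):
$ gcloud container images list-tags gcr.io/<your-project>/twemproxy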
- Create an instance template based on the Docker image.
$ gcloud compute instance-templates create-with-container twemproxy --machine-type=n1-standard-8 --tags=twemproxy-26379,allow-health-checks-tcp --container-image gcr.io/<your-project>/twemproxy:latest
NOTE: Replace <your-project> with your project ID.
- Create a managed instance group.
$ gcloud compute instance-groups managed create ig-twemproxy --base-instance-name ig-twemproxy --size 4 --template twemproxy --region asia-northeast1
$ gcloud compute instance-groups managed set-autoscaling ig-twemproxy --max-num-replicas 10 --min-num-replicas 3 --target-cpu-utilization 0.6 --cool-down-period 60 --region asia-northeast1
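To check that the group's instances have come up, you can list them (an optional verification, not in the original steps):
$ gcloud compute instance-groups managed list-instances ig-twemproxy --region asia-northeast1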
- Create a health check for the internal load balancer.
$ gcloud compute health-checks create tcp hc-twemproxy --port 26379 --check-interval 5 --healthy-threshold 2
- Create a backend service for the internal load balancer.
$ gcloud compute backend-services create ilb-twemproxy --load-balancing-scheme internal --session-affinity client_ip_port_proto --region asia-northeast1 --health-checks hc-twemproxy --protocol tcp
- Add the instance group to the backend service.
$ gcloud compute backend-services add-backend ilb-twemproxy --instance-group ig-twemproxy --instance-group-region asia-northeast1 --region asia-northeast1
- Create a forwarding rule for the internal load balancer.
$ gcloud compute forwarding-rules create fr-ilb-twemproxy --load-balancing-scheme internal --ip-protocol tcp --ports 26379 --backend-service ilb-twemproxy --region asia-northeast1
- Configure firewall rules to allow client traffic and health checks to reach the instances.
$ gcloud compute firewall-rules create allow-twemproxy --action allow --direction INGRESS --source-ranges 10.128.0.0/20 --target-tags twemproxy-26379 --rules tcp:26379
$ gcloud compute firewall-rules create allow-health-checks-tcp --action allow --direction INGRESS --source-ranges 130.211.0.0/22,35.191.0.0/16 --target-tags allow-health-checks-tcp --rules tcp
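Once the firewall rules are in place, one way to verify the whole setup (these verification commands are my own additions; run the redis-cli calls from a VM on the same VPC network with redis-cli installed) is to check backend health, look up the load balancer's IP address, and send a test command through Twemproxy:
$ gcloud compute backend-services get-health ilb-twemproxy --region asia-northeast1
$ gcloud compute forwarding-rules describe fr-ilb-twemproxy --region asia-northeast1 --format='value(IPAddress)'
$ redis-cli -h <forwarding-rule-ip> -p 26379 set hello world
$ redis-cli -h <forwarding-rule-ip> -p 26379 get hello
If the SET and GET succeed, requests are flowing through the internal load balancer and Twemproxy to one of the sharded Memorystore instances.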